Ansible provisioning

Published: 6 years ago · web dev

Ansible is a great, simple-to-use tool for managing servers. Here's my provisioning script, which can be used across Debian dedicated and virtual servers. It does a lot of the standard things like setting up the firewall, creating a non-root user and copying across SSH keys.

Initial setup

Firstly, I'm briefly going to go over setting up the machine that will issue the Ansible commands. I'll be using Debian as the command host.

Don't install Ansible directly via apt as you may end up with a really old version. I ended up with 1.5.4.

Instead, add the PPA, as per the Ansible docs:

sudo apt-get update
sudo apt-get install software-properties-common
sudo apt-add-repository ppa:ansible/ansible
sudo apt-get update
sudo apt-get install ansible

Create an SSH key for Ansible SSH connections (I name mine ansible, which the inventory below expects at ~/.ssh/ansible): ssh-keygen -t rsa -b 4096 -C "your_email@example.com" -f ~/.ssh/ansible

Add the public key to the remote host(s): ssh-copy-id -i ~/.ssh/ansible [-p port] user@host

Hosts

Kinda confusingly, hosts files in Ansible refer to the client machines that playbooks run against. Anyway, by default the /etc/ansible/hosts file is used, but for our provisioning script we will define a separate hosts file with a single group and pass it to ansible-playbook via the -i arg.

In my case I need to use a slightly more verbose syntax as I have multiple hosts under one IP address, so it looks as follows:

[provisioning]
testct1 ansible_ssh_port=x ansible_ssh_host=host.dns.name
testct2 ansible_ssh_port=x ansible_ssh_host=host.dns.name

[provisioning:vars]
ansible_ssh_private_key_file=~/.ssh/ansible
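
With the hosts file in place, it's worth sanity-checking that Ansible can actually reach the machines before doing anything else. Something like the following should return "pong" for each host (the playbook below connects as root, so that's the user I'd test with):

ansible provisioning -i hosts -m ping -u root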

About adding hosts to source control

You don't really want to add your hosts file to source control (at least not a public one), so I exclude hosts via a .gitignore. I also create a hosts.template to remind myself, and potentially others using the playbook, what it requires.
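
The template is just the example above with the real values stripped out; something along these lines (the aliases and placeholders are whatever makes sense for your setup):

[provisioning]
<alias> ansible_ssh_port=<port> ansible_ssh_host=<host.dns.name>

[provisioning:vars]
ansible_ssh_private_key_file=~/.ssh/ansible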

About the provisioning script

The provisioning script will take care of the following:

  • Running package updates and upgrades
  • Installing a number of default packages
  • Setting up iptables with a default inbound ruleset only allowing port 22
  • Creating a non-root user and adding their public key
  • Setting up dotfiles for the new user

Ansible playbooks are supposed to be idempotent (running the playbook once or repeatedly produces the same result), so we have to do a bit of extra work checking whether certain things have already been done.

The provisioning script

---
- hosts: provisioning
  remote_user: root

  tasks:
  - name: run apt-get update and upgrade
    apt:
      update_cache: yes
      cache_valid_time: 3600
      upgrade: safe

  - name: check to see if the prevention of autosaving iptables-persistent rules has already run
    shell: debconf-show iptables-persistent
    register: iptables_no_autosave_check
    changed_when: false

  - name: prevent autosave of existing iptables rules when installing iptables-persistent
    shell: echo iptables-persistent iptables-persistent/autosave_v4 boolean false | debconf-set-selections && echo iptables-persistent iptables-persistent/autosave_v6 boolean false | debconf-set-selections
    when: iptables_no_autosave_check.stdout == ""

  - name: install packages
    apt:
      name: "{{ item }}"
    with_items: "{{ PACKAGES }}"

  - name: check if iptables rules already copied
    stat:
      path: /etc/iptables/rules.v4
    register: iptables_rules_exist_check

  - name: copy iptables rules
    copy:
      src: ./files/iptables-rules.txt
      dest: /etc/iptables/rules.v4
    when: not iptables_rules_exist_check.stat.exists or iptables_rules_exist_check.stat.size == 0

  - name: apply iptables rules now
    shell: iptables-restore < /etc/iptables/rules.v4
    when: not iptables_rules_exist_check.stat.exists or iptables_rules_exist_check.stat.size == 0

  - name: create new user
    user:
      name: "{{ USERNAME }}"
      password: "{{ PASSWORD }}"
      shell: /bin/bash
      update_password: on_create
      groups: "sudo"
      append: yes

  - name: add ssh keys for new user
    authorized_key:
      user: "{{ USERNAME }}"
      key: "{{ lookup('file', item) }}"
    with_items: "{{ PUBLIC_KEYS }}"

  - name: check if dotfiles are already cloned
    stat:
      path: /home/{{ USERNAME }}/dotfiles
    register: dotfiles_exist_check

  - name: git clone dotfiles
    git:
      repo: "{{ DOTFILES_REPO }}"
      dest: /home/{{ USERNAME }}/dotfiles
    become: true
    become_user: "{{ USERNAME }}"
    when: not dotfiles_exist_check.stat.exists

  - name: install dotfiles
    shell: /home/{{ USERNAME }}/dotfiles/install.sh
    become: true
    become_user: "{{ USERNAME }}"
    when: not dotfiles_exist_check.stat.exists

Also available on GitHub.

Some explanation

One good thing with Ansible is that most tasks are self-explanatory. The only bit that is a little weird is all the iptables checks. We use iptables-persistent to restore iptables rules on reboot, but when we run the playbook we don't want to overwrite iptables rules if they've already been set up. So we check whether rules.v4 (part of iptables-persistent) exists and whether it's empty; if it doesn't exist or is empty, the default rules get copied across in a separate task.
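
If you need a starting point for files/iptables-rules.txt, a minimal iptables-restore ruleset that only allows inbound SSH (plus loopback and established connections, which you almost certainly want) looks something like this; treat it as a sketch to adapt rather than anything definitive:

*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
COMMIT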

An extra complication is that installing iptables-persistent autosaves your existing rules, which means that on the initial run rules.v4 isn't actually empty! If you install interactively you can choose whether to do this or not, but you can't when installing via Ansible. To get around this we preset the configuration option that enables autosaving to false. We also have a task to check if this option has already been set so we don't run it every time.

Eventually it may make sense to put the iptables-related tasks into their own include file, or even their own role, as they do make the playbook a little messy.
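
That refactor would be fairly mechanical: roughly, the iptables tasks move into their own file and the playbook pulls them in with something like the below (the include syntax has shifted between Ansible versions, so check the docs for yours):

  tasks:
  - include: tasks/iptables.yml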

One other thing to note: I have ca-certificates in the package list, which probably shouldn't be there. I was having issues downloading anything over SSL, and updating the trusted certificates seemed to fix it. I need to look further into why that was happening though.

Storing passwords and other sensitive data

You don't want to store your passwords in plaintext, and indeed if you use the user module you have to provide a sha512 hash anyway. But even storing that unencrypted is a Bad Idea, especially if you're going to publish your playbooks via source control.

So let's look at one way to do this using Ansible Vault.

We're going to take advantage of the group_vars paradigm and further split our variables into two files: one encrypted, one not. Incidentally, this is a concise version of this very useful tutorial on DigitalOcean.

We want a group_vars directory structure as follows:

group_vars/
    provisioning/
        vars
        vault

I'm assuming the group is called provisioning, obviously. Anyway, vars holds all your non-encrypted vars AND a reference to the encrypted ones. E.g.

---
USERNAME: "BOB"
PASSWORD: "{{ VAULT_PASSWORD }}"

Why a reference? Well, it makes sense if you ever need to go grepping for those variables; they won't show up in the encrypted version.
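
One other thing worth spelling out: the playbook also expects PACKAGES, PUBLIC_KEYS and DOTFILES_REPO to be defined somewhere, and since none of them are secret the same vars file is a sensible home for them. Something like the following, where iptables-persistent is the only package the playbook actually depends on and everything else (including the key path and repo URL) is a placeholder:

PACKAGES:
  - iptables-persistent
  - ca-certificates
  - git
PUBLIC_KEYS:
  - ./files/bob.pub
DOTFILES_REPO: "https://github.com/bob/dotfiles.git"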

Our vault file, before encryption, would look as follows:

---
VAULT_PASSWORD: [sha hash here]

Incidentally, to generate the hash run mkpasswd --method=sha-512 (on Debian, mkpasswd comes from the whois package).

Once you've finished editing vault, you can encrypt it by running: ansible-vault encrypt vault. This will replace vault with an encrypted version. As it's encrypted it is up to you whether you want to put this in source control or not. I've decided against it so have added vault to a .gitignore file. I also created a vault.template which IS added to source control to remind me what goes in this file.
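
The template itself is nothing fancy, just the unencrypted layout with the value blanked out:

---
VAULT_PASSWORD: [sha-512 hash generated with mkpasswd]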

Now, if you wanted to, you could store your Vault password in a file (and exclude it from source control too), but that would kind of defeat the purpose of encrypting our user's password in the first place. So instead, we will just have Ansible ask us for the Vault password when we run the playbook. Like so:

ansible-playbook generic_provisioning.yml -i hosts --ask-vault-pass

Things for the future

The main thing I am looking to add next is installation and configuration of unattended-upgrades. I may also consider splitting out some of the functionality into roles. I'd also like to test on more cloud providers; so far I have only tested on DigitalOcean and my own server.
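
For what it's worth, I expect the unattended-upgrades part to end up as something like the sketch below (untested; the 20auto-upgrades contents shown are the stock Debian ones that turn on daily package list updates and unattended upgrades):

  - name: install unattended-upgrades
    apt:
      name: unattended-upgrades

  - name: enable periodic updates and unattended upgrades
    copy:
      dest: /etc/apt/apt.conf.d/20auto-upgrades
      content: |
        APT::Periodic::Update-Package-Lists "1";
        APT::Periodic::Unattended-Upgrade "1";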