
Ansible Bootstrap

An Ansible playbook for automating Linux system bootstrap in an Infrastructure-as-Code manner. It boots targets from the Arch Linux ISO and uses that live environment to deploy a variety of Linux distributions onto designated target systems, ensuring a standardized setup across platforms.

Most roles are adaptable for use with systems beyond Arch Linux, requiring only that the target system can install the necessary package manager (e.g. dnf for RHEL-based systems). A replacement for the arch-chroot command may also be required; set system.features.chroot.tool accordingly.
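As a minimal sketch, a host installed from a non-Arch live image could override the chroot wrapper via host or group vars (systemd-nspawn is one of the allowed values listed in the variable reference below):

```yaml
system:
  features:
    chroot:
      tool: systemd-nspawn
```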

Table of Contents

  1. Supported Platforms
  2. Compatibility Notes
  3. Configuration Model
  4. Variable Reference
  5. How to Use the Playbook
  6. Security
  7. Operational Notes
  8. Safety

1. Supported Platforms

Distributions

system.os Distribution system.version
almalinux AlmaLinux 8, 9, 10
alpine Alpine Linux latest (rolling)
archlinux Arch Linux latest (rolling)
debian Debian 10, 11, 12, 13, unstable
fedora Fedora 40, 41, 42, 43
opensuse openSUSE Tumbleweed latest (rolling)
rhel Red Hat Enterprise Linux 8, 9, 10
rocky Rocky Linux 8, 9, 10
ubuntu Ubuntu latest
ubuntu-lts Ubuntu LTS latest
void Void Linux latest (rolling)

Hypervisors

Hypervisor hypervisor.type
libvirt libvirt
Proxmox VE proxmox
VMware vmware
Xen xen
Bare metal none

2. Compatibility Notes

  • rhel_iso is required for system.os: rhel.
  • RHEL installs should use system.filesystem: ext4 or system.filesystem: xfs (not btrfs).
  • For RHEL 8 specifically, prefer ext4 over xfs if you hit installer/filesystem edge cases.
  • custom_iso: true skips ArchISO validation and pacman preparation; your installer image must already provide required tooling.
  • On non-Arch installers, set system.features.chroot.tool (arch-chroot, chroot, or systemd-nspawn) explicitly as needed.

3. Configuration Model

The project uses two dict-based variables:

  • system for host/runtime/install configuration
  • hypervisor for virtualization backend configuration

These are normal Ansible variables and belong in host/group vars. You can define them in inventory host entries, group_vars/*, or host_vars/*. Dictionary variables are merged across scopes (group_vars -> host_vars) by project config (hash_behaviour = merge), so you can set shared values like system.filesystem once in group vars and override only host-specific keys per host.
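A short sketch of how the merge behaves in practice (hostnames and values are examples): shared keys live once in group vars, and each host only adds or overrides what differs.

```yaml
# group_vars/all.yml
system:
  filesystem: btrfs
  timezone: Europe/Vienna
```

```yaml
# host_vars/app01.example.com.yml
# With hash_behaviour = merge, app01 ends up with filesystem and
# timezone from group vars plus os/version from here.
system:
  os: debian
  version: "12"
```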

Variable Placement

Location Scope Typical use
group_vars/all.yml All hosts Shared defaults like hypervisor, system.filesystem, boot_iso
group_vars/<group>.yml Group Environment or role-specific defaults
host_vars/<host>.yml Single host Host-specific overrides
Inventory inline host vars Single host Inline definitions for quick setup

Example Inventory

all:
  vars:
    system:
      filesystem: btrfs
    boot_iso: "local:iso/archlinux-x86_64.iso"
    hypervisor:
      type: proxmox
      url: pve01.example.com
      username: root@pam
      password: CHANGE_ME
      host: pve01
      storage: local-lvm

  children:
    bootstrap:
      hosts:
        app01.example.com:
          ansible_host: 10.0.0.10
          system:
            type: virtual
            os: debian
            version: "12"
            name: app01.example.com
            id: 101
            cpus: 2
            memory: 4096
            balloon: 0
            network:
              bridge: vmbr0
              ip: 10.0.0.10
              prefix: 24
              gateway: 10.0.0.1
              dns:
                servers: [1.1.1.1, 1.0.0.1]
                search: [example.com]
            disks:
              - size: 40
              - size: 120
                mount:
                  path: /data
                  fstype: xfs
            users:
              - name: ops
                password: CHANGE_ME
                keys:
                  - "ssh-ed25519 AAAA..."
            root:
              password: CHANGE_ME
            luks:
              enabled: true
              passphrase: CHANGE_ME
              auto: true
              method: tpm2
              tpm2:
                pcrs: "7"
            features:
              firewall:
                enabled: true
                backend: firewalld
                toolkit: nftables

4. Variable Reference

4.1 Core Variables

These top-level variables sit outside the system/hypervisor dictionaries.

Variable Type Description
boot_iso string Path to the boot ISO image (required for virtual installs).
rhel_iso string Path to the RHEL ISO (required when system.os: rhel).
custom_iso bool Skip ArchISO validation and pacman setup. Default false.
thirdparty_tasks string Drop-in task file included during environment setup. Default dropins/preparation.yml.

4.2 system Dictionary

Top-level host install/runtime settings. Use these keys under system.

Key Type Default Description
type string virtual virtual or physical
os string empty Target distribution (see table)
version string empty Version selector for distro families
filesystem string empty btrfs, ext4, or xfs
name string inventory hostname Final hostname
timezone string Europe/Vienna System timezone (tz database name)
locale string en_US.UTF-8 System locale
keymap string us Console keymap (vconsole.conf)
id int/string empty VMID (required for Proxmox)
cpus int 0 vCPU count
memory int 0 Memory in MiB
balloon int 0 Balloon memory in MiB
path string empty Hypervisor folder/path (libvirt/vmware)
packages list [] Additional packages installed post-reboot
network dict see below Network configuration
disks list [] Disk layout (see Multi-Disk Schema)
users list [] User accounts (see below)
root dict see below Root account settings
luks dict see below Encryption settings
features dict see below Feature toggles

system.network

Key Type Default Description
bridge string empty Hypervisor network/bridge name
vlan string/int empty VLAN tag
ip string empty Static IP (omit for DHCP)
prefix int empty CIDR prefix for static IP
gateway string empty Default gateway (static only)
dns.servers list [] DNS resolvers (must be a YAML list)
dns.search list [] Search domains (must be a YAML list)
interfaces list [] Multi-NIC config (overrides flat fields above)

When interfaces is empty, the flat fields (bridge, ip, prefix, gateway, vlan) are auto-wrapped into a single-entry interfaces[] list. When interfaces is set, it takes precedence and the flat fields are back-populated from interfaces[0] for backward compatibility. Each interfaces[] entry supports: name, bridge (required), vlan, ip, prefix, gateway.
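A hypothetical two-NIC host using the interfaces[] form (interface names, bridges, and addresses are examples only):

```yaml
system:
  network:
    dns:
      servers: [1.1.1.1, 1.0.0.1]
      search: [example.com]
    interfaces:
      - name: eth0
        bridge: vmbr0
        ip: 10.0.0.10
        prefix: 24
        gateway: 10.0.0.1
      - name: eth1
        bridge: vmbr1
        vlan: 20
        ip: 10.0.20.10
        prefix: 24
```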

system.users

A list of user account dictionaries. Credentials for the first user are prompted interactively by default via vars_prompt in main.yml, but can be supplied via inventory, vars files, or -e for non-interactive runs.

Key Type Default Description
name string empty Username created on target (required)
password string empty User password (also used for sudo)
keys list [] SSH public keys for authorized_keys
sudo string empty Custom sudoers rule (optional, per-user)
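A sketch of a user entry that uses the optional per-user sudo key (the sudoers rule text is only an example):

```yaml
system:
  users:
    - name: ops
      password: CHANGE_ME
      keys:
        - "ssh-ed25519 AAAA..."
      sudo: "ops ALL=(ALL) NOPASSWD: /usr/bin/systemctl"
```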

system.root

Key Type Default Description
password string empty Root password

system.luks

LUKS container, unlock, and initramfs-related settings.

Key Type Default Allowed Description
enabled bool false true/false Enable encrypted root workflow
passphrase string empty any string Passphrase used for format/open/enroll
mapper string SYSTEM_DECRYPTED mapper name Mapper name under /dev/mapper
auto bool true true/false Auto-unlock behavior toggle
method string tpm2 tpm2, keyfile Auto-unlock backend when auto=true
keysize int 64 positive int Keyfile size (bytes) for keyfile mode
options string discard,tries=3 crypttab opts Additional crypttab/kernel options
type string luks2 cryptsetup type LUKS format type
cipher string aes-xts-plain64 cipher name Cryptsetup cipher
hash string sha512 hash name Cryptsetup hash
iter int 4000 positive int PBKDF iteration time (ms)
bits int 512 positive int Key size (bits)
pbkdf string argon2id pbkdf name PBKDF algorithm
urandom bool true true/false Use urandom during key generation
verify bool true true/false Verify passphrase during format

system.luks.tpm2

TPM2-specific policy settings used when system.luks.method: tpm2.

Key Type Default Allowed Description
device string auto auto or device path TPM2 device selector
pcrs string/list empty PCR expression PCR binding policy (e.g. "7" or "0+7")
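Two hedged sketches of the auto-unlock modes; keys not shown fall back to the defaults in the tables above.

```yaml
# TPM2 auto-unlock, bound to PCRs 0 and 7
system:
  luks:
    enabled: true
    passphrase: CHANGE_ME
    auto: true
    method: tpm2
    tpm2:
      pcrs: "0+7"
---
# Keyfile auto-unlock with a 64-byte keyfile
system:
  luks:
    enabled: true
    passphrase: CHANGE_ME
    auto: true
    method: keyfile
    keysize: 64
```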

system.features

Feature toggles for optional system configuration.

Key Type Default Allowed Description
cis.enabled bool false true/false Enable CIS hardening role
selinux.enabled bool true true/false SELinux management
firewall.enabled bool true true/false Enable firewall role actions
firewall.backend string firewalld firewalld, ufw Firewall service backend
firewall.toolkit string nftables nftables, iptables Packet filtering toolkit
ssh.enabled bool true true/false SSH service/package management
zstd.enabled bool true true/false zstd related tuning
swap.enabled bool true true/false Swap setup toggle
banner.motd bool false true/false MOTD banner management
banner.sudo bool true true/false Sudo banner management
chroot.tool string arch-chroot arch-chroot, chroot, systemd-nspawn Chroot wrapper command

4.3 hypervisor Dictionary

Key Type Description
type string libvirt, proxmox, vmware, xen, or none
url string Proxmox/VMware API host
username string API username
password string API password
host string Proxmox node name
storage string Proxmox/VMware storage identifier
datacenter string VMware datacenter
cluster string VMware cluster
certs bool TLS certificate validation for VMware
ssh bool VMware: enable SSH on guest and switch connection to SSH

4.4 VMware Guest Operations

When hypervisor.type: vmware is used with the vmware_tools connection plugin, these Ansible connection variables are required.

Variable Description
ansible_vmware_tools_user Guest OS username for guest operations
ansible_vmware_tools_password Guest OS password for guest operations
ansible_vmware_guest_path VM inventory path (/datacenter/vm/folder/name)
ansible_vmware_host vCenter/ESXi hostname
ansible_vmware_user vCenter/ESXi API username
ansible_vmware_password vCenter/ESXi API password
ansible_vmware_validate_certs Enable/disable TLS certificate validation
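A host_vars sketch wiring these up (all hostnames, paths, and credentials are placeholders):

```yaml
# host_vars/app01.example.com.yml
ansible_vmware_tools_user: root
ansible_vmware_tools_password: CHANGE_ME
ansible_vmware_guest_path: /DC1/vm/bootstrap/app01.example.com
ansible_vmware_host: vcenter.example.com
ansible_vmware_user: administrator@vsphere.local
ansible_vmware_password: CHANGE_ME
ansible_vmware_validate_certs: false
```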

4.5 Multi-Disk Schema

system.disks[0] is always the OS disk. Additional entries define data disks.

Key Type Description
size number Disk size in GB (required for virtual installs)
device string Explicit block device (required for physical data disks)
mount.path string Mount point (for additional disks)
mount.fstype string btrfs, ext4, or xfs
mount.label string Optional filesystem label
mount.opts string Mount options (default: defaults)

Virtual install example:

system:
  disks:
    - size: 80
    - size: 200
      mount:
        path: /data
        fstype: xfs
        label: DATA
        opts: defaults,noatime
    - size: 300
      mount:
        path: /backup
        fstype: ext4

Physical install example (device paths required):

system:
  type: physical
  disks:
    - device: /dev/sda
      size: 120
    - device: /dev/sdb
      size: 500
      mount:
        path: /data
        fstype: ext4

4.6 Advanced Partitioning Overrides

Use these only when you need to override the default partition layout logic.

Variable Description Default
partitioning_efi_size_mib EFI system partition size in MiB 512
partitioning_boot_size_mib Separate /boot size in MiB (when used) 1024
partitioning_separate_boot Force a separate /boot partition auto-derived
partitioning_boot_fs_fstype Filesystem for /boot when separate auto-derived
partitioning_use_full_disk Consume remaining VG space for root LV true
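A sketch overriding the auto-derived layout, assuming a separate ext4 /boot is wanted (set only the keys you need to change):

```yaml
# group_vars or host_vars; unset keys keep their defaults
partitioning_efi_size_mib: 1024
partitioning_boot_size_mib: 2048
partitioning_separate_boot: true
partitioning_boot_fs_fstype: ext4
```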

5. How to Use the Playbook

5.1 Prerequisites

  • Ansible installed on the control machine.
  • Inventory file with target systems defined and variables configured.
  • Disposable/non-production targets (the playbook enforces production-safety checks).

5.2 Running the Playbook

Execute the playbook using ansible-playbook, ensuring that all necessary variables are defined either in the inventory, in a vars file, or passed via -e. Credentials (root_password, user_name, user_password, user_public_key) are prompted interactively unless supplied through inventory or extra vars.

ansible-playbook -i inventory.yml main.yml
ansible-playbook -i inventory.yml main.yml -e @vars_example.yml
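For non-interactive runs, the prompted credential variables can be collected in a vars file passed via -e (values below are placeholders; encrypt such a file with Ansible Vault in practice):

```yaml
# credentials.yml
root_password: CHANGE_ME
user_name: ops
user_password: CHANGE_ME
user_public_key: "ssh-ed25519 AAAA..."
```

ansible-playbook -i inventory.yml main.yml -e @credentials.yml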

5.3 Example Usage

Use the bundled example files as starting points for new inventories:

  • inventory_example.yml -- Proxmox virtual setup
  • inventory_libvirt_example.yml -- libvirt virtual setup
  • inventory_baremetal_example.yml -- bare-metal physical setup
  • vars_example.yml -- shared variable overrides
  • vars_baremetal_example.yml -- bare-metal variable overrides

# Proxmox example
ansible-playbook -i inventory_example.yml main.yml

# libvirt example
ansible-playbook -i inventory_libvirt_example.yml main.yml

# Custom inventory with separate vars file
ansible-playbook -i inventory.yml main.yml -e @vars_example.yml

6. Security

To protect sensitive information such as passwords, API keys, and other confidential variables (e.g. hypervisor.password, system.luks.passphrase), use Ansible Vault instead of plaintext inventory files.
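For example, an inline-vaulted value can replace a plaintext password (the ciphertext shown is a truncated placeholder for real ansible-vault encrypt_string output):

```yaml
hypervisor:
  password: !vault |
    $ANSIBLE_VAULT;1.1;AES256
    62313365...
```

Such a block is produced by ansible-vault encrypt_string; whole vars files can be protected with ansible-vault encrypt and supplied at runtime with --ask-vault-pass or a vault password file.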

7. Operational Notes

  • For virtual installs, system.cpus, system.memory, and system.disks[0].size are required and validated.
  • For physical installs, sizing is derived from the detected install drive; set installer access (ansible_user/ansible_password) when the installer environment differs from the prompted user credentials.
  • system.network.dns.servers and system.network.dns.search must be YAML lists.
  • hypervisor.type selects backend-specific provisioning and cleanup behavior.
  • Guest tools are selected automatically by hypervisor: qemu-guest-agent for libvirt/proxmox, open-vm-tools for vmware.
  • With system.luks.method: tpm2 on virtual installs, the virtualization role enables a TPM2 device where supported (libvirt/proxmox/vmware).
  • With LUKS enabled on non-Arch targets, provisioning uses an ESP (512 MiB), a separate /boot (1 GiB), and the encrypted root; adjust sizes via partitioning_efi_size_mib and partitioning_boot_size_mib if needed.
  • For VMware, hypervisor.ssh: true enables SSH on the guest and switches the connection to SSH for the remaining tasks.
  • Molecule is scaffolded with a delegated driver and a no-op converge for lint-only validation.

8. Safety

This playbook intentionally aborts if it detects a non-live/production target. It also refuses to touch pre-existing VMs and only cleans up VMs created in the current run.

Always run lint after changes:

ansible-lint