Build a Kubernetes cluster using K3s via Ansible

Author: https://github.com/itwars
Current Maintainer: https://github.com/dereknola

Easily bring up a cluster on machines running:

  • Debian
  • Ubuntu
  • Raspberry Pi OS
  • RHEL Family (CentOS, Red Hat, Rocky Linux...)
  • SUSE Family (SLES, openSUSE Leap, Tumbleweed...)
  • ArchLinux

on processor architectures:

  • x64
  • arm64
  • armhf

System requirements

The control node must have Ansible 8.0+ (ansible-core 2.15+).

All managed nodes in inventory must have:

  • Passwordless SSH access
  • Root access (or a user with equivalent permissions)

It is also recommended that all managed nodes disable firewalls and swap. See K3s Requirements for more information.


Usage

First, copy the sample inventory to inventory.yml:

cp inventory-sample.yml inventory.yml

Second, edit the inventory file to match your cluster setup. For example:
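A minimal illustrative inventory is sketched below. The host IPs, user, token, and version string are placeholders, not defaults; see inventory-sample.yml for the full set of supported variables.

```yaml
# Illustrative only -- IPs, user, token, and version are placeholders.
k3s_cluster:
  children:
    server:
      hosts:
        192.168.1.10:
    agent:
      hosts:
        192.168.1.11:
        192.168.1.12:
  vars:
    ansible_user: ubuntu
    k3s_version: v1.29.3+k3s1
    token: "changeme!"
```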


If needed, you can also edit the vars section at the bottom to match your environment.

If multiple hosts are in the server group, the playbook will automatically set up K3s in HA mode with embedded etcd. An odd number of server nodes is required (3, 5, 7). Read the official documentation for more information.
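For example, listing three hosts under the server group (hypothetical IPs) is enough to trigger HA mode with embedded etcd:

```yaml
# Illustrative HA layout -- three servers, one agent (IPs are placeholders).
k3s_cluster:
  children:
    server:
      hosts:
        192.168.1.10:
        192.168.1.11:
        192.168.1.12:
    agent:
      hosts:
        192.168.1.13:
```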

Setting up a loadbalancer or VIP beforehand to use as the API endpoint is possible but not covered here.

Start provisioning of the cluster using the following command:

ansible-playbook playbook/site.yml -i inventory.yml


Upgrading

A playbook is provided to upgrade K3s on all nodes in the cluster. To use it, update k3s_version with the desired version in inventory.yml and run:

ansible-playbook playbook/upgrade.yml -i inventory.yml
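Concretely, the upgrade consists of bumping k3s_version in the inventory vars before running the playbook above (the version string here is illustrative, not a recommendation):

```yaml
k3s_cluster:
  vars:
    k3s_version: v1.29.4+k3s1   # illustrative target version
```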

Airgap Install

Airgap installation is supported via the airgap_dir variable. This variable should be set to the path of a directory containing the K3s binary and images. The release artifacts can be downloaded from the K3s Releases. You must download the appropriate images for your architecture (any of the compression formats will work).

An example folder for an x86_64 cluster:

$ ls ./playbook/my-airgap/
total 248M
-rwxr-xr-x 1 $USER $USER  58M Nov 14 11:28 k3s
-rw-r--r-- 1 $USER $USER 190M Nov 14 11:30 k3s-airgap-images-amd64.tar.gz

$ cat inventory.yml
airgap_dir: ./my-airgap # Paths are relative to the playbook directory

Additionally, if deploying on an OS with SELinux, you will also need to download the latest k3s-selinux RPM and place it in the airgap folder.

It is assumed that the control node has access to the internet. The playbook will automatically download the k3s install script on the control node, and then distribute all three artifacts to the managed nodes.


Kubeconfig

After successful bringup, the kubeconfig of the cluster is copied to the control node and merged with ~/.kube/config under the k3s-ansible context. Assuming you have kubectl installed, you can confirm access to your Kubernetes cluster with the following:

kubectl config use-context k3s-ansible
kubectl get nodes

If you wish for your kubeconfig to be copied elsewhere and not merged, you can set the kubeconfig variable in inventory.yml to the desired path.
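For instance, pointing the kubeconfig variable at a standalone file (the path below is an arbitrary example) makes the playbook copy the file there instead of merging it:

```yaml
k3s_cluster:
  vars:
    kubeconfig: /tmp/k3s-ansible.yaml   # illustrative path; kubeconfig is copied here, not merged
```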

Local Testing

A Vagrantfile is provided that provisions a 5-node cluster using Vagrant (libvirt or VirtualBox as the provider). To use it:

vagrant up

By default, each node is given 2 cores and 2GB of RAM and runs Ubuntu 20.04. You can customize these settings by editing the Vagrantfile.

Need More Features?

This project is intended to provide a "vanilla" K3s install. If you need more features, such as:

  • Private Registry
  • Advanced Storage (Longhorn, Ceph, etc)
  • External Database
  • External Load Balancer or VIP
  • Alternative CNIs

See these other projects: