
Under Construction 🚧

Build a Kubernetes cluster using k3s via Ansible

Author: https://github.com/itwars

K3s Ansible Playbook

Build a Kubernetes cluster using Ansible with k3s. The goal is to easily install a Kubernetes cluster on machines running:

  • Debian
  • Ubuntu
  • CentOS
  • ArchLinux

on the following processor architectures:

  • x64
  • arm64
  • armhf

System requirements

The deployment environment must have Ansible 2.4.0 or newer installed.
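
You can verify the installed version with the standard Ansible CLI:

ansible --version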

All nodes in inventory must have:

  • Passwordless SSH access
  • Root access (or a user with equivalent permissions)

It is also recommended to disable firewalls and swap on all nodes. See the K3s Requirements documentation for more information.
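
If a node does not yet allow passwordless SSH, the standard OpenSSH ssh-copy-id tool is one way to set it up (shown here only as an illustration; substitute your own user and host):

ssh-copy-id <user>@<node-ip>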

Usage

First, copy the sample inventory to inventory.yml:

cp inventory-sample.yml inventory.yml

Second, edit the inventory file to match your cluster setup. For example:

k3s_cluster:
  children:
    server:
      hosts:
        192.16.35.11:
    agent:
      hosts:
        192.16.35.12:
        192.16.35.13:

If needed, you can also edit the vars section at the bottom of the inventory to match your environment.
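
For illustration only, a vars block might look like the sketch below; treat the variable names and values as assumptions and take the authoritative list from inventory-sample.yml:

  vars:
    ansible_user: debian          # remote user with passwordless sudo (assumption)
    k3s_version: v1.27.7+k3s1     # hypothetical release tag, pick a real one
    token: "changeme!"            # shared cluster token (assumption)
    api_endpoint: 192.16.35.11    # address agents use to reach the first server (assumption)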

If multiple hosts are in the server group, the playbook will automatically set up k3s in HA mode with embedded etcd. An odd number of server nodes is required (3, 5, 7). Read the official documentation for more information and options: https://rancher.com/docs/k3s/latest/en/installation/ha-embedded/. Using a load balancer or VIP as the API endpoint is preferred, but that is not covered here.
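
As a sketch, enabling HA is just a matter of listing an odd number of hosts under the server group; the addresses below are placeholders:

k3s_cluster:
  children:
    server:
      hosts:
        192.16.35.11:
        192.16.35.12:
        192.16.35.13:
    agent:
      hosts:
        192.16.35.14: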

Start provisioning of the cluster using the following command:

ansible-playbook playbook/site.yml -i inventory.yml
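
If your remote user requires a sudo password, you can append the standard --ask-become-pass flag (not specific to this playbook):

ansible-playbook playbook/site.yml -i inventory.yml --ask-become-pass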

Upgrading

A playbook is provided to upgrade k3s on all nodes in the cluster. To use it, update k3s_version with the desired version in inventory.yml and run:

ansible-playbook playbook/upgrade.yml -i inventory.yml
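
For example, the only change needed in inventory.yml is the version variable; the value below is hypothetical, use a real k3s release tag:

  vars:
    k3s_version: v1.28.3+k3s1   # hypothetical target release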

Kubeconfig

After a successful bringup, the kubeconfig of the cluster is copied to the control node and set as the default (~/.kube/config). Assuming you have kubectl installed, confirm access to your Kubernetes cluster with the following:

kubectl get nodes
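
To further confirm that the core components are running, the standard kubectl command below (not specific to this playbook) lists the system pods:

kubectl get pods -n kube-system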

Local Testing

A Vagrantfile is provided that provisions a 5-node cluster using Vagrant with LibVirt or VirtualBox. To use it:

vagrant up

By default, each node is given 2 cores and 2GB of RAM and runs Ubuntu 20.04. You can customize these settings by editing the Vagrantfile.
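
Standard Vagrant commands can be used to pick a provider explicitly or to tear the cluster back down, for example:

vagrant up --provider=libvirt
vagrant destroy -f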