Compare commits


411 Commits
v0.1.3 ... main

Author SHA1 Message Date
paradon 19c24bd503
Add scan for running control nodes when choosing primary control node (#219)
Signed-off-by: Thomas Matysik <thomas@matysik.co.nz>
2024-01-26 15:15:15 -05:00
fragpit 0c0d3bb38d
kubectl commands on node must use short name (#220)
Co-authored-by: Igor Tretyak <itretyak@ptsecurity.com>
2024-01-26 15:09:58 -05:00
davidg cfd9400edf
Containerd registries config not live (#222)
I found a bug where my custom containerd registries config wasn't live,
despite the correct `notify` handlers being specified in the
'Ensure containerd registries file exists' task.

This change fixes that by ensuring the handlers get triggered.
2024-01-26 15:08:18 -05:00
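For context, a minimal sketch of the notify/handler pattern this fix depends on; task and handler names here are illustrative rather than the role's exact ones:

```yaml
# Sketch: a config-file task must notify a restart handler for changes to go live
- name: Ensure containerd registries file exists
  ansible.builtin.template:
    src: registries.yaml.j2
    dest: /etc/rancher/k3s/registries.yaml
    mode: "0600"
  notify: Restart k3s  # if this handler never fires, the file changes but k3s keeps the old config

# Corresponding handler (handlers/main.yml)
- name: Restart k3s
  ansible.builtin.service:
    name: k3s
    state: restarted
```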
Devin Buhl 6b258763be
Update k3s killall and uninstall scripts (#217)
* Update k3s killall and uninstall scripts

* Update k3s-uninstall.sh.j2

* Update k3s-uninstall.sh.j2
2023-12-09 09:01:16 -05:00
fragpit b87991cc28
Compare `kubectl get nodes` with ansible_hostname, not ansible_fqdn/i… (#212)
Co-authored-by: Igor Tretyak <itretyak@ptsecurity.com>
2023-12-09 08:32:47 -05:00
matteyeux 37fda0a953
add support for experimental option "prefer-bundled-bin" (#214) 2023-10-27 11:22:57 -04:00
Xan Manning 37cca2e487
Merge pull request #208 from matteyeux/main
Create registries.yaml if k3s_registries.mirrors or k3s_registries.configs are not None
2023-06-17 11:36:15 +01:00
Xan Manning 41b938c8e7
Merge pull request #207 from PyratLabs/static-pods
fix: static pods should be deployed to all control nodes
2023-06-17 11:34:52 +01:00
matteyeux cc64737bdc Create registries.yaml only if k3s_registries.mirrors or k3s_registries.configs are not empty 2023-06-01 14:34:17 +02:00
Devin Buhl 3f1d2da21b
fix: static pods should be deployed to all control nodes
Signed-off-by: Devin Buhl <devin@buhl.casa>
2023-05-31 20:39:15 -04:00
Xan Manning 44635027ce
chore(changelog): update with latest releases 2023-05-17 21:11:04 +01:00
Daniel Brennand de1bd094e5
Fix(tests): Resolve Ansible Lint warnings and fix Molecule tests on GitHub Actions (#202)
* fix(ansible-lint): FQDN and `name`

* fix(ansible-lint): add `name` and FQDN for module call

* fix(ansible-lint): add `name` to tasks and FQDN for module

* fix(ansible-lint): add task `name` and FQDN for module calls

* fix(ansible-lint): last `include_tasks`

* fix(ansible-lint): add task names and FQDN

* refactor: `Ensure` to `Run`

* [skip ci]refactor: add exist and separate ensure installed node task, mention build cluster

* [skip ci]refactor: Pipe separator

* [skip ci]refactor: run

* refactor: remove quotes as other files don't use them

For templated vars in task name

* [skip ci]refactor: task names, use `Run`

* [skip ci]refactor: use variable name in task name

* [skip ci]refactor: task names

* [skip ci]refactor: add service mgr in task name

* [skip ci]refactor: add task names and module FQDNs

* [skip ci]refactor: fix task name

* [skip ci]refactor: add -

* [skip ci]refactor: include task names and FQDNs

* [skip ci]refactor: add task names and FQDNs

* [skip ci]: ignore `name[template]`

* refactor: `when` clause for `block` should be before `block`

* fix: https://github.com/ansible-community/molecule/issues/3883

* refactor: molecule lint command was removed in version `5.0.0`

Use separate CI job step to run linting instead.

* [skip ci]refactor: noqa for command tasks

Subject to change

* refactor: use Ubuntu 22.04

Suspect issues with Molecule tests are related to cgroups v2.
2023-05-13 09:49:39 -04:00
Daniel Brennand 0cc1e48902
Refactor/remove-secret-encryption-experimental (#201)
* refactor: `secrets-encryption` is no longer experimental

Resolves #200

* docs(fix): typo

* docs(refactor): update CHANGELOG

* fix: add `until`

* docs(refactor): modify changelog refactor
2023-05-02 15:48:34 -04:00
Xan Manning 13db5d26f8
Merge branch 'main' of github.com:PyratLabs/ansible-role-k3s into main 2022-11-15 17:50:21 +00:00
Xan Manning 3f200f2bd7
docs(changelog): updated for v3.3.1 release 2022-11-15 17:50:09 +00:00
Xan Manning 404491c938
Merge pull request #198 from Jonaprince/patch-1
Fixes #197: fix indentation length in registry.yaml
2022-11-15 17:48:07 +00:00
Jonaprince 75b40675d8
Fixes #197: fix indentation length in registry.yaml
Fix the issue of bad indentation in rewrite rules when using a registry pull-through cache
2022-11-14 10:19:52 +01:00
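For reference, a hypothetical registries.yaml mirror entry showing the indentation this fix restores: the `rewrite` map must sit under the mirror alongside `endpoint` (the host and rule below are placeholders):

```yaml
mirrors:
  docker.io:
    endpoint:
      - "https://mirror.example.com:5000"
    rewrite:
      "^library/(.*)": "pull-through/library/$1"
```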
Xan Manning 80e4debcd4
docs(changelog): updated for v3.3.0 2022-09-11 11:27:30 +01:00
Xan Manning c28e03b97f
Merge pull request #193 from PyratLabs/fix/ensure-release-check-can-be-debugged
fix(version): ensure log output provided when version lookup fails
2022-09-11 11:24:49 +01:00
Xan Manning 01616dcd96
fix(systemd): updated unit file 2022-09-11 10:35:44 +01:00
Xan Manning 8410d2c402
WIP(molecule): snapshotter defaulted 2022-09-04 14:31:17 +01:00
Xan Manning a6b209abdb
fix(molecule): skip post checks for now 2022-09-02 18:59:19 +01:00
Xan Manning e9ddc8738a
fix(post-check): shorten node check delay to 5 seconds 2022-09-02 18:49:42 +01:00
Xan Manning 1d29570fc9
fix(molecule): skip post checks on hadb 2022-09-02 18:22:20 +01:00
Xan Manning 561d67cd08
fix(version): ensure log output provided when version lookup fails 2022-09-02 18:09:23 +01:00
Xan Manning dae3eb928e
Merge pull request #194 from PyratLabs/fix/linting
fix(linting): ensure tests pass
2022-09-02 18:08:49 +01:00
Xan Manning 21fe3bccbf
feat(post-checks): add option to skip post-checks 2022-09-02 18:02:06 +01:00
Xan Manning 25a17b8511
fix(linting): ensure tests pass 2022-09-01 20:39:17 +01:00
Xan Manning d38f344937 chore: update changelog for release 2022-06-17 15:41:16 +00:00
Xan Manning 78cf2c1866
Merge pull request #185 from PyratLabs/feat/alpine-support 2022-06-17 16:18:52 +01:00
Xan Manning e774918812 fix: disable native snapshotter for standalone 2022-06-17 08:27:24 +00:00
Xan Manning 6f1cb8e904 fix: systemd env vars and openrc service file 2022-06-15 22:31:49 +00:00
Xan Manning e6cb2a91e8 fix: autodeploy fix 2022-06-15 21:36:14 +00:00
Xan Manning 5bebced657 fix: control plane start retries 2022-06-15 21:03:39 +00:00
Xan Manning c1341eb62c feat(gha): remove fail-fast on ci 2022-06-15 20:33:33 +01:00
Xan Manning 13ed1336d9 fix: service handler missing from ansible handler 2022-06-15 20:28:13 +01:00
Xan Manning 5f560137f4 fix(alpine): testing in molecule and rename service 2022-06-15 18:45:54 +01:00
Xan Manning 910b611058 WIP(alpine): trying to find a container image that supports openrc 2022-06-15 15:14:59 +01:00
Xan Manning f3640e5c9f WIP(molecule): default image no longer prebuilt to support alpine 2022-06-15 15:14:59 +01:00
Xan Manning 291b7763b4
Merge pull request #190 from PyratLabs/niklasweimann-main
Niklasweimann main
2022-06-15 15:12:01 +01:00
Xan Manning 86a9f25325 fix(cluster-token): cluster tokens can now be specified without breaking configurations where cluster tokens are auto-generated 2022-05-29 18:55:01 +01:00
Niklas Weimann 503e3ccc3f Fix check for k3s_token_location 2022-05-16 11:28:24 +02:00
Xan Manning 818676e449 docs(changelog): release notes for 3.1.2 2022-05-02 17:55:25 +01:00
Xan Manning 87551613d4
Merge pull request #184 from PyratLabs/fix/molecule-tests
fix(molecule): fix tests by ensuring fqcn is specified
2022-05-02 17:14:18 +01:00
Xan Manning 03bc3aec5b fix(molecule): fix tests by ensuring fqcn is specified 2022-05-02 17:13:12 +01:00
Xan Manning e20195fe56 chore(release): update changelog 2022-02-18 14:16:56 +00:00
Xan Manning 4387b3d12e
Merge pull request #179 from eaglesemanation/debian11-nftables
fix: Support nftables for Debian 11
2022-02-18 14:14:58 +00:00
Xan Manning dc0f8c3a83 fix(molecule): fixed testing with load-balancers 2022-02-17 20:43:24 +00:00
Vladimir Romashchenko d1f61bf866
fix: Support nftables for Debian 11 2022-02-15 15:57:43 -05:00
Xan Manning 6550071e43 chore(release): updated release notes for version bump 2022-01-30 14:08:16 +00:00
Xan Manning 594606d420
Merge pull request #177 from kossmac/main
use basename of url for items in k3s_server_manifests_urls and k3s_se…
2022-01-30 14:04:37 +00:00
Karsten Kosmala 1475d1724d
add missing bracket
Co-authored-by: Xan Manning <244186+xanmanning@users.noreply.github.com>
2022-01-30 12:08:34 +01:00
Karsten Kosmala 80eca60031
add missing bracket
Co-authored-by: Xan Manning <244186+xanmanning@users.noreply.github.com>
2022-01-30 12:08:22 +01:00
Karsten Kosmala 424145881c use basename of url for items in k3s_server_manifests_urls and k3s_server_pod_manifests_urls if filename is not provided
Signed-off-by: Karsten Kosmala <kosmala@cosmocode.de>
2022-01-20 11:13:32 +01:00
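A hedged sketch of the variable shape this commit describes; the URLs and filenames are placeholders, and `filename` is optional with the URL basename used as the fallback:

```yaml
k3s_server_manifests_urls:
  - url: https://example.com/manifests/monitoring.yaml
    filename: monitoring.yaml  # explicit name
  - url: https://example.com/manifests/metallb.yaml
    # no filename given, so the basename "metallb.yaml" is used
```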
Xan Manning 3be9eff967
Merge pull request #174 from xlejo/fix_become_documentation
Rename `k3s_become_for_all` to `k3s_become`.
2022-01-07 19:24:49 +00:00
Alejo Diaz 410a5bf009
Rename `k3s_become_for_all` to `k3s_become`. 2022-01-07 13:34:40 -03:00
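A minimal playbook sketch using the renamed variable (the host group name is illustrative):

```yaml
- hosts: k3s_cluster
  roles:
    - role: xanmanning.k3s
      vars:
        k3s_become: true
```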
Xan Manning 252b87bf65 chore(changelog): updates for release 3.0.1 2022-01-06 20:57:23 +00:00
Xan Manning 1fa910f931 fix(readme): typo 2022-01-06 20:53:53 +00:00
Xan Manning 2e5dd3cc07 docs(readme): note about ansible_python_interpreter 2022-01-06 20:52:48 +00:00
Xan Manning e7693c5d2f
Merge pull request #173 from xlejo/add_become_to_pre_checks_packages
Adding become to pre checks packages
2022-01-06 20:50:14 +00:00
Alejo Diaz 4f0bb3f9a7
Adding become to pre checks packages
To make sure that system packages are found with `which` in
distributions like Debian for example.
2022-01-06 09:49:26 -03:00
Xan Manning 473f3943d2
Merge pull request #170 from PyratLabs/v3_release
V3 Release
2022-01-02 22:21:45 +00:00
Xan Manning 7e9292c01b fix(become): only one variable needed for become 2022-01-02 22:20:14 +00:00
Xan Manning a88d27d2ae feat: Remove Docker install tasks 2022-01-02 22:19:51 +00:00
Xan Manning 377565de96 fix(airgap): moved from vars to defaults 2022-01-02 21:10:48 +00:00
Xan Manning 3be75a8296
Merge pull request #165 from crutonjohn/feat/air-gap
Feature: Air Gap Installation
2022-01-02 20:38:59 +00:00
Xan Manning b9b2a8e054 chore(changelog): release notes for changelog 2021-12-27 13:19:32 -05:00
Andrew Chen 59af276c72 fix typo 2021-12-27 13:19:32 -05:00
Xan Manning 2f7d6af51d chore(changelog): updates 2021-12-27 13:19:32 -05:00
Xan Manning 20468734a0 fix(systemd): templating error for environment vars 2021-12-27 13:19:32 -05:00
Xan Manning e983629167 fix(gha): do not continue on error 2021-12-27 13:19:32 -05:00
Xan Manning 0873fc4977 fix(rootless): attempt to resolve rootless issues in debian #161 2021-12-27 13:19:32 -05:00
Xan Manning 0fa1ef29a9 fix(start): annoying behaviour where k3s won't start as a single node in ha etcd #152 2021-12-27 13:19:32 -05:00
Xan Manning e457854046 fix(validation): distribution and version for packages 2021-12-27 13:19:32 -05:00
Xan Manning cc8ba00de2 feat(validate): package check for iptables on debian 2021-12-27 13:19:32 -05:00
Xan Manning 592b294ad8 fix(systemd): tests can continue on error, missing create function on lineinfile 2021-12-27 13:19:32 -05:00
Xan Manning 9349c9456d feat(systemd): added molecule tests for #164 2021-12-27 13:19:32 -05:00
Xan Manning 582a696918 feat(systemd): unit file allows environment variables to be defined #164 2021-12-27 13:19:32 -05:00
Xan Manning 987bc700a1 docs(readme): missing variable documentation 2021-12-27 13:19:32 -05:00
janar153 d9d8bbeece Update main.yml 2021-12-27 13:19:32 -05:00
janar153 5288de9db1 Update main.yml 2021-12-27 13:19:32 -05:00
Xan Manning df51a8aaec fix(molecule): fix rockylinux test, add debian11 fix snapshotter 2021-12-27 13:19:32 -05:00
Xan Manning a4cbc4d68d chore(changelog): release notes for changelog 2021-12-23 08:48:10 +00:00
Xan Manning 41a13ca2f7
Merge pull request #169 from andrewtheguy/fixtypo
fix typo
2021-12-23 08:34:02 +00:00
Andrew Chen ce4ad4dc0b fix typo 2021-12-22 22:18:42 -08:00
Xan Manning 58f4de5481 chore(changelog): updates 2021-12-20 21:58:00 +00:00
Xan Manning c287bef9cd
Merge pull request #167 from PyratLabs/multiple-bugfixes-and-features
Multiple bugfixes and features
2021-12-20 21:46:53 +00:00
Xan Manning 59f0a2152e fix(systemd): templating error for environment vars 2021-12-20 21:34:15 +00:00
Xan Manning 191d51bce6 fix(gha): do not continue on error 2021-12-20 21:17:43 +00:00
Xan Manning 2a282c0ae2 fix(rootless): attempt to resolve rootless issues in debian #161 2021-12-20 21:14:23 +00:00
Xan Manning 677db09b4a fix(start): annoying behaviour where k3s won't start as a single node in ha etcd #152 2021-12-20 21:06:10 +00:00
Xan Manning 4c20fd3f0b fix(validation): distribution and version for packages 2021-12-20 20:18:38 +00:00
Xan Manning 1eaeba67b5 feat(validate): package check for iptables on debian 2021-12-19 21:41:59 +00:00
Xan Manning 09abfd2cba fix(systemd): tests can continue on error, missing create function on lineinfile 2021-12-19 19:13:48 +00:00
Xan Manning ccfa561be0 feat(systemd): added molecule tests for #164 2021-12-19 19:02:31 +00:00
Xan Manning 0c77eb143d feat(systemd): unit file allows environment variables to be defined #164 2021-12-19 18:59:42 +00:00
Xan Manning 4269e25e6b
Merge pull request #166 from PyratLabs/multiple-bugfixes-and-features
fix(molecule): fix rockylinux test, add debian11 fix snapshotter
2021-12-19 18:39:31 +00:00
Xan Manning dd341f6f10 docs(readme): missing variable documentation 2021-12-18 23:08:36 +00:00
Xan Manning 01b914985a Merge branch 'main' into multiple-bugfixes-and-features 2021-12-18 23:06:39 +00:00
Xan Manning 0f143962a1
Merge pull request #163 from janar153/main
Added option to change K3s updates API URL
2021-12-18 23:06:11 +00:00
Xan Manning 80f591cba4 fix(molecule): fix rockylinux test, add debian11 fix snapshotter 2021-12-18 23:04:24 +00:00
Curtis John dd3c460bfa
feat(airgap): skip evaluations that aren't relevant to airgap
checking release version and tasks that depend on that check do not need to function since we won't
be aware of the version in an airgapped deployment
2021-12-15 16:43:51 -05:00
Curtis John 825ed3ad37
docs(readme): user warning regarding use of airgap install 2021-12-15 12:23:44 -05:00
Curtis John f7c0c8783a
feat(airgap): airgap should not verify version information
in an air gapped environment the machine will not be able to check sha checksums or information
around the binary so we should ignore the tasks in that scenario
2021-12-15 12:15:25 -05:00
Curtis John 8243baa3d9
feat(airgap): airgap should not verify version information
in an air gapped environment the machine will not be able to check sha checksums or information
around the binary so we should ignore the tasks in that scenario
2021-12-15 12:14:24 -05:00
Curtis John 25d40cec52
style(airgap): task name should reflect action taken 2021-12-15 12:11:25 -05:00
Curtis John 779968ca0a
chore(airgap): remove unused var 2021-12-15 12:08:56 -05:00
Curtis John b8727a1c92
chore(airgap): noting future work 2021-12-14 17:45:20 -05:00
Curtis John 4bcf3ea9c4
fix(airgap): hotwire k3s version var to end of binary name
this is to allow the role to proceed as if the binary was downloaded as expected from the web
2021-12-14 17:33:31 -05:00
Curtis John e88f3bb056
feat(airgap): init airgap feature
airgap installs allow users to deploy k3s in a situation where the server is not internet connected
and therefore unable to download anything externally
2021-12-14 17:16:19 -05:00
janar153 29658aeb2e
Update main.yml 2021-11-12 12:24:23 +02:00
janar153 33a18bb517
Update main.yml 2021-11-12 12:23:55 +02:00
Xan Manning ea413afa3a chore(release): updated changelog 2021-10-10 14:17:27 +01:00
Xan Manning da13cc696a docs(quickstart): fixed permissions issue seen in #157 2021-10-10 14:10:52 +01:00
Xan Manning db3f7da362 fix(uninstall): deprecated drain flag removed in 1.22
fixes #159
2021-10-10 14:07:04 +01:00
Xan Manning 765fbf2e9b chore(release): bump version 2021-09-08 19:23:45 +01:00
Xan Manning c47688e05c
Merge pull request #150 from PyratLabs/feat/feature-flag-checks
feat: check for etcd-s3-bucket config and added ipv6 documentation
2021-09-08 19:19:05 +01:00
Xan Manning 3274c7e6e0 feat: check for etcd-s3-bucket config and added ipv6 documentation 2021-09-08 19:12:33 +01:00
Xan Manning 25ca0ed8f7
Merge pull request #149 from onedr0p/main
feat: implement config.yaml.d
2021-09-08 19:03:17 +01:00
Devin Buhl 0384dfcb4f
feat: implement config.yaml.d 2021-09-06 08:54:33 -04:00
Devin Buhl 207fbbd41a
feat: implement config.yaml.d 2021-09-06 08:47:37 -04:00
Devin Buhl 9db46b536d
feat: implement config.yaml.d 2021-09-06 08:46:49 -04:00
Xan Manning 83290e050c chore: version bump 2021-08-18 21:13:04 +01:00
Xan Manning 189f2baf23
Merge pull request #142 from PyratLabs/fix-k3s_runtime_config
Fix: Define registration address from node-ip
2021-08-18 21:08:42 +01:00
Xan Manning 077c9a3fd6 bugfix: k3s_runtime_config 2021-08-18 20:44:06 +01:00
Xan Manning 1780b5a20f Merge branch 'main' of github.com:PyratLabs/ansible-role-k3s into main 2021-08-14 14:18:39 +01:00
Xan Manning cc86f35d9b version bump 2021-08-14 14:18:29 +01:00
Xan Manning dc2bd28e10
Merge pull request #139 from abelfodil/main
Add advertised address
2021-08-14 14:16:40 +01:00
Xan Manning f198b45d58 used combined configuration from vars.yaml, removed duplicated task for control plane 2021-08-14 14:04:56 +01:00
Anes Belfodil c0ec5ca930
Add advertised_address 2021-08-09 17:53:28 -04:00
Xan Manning 8c0c586607 Updated CHANGELOG for release 2021-07-24 18:02:07 +01:00
Xan Manning 3b26d24212
Merge pull request #138 from PyratLabs/bugfix-token_path_required
Updated systemd template to use token when joining a cluster
2021-07-24 18:00:09 +01:00
Xan Manning ba113bcd05 Fix primary control node delegation 2021-07-24 17:38:45 +01:00
Xan Manning e90448f40b Updated systemd template to use token when joining a cluster 2021-07-24 17:21:31 +01:00
Xan Manning 4e713918a7 Version bump 2021-07-21 20:34:10 +01:00
Xan Manning 3b5c6e6ff5
Merge pull request #136 from Yajo/patch-1
fix: do ignore etcd member count when uninstalling
2021-07-21 20:29:31 +01:00
Xan Manning d2968d5f42
Merge pull request #135 from Yajo/fix-jinja2-native
fix: restore clustering and avoid failure with jinja2_native=true
2021-07-21 20:28:31 +01:00
Yajo 4b42a9bf49 fix: restore clustering feature
For some weird reason, string booleans were set on `k3s_control_node` and `k3s_primary_control_node`, making their behavior non-obvious (for python `bool("false") == True`).

This fixes that problem, and BTW restores the ability to create clusters, which got lost with this bug.

After running the role against a cluster, see:

```sh
❯ ansible -i inventories/testing.yaml k8s_node -m command -ba 'kubectl get node'
vm0 | CHANGED | rc=0 >>
NAME   STATUS   ROLES                       AGE     VERSION
vm0    Ready    control-plane,etcd,master   9m19s   v1.21.2+k3s1
vm2 | CHANGED | rc=0 >>
NAME   STATUS   ROLES                       AGE     VERSION
vm2    Ready    control-plane,etcd,master   9m22s   v1.21.2+k3s1
vm1 | CHANGED | rc=0 >>
NAME   STATUS   ROLES                       AGE     VERSION
vm1    Ready    control-plane,etcd,master   9m22s   v1.21.2+k3s1
```

Now, after the patch:

```sh
❯ ansible -i inventories/testing.yaml k8s_node -m command -ba 'kubectl get node'
vm0 | CHANGED | rc=0 >>
NAME   STATUS   ROLES                       AGE    VERSION
vm0    Ready    control-plane,etcd,master   2m2s   v1.21.2+k3s1
vm1    Ready    control-plane,etcd,master   58s    v1.21.2+k3s1
vm2    Ready    control-plane,etcd,master   80s    v1.21.2+k3s1
vm1 | CHANGED | rc=0 >>
NAME   STATUS   ROLES                       AGE    VERSION
vm0    Ready    control-plane,etcd,master   2m2s   v1.21.2+k3s1
vm1    Ready    control-plane,etcd,master   58s    v1.21.2+k3s1
vm2    Ready    control-plane,etcd,master   80s    v1.21.2+k3s1
vm2 | CHANGED | rc=0 >>
NAME   STATUS   ROLES                       AGE    VERSION
vm0    Ready    control-plane,etcd,master   2m2s   v1.21.2+k3s1
vm1    Ready    control-plane,etcd,master   58s    v1.21.2+k3s1
vm2    Ready    control-plane,etcd,master   80s    v1.21.2+k3s1
```

@Tecnativa TT2541
2021-07-21 12:37:17 +00:00
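To illustrate the root cause: in Python any non-empty string is truthy, so a quoted "false" evaluates as true. A hedged group_vars sketch of the correct form:

```yaml
# Correct: YAML booleans
k3s_control_node: true
k3s_primary_control_node: false

# Broken: bool("false") == True in Python, so this quoted form acts as true
# k3s_control_node: "false"
```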
Jairo Llopis 142b40f428
fix: do ignore etcd member count when uninstalling
Otherwise, when completely uninstalling the etcd-enabled cluster, it fails with:

```
TASK [xanmanning.k3s : Check the conditions when embedded etcd is defined] ***************************************
fatal: [vm0]: FAILED! => {
    "assertion": "(((k3s_controller_list | length) % 2) == 1)",
    "changed": false,
    "evaluated_to": false,
    "msg": "Etcd should have a minimum of 3 defined members and the number of members should be odd. Please see notes about HA in README.md"
}
fatal: [vm1]: FAILED! => {
    "assertion": "(((k3s_controller_list | length) % 2) == 1)",
    "changed": false,
    "evaluated_to": false,
    "msg": "Etcd should have a minimum of 3 defined members and the number of members should be odd. Please see notes about HA in README.md"
}
fatal: [vm2]: FAILED! => {
    "assertion": "(((k3s_controller_list | length) % 2) == 1)",
    "changed": false,
    "evaluated_to": false,
    "msg": "Etcd should have a minimum of 3 defined members and the number of members should be odd. Please see notes about HA in README.md"
}
```
2021-07-21 12:56:09 +01:00
Yajo 05e62b6344 fix: avoid failure with jinja2_native=true
If you run the role on an ansible configured with that setting, it will fail with:

    fatal: [vm0]: FAILED! => {"msg": "Unexpected templating type error occurred on ({% for host in ansible_play_hosts_all %}\n{% filter string %}\n{% filter replace('\\n', ' ') %}\n{{ host }}\n@@@\n{{ hostvars[host].ansible_host | default(hostvars[host].ansible_fqdn) }}\n@@@\nC_{{ hostvars[host].k3s_control_node }}\n@@@\nP_{{ hostvars[host].k3s_primary_control_node | default(False) }}\n{% endfilter %}\n{% endfilter %}\n@@@ END:{{ host }}\n{% endfor %}): sequence item 4: expected str instance, bool found"}
2021-07-19 09:26:57 +00:00
Xan Manning 0c084531d2
Merge pull request #133 from Yajo/patch-1
fix: typo
2021-07-16 20:24:53 +01:00
Jairo Llopis b8539cd82e fix: typo 2021-07-16 09:21:55 +00:00
Xan Manning 2da5738452 Updated README with current k3s supported OS 2021-06-22 20:39:38 +01:00
Xan Manning 8dab5e6f26 Bumped up Ansible version for testing 2021-06-22 20:29:49 +01:00
Xan Manning 7607bfb7a9 Updated test images 2021-06-22 20:28:23 +01:00
Xan Manning f46450319b Update changelog 2021-05-30 21:05:03 +01:00
Xan Manning 10d11c63ec
Merge pull request #126 from mrobinsn/main
Case insensitive control node lookup
2021-05-30 21:00:45 +01:00
Michael Robinson 3006716f66
Case insensitive control node lookup 2021-05-29 14:26:50 -06:00
Xan Manning 730edbf6cb Skip downloads in check-mode 2021-05-27 19:31:28 +01:00
Xan Manning e5b9e5a78a Updated CHANGELOG and molecule tests 2021-05-27 18:13:55 +00:00
Xan Manning c36c026783
Merge pull request #124 from onedr0p/manifest-urls
feat: add support for specifying URLs in templates
2021-05-27 17:55:56 +01:00
ᗪєνιη ᗷυнʟ e7374757fa
fix: task item name 2021-05-27 11:58:45 -04:00
ᗪєνιη ᗷυнʟ 51de880c0f
fix: use k3s_server_pod_manifests_dir for static pod urls 2021-05-27 11:57:42 -04:00
Devin Buhl b7210af4e9
fix: update README 2021-05-26 18:11:12 -04:00
Devin Buhl 2e629838f1
feat: add support for specifying URLs in templates 2021-05-26 18:07:22 -04:00
Xan Manning 7f0eb60a14
Merge pull request #120 from bjw-s/staticpods
Allow control plane static pods
2021-05-26 18:05:25 +01:00
Bᴇʀɴᴅ Sᴄʜᴏʀɢᴇʀs 32c68ea949
Update README.md 2021-05-26 13:38:00 +02:00
Bᴇʀɴᴅ Sᴄʜᴏʀɢᴇʀs d834ca15b0
Merge branch 'main' into staticpods 2021-05-26 09:57:58 +02:00
Xan Manning 6bff9b9981
Merge pull request #119 from onedr0p/patch-1
fix: only deploy templates on primary controller
2021-05-26 08:54:38 +01:00
Bᴇʀɴᴅ Sᴄʜᴏʀɢᴇʀs da7d8c67d9
Rename vars, path
Signed-off-by: Bᴇʀɴᴅ Sᴄʜᴏʀɢᴇʀs <me@bjw-s.dev>
2021-05-26 09:52:34 +02:00
Bᴇʀɴᴅ Sᴄʜᴏʀɢᴇʀs 1bbba04230
Allow control plane static pods
Signed-off-by: Bᴇʀɴᴅ Sᴄʜᴏʀɢᴇʀs <me@bjw-s.dev>
2021-05-26 09:43:07 +02:00
ᗪєνιη ᗷυнʟ 82085cb80b
fix: remove run_once 2021-05-25 19:23:13 -04:00
ᗪєνιη ᗷυнʟ 07fe0e2964
fix: update readme 2021-05-25 18:43:32 -04:00
ᗪєνιη ᗷυнʟ 2243766695
fix: k3s_primary_control_node 2021-05-25 18:39:48 -04:00
ᗪєνιη ᗷυнʟ ef99954177
fix: only deploy k3s_server_manifests_dir on primary controller 2021-05-25 18:38:07 -04:00
Xan Manning 50fa321e7e Fix templating error 2021-05-15 20:47:32 +01:00
Xan Manning 4d5d5b2838 Updated documentation to remove deprecated playbook structures #115 2021-05-15 18:47:27 +01:00
Xan Manning 7bb9f6d8b4 Update changelog 2021-05-13 18:19:55 +01:00
Xan Manning f220fce08f Version-compare test 2021-05-13 17:43:29 +01:00
Xan Manning 2b7fd373f0
Merge pull request #114 from anjia0532/k3s_private_registry
Support k3s private registry configuration
2021-05-13 16:56:23 +01:00
赵安家 d563dcca05 style(k3s): change code style
change code style
2021-05-08 18:39:19 +08:00
赵安家 075ef165c5 fix(k3s): fix k3s's private-registry configuration not exist
fix k3s's private-registry configuration not exist
2021-05-07 18:29:01 +08:00
赵安家 c9e2b619d1 feat(k3s): support k3s's private-registry configuration
rancher doc url https://rancher.com/docs/k3s/latest/en/installation/private-registry/
2021-05-07 17:56:45 +08:00
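Following the linked Rancher documentation, a hedged example of the private-registry configuration this feature renders (registry address and credentials are placeholders):

```yaml
mirrors:
  docker.io:
    endpoint:
      - "https://registry.example.com:5000"
configs:
  "registry.example.com:5000":
    auth:
      username: registry-user  # placeholder
      password: registry-pass  # placeholder
```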
赵安家 21fa8b048f build(gitignore): modify .gitignore
added .idea dir to .gitignore file
2021-05-07 17:55:56 +08:00
Xan Manning a298ea0985 Update CHANGELOG.md 2021-05-06 21:11:04 +01:00
Xan Manning ea03eaa9dd
Merge pull request #113 from angelnu/patch-1
Unmount CSI plugin folder to avoid data lost on uninstall
2021-05-06 18:52:52 +01:00
Vegetto 5305eb3758
Unmount CSI plugin folder
Fixed upstream - see https://github.com/k3s-io/k3s/issues/3264
2021-05-04 23:20:46 +02:00
Xan Manning 87c56dbe64 Updated CHANGELOG 2021-05-01 08:53:11 +01:00
Xan Manning d2ca503432
Merge pull request #112 from anjia0532/fix_kubectl_get_nodes_result_stdout_error
fixed kubectl_get_nodes_result.stdout error
2021-04-30 17:34:25 +01:00
AnJia 91d456ccad
fixed kubectl_get_nodes_result.stdout error
OS: Ubuntu 16.04 LTS (amd64)
Ansible: 2.9.20
Python: 2.7

```
 FAILED! => {"msg": "The conditional check 'item in kubectl_get_nodes_result.stdout' failed. The error was: error while evaluating conditional (item in kubectl_get_nodes_result.stdout): 'dict object' has no attribute 'stdout'\n\nThe error appears to be in '/home/rancher/.ansible/roles/xanmanning.k3s/tasks/teardown/drain-and-remove-nodes.yml': line 39, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n    - name: Ensure uninstalled nodes are removed\n      ^ here\n"}

```
2021-04-30 14:31:33 +08:00
Xan Manning 2432a7d25f Fix autodeploy test 2021-04-18 20:46:31 +01:00
Xan Manning f4fcd2897d Update test sequence for check mode, fixed check mode on cgroup test 2021-04-18 20:39:06 +01:00
Xan Manning 60da06e137
Merge pull request #110 from bdronneau/patch-1
docs(readme): typo on ansible
2021-04-18 16:12:14 +01:00
Bastien Dronneau 9a13d67468
docs(readme): typo on ansible 2021-04-18 10:30:08 +02:00
Xan Manning 03b29cb09d
Merge pull request #103 from PyratLabs/feature-cgroup_checks
Add cgroup checks, add Ansible v2.9.16 support
2021-04-10 21:13:23 +01:00
Xan Manning 265b529bb6 Update README.md, requirements.txt and CHANGELOG.md 2021-04-10 21:03:38 +01:00
Xan Manning 55f1f09f3a Added ansible version tests 2021-04-10 19:15:41 +01:00
Xan Manning 23054c76f6 Updated command modules to use cmd argument 2021-04-10 12:31:18 +01:00
Xan Manning e5c69ec894 Add cgroup checks 2021-04-03 20:42:44 +01:00
Xan Manning a3c4d9cfae Updated CHANGELOG 2021-03-22 18:00:05 +00:00
Xan Manning efca6fcbbc
Merge pull request #101 from mbwmbw1337/main
docs(readme): correct typo of k4s_registration_address
2021-03-22 17:47:36 +00:00
Michael Williams 6a70a85ef2
docs(readme): correct typo of k4s_registration_address
update the "k4s_registration_address" to "k3s_registration_address"

Closes #100

[skip ci]

Signed-off-by: Michael Williams <Michael.Williams@glexia.com>
2021-03-22 06:34:56 -04:00
Xan Manning 4326f4497d Renamed k3s_control_node_address -> k3s_registration_address 2021-03-14 16:29:14 +00:00
Xan Manning 85576d62ed Merge branch 'main' of github.com:PyratLabs/ansible-role-k3s into main 2021-03-06 16:52:27 +00:00
Xan Manning 94a153892e Bugfix, missing become on cluster token check 2021-02-28 17:36:30 +00:00
Xan Manning a8c5cd4407 Bugfix, missing become on cluster token check 2021-02-28 17:35:23 +00:00
Xan Manning 15141e9d86
Merge pull request #96 from PyratLabs/feature-tidy_up_tasks
Cluster-init checks added, tidy up of task format
2021-02-28 17:12:01 +00:00
Xan Manning 1d93c2115d Cluster-init checks added, tidy up of task format 2021-02-28 16:48:23 +00:00
Xan Manning 62b2d7cb36 Typo bugfixes 2021-02-27 19:02:49 +00:00
Xan Manning 05242ba232
Merge pull request #92 from PyratLabs/feature-systemd_documentation
systemd unit ordering + documentation
2021-02-16 19:11:04 +00:00
Xan Manning c2348df1ea updated changelog 2021-02-16 18:23:23 +00:00
Xan Manning 5b6242ecca Example of requiring a service 2021-02-16 18:14:10 +00:00
Xan Manning f6e009f1fd Add documentation for CNI and custom args 2021-02-16 18:02:30 +00:00
Xan Manning 7e4a16e167 Pre-documentation work
- Restructured when and asserts
  - Standardise molecule call, add systemd config
2021-02-16 16:53:49 +00:00
Xan Manning c80898d92a Bugfix: #91, missing update to minimum ansible version var 2021-01-31 12:40:08 +00:00
Xan Manning 5555bd3d9b Bugfix, missing `k3s_start_on_boot` to control `systemd.enabled` added. 2021-01-30 17:57:50 +00:00
Xan Manning 2c12436226 Bugfixes
- Added uninstall task to remove hard-linked files #88
  - Fixed missing become for `systemd` operations tasks. #89
  - Added `k3s_start_on_boot` to control `systemd.enabled`.
2021-01-30 17:23:31 +00:00
Xan Manning d4d24aec79
Merge pull request #87 from PyratLabs/feature-ansible_2.9_support
Add Ansible 2.9 support, instructions for k3s upgrade
2021-01-24 20:00:02 +00:00
Xan Manning 43b5359160 Update Changelog 2021-01-24 19:42:35 +00:00
Xan Manning e026d2a4a7 Added 2.9 Ansible support 2021-01-24 18:21:51 +00:00
Xan Manning fc1149ac9e Update CHANGELOG 2021-01-23 11:35:22 +00:00
Xan Manning 3716774cc9
Merge pull request #85 from Diaoul/patch-1
Fix check nodes ready without flannel
2021-01-23 11:33:51 +00:00
Xan Manning 1b4d3dd9dd Altered nodeploy test to remove flannel 2021-01-23 11:14:22 +00:00
Antoine Bertin c169cb8937
Fix check nodes ready without flannel
Fixes #84
2021-01-22 00:28:53 +01:00
Xan Manning e954ba13c4 Bugfix: Docker check still failing on "false" 2021-01-10 16:35:20 +00:00
Xan Manning 8f0e9f22af
Merge pull request #82 from PyratLabs/bugfix-armv6l_support
Added support for armv6l (RPi ZeroW)
2021-01-02 16:51:47 +00:00
Xan Manning 216af14fe1 Updated Changelog 2021-01-02 16:37:31 +00:00
Xan Manning a2e035cd1c Bugfix registry 2021-01-02 16:33:31 +00:00
Xan Manning 6d1a5f812b Updated Molecule requirements 2021-01-02 16:18:15 +00:00
Xan Manning 75504b08b4 Added support for armv6l (RPi ZeroW) 2021-01-02 16:14:24 +00:00
Xan Manning e7c714424c
Tidy up and refactoring of tasks (#80)
* Tidy up and refactoring of tasks

  - `k3s_config_dir` derived from `k3s_config_file`, reused throughout the role
    to allow for easy removal of "Rancher" references #73.
  - `k3s_token_location` has moved to be in `k3s_config_dir`.
- Tasks for creating directories now looped to capture configuration from
    `k3s_server` and `k3s_agent` and ensure directories exist before k3s
    starts, see #75.
  - Server token collected directly from token file, not symlinked file
    (node-token).
  - `k3s_runtime_config` defined in `vars/` for validation and overwritten in
    tasks for control plane and workers.
  - Removed unused references to GitHub API.

* set_fact now uses FQCN

* re-pin molecule<3.2

* Command module now uses FQCN

* Added package checks for #72

* Reorder task files

  - Docker tasks moved into a separate directory for ease of removal #67
  - Bugfix: Control plane on alternate port didn't work.
  - Validation tasks grouped

* Fix Fedora tests

* Add optional documentation links to validations steps #76

* Removed jmespath requirement

* Fix issue with data collection

* Release candidate
2020-12-21 19:14:52 +00:00
Xan Manning ef6c579336
Merge pull request #79 from PyratLabs/feature-update_uninstall_scripts
Uninstall scripts now in-line with upstream
2020-12-19 14:26:18 +00:00
Xan Manning 99c22dceab Uninstall scripts now in-line with upstream
Fixes #74
Addresses #73 - move rancher reference to vars/
2020-12-19 14:05:41 +00:00
Xan Manning 151d36d19b
Merge pull request #78 from PyratLabs/fix-documentation
Documentation fixes
2020-12-19 13:28:46 +00:00
Xan Manning 06fac01266 Pin molecule<3.2 2020-12-19 13:09:46 +00:00
Xan Manning 01a8313dd9 Documentation fixes
- Removed Disclaimer
  - Fixed a Typo
  - Removing references to Rancher
  - Removing references to Docker
2020-12-19 11:26:08 +00:00
Xan Manning e25edbef3c rework documentation, change github link, replace deprecated variables 2020-12-16 11:02:15 +00:00
Xan Manning a067a97f38
Merge pull request #68 from PyratLabs/feature-embedded_etcd_ga_support
Embedded Etcd now no longer Experimental
2020-12-12 18:38:33 +00:00
Xan Manning e7ba779c91 Update PR template 2020-12-12 18:26:30 +00:00
Xan Manning e4059661ab Reduce GitHub Actions testing matrix 2020-12-12 18:12:54 +00:00
Xan Manning 1d40c4d2c9 Migration from Travis-CI to GitHub Actions 2020-12-12 16:21:17 +00:00
Xan Manning 34e2af3d47 Set embedded Etcd as stable, deprecate docker 2020-12-12 14:27:59 +00:00
Xan Manning 5d3524d729 Fix link to documentation 2020-12-05 22:01:43 +00:00
Xan Manning 4afc2c8a5a Fixed data-dir configuration and draining of nodes. Added documentation. 2020-12-05 21:56:28 +00:00
Xan Manning 21adf94627 Updated issue template and collection yml 2020-11-30 21:57:58 +00:00
Xan Manning fa73be4921 Fixed a number of typos in the README.md 2020-11-30 08:41:56 +00:00
Xan Manning 976fe8c0ca Resolve merge conflict 2020-11-29 20:31:22 +00:00
Xan Manning ebf32dbd99 v2 pre-release 2020-11-29 20:10:42 +00:00
Xan Manning cc59955b28 Merge branch 'v1_release' into main 2020-11-14 14:15:52 +00:00
Xan Manning ddbf7a71a8 Updated terminology to remove references to "master" 2020-11-14 14:15:19 +00:00
Xan Manning 603cabdb39 Merge branch 'master' into v1_release 2020-11-14 13:56:07 +00:00
Xan Manning aea68db6c5 README corrected with premature information 2020-11-14 13:55:55 +00:00
Martin Friedrich f9461f1951 Cherry-picked PR from v1 2020-11-11 20:54:14 +00:00
Xan Manning 58db02a967 Merge branch 'v1_release' 2020-11-11 20:51:36 +00:00
Xan Manning 66ee539862
Merge pull request #63 from networkpanic/feature/rpi-cluster
adding retries to restart k3s handler
2020-11-11 20:50:53 +00:00
Xan Manning a2075a7a76 Fix travis, removed wireguard due to external dependency issue 2020-11-11 20:49:26 +00:00
Martin Friedrich dd40e73d6c
remove trailing whitespace 2020-11-11 15:47:29 +01:00
Martin Friedrich dc571c375b adding retries to restart k3s handler 2020-11-11 09:41:26 +01:00
Xan Manning 8c791cb611 Change terminology of tasks to remove "master" 2020-11-10 19:01:05 +00:00
Xan Manning a99087c7f6 Remove "master" from README.md 2020-11-10 18:30:38 +00:00
Xan Manning 29c4936807 Create v1.x release branch prior to v2.x release 2020-11-10 08:43:52 +00:00
Xan Manning 1f74a599ee
Merge pull request #60 from networkpanic/feature/rpi-cluster
add advertise_ip support
2020-10-26 18:32:16 +00:00
Martin Friedrich 4ed0727411
add missing host-gw to flannel backend comment 2020-10-26 11:24:11 +01:00
Martin Friedrich edc98a6d6e
add advertise address to readme 2020-10-26 10:43:39 +01:00
Martin Friedrich 04375f5e39
add support for advertise ip, this was needed to advertise using the internal-ip's of my nodes 2020-10-26 10:43:39 +01:00
Xan Manning 170bf5995f Merge conflict resolved: archlinux support 2020-10-23 16:40:35 +01:00
Xan Manning a8dd9acdb9 ArchLinux support added by @networkpanic 2020-10-23 16:34:19 +01:00
Xan Manning e473064f61
Merge pull request #59 from networkpanic/feature/archlinux
Archlinux support
2020-10-23 16:31:59 +01:00
Xan Manning 35b037c7ee Pre-FQCN breakage 2020-10-23 16:31:21 +01:00
Martin Friedrich e5133c1f73
add archlinux support, fixed drain invoked on uninstall by adding --delete-local-data 2020-10-23 14:43:58 +02:00
Xan Manning 3d2b74c816 Slight tidy up of playbooks in default molecule test 2020-10-22 19:30:40 +01:00
Xan Manning 57b9a2a0be Moved to file based config, pre-FQCN, pre-update to documentation 2020-10-22 19:26:15 +01:00
Xan Manning 61f706acb9 Merge branch 'master' into role_v2 2020-10-22 11:59:34 +01:00
Xan Manning 93b95a9813
Merge pull request #58 from PyratLabs/bugfix-k3s_node_data_dir
k3s_node_data_dir now set in templates
2020-10-22 11:45:41 +01:00
Xan Manning 292c726b07 Split out repeating tasks 2020-10-21 17:22:41 +01:00
Xan Manning f3173f193f Merge branch 'bugfix-k3s_node_data_dir' into role_v2 2020-10-19 20:35:32 +01:00
Xan Manning 6e29200d9a Attempt to fix #57 - k3s_node_data_dir set in templates 2020-10-19 20:32:53 +01:00
Xan Manning 9b800d9fba moving to file-based config 2020-10-19 20:26:12 +01:00
Xan Manning 36a2f24a9d Merge branch 'master' into role_v2 2020-10-18 18:01:41 +01:00
Xan Manning 23cdd3edda Fix missing --disable flags mentioned in #56 2020-10-18 17:58:32 +01:00
Xan Manning a93403d312 Restructuring for config file based deployment 2020-10-18 17:41:00 +01:00
Xan Manning 45a41f895b Restructure for validation checks 2020-10-17 18:27:52 +01:00
Xan Manning c63d984301 Refactoring tests for Molecule v3. 2020-10-17 16:31:04 +01:00
Xan Manning 72638e8e3d Bugfix: Disable k3s default kube proxy option missing 2020-09-26 18:49:11 +01:00
Xan Manning 9a15d8eddf
Merge pull request #55 from onedr0p/patch-3
Implement setting multiple k3s_tls_san
2020-09-26 17:37:11 +01:00
Xan Manning 062c459b00
Merge pull request #54 from onedr0p/patch-2
Implement option to disable kube-proxy
2020-09-26 17:36:50 +01:00
Xan Manning d52cda1d10
Merge pull request #52 from onedr0p/patch-1
Implement installing specific k3s commit
2020-09-26 17:36:27 +01:00
Xan Manning 57f9631265 Converting molecule tests to v3 2020-09-26 15:51:41 +01:00
ᗪєνιη ᗷυнʟ 6cf09c8efa
implement k3s_tls_san iterable in systemd service
keeps support for non-array values
2020-09-24 10:21:48 -04:00
ᗪєνιη ᗷυнʟ f39f228f39
k3s_tls_san readme changes
this can be a list and iterated over in the systemd service
2020-09-24 10:16:12 -04:00
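A hedged sketch of the two forms this change accepts (addresses are placeholders):

```yaml
# As a list (new behaviour)
k3s_tls_san:
  - k3s.example.com
  - 192.168.0.10

# As a single value (still supported)
k3s_tls_san: k3s.example.com
```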
ᗪєνιη ᗷυнʟ 2bb556f1da
add k3s_disable_kube_proxy to readme 2020-09-24 08:38:03 -04:00
ᗪєνιη ᗷυнʟ 564d693e9d
add disable-kube-proxy to systemd service 2020-09-24 08:36:48 -04:00
ᗪєνιη ᗷυнʟ d4c38f59cc
fix extra spaces 2020-09-23 10:48:20 -04:00
ᗪєνιη ᗷυнʟ b06d1635f1
fix english 2020-09-23 10:03:09 -04:00
ᗪєνιη ᗷυнʟ 647d6026e4
move commit example a few newlines down 2020-09-23 09:59:42 -04:00
ᗪєνιη ᗷυнʟ 7dd8a3f8ff
add example for specific version 2020-09-23 09:55:44 -04:00
ᗪєνιη ᗷυнʟ ddfc73586c
add commit example to k3s_release_version 2020-09-23 09:45:51 -04:00
ᗪєνιη ᗷυнʟ b16f142c21
Override facts when commit hash is specified 2020-09-23 09:43:31 -04:00
Xan Manning 4b4a49bdd5 Merge branch 'master' of github.com:PyratLabs/ansible-role-k3s 2020-09-22 20:31:06 +01:00
ᗪєνιη ᗷυнʟ c447fcec39 A number of enhancements for v1.19 release.
- Added option to skip validation checks #47
  - Add SELinux support in containerd #48
  - Added check for Etcd member count #46
  - Moved token to a file #50
  - Added Etcd snapshot configuration options #49
2020-09-22 20:30:50 +01:00
Xan Manning 4dd827c2a7
Merge pull request #44 from onedr0p/patch-3
set want and after to network-online.target in systemd file
2020-09-21 21:10:40 +01:00
Xan Manning 1438ddde69
Merge pull request #43 from onedr0p/patch-2
Set LimitNOFILE to 1048576 in k3s systemd file
2020-09-21 20:45:34 +01:00
Xan Manning d0e209d866
Merge pull request #42 from onedr0p/patch-1
Option to enable debug flag
2020-09-21 20:45:21 +01:00
ᗪєνιη ᗷυнʟ c99c9bf67f
set want and after to network-online.target in systemd file 2020-09-21 14:38:51 -04:00
ᗪєνιη ᗷυнʟ 36d44bc1af
move debug to before server and agent flags 2020-09-21 13:44:11 -04:00
ᗪєνιη ᗷυнʟ cc0c686e61
Set LimitNOFILE to 1048576
https://github.com/containerd/containerd/issues/3201
2020-09-21 08:39:55 -04:00
ᗪєνιη ᗷυнʟ 7ea82ed749
add k3s_debug to readme 2020-09-21 08:31:07 -04:00
ᗪєνιη ᗷυнʟ 0129ec3e5c
add debug flag service file 2020-09-21 08:29:13 -04:00
Xan Manning ab48e3a173 Change delay to 5 seconds for secondary masters startup task to complete 2020-09-18 12:09:56 +01:00
Xan Manning 175b90ecb0 Added support for Etcd, removed DQLite support. See #41 2020-09-17 21:01:20 +01:00
Xan Manning c743df868b Fixing ansible-linting, exclude name check for Travis-CI
This release also fixes:

  - #38 : removing the --disable-agent option. Please use node taints.
  - #39 : clarified where jmespath should be installed in README.md
2020-09-15 18:20:23 +01:00
Xan Manning 230aaa110c Bugfix, bind address is for listener 2020-08-01 14:17:20 +01:00
Xan Manning 1f8429a77b
Merge pull request #36 from PyratLabs/release-hardlink_check_mode
Release hardlink + check mode
2020-07-26 08:29:56 +01:00
Xan Manning b412858b30 Fix merge conflict 2020-07-25 20:51:31 +01:00
Xan Manning d8a348923a Merge branch 'feature-symlink_to_hardlink_release' into release-hardlink_check_mode 2020-07-25 20:49:55 +01:00
Xan Manning 0bfbaa302e Fix uninstall 2020-07-25 20:42:26 +01:00
Xan Manning d53102dda3 Check mode support added 2020-07-25 17:39:01 +01:00
Xan Manning 809e9cd73c Releasable feature for hardlinks 2020-07-25 14:03:53 +01:00
Xan Manning d2a34546cf Potential fix for #35 2020-07-25 12:27:39 +01:00
Xan Manning 504b84a8b6 Use --disable rather than --no-deploy, fix issue #33 2020-07-16 12:49:31 +01:00
Xan Manning 3a6b411430 Added support for args, private registries. Fixes #32 2020-07-04 13:24:10 +01:00
Xan Manning f454334b42
Merge pull request #28 from pedrohdz/control-node-restart-k3s
Restart k3s service unit on file change
2020-06-06 15:05:43 +01:00
Xan Manning 2c0afbca42 Restart k3s service unit on file change 2020-06-06 14:30:40 +02:00
Xan Manning 9d04e315ae
Merge pull request #29 from clrxbl/patch-1
Become superuser to solve "Access denied"
2020-05-31 10:50:18 +01:00
Michael f90cc5ca18
Privilege escalation to solve "Access denied"
```
FAILED! => {"attempts": 3, "changed": false, "msg": "Unable to enable service k3s: Failed to enable unit: Access denied\n"}
```

The task never sets become to true, so it fails due to insufficient permissions for the user executing it by default.
2020-05-30 23:40:05 +02:00
Xan Manning 848a5457ff Add option for unsupported single node with database backend. Issue #27 2020-05-30 15:16:20 +01:00
Xan Manning 6090071982 Bugfix, issue with HA build for joining new nodes 2020-05-25 17:57:43 +01:00
Xan Manning 23ba527bc2 Bugfix, broke clustering with v1.6.2 2020-05-25 17:11:45 +01:00
Xan Manning 9524b07df0 Fix joining nodes to an existing cluster 2020-05-25 16:25:09 +01:00
Xan Manning 141b6f2018 Numerous bug fixes to do with permissions and regressions.
Fix issue #25, check k3s_bind_address for readiness check
Fix issue #24, become for tasks that require root
2020-05-20 19:55:33 +01:00
Xan Manning 5ce8dec6ff Added the ability to set k3s_release_version as a release channel 2020-05-18 20:45:48 +01:00
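A hedged sketch of the values k3s_release_version can now take (the values below are examples only):

```yaml
k3s_release_version: stable          # a release channel
# k3s_release_version: v1.19         # a channel pinned to a minor version
# k3s_release_version: v1.19.5+k3s1  # an exact release
```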
Xan Manning e3301a59e4 Updated state tasks to dynamic include rather than static import
This is an initial attempt to address issue #22, I have also included a
task to drain the node before deleting it.
2020-05-18 19:53:03 +01:00
Xan Manning 02e12e61a8 Bugfix: minimum version for secrets encryption reverted 2020-05-17 20:43:39 +01:00
Xan Manning b42ffade29 Fixes to variable checks 2020-05-17 11:40:53 +01:00
Xan Manning 26467de186 Unknown issue with k3s-uninstall.sh exiting with 1.
The script looks to be completing without error in my testing as well as
in the original issue so I am forcing an exit with 0 until the cause can
be found.

Fixes #23
2020-05-16 21:19:48 +01:00
Xan Manning aa1a0a9620 Added option to provision multiple standalone k3s
Fixes #21
2020-05-16 20:18:20 +01:00
Xan Manning 9b8cf85489
Merge pull request #20 from nolte/fix/permissions-fail
add permission become check
2020-05-10 18:26:56 +01:00
nolte df44053349 add a permission check for write the systemd k3s config 2020-05-09 21:07:34 +02:00
Xan Manning 681cd981ab Updated README.md to remove compulsory become:true 2020-04-24 12:06:18 +01:00
Xan Manning c5a8f03b35
Merge pull request #19 from SimonHeimberg/become
variables to activate become for some actions
2020-04-24 12:04:23 +01:00
SimonHeimberg acedb08a1f variables to activate become for some steps 2020-04-22 16:42:45 +02:00
Xan Manning bcb81e7c7d
Merge pull request #18 from t-nelis/readme-racher-typo
Fix typo in README: "Racher" -> "Rancher"
2020-04-09 13:15:14 +01:00
Thibault Nélis 9bace4a62f Fix typo in README: "Racher" -> "Rancher" 2020-04-08 23:58:20 +02:00
Xan Manning e93b438ee0 Added secrets encryption at rest option 2020-03-28 12:58:58 +00:00
Xan Manning f684f6d907 A retry has been added to the task controlling secondary master startup.
Fixes #17

There appeared to be a race condition where starting all secondary
masters all at once would cause the k3s service to fail on a number of
the other masters. A retry has been added to the task to attempt to
bring them all up until they stop failing.
2020-03-07 16:15:41 +00:00
Xan Manning f709caf371 Skip final checks when no-flannel option is used.
Fixes #16

This is because without a CNI, nodes will never be ready and the task
will fail. You need to deploy your choice of CNI manually (such as
Calico) then check the state of the cluster using `kubectl get nodes`.
2020-03-07 14:23:09 +00:00
Xan Manning 2c09d4711b
Merge pull request #15 from PyratLabs/tidy_up_additional_validation
Variable check for local storage path
2020-02-28 07:39:33 +00:00
Xan Manning 9dcfa954f9 Variable check for local storage path 2020-02-27 20:10:28 +00:00
Xan Manning 554fada914
Merge pull request #14 from PyratLabs/carpenike-master
Carpenike master
2020-02-27 20:10:00 +00:00
Xan Manning 12d01c2a60 Added tests and variable validation 2020-02-27 18:46:59 +00:00
Xan Manning 84bf657f1c Merge branch 'master' of github.com:carpenike/ansible-role-k3s into carpenike/master 2020-02-27 18:16:49 +00:00
Xan Manning 241dc24d59
Merge pull request #11 from onedr0p/state-uninstall
Add state-uninstalled
2020-02-27 07:54:11 +00:00
Ryan Holt 3f6ce99369
rephrase option to cloud controller
Signed-off-by: Ryan Holt <ryan@ryanholt.net>
2020-02-26 21:17:51 -05:00
Ryan Holt db96168491
added example for kubelet_args in README
Signed-off-by: Ryan Holt <ryan@ryanholt.net>
2020-02-26 21:16:52 -05:00
Ryan Holt c473f932c4
added kubelet args key
Signed-off-by: Ryan Holt <ryan@ryanholt.net>
2020-02-26 18:03:08 -05:00
Xan Manning 56b2d7bc03 Fixed path in k3s-uninstall.sh - my bad 2020-02-26 21:52:56 +00:00
Xan Manning 75fd17aac8 Slightly updated tasks and added validation checks
1. Now does not remove prerequisite packages, lvm2 was included in
these packages (not good when you use LVM2 for real).
  2. Added a bit more idempotency to the shell scripts - only delete if
it exists.
  3. Check that the process isn't running and binaries are gone.
2020-02-26 20:56:05 +00:00
Devin Buhl 5f7ff27f17
Fix 301 lint issue in uninstall-docker-amazon 2020-02-25 15:42:40 -05:00
Devin Buhl a1e52fb660
fixed 301 lint issue in uninstall-k3s.yml 2020-02-25 15:41:29 -05:00
Devin Buhl e7c787e10f
Fix other lint issue 2020-02-25 15:25:23 -05:00
Devin Buhl 8d0ee69012
Fix other yaml lint issue 2020-02-25 15:08:17 -05:00
Devin Buhl fd7498303d
Fix first YAML lint issue 2020-02-25 15:07:05 -05:00
Devin Buhl be85c9ccc5 state uninstalled 2020-02-25 12:39:34 -05:00
Devin Buhl 9bbf5fd746 add uninstall state 2020-02-25 12:29:39 -05:00
Devin Buhl c4547306ce
add option to specify local storage path (#10) 2020-02-25 08:48:09 +00:00
Xan Manning 31debb2f5d Fix Travis-CI build 2020-02-22 14:33:12 +00:00
Xan Manning f82f90aae0 Clearer licensing, included LICENSE.txt 2020-02-22 12:34:35 +00:00
Xan Manning 5517671477
Merge pull request #9 from PyratLabs/feature_better_checksum
Feature better checksum
2020-02-10 20:42:10 +00:00
Xan Manning 1f19e2b302 Updated flannel backend flag checks 2020-02-09 16:03:41 +00:00
Xan Manning 218b9d64c9 Slightly more robust selection of checksum from GitHub 2020-02-09 15:00:59 +00:00
Xan Manning 3da7599eab
Merge pull request #8 from jdmarble/master
Use correct checksums for arm downloads
2020-02-01 12:51:23 +00:00
James D. Marble 044ed5512c Use correct checksums for arm downloads
I attempted to install on arm64 and armhf. Both fail because the
[checksum filter](e07903a5cf/tasks/build/download-k3s.yml (L21))
finds the first line with "k3s". On the arm checksum files,
the first lines are for "k3s-airgap-images-arm64.tar" and "k3s-airgap-images-arm.tar"
so the wrong checksum is grabbed.

I attempted to fix this with a more specific filter:
`select('search', 'k3s'+k3s_arch_suffix)`.
This works for both arm architectures,
but fails for amd64 because the key is simply "k3s" and not "k3s-amd64".

The solution I settled on is not ideal for future proofing,
but works for now at least.
2020-01-31 21:10:55 -08:00
Xan Manning e07903a5cf Fixed issue with SUSE docker installation 2020-01-21 22:33:11 +00:00
Xan Manning 04a92ee956 Reducing the number of tests in travis-ci for faster jobs 2020-01-19 16:49:21 +00:00
Xan Manning 927fd41036 Fixed dockerfile for high availability loadbalancer using HAProxy 2020-01-18 00:17:23 +00:00
Xan Manning df253b504a
Merge pull request #6 from PyratLabs/multi_master_support
Auto-deploy templates, HA support now possible.
2020-01-13 22:07:52 +00:00
Xan Manning c5b6dcd7fa Fixed control nodes to match nginx template in test 2020-01-13 21:57:45 +00:00
Xan Manning e3ce213bc0 Testing auto-deploy on multi-master 2020-01-13 21:32:31 +00:00
Xan Manning c8fb27ecd1
Merge pull request #5 from nolte/feature/add_manifests
Add Support for Auto-Deploying Manifests
2020-01-13 19:09:57 +00:00
Xan Manning 3ef36b841f
Merge branch 'multi_master_support' into feature/add_manifests 2020-01-13 19:09:45 +00:00
Xan Manning 3a1c7e7b35 Added workflow for Database backed and DQLite HA 2020-01-13 19:08:37 +00:00
Xan Manning 7e7cf2b97d Moved HA testing to a new scenario 2020-01-12 12:50:03 +00:00
nolte 5331e22425 fix path, missing prefix 2020-01-11 23:51:52 +01:00
Xan Manning 09fc37e6ec Fixed provisioning of multi-master, need to test LB with k3s_control_node_address 2020-01-11 22:42:29 +00:00
Xan Manning c3ae2b79eb Added database container and proved connectivity. Logic needs to be changed for HA. 2020-01-11 19:20:52 +00:00
nolte 2d0dc8db69
Update molecule/default/templates/00-ns-monitoring.yml.j2
Co-Authored-By: Xan Manning <xan.manning@gmail.com>
2020-01-11 20:04:26 +01:00
nolte a73a1fbdef
Update molecule/default/playbook-auto-deploying-manifests.yml
Co-Authored-By: Xan Manning <xan.manning@gmail.com>
2020-01-11 20:04:11 +01:00
nolte b896e90704
Update tasks/build/preconfigure-k3s-auto-deploying-manifests.yml
Co-Authored-By: Xan Manning <xan.manning@gmail.com>
2020-01-11 20:03:58 +01:00
nolte 2e03ea2e6f
Update tasks/build/preconfigure-k3s-auto-deploying-manifests.yml
Co-Authored-By: Xan Manning <xan.manning@gmail.com>
2020-01-11 20:03:29 +01:00
nolte 227b24c117
Update defaults/main.yml
Co-Authored-By: Xan Manning <xan.manning@gmail.com>
2020-01-11 20:03:16 +01:00
nolte 1dd9297de4 change template path for molecule test 2020-01-11 19:39:11 +01:00
nolte cb13c5b473 create manifests directory if not exists 2020-01-11 18:56:48 +01:00
nolte 2aedce0359 add first draft for running molecule test with auto manifests deployments 2020-01-11 18:03:47 +01:00
nolte b89f2f3acd remove trailing spaces 2020-01-11 15:58:58 +01:00
nolte 2b646e4e4f update task documentation and add new config parameters to the Readme 2020-01-11 15:44:28 +01:00
nolte 2307546be2 add support place k8s manifests to the nodes 2020-01-11 15:10:19 +01:00
Xan Manning 734e49a7e5 Documentation, and validation logic for HA configuration added. 2020-01-11 12:31:23 +00:00
Xan Manning da427f1518 Added new state "downloaded" - improved getting latest version 2019-12-28 15:50:17 +00:00
Xan Manning f2a3f75f08 Added some validation steps, fixed issue with checksum, introducing rootless
as an option, however this is experimental in both K3s and this role.
2019-12-22 18:54:25 +00:00
Xan Manning fe688dfc70 Changed workflow to include state (allows for build and operate
workflows)
2019-12-21 10:34:33 +00:00
Xan Manning 717de81c7f Build-operate workflow trial - allow for stop-starting cluster. 2019-12-20 19:41:20 +00:00
Xan Manning e8e5dbf45a
Merge pull request #4 from quulah/fix-sha256sum-parsing
Parse checksum without shell usage
2019-12-11 14:47:10 +00:00
Miika Kankare c5cdc745e5
Parse checksum without shell usage 2019-12-11 15:17:05 +02:00
Xan Manning 99c103a14f Fixed regression with AmazonLinux Docker install, increased coverage of
testing Docker installation as Fedora was missing python-dnf dependency.
2019-12-09 19:46:25 +00:00
Xan Manning ec61e0b4ce Improved Docker support for SUSE/openSUSE. Notes about control host requirements 2019-12-09 13:53:42 +00:00
Xan Manning 26a3b2eef0 Added extra no-deploy options for v1.0.0 2019-12-04 19:10:05 +00:00
Xan Manning 8f3b2428c8 Added experimental options to ansible role:
1. Ability to specify control host address, for connecting to a control plane
     provisioned outside of the role.
  2. Ability to specify the control host token, again for connecting to
     a control plane provisioned outside of the role.
  3. Included upstream changes from @nolte to define KubeConfig file
     permissions.
2019-12-04 17:17:15 +00:00
Xan Manning 2b8f354a88 Updated service unit template for neater output 2019-11-03 15:35:32 +00:00
Xan Manning d81d41e709 Updated Meta to reflect currently supported platforms 2019-11-03 10:56:42 +00:00
Xan Manning 9295347b6d Merging in branch for providing additional options for running k3s. 2019-11-02 22:46:35 +00:00
Xan Manning 5e39160ed9 Added a number of extra options to configure K3s in systemd unit file.
Testing:
  - Added docker networking, ensure that test output is verbose.
  - Fix build for AmazonLinux 2
  - No-deploy flag test added
2019-11-02 22:19:33 +00:00
Xan Manning 1282da8cfa Removed failing test, works in Vagrant but not docker. 2019-10-27 00:12:02 +01:00
Xan Manning 6e9566d5eb Fixed initial support for 0.10.0, added molecule tests in Travis-CI 2019-10-26 22:24:20 +01:00
Xan Manning efc703541c Updated for 0.10.0, adding molecule testing with Travis-CI 2019-10-26 22:23:17 +01:00
Xan Manning 2327d0433d Added new options for Flannel interfaces, tested on openSUSE LEAP 15.1 2019-09-29 18:11:05 +01:00
Xan Manning f077120580 Tested against Debian Buster, confirmed working. 2019-06-15 17:44:09 +01:00
Xan Manning 43275f5d63
Merge pull request #2 from abdennour/patch-1
static import
2019-05-16 19:19:05 +01:00
abdennour 07661f7df8
static import
include_tasks is used to import tasks according to a condition that relies on a dynamic value (facts).
2019-05-13 06:54:54 +03:00
Xan Manning 389974d7d3
Merge pull request #1 from jdmarble/patch-1
Add support for armv7l arch
2019-04-25 08:48:22 +01:00
James D. Marble 3e83e3c301
Add support for armv7l arch
I was receiving this error when running the task on my [Odroid HC1 running Armbian](https://www.armbian.com/odroid-hc1/):

```
TASK [xanmanning.k3s : Ensure target host architecture information is set as a fact] **************************************************************************
fatal: [odroid]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'armv7l'\n\nThe error appears to have been in '/home/jdmarble/.ansible/roles/xanmanning.k3s/tasks/download-k3s.yml': line 3, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Ensure target host architecture information is set as a fact\n  ^ here\n"}
```

I worked around the problem by overriding `k3s_arch_lookup` in my play book:

```yaml
---

- hosts: all
  roles:
    - role: xanmanning.k3s
      k3s_arch_lookup:
        armv7l:
          arch: arm
          suffix: "-armhf"
```
2019-04-24 16:15:39 -07:00
Xan Manning 27083e1d5b Bugfix: Checking of hash fixed for k3s v0.3.0 release 2019-04-06 12:04:09 +01:00
139 changed files with 5532 additions and 416 deletions

.ansible-lint (new file)

@@ -0,0 +1,5 @@
---
skip_list:
- role-name
- name[template]

.devcontainer/Dockerfile (new file)

@@ -0,0 +1,26 @@
ARG VARIANT=focal
FROM ubuntu:${VARIANT}
COPY molecule/requirements.txt /tmp/molecule/requirements.txt
COPY requirements.txt /tmp/requirements.txt
RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
&& apt-get -y install curl git python3-dev python3-pip \
python3-venv shellcheck sudo unzip docker.io jq \
&& curl -L \
"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" \
-o /usr/bin/kubectl \
&& chmod +x /usr/bin/kubectl \
&& python3 -m pip install pip --upgrade \
&& python3 -m pip install -r /tmp/molecule/requirements.txt
RUN useradd -s /bin/bash -m vscode && \
usermod -aG docker vscode && \
echo 'vscode ALL=(ALL:ALL) NOPASSWD: ALL' > /etc/sudoers.d/vscode && \
echo 'source /etc/bash_completion.d/git-prompt' >> /home/vscode/.bashrc && \
echo 'sudo chown vscode /var/run/docker-host.sock' >> /home/vscode/.bashrc && \
echo 'export PS1="${PS1:0:-1}\[\033[38;5;196m\]$(__git_ps1)\[$(tput sgr0)\] "' >> /home/vscode/.bashrc
RUN ln -s /var/run/docker-host.sock /var/run/docker.sock
USER vscode


@ -0,0 +1,28 @@
{
"name": "Ubuntu",
"build": {
"context": "..",
"dockerfile": "Dockerfile",
"args": { "VARIANT": "focal" }
},
"settings": {
"terminal.integrated.profiles.linux": {
"bash (login)": {
"path": "/bin/bash",
"args": ["-l"]
}
}
},
"extensions": [
"ms-azuretools.vscode-docker",
"redhat.vscode-yaml"
],
"mounts": [
"source=/var/run/docker.sock,target=/var/run/docker-host.sock,type=bind"
],
"remoteUser": "vscode"
}

55
.github/ISSUE_TEMPLATE/bug_report.md vendored Normal file

@ -0,0 +1,55 @@
---
name: Bug report
about: Create a report to help us improve
---
<!-- Please first verify that your issue is not already reported on GitHub -->
<!-- Complete *all* sections as described. -->
### Summary
<!-- Explain the problem briefly below -->
### Issue Type
- Bug Report
### Controller Environment and Configuration
<!-- Please re-run your playbook with: `-e "pyratlabs_issue_controller_dump=true"` -->
<!-- Example: `ansible-playbook -e "pyratlabs_issue_controller_dump=true" /path/to/playbook.yml` -->
<!-- Then please copy-and-paste the contents (or attach) to this issue. -->
<!-- Please also include information about the version of the role you are using -->
```text
```
### Steps to Reproduce
<!-- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!-- Paste example playbooks or commands between quotes below -->
```yaml
```
### Expected Result
<!-- Describe what you expected to happen when running the steps above -->
```text
```
### Actual Result
<!-- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!-- Paste verbatim command output between quotes -->
```text
```

3
.github/ISSUE_TEMPLATE/config.yml vendored Normal file

@ -0,0 +1,3 @@
---
blank_issues_enabled: true


@ -0,0 +1,33 @@
---
name: Feature request
about: Suggest an idea for this project
---
<!-- Please first verify that your feature was not already discussed on GitHub -->
<!-- Complete *all* sections as described, this form is processed automatically -->
### Summary
<!-- Describe the new feature/improvement briefly below -->
### Issue Type
- Feature Request
### User Story
<!-- If you can, please provide a user story, if you don't know what this is don't worry, it will be refined by PyratLabs. -->
<!-- Describe who would use it, why it is needed and the benefit -->
_As a_ <!-- (Insert Persona) --> \
_I want to_ <!-- (Insert Action) --> \
_So that_ <!-- (Insert Benefit) -->
### Additional Information
<!-- Please include any relevant documentation, URLs, etc. -->
<!-- Paste example playbooks or commands between quotes below -->
```yaml
```

37
.github/PULL_REQUEST_TEMPLATE.md vendored Normal file

@ -0,0 +1,37 @@
## TITLE
### Summary
<!-- Describe the change below, including rationale and design decisions -->
<!-- HINT: Include "Fixes #nnn" if you are fixing an existing issue -->
### Issue type
<!-- Pick one below and delete the rest -->
- Bugfix
- Documentation
- Feature
### Test instructions
<!-- Please provide instructions for testing this PR -->
### Acceptance Criteria
<!-- Please list criteria required to ensure this change has been sufficiently reviewed. -->
<!-- Example ticklist:
- [ ] GitHub Actions Build passes.
- [ ] Documentation updated.
-->
### Additional Information
<!-- Include additional information to help people understand the change here -->
<!-- Paste verbatim command output below, e.g. before and after your change -->
```text
```

18
.github/stale.yml vendored Normal file

@ -0,0 +1,18 @@
---
# Number of days of inactivity before an issue becomes stale
daysUntilStale: 60
# Number of days of inactivity before a stale issue is closed
daysUntilClose: 7
# Issues with these labels will never be considered stale
exemptLabels:
- pinned
- security
# Label to use when marking an issue as stale
staleLabel: wontfix
# Comment to post when marking an issue as stale. Set to `false` to disable
markComment: >
This issue has been automatically marked as stale because it has not had
recent activity. It will be closed if no further activity occurs. Thank you
for your contributions.
# Comment to post when closing a stale issue. Set to `false` to disable
closeComment: false

98
.github/workflows/ci.yml vendored Normal file

@ -0,0 +1,98 @@
---
name: CI
'on':
pull_request:
push:
branches:
- master
- main
- v1_release
schedule:
- cron: "0 1 1 * *"
defaults:
run:
working-directory: "xanmanning.k3s"
jobs:
ansible-lint:
name: Ansible Lint
runs-on: ubuntu-latest
steps:
- name: Checkout codebase
uses: actions/checkout@v2
with:
path: "xanmanning.k3s"
- name: Set up Python 3
uses: actions/setup-python@v2
with:
python-version: "3.x"
- name: Install test dependencies
run: pip3 install -r molecule/lint-requirements.txt
- name: Run yamllint
run: yamllint -s .
- name: Run ansible-lint
run: ansible-lint --exclude molecule/ --exclude meta/
molecule:
name: Molecule
runs-on: ubuntu-20.04
strategy:
fail-fast: false
matrix:
include:
- distro: geerlingguy/docker-debian11-ansible:latest
scenario: default
prebuilt: 'true'
- distro: geerlingguy/docker-ubuntu2204-ansible:latest
scenario: default
prebuilt: 'true'
- distro: geerlingguy/docker-amazonlinux2-ansible:latest
scenario: default
prebuilt: 'true'
- distro: geerlingguy/docker-ubuntu2004-ansible:latest
scenario: default
prebuilt: 'true'
- distro: geerlingguy/docker-fedora35-ansible:latest
scenario: nodeploy
prebuilt: 'true'
- distro: geerlingguy/docker-fedora34-ansible:latest
scenario: highavailabilitydb
prebuilt: 'true'
- distro: geerlingguy/docker-fedora33-ansible:latest
scenario: autodeploy
- distro: xanmanning/docker-alpine-ansible:3.16
scenario: highavailabilityetcd
prebuilt: 'false'
- distro: geerlingguy/docker-rockylinux9-ansible:latest
scenario: highavailabilityetcd
prebuilt: 'true'
steps:
- name: Checkout codebase
uses: actions/checkout@v2
with:
path: "xanmanning.k3s"
- name: Set up Python 3
uses: actions/setup-python@v2
with:
python-version: "3.x"
- name: Install test dependencies
run: pip3 install -r molecule/requirements.txt
- name: Run Molecule tests
run: molecule test --scenario-name "${{ matrix.scenario }}"
# continue-on-error: true
env:
PY_COLORS: '1'
ANSIBLE_FORCE_COLOR: '1'
MOLECULE_DISTRO: ${{ matrix.distro }}
MOLECULE_PREBUILT: ${{ matrix.prebuilt }}
MOLECULE_DOCKER_COMMAND: ${{ matrix.command }}

32
.github/workflows/release.yml vendored Normal file

@ -0,0 +1,32 @@
---
name: Release
'on':
push:
tags:
- '*'
defaults:
run:
working-directory: "xanmanning.k3s"
jobs:
release:
name: Release
runs-on: ubuntu-latest
steps:
- name: Checkout codebase
uses: actions/checkout@v2
with:
path: "xanmanning.k3s"
- name: Set up Python 3
uses: actions/setup-python@v2
with:
python-version: "3.x"
- name: Install Ansible
run: pip3 install -r requirements.txt
- name: Trigger a new import on Galaxy
run: ansible-galaxy role import --api-key ${{ secrets.GALAXY_API_KEY }} $(echo ${{ github.repository }} | cut -d/ -f1) $(echo ${{ github.repository }} | cut -d/ -f2)

9
.gitignore vendored

@ -4,5 +4,10 @@ VAULT_PASSWORD
VAULT_PASS
.vault_pass
.vault_pass.asc
tests/fetch
tests/ubuntu-*.log
vagramt/fetch
vagrant/ubuntu-*.log
__pycache__
ansible.cfg
pyratlabs-issue-dump.txt
.cache
/.idea/

33
.yamllint Normal file

@ -0,0 +1,33 @@
---
# Based on ansible-lint config
extends: default
rules:
braces:
max-spaces-inside: 1
level: error
brackets:
max-spaces-inside: 1
level: error
colons:
max-spaces-after: -1
level: error
commas:
max-spaces-after: -1
level: error
comments: disable
comments-indentation: disable
document-start: disable
empty-lines:
max: 3
level: error
hyphens:
level: error
indentation: disable
key-duplicates: enable
line-length: disable
new-line-at-end-of-file: disable
new-lines:
type: unix
trailing-spaces: disable
truthy: disable

640
CHANGELOG.md Normal file

@ -0,0 +1,640 @@
# Change Log
<!--
## DATE, vx.x.x
### Notable changes
### Breaking changes
### Known issues
### Contributors
---
-->
## 2023-05-17, v3.4.1
### Notable changes
- fix: resolve ansible lint warnings and fix molecule tests in github actions
### Contributors
- [dbrennand](https://github.com/dbrennand)
---
## 2023-03-11, v3.4.0
### Notable changes
- refactor: add `until: 1.23.15` to `secrets-encryption` from `k3s_experimental_config` as it is no longer experimental. Fixes #200.
- docs(fix): typo in `CONTRIBUTING.md`
### Contributors
- [dbrennand](https://github.com/dbrennand)
---
## 2022-11-15, v3.3.1
### Notable changes
- fix: length indentation in registry.yaml
---
## 2022-09-11, v3.3.0
### Notable changes
- fix: `no_log` removed from `ansible.builtin.uri` tasks
- feat: `k3s_skip_post_checks` option added
---
## 2022-06-17, v3.2.0
### Notable changes
- feature: added support for alpine #182
- fix: `k3s_control_token` not working #187
## 2022-05-02, v3.1.2
### Notable changes
- fix: molecule tests
---
## 2022-02-18, v3.1.1
### Notable changes
- fix: support nftables for debian 11
### Contributors
- [eaglesemanation](https://github.com/eaglesemanation)
---
## 2022-01-30, v3.1.0
### Notable changes
- feat: use basename of url for items in `k3s_server_manifests_urls` and
`k3s_server_pod_manifests_urls` if filename is not provided #177
### Contributors
- [kossmac](https://github.com/kossmac)
---
## 2022-01-06, v3.0.1
### Notable changes
- fix: adding become to pre checks packages #173
### Contributors
- [xlejo](https://github.com/xlejo)
---
## 2022-01-02, v3.0.0
### Notable changes
- feat: Flattened task filesystem
- feat: Moved some tasks into `vars/` as templated variables
- feat: Airgap installation method added #165
### Breaking changes
- Minimum `python` version on targets is 3.6
- `k3s_become_for_all` renamed to `k3s_become`
- `k3s_become_for_*` removed.
### Contributors
- [crutonjohn](https://github.com/crutonjohn)
---
## 2021-12-23, v2.12.1
### Notable changes
- Fix typo in systemd unit file
### Contributors
- [andrewchen5678](https://github.com/andrewchen5678)
---
## 2021-12-20, v2.12.0
### Notable changes
- Fix RockyLinux HA etcd tests
- add Debian 11 test
- Fix Snapshotter in Molecule tests
- Added missing documentation for `k3s_api_url`
- Added option to change K3s updates API url
- Custom environment variables in systemd unit files
- Debian Bullseye support
- Fix HA etcd cluster startup
- Fix rootless for Debian
### Contributors
- [janar153](https://github.com/janar153)
---
## 2021-10-10, v2.11.1
### Notable changes
- docs: fixed references to `write-kubeconfig-mode` to set correct permissions #157
- fix: Flag --delete-local-data has been deprecated #159
---
## 2021-09-08, v2.11.0
### Notable changes
- docs: example of IPv6 configuration
- feat: checks for s3 backup configuration
- feat: implement config.yaml.d
### Contributors
- [onedr0p](https://github.com/onedr0p)
---
## 2021-08-18, v2.10.6
### Notable changes
- Fix: Define registration address from node-ip #142
---
## 2021-08-14, v2.10.5
### Notable changes
- Add advertised address #139
### Contributors
- [@abelfodil](https://github.com/abelfodil)
---
## 2021-07-24, v2.10.4
### Notable changes
- Updated systemd template to use token when joining a cluster #138
---
## 2021-07-21, v2.10.3
### Notable changes
- fix: typo #133
- fix: restore clustering and avoid failure with jinja2_native=true #135
- fix: do ignore etcd member count when uninstalling #136
### Contributors
- [@Yaro](https://github.com/Yajo)
---
## 2021-06-22, v2.10.2
### Notable changes
- Role is now tested against RockyLinux
---
## 2021-05-30, v2.10.1
### Notable changes
- Case insensitive control node lookup #126
### Contributors
- [@mrobinsn](https://github.com/mrobinsn)
---
## 2021-05-27, v2.10.0
### Notable changes
- Only deploy templates on primary controller #119
- Allow control plane static pods #120
- Add support for specifying URLs in templates #124
### Contributors
- [@bjw-s](https://github.com/bjw-s)
- [@onedr0p](https://github.com/onedr0p)
---
## 2021-05-14, v2.9.1
<!-- Today was a better day... <3 -->
### Notable changes
- Documentation, remove references to deprecated configuration techniques #115
- Bugfix: Templating issue.
---
## 2021-05-13, v2.9.0
<!-- a shit day... -->
### Notable changes
- Feature: Support k3s private registry configuration #114
### Contributors
- [@anjia0532](https://github.com/anjia0532)
---
## 2021-05-06, v2.8.5
### Notable changes
- Bugfix: Unmount CSI plugin folder to avoid data loss on uninstall #113
### Contributors
- [@angelnu](https://github.com/angelnu)
---
## 2021-05-01, v2.8.4
### Notable changes
- Fixed issue with draining nodes #112
### Contributors
- [@anjia0532](https://github.com/anjia0532)
---
## 2021-04-18, v2.8.3
### Notable changes
- Typo fix in README.md #110
- Fixed check mode for cgroup test #111
- Added check mode into molecule test sequence
- `inventory.yml` is now `blockinfile`
### Contributors
- [@bdronneau](https://github.com/bdronneau)
---
## 2021-04-10, v2.8.2
### Notable changes
- #105 - Added Ansible v2.9.16 support
- #102 - Pre-check for cgroup status
### Known issues
- As per README.md, you require `ansible` >= 2.9.16
or `ansible-base` >= 2.10.4. See [#105(comment)](https://github.com/PyratLabs/ansible-role-k3s/issues/105#issuecomment-817182233)
---
## 2021-03-22, v2.8.1
### Notable changes
- #100 - Fixed typo in README.md
### Contributors
- [@mbwmbw1337](https://github.com/mbwmbw1337)
---
## 2021-03-14, v2.8.0
Happy π day!
### Notable changes
- Updated GitHub Actions, resolved linting errors.
- Renamed `k3s_control_node_address` -> `k3s_registration_address`
### Breaking changes
- A task has been added to rename `k3s_control_node_address` to
`k3s_registration_address` for any users still using this variable name,
however this might still break something.
---
## 2021-02-28, v2.7.1
### Notable changes
- Bugfix, missing become on cluster token check.
---
## 2021-02-27, v2.7.0
### Notable changes
- Cluster init checks added.
- Tidy up of tasks, failed checks.
- Possible fix for #93 - force draining of nodes added.
---
## 2021-02-27, v2.6.1
### Notable changes
- Bugfix: Templating error for single control plane nodes using Etcd.
- Bugfix: a number of typos fixed.
---
## 2021-02-16, v2.6.0
### Notable changes
- Tidy up of `when` params and `assert` tasks to be more readable.
- Added feature to tweak K3S service dependencies.
- Updated documentation:
- Node labels and component arguments
- systemd config
- Use alternate CNI (Calico example)
---
## 2021-01-31, v2.5.3
### Notable changes
- Bugfix, missing update to minimum ansible version var #91.
---
## 2021-01-30, v2.5.2
### Notable changes
- Bugfix, missing `k3s_start_on_boot` to control `systemd.enabled` added.
---
## 2021-01-30, v2.5.1
### Notable changes
- Added uninstall task to remove hard-linked files #88
- Fixed missing become for `systemd` operations tasks. #89
- Added `k3s_start_on_boot` to control `systemd.enabled`.
---
## 2021-01-24, v2.5.0
### Notable changes
- Added support for Ansible >= 2.9.17 #83
---
## 2021-01-23, v2.4.3
### Notable changes
- Bugfix: Installation hangs on "Check that all nodes to be ready" #84
---
## 2021-01-10, v2.4.2
### Notable changes
- Bugfix: Docker check still failing on "false"
---
## 2021-01-02, v2.4.1
### Notable changes
- Fixed issue with armv6l (Raspberry Pi Zero W)
- Added path for private repositories config to directory creation list.
---
## 2020-12-21, v2.4.0
### Notable changes
- `k3s_config_dir` derived from `k3s_config_file`, reused throughout the role
to allow for easy removal of "Rancher" references #73.
- `k3s_token_location` has moved to be in `k3s_config_dir`.
- Tasks for creating directories now looped to capture configuration from
`k3s_server` and `k3s_agent` and ensure directories exist before k3s
starts, see #75.
- Server token collected directly from token file, not symlinked file
(node-token).
- `k3s_runtime_config` defined in `vars/` for validation and overwritten in
tasks for control plane and workers.
- Removed unused references to GitHub API.
- `set_fact` and `command` tasks now use FQCN.
- Check of `ansible_version` in environment check.
- Introduction of target environment checks for #72.
- Fixed bug with non-default listening port not being passed to workers.
- Added ability to put documentation links into validation checks #76.
- Removed the requirement for `jmespath` on the Ansible controller.
- Fixed bug with issue data collection tasks.
### Breaking changes
- Ansible minimum version is hard set to v2.10.4
- `k3s_token_location` has moved to be in `k3s_config_dir` so re-running the
role will create a duplicate file here.
---
## 2020-12-19, v2.3.0
### Notable changes
- Updated k3s uninstall scripts #74
- Started moving Rancher references to `vars/` as per #73
---
## 2020-12-19, v2.2.2
### Notable changes
- Fixed typos in documentation.
- Molecule testing pinned to v3.1 due to tests failing.
---
## 2020-12-16, v2.2.1
### Notable changes
- Re-working documentation
- Updated GitHub link, org changed from Rancher to k3s-io.
- Replace deprecated `play_hosts` variable.
### Breaking changes
- Moving git branch from `master` to `main`.
---
## 2020-12-12, v2.2.0
### Notable changes
- Use of FQCNs enforced, minimum Ansible version now v2.10
- `k3s_etcd_datastore` no longer experimental after K3s version v1.19.5+k3s1
- Docker marked as deprecated for K3s > v1.20.0+k3s1
### Breaking changes
- Use of FQCNs enforced, minimum Ansible version now v2.10
- Use of Docker requires `k3s_use_unsupported_config` to be `true` after
v1.20.0+k3s1
---
## 2020-12-05, v2.1.1
### Notable changes
- Fixed link to documentation.
---
## 2020-12-05, v2.1.0
### Notable changes
- Deprecated configuration check built into validation steps.
- Removed duplicated tasks for single node cluster.
- Added documentation providing quickstart examples and common operations.
- Fixed data-dir configuration.
- Some tweaks to rootless.
- Fix draining and removing of nodes.
### Breaking changes
- `k3s_token_location` now points to a file location, not a directory.
- `k3s_systemd_unit_directory` renamed to `k3s_systemd_unit_dir`
- Removed `k3s_node_data_dir` as this is now configured with `data-dir` in
`k3s_server` and/or `k3s_agent`.
### Known issues
- Rootless is still broken, this is still not supported as a method for
running k3s using this role.
---
## 2020-11-30, v2.0.2
### Notable changes
- Updated issue template and information collection tasks.
---
## 2020-11-30, v2.0.1
### Notable changes
- Fixed a number of typos in the README.md
- Updated the meta/main.yml to put quotes around minimum Ansible version.
---
## 2020-11-29, v2.0.0
### Notable changes
- #64 - Initial release of v2.0.0 of
[ansible-role-k3s](https://github.com/PyratLabs/ansible-role-k3s).
- Minimum supported k3s version now: v1.19.1+k3s1
- Minimum supported Ansible version now: v2.10.0
- #62 - Remove all references to the word "master".
- #53 - Move to file-based configuration.
- Refactored to avoid duplication in code and make contribution easier.
- Validation checks moved to using variables defined in `vars/`
### Breaking changes
#### File based configuration
Issue #53
With the release of v1.19.1+k3s1, this role has moved to file-based
configuration of k3s. This requires manual translation of v1 configuration
variables into configuration file format.
Please see: https://rancher.com/docs/k3s/latest/en/installation/install-options/#configuration-file
#### Minimum supported k3s version
As this role now relies on file-based configuration, the v2.x release of this
role will only support v1.19+ of k3s. If you are not in a position to update
k3s you will need to continue using the v1.x release of this role, which will
be supported until March 2021<!-- 1 year after k8s v1.18 release -->.
#### Minimum supported ansible version
This role now only supports Ansible v2.10+; this is because it has moved to
using FQCNs, with the exception of `set_fact` tasks which have
[been broken](https://github.com/ansible/ansible/issues/72319) and the fixes
have [not yet been backported to v2.10](https://github.com/ansible/ansible/pull/71824).
The use of FQCNs allows for custom modules to be introduced to override task
behavior. If this role requires a custom Ansible module to be introduced then
this can be added as a dependency and targeted specifically by using the
correct FQCN.

46
CONTRIBUTING.md Normal file

@ -0,0 +1,46 @@
# Contribution Guidelines
Thank you for taking time to contribute to this Ansible role.
There are a number of ways that you can contribute to this project, not all of
them requiring you to be able to write code. Below is a list of suggested
contributions welcomed by the community:
- Submit bug reports in GitHub issues
- Comment on bug reports with further information or suggestions
- Suggest new features
- Create Pull Requests fixing bugs or adding new features
- Update and improve documentation
- Review the role on Ansible Galaxy
- Write a blog post reviewing the role
- Sponsor me.
## Issue guidelines
Issues are the best way to capture a bug in the role or to suggest new features.
This is due to issues being visible to the entire community and allows for
other contributors to pick up the work, so is a better communication medium
than email.
A good bug issue will include as much information as possible about the
environment Ansible is running in, as well as the role configuration. If there
are any relevant pieces of documentation from upstream projects, this should
be included.
New feature requests are also best captured in issues; these should include
as much relevant information as possible and, if possible, a "user story"
(don't worry if you don't know how to write one). If there are any relevant
pieces of documentation from upstream projects, this should be included.
## Pull request guidelines
PRs should only contain 1 issue fix at a time to limit the scope of testing
required. The smaller the scope of the PR, the easier it is for it to be
reviewed.
PRs should include the keyword `Fixes` before an issue number if the PR will
completely close the issue. This is because automation will close the issue
once the PR is merged.
PRs are preferably merged as a single commit, so rebasing before
pushing is recommended; however, this isn't a strict rule.


@ -1,22 +1,25 @@
Copyright 2019 Xan Manning
BSD 3-Clause License
Copyright (c) 2020, Xan Manning
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its contributors
may be used to endorse or promote products derived from this software without
specific prior written permission.
3. Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
@ -24,3 +27,4 @@ SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

407
README.md

@ -1,83 +1,412 @@
# Ansible Role: k3s
# Ansible Role: k3s (v3.x)
Ansible role for installing [Racher Labs k3s](https://k3s.io/) ("Lightweight
Ansible role for installing [K3S](https://k3s.io/) ("Lightweight
Kubernetes") as either a standalone server or cluster.
[![CI](https://github.com/PyratLabs/ansible-role-k3s/workflows/CI/badge.svg?event=push)](https://github.com/PyratLabs/ansible-role-k3s/actions?query=workflow%3ACI)
## Help Wanted!
Hi! :wave: [@xanmanning](https://github.com/xanmanning) is looking for a new
maintainer to work on this Ansible role. This is because I don't have as much
free time any more and I no longer write Ansible regularly as part of my day
job. If you're interested, get in touch.
## Release notes
Please see [Releases](https://github.com/PyratLabs/ansible-role-k3s/releases)
and [CHANGELOG.md](CHANGELOG.md).
## Requirements
This role has been tested on Ansible 2.6.0+ against the following Linux Distributions:
The host you're running Ansible from requires the following Python dependencies:
- CentOS 7
- Debian 9
- Ubuntu 18.04 LTS
- `python >= 3.6.0` - [See Notes below](#important-note-about-python).
- `ansible >= 2.9.16` or `ansible-base >= 2.10.4`
## Disclaimer
You can install dependencies using the requirements.txt file in this repository:
`pip3 install -r requirements.txt`.
:warning: Not suitable for production use.
This role has been tested against the following Linux Distributions:
Whilst Rancher Labs are awesome, k3s is a fairly new project and not yet a v1.0
release so extreme caution and operational rigor is recommended before using
this role for any serious development.
- Alpine Linux
- Amazon Linux 2
- Archlinux
- CentOS 8
- Debian 11
- Fedora 31
- Fedora 32
- Fedora 33
- openSUSE Leap 15
- RockyLinux 8
- Ubuntu 20.04 LTS
:warning: The v3 releases of this role only supports `k3s >= v1.19`, for
`k3s < v1.19` please consider updating or use the v1.x releases of this role.
Before upgrading, see [CHANGELOG](CHANGELOG.md) for notifications of breaking
changes.
## Role Variables
### Group Variables
Since K3s [v1.19.1+k3s1](https://github.com/k3s-io/k3s/releases/tag/v1.19.1%2Bk3s1)
you can now configure K3s using a
[configuration file](https://rancher.com/docs/k3s/latest/en/installation/install-options/#configuration-file)
rather than environment variables or command line arguments. The v2 release of
this role has moved to the configuration file method rather than populating a
systemd unit file with command-line arguments. There may be exceptions that are
defined in [Global/Cluster Variables](#globalcluster-variables), however you will
mostly be configuring k3s by configuration files using the `k3s_server` and
`k3s_agent` variables.
See "_Server (Control Plane) Configuration_" and "_Agent (Worker) Configuraion_"
below.
### Global/Cluster Variables
Below are variables that are set against all of the play hosts for environment
consistency.
consistency. These are generally cluster-level configuration.
| Variable | Description | Default Value |
|--------------------------------|--------------------------------------------------------------------------|--------------------------------|
| `k3s_release_version` | Use a specific version of k3s, eg. `v0.2.0`. Specify `false` for latest. | `false` |
| `k3s_github_url` | Set the GitHub URL to install k3s from. | https://github.com/rancher/k3s |
| `k3s_install_dir` | Installation directory for k3s. | `/usr/local/bin` |
| `k3s_control_workers` | Are control hosts also workers? | `true` |
| `k3s_ensure_docker_installed ` | Use Docker rather than Containerd? | `false` |
| Variable | Description | Default Value |
|--------------------------------------|--------------------------------------------------------------------------------------------|--------------------------------|
| `k3s_state` | State of k3s: installed, started, stopped, downloaded, uninstalled, validated. | installed |
| `k3s_release_version` | Use a specific version of k3s, eg. `v0.2.0`. Specify `false` for stable. | `false` |
| `k3s_airgap` | Boolean to enable air-gapped installations | `false` |
| `k3s_config_file` | Location of the k3s configuration file. | `/etc/rancher/k3s/config.yaml` |
| `k3s_build_cluster` | When multiple play hosts are available, attempt to cluster. Read notes below. | `true` |
| `k3s_registration_address` | Fixed registration address for nodes. IP or FQDN. | NULL |
| `k3s_github_url` | Set the GitHub URL to install k3s from. | https://github.com/k3s-io/k3s |
| `k3s_api_url` | URL for K3S updates API. | https://update.k3s.io |
| `k3s_install_dir` | Installation directory for k3s. | `/usr/local/bin` |
| `k3s_install_hard_links` | Install using hard links rather than symbolic links. | `false` |
| `k3s_server_config_yaml_d_files` | A flat list of templates to supplement the `k3s_server` configuration. | [] |
| `k3s_agent_config_yaml_d_files` | A flat list of templates to supplement the `k3s_agent` configuration. | [] |
| `k3s_server_manifests_urls` | A list of URLs to deploy on the primary control plane. Read notes below. | [] |
| `k3s_server_manifests_templates` | A flat list of templates to deploy on the primary control plane. | [] |
| `k3s_server_pod_manifests_urls` | A list of URLs for installing static pod manifests on the control plane. Read notes below. | [] |
| `k3s_server_pod_manifests_templates` | A flat list of templates for installing static pod manifests on the control plane. | [] |
| `k3s_use_experimental` | Allow the use of experimental features in k3s. | `false` |
| `k3s_use_unsupported_config` | Allow the use of unsupported configurations in k3s. | `false` |
| `k3s_etcd_datastore` | Enable etcd embedded datastore (read notes below). | `false` |
| `k3s_debug` | Enable debug logging on the k3s service. | `false` |
| `k3s_registries` | Registries configuration file content. | `{ mirrors: {}, configs:{} }` |
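For reference, a minimal `k3s_registries` value mirroring the commented
defaults in this role might look like the below (the registry hostname and
credentials are placeholders):
```yaml
k3s_registries:
  mirrors:
    docker.io:
      endpoint:
        - "https://mycustomreg.com:5000"
  configs:
    "mycustomreg:5000":
      auth:
        username: xxxxxx  # registry username (placeholder)
        password: xxxxxx  # registry password (placeholder)
```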
### K3S Service Configuration
The below variables change how and when the systemd service unit file for K3S
is run. Use this with caution, please refer to the [systemd documentation](https://www.freedesktop.org/software/systemd/man/systemd.unit.html#%5BUnit%5D%20Section%20Options)
for more information.
| Variable | Description | Default Value |
|------------------------|----------------------------------------------------------------------|---------------|
| `k3s_start_on_boot` | Start k3s on boot. | `true` |
| `k3s_service_requires` | List of required systemd units to k3s service unit. | [] |
| `k3s_service_wants` | List of "wanted" systemd unit to k3s (weaker than "requires"). | []\* |
| `k3s_service_before` | Start k3s before a defined list of systemd units. | [] |
| `k3s_service_after` | Start k3s after a defined list of systemd units. | []\* |
| `k3s_service_env_vars` | Dictionary of environment variables to use within systemd unit file. | {} |
| `k3s_service_env_file` | Location on host of an environment file to include. | `false`\*\* |
\* The systemd unit template **always** specifies `network-online.target` for
`wants` and `after`.
\*\* The file must already exist on the target host; this role will not create
nor manage the file. You can manage this file outside of the role with
pre-tasks in your Ansible playbook.
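As a sketch of how these fit together (the proxy endpoint and file path below
are hypothetical):
```yaml
---
# Environment variables injected into the k3s systemd unit file.
k3s_service_env_vars:
  HTTP_PROXY: "http://proxy.internal:3128"  # hypothetical proxy
  NO_PROXY: "10.42.0.0/16,10.43.0.0/16"
# Environment file to include; it must already exist on the target host,
# as this role will not create or manage it.
k3s_service_env_file: /etc/default/k3s  # hypothetical path
```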
### Group/Host Variables
Below are variables that are set against individual or groups of play hosts.
Typically you'd set these at group level for the control plane or worker nodes.
| Variable | Description | Default Value |
|--------------------|-------------------------------------------------------------------|---------------------------------------------------|
| `k3s_control_node` | Specify if a host (or host group) are part of the control plane. | `false` (role will automatically delegate a node) |
| `k3s_server` | Server (control plane) configuration, see notes below. | `{}` |
| `k3s_agent` | Agent (worker) configuration, see notes below. | `{}` |
#### Server (Control Plane) Configuration
The control plane is configured with the `k3s_server` dict variable. Please
refer to the below documentation for configuration options:
https://rancher.com/docs/k3s/latest/en/installation/install-options/server-config/
The `k3s_server` dictionary variable will contain flags from the above
(removing the `--` prefix). Below is an example:
```yaml
k3s_server:
datastore-endpoint: postgres://postgres:verybadpass@database:5432/postgres?sslmode=disable
cluster-cidr: 172.20.0.0/16
flannel-backend: 'none' # This needs to be in quotes
disable:
- traefik
- coredns
```
Alternatively, you can create a .yaml file and read it in to the `k3s_server`
variable as per the below example:
```yaml
k3s_server: "{{ lookup('file', 'path/to/k3s_server.yml') | from_yaml }}"
```
Check out the [Documentation](documentation/README.md) for example
configuration.
#### Agent (Worker) Configuration
Workers are configured with the `k3s_agent` dict variable. Please refer to the
below documentation for configuration options:
https://rancher.com/docs/k3s/latest/en/installation/install-options/agent-config
The `k3s_agent` dictionary variable will contain flags from the above
(removing the `--` prefix). Below is an example:
```yaml
k3s_agent:
with-node-id: true
node-label:
- "foo=bar"
- "hello=world"
```
Alternatively, you can create a .yaml file and read it in to the `k3s_agent`
variable as per the below example:
```yaml
k3s_agent: "{{ lookup('file', 'path/to/k3s_agent.yml') | from_yaml }}"
```
Check out the [Documentation](documentation/README.md) for example
configuration.
### Ansible Controller Configuration Variables
The below variables are used to change the way the role executes in Ansible,
particularly with regards to privilege escalation.
| Variable | Description | Default Value |
|------------------------|----------------------------------------------------------------|---------------|
| `k3s_skip_validation` | Skip all tasks that validate configuration. | `false` |
| `k3s_skip_env_checks` | Skip all tasks that check environment configuration. | `false` |
| `k3s_skip_post_checks` | Skip all tasks that check post execution state. | `false` |
| `k3s_become` | Escalate user privileges for tasks that need root permissions. | `false` |
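As a brief sketch, these are plain booleans set alongside your other role
variables, for example:
```yaml
---
# Escalate privileges on the targets for tasks that need root.
k3s_become: true
# Keep validation checks enabled (the default).
k3s_skip_validation: false
```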
#### Important note about Python
From v3 of this role, Python 3 is required on the target system as well as on
the Ansible controller. This is to ensure consistent behaviour for Ansible
tasks as Python 2 is now EOL.
If target systems have both Python 2 and Python 3 installed, it is most likely
that Python 2 will be selected by default. To ensure Python 3 is used on a
target with both versions of Python, ensure `ansible_python_interpreter` is
set in your inventory. Below is an example inventory:
```yaml
---
k3s_cluster:
hosts:
kube-0:
ansible_user: ansible
ansible_host: 10.10.9.2
ansible_python_interpreter: /usr/bin/python3
kube-1:
ansible_user: ansible
ansible_host: 10.10.9.3
ansible_python_interpreter: /usr/bin/python3
kube-2:
ansible_user: ansible
ansible_host: 10.10.9.4
ansible_python_interpreter: /usr/bin/python3
```
#### Important note about `k3s_release_version`
If you do not set a `k3s_release_version` the latest version of k3s will be
installed. If you are developing against a specific version of k3s you must
ensure this is set in your Ansible configuration, eg:
If you do not set a `k3s_release_version` the latest version from the stable
channel of k3s will be installed. If you are developing against a specific
version of k3s you must ensure this is set in your Ansible configuration, eg:
```yaml
k3s_release_version: v0.2.0
k3s_release_version: v1.19.3+k3s1
```
### Host Variables
It is also possible to install specific K3s "Channels", below are some
examples for `k3s_release_version`:
Below are variables that are set against specific hosts in your inventory.
```yaml
k3s_release_version: false # defaults to 'stable' channel
k3s_release_version: stable # latest 'stable' release
k3s_release_version: testing # latest 'testing' release
k3s_release_version: v1.19 # latest 'v1.19' release
k3s_release_version: v1.19.3+k3s3 # specific release
| Variable | Description | Default Value |
|--------------------|--------------------------------------------------------|---------------|
| `k3s_control_node` | Define the host as a control plane node, (True/False). | `false` |
# Specific commit
# CAUTION - only used for testing - must be 40 characters
k3s_release_version: 48ed47c4a3e420fa71c18b2ec97f13dc0659778b
```
#### Important note about `k3s_control_node`
#### Important note about `k3s_install_hard_links`
Currently only one host can be defined as a control node, if multiple hosts are
set to true the play will fail.
If you are using the [system-upgrade-controller](https://github.com/rancher/system-upgrade-controller)
you will need to use hard links rather than symbolic links as the controller
will not be able to follow symbolic links. This option has been added however
is not enabled by default to avoid breaking existing installations.
If you do not set a host as a control node, the role will automatically delegate
the first play host as a control node.
To enable the use of hard links, ensure `k3s_install_hard_links` is set
to `true`.
```yaml
k3s_install_hard_links: true
```
The result of this can be seen by running the following in `k3s_install_dir`:
`ls -larthi | grep -E 'k3s|ctr|ctl' | grep -vE ".sh$" | sort`
Symbolic Links:
```text
[root@node1 bin]# ls -larthi | grep -E 'k3s|ctr|ctl' | grep -vE ".sh$" | sort
3277823 -rwxr-xr-x 1 root root 52M Jul 25 12:50 k3s-v1.18.4+k3s1
3279565 lrwxrwxrwx 1 root root 31 Jul 25 12:52 k3s -> /usr/local/bin/k3s-v1.18.6+k3s1
3279644 -rwxr-xr-x 1 root root 51M Jul 25 12:52 k3s-v1.18.6+k3s1
3280079 lrwxrwxrwx 1 root root 31 Jul 25 12:52 ctr -> /usr/local/bin/k3s-v1.18.6+k3s1
3280080 lrwxrwxrwx 1 root root 31 Jul 25 12:52 crictl -> /usr/local/bin/k3s-v1.18.6+k3s1
3280081 lrwxrwxrwx 1 root root 31 Jul 25 12:52 kubectl -> /usr/local/bin/k3s-v1.18.6+k3s1
```
Hard Links:
```text
[root@node1 bin]# ls -larthi | grep -E 'k3s|ctr|ctl' | grep -vE ".sh$" | sort
3277823 -rwxr-xr-x 1 root root 52M Jul 25 12:50 k3s-v1.18.4+k3s1
3279644 -rwxr-xr-x 5 root root 51M Jul 25 12:52 crictl
3279644 -rwxr-xr-x 5 root root 51M Jul 25 12:52 ctr
3279644 -rwxr-xr-x 5 root root 51M Jul 25 12:52 k3s
3279644 -rwxr-xr-x 5 root root 51M Jul 25 12:52 k3s-v1.18.6+k3s1
3279644 -rwxr-xr-x 5 root root 51M Jul 25 12:52 kubectl
```
#### Important note about `k3s_build_cluster`
If you set `k3s_build_cluster` to `false`, this role will install each play
host as a standalone node. An example of when you might use this would be
when building a large number of standalone IoT devices running K3s. Below is a
hypothetical situation where we are to deploy 25 Raspberry Pi devices, each a
standalone system and not a cluster of 25 nodes. To do this we'd use a playbook
similar to the below:
```yaml
- hosts: k3s_nodes # eg. 25 RPi's defined in our inventory.
vars:
k3s_build_cluster: false
roles:
- xanmanning.k3s
```
#### Important note about `k3s_control_node` and High Availability (HA)
By default, only one host will be defined as a control node by Ansible. If you
do not set a host as a control node, this role will automatically delegate
the first play host as a control node. This is not suitable for use within
a production workload.
If multiple hosts have `k3s_control_node` set to `true`, you must also set
`datastore-endpoint` in `k3s_server` as the connection string to a MySQL or
PostgreSQL database, or an external Etcd cluster, else the play will fail.
If using TLS, the CA, Certificate and Key need to already be available on
the play hosts.
See: [High Availability with an External DB](https://rancher.com/docs/k3s/latest/en/installation/ha/)
It is also possible, though not supported, to run a single K3s control node
with a `datastore-endpoint` defined. As this is not a typically supported
configuration you will need to set `k3s_use_unsupported_config` to `true`.
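A minimal sketch of that unsupported single-node configuration (the connection
string is a placeholder):
```yaml
---
k3s_use_unsupported_config: true
k3s_server:
  datastore-endpoint: "postgres://postgres:verybadpass@database:5432/postgres?sslmode=disable"
```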
Since K3s v1.19.1 it is possible to use an embedded Etcd as the backend
database, and this is done by setting `k3s_etcd_datastore` to `true`.
The best practice for Etcd is to define at least 3 members to ensure quorum is
established. In addition to this, an odd number of members is recommended to
ensure a majority in the event of a network partition. If you want to use 2
members or an even number of members, please set `k3s_use_unsupported_config`
to `true`.
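As a sketch, group variables for a three-node embedded etcd control plane might
look like the below (`loadbalancer` is a hypothetical registration address):
```yaml
---
# Applied to the three control plane hosts, e.g. via group_vars.
k3s_control_node: true
k3s_etcd_datastore: true
k3s_registration_address: loadbalancer
```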
#### Important note about `k3s_server_manifests_urls` and `k3s_server_pod_manifests_urls`
To deploy server manifests and server pod manifests from URL, you need to
specify a `url` and optionally a `filename` (if none is provided, the basename of the URL is used). Below is an example of how to deploy the
Tigera operator for Calico and kube-vip.
```yaml
---
k3s_server_manifests_urls:
- url: https://docs.projectcalico.org/archive/v3.19/manifests/tigera-operator.yaml
filename: tigera-operator.yaml
k3s_server_pod_manifests_urls:
- url: https://raw.githubusercontent.com/kube-vip/kube-vip/main/example/deploy/0.1.4.yaml
filename: kube-vip.yaml
```
#### Important note about `k3s_airgap`
When deploying k3s in an air-gapped environment you should provide the `k3s` binary in `./files/`. The binary will not be downloaded from GitHub and will subsequently not be verified using the provided sha256 sum, nor will the role be able to verify the version that you are running. All risks and burdens associated are assumed by the user in this scenario.
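A sketch of the relevant variables, assuming a matching binary has already been
placed in `./files/` (the version shown is illustrative):
```yaml
---
k3s_airgap: true
# Illustrative only; this must match the binary provided in ./files/,
# as the role cannot verify the version in an air-gapped scenario.
k3s_release_version: v1.19.3+k3s1
```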
## Dependencies
No dependencies on other roles.
## Example Playbook
## Example Playbooks
Example playbook:
Example playbook, single control node running `testing` channel k3s:
```yaml
- hosts: k3s_nodes
vars:
k3s_release_version: testing
roles:
- { role: xanmanning.k3s, k3s_release_version: v0.2.0 }
- role: xanmanning.k3s
```
Example playbook, Highly Available with PostgreSQL database running the latest
stable release:
```yaml
- hosts: k3s_nodes
vars:
k3s_registration_address: loadbalancer # Typically a load balancer.
k3s_server:
datastore-endpoint: "postgres://postgres:verybadpass@database:5432/postgres?sslmode=disable"
pre_tasks:
- name: Set each node to be a control node
ansible.builtin.set_fact:
k3s_control_node: true
when: inventory_hostname in ['node2', 'node3']
roles:
- role: xanmanning.k3s
```
## License
BSD
[BSD 3-clause](LICENSE.txt)
## Contributors
Contributions from the community are very welcome, but please read the
[contribution guidelines](CONTRIBUTING.md) before doing so, this will help
make things as streamlined as possible.
Also, please check out the awesome
[list of contributors](https://github.com/PyratLabs/ansible-role-k3s/graphs/contributors).
## Author Information
[Xan Manning](https://xanmanning.co.uk/)
[Xan Manning](https://xan.manning.io/)


@ -1,17 +1,157 @@
---
##
# Global/Cluster Configuration
##
# k3s state, options: installed, started, stopped, restarted, uninstalled, validated
# (default: installed)
k3s_state: installed
# Use a specific k3s version, if set to "false" we will get the latest
# k3s_release_version: v0.1.0
# k3s_release_version: v1.19.3
k3s_release_version: false
# Location of the k3s configuration file
k3s_config_file: "/etc/rancher/k3s/config.yaml"
# Location of the k3s configuration directory
k3s_config_yaml_d_dir: "/etc/rancher/k3s/config.yaml.d"
# When multiple ansible_play_hosts are present, attempt to cluster the nodes.
# Using false will create multiple standalone nodes.
# (default: true)
k3s_build_cluster: true
# URL for GitHub project
k3s_github_url: https://github.com/rancher/k3s
k3s_github_url: https://github.com/k3s-io/k3s
# URL for K3s updates API
k3s_api_url: https://update.k3s.io
# Install K3s in Air Gapped scenarios
k3s_airgap: false
# Skip all tasks that validate configuration
k3s_skip_validation: false
# Skip all tasks that check environment configuration
k3s_skip_env_checks: false
# Skip post-checks
k3s_skip_post_checks: false
# Installation directory for k3s
k3s_install_dir: /usr/local/bin
# Are control hosts also worker nodes?
k3s_control_workers: true
# Install using hard links rather than symbolic links
k3s_install_hard_links: false
# Ensure Docker is installed on nodes
k3s_ensure_docker_installed: false
# A list of templates used for configuring the server.
k3s_server_config_yaml_d_files: []
# A list of templates used for configuring the agent.
k3s_agent_config_yaml_d_files: []
# A list of templates used for pre-configuring the cluster.
k3s_server_manifests_templates: []
# A list of URLs used for pre-configuring the cluster.
k3s_server_manifests_urls: []
# - url: https://some/url/to/manifest.yml
# filename: manifest.yml
# A list of templates used for installing static pod manifests on the control plane.
k3s_server_pod_manifests_templates: []
# A list of URLs used for installing static pod manifests on the control plane.
k3s_server_pod_manifests_urls: []
# - url: https://some/url/to/manifest.yml
# filename: manifest.yml
# Use experimental features in k3s?
k3s_use_experimental: false
# Allow for unsupported configurations in k3s?
k3s_use_unsupported_config: false
# Enable etcd embedded datastore
k3s_etcd_datastore: false
##
# Systemd config
##
# Start k3s on system boot
k3s_start_on_boot: true
# List of required systemd units to k3s service unit.
k3s_service_requires: []
# List of "wanted" systemd unit to k3s (weaker than "requires").
k3s_service_wants: []
# Start k3s before a defined list of systemd units.
k3s_service_before: []
# Start k3s after a defined list of systemd units.
k3s_service_after: []
# Dictionary of environment variables to use within systemd unit file
# Some examples below
k3s_service_env_vars: {}
# PATH: /opt/k3s/bin
# GOGC: 10
# Location on host of an environment file to include. This must already exist on
# the target as this role will not populate this file.
k3s_service_env_file: false
##
# Server Configuration
##
k3s_server: {}
# k3s_server:
# listen-port: 6443
##
# Agent Configuration
##
k3s_agent: {}
# k3s_agent:
# node-label:
# - "foo=bar"
# - "bish=bosh"
##
# Ansible Controller configuration
##
# Use become privileges?
k3s_become: false
# Private registry configuration.
# Rancher k3s documentation: https://rancher.com/docs/k3s/latest/en/installation/private-registry/
k3s_registries:
mirrors:
# docker.io:
# endpoint:
# - "https://mycustomreg.com:5000"
configs:
# "mycustomreg:5000":
# auth:
# # this is the registry username
# username: xxxxxx
# # this is the registry password
# password: xxxxxx
# tls:
# # path to the cert file used in the registry
# cert_file:
# # path to the key file used in the registry
# key_file:
# # path to the ca file used in the registry
# ca_file:

44
documentation/README.md Normal file

@ -0,0 +1,44 @@
# ansible-role-k3s
This document describes a number of ways of consuming this Ansible role for use
in your own k3s deployments. It cannot cover every use case, but it provides
some common example configurations.
## Requirements
Before you start you will need an Ansible controller. This can either be your
workstation, or a dedicated system that you have access to. The instructions
in this documentation assume you are using `ansible` CLI, there are no
instructions available for Ansible Tower at this time.
Follow the below guide to get Ansible installed.
https://docs.ansible.com/ansible/latest/installation_guide/index.html
## Quickstart
Below are quickstart examples for a single node k3s server, a k3s cluster
with a single control node, and an HA k3s cluster. These represent the bare
minimum configuration.
- [Single node k3s](quickstart-single-node.md)
- [Simple k3s cluster](quickstart-cluster.md)
- [HA k3s cluster using embedded etcd](quickstart-ha-cluster.md)
## Example configurations and operations
### Configuration
- [Setting up 2-node HA control plane with external datastore](configuration/2-node-ha-ext-datastore.md)
- [Provision multiple standalone k3s nodes](configuration/multiple-standalone-k3s-nodes.md)
- [Set node labels and component arguments](configuration/node-labels-and-component-args.md)
- [Use an alternate CNI](configuration/use-an-alternate-cni.md)
- [IPv4/IPv6 Dual-Stack config](configuration/ipv4-ipv6-dual-stack.md)
- [Start K3S after another service](configuration/systemd-config.md)
### Operations
- [Stop/Start a cluster](operations/stop-start-cluster.md)
- [Updating k3s](operations/updating-k3s.md)
- [Extending a cluster](operations/extending-a-cluster.md)
- [Shrinking a cluster](operations/shrinking-a-cluster.md)


@ -0,0 +1,79 @@
# 2 Node HA Control Plane with external database
For this configuration we are deploying a highly available control plane
composed of two control nodes. This can be achieved with embedded etcd;
however, etcd ideally has an odd number of nodes.
The example below will use an external PostgreSQL datastore to store the
cluster state information.
Main guide: https://rancher.com/docs/k3s/latest/en/installation/ha/
## Architecture
```text
+-------------------+
| Load Balancer/VIP |
+---------+---------+
|
|
|
|
+------------+ | +------------+
| | | | |
+--------+ control-01 +<-----+----->+ control-02 |
| | | | |
| +-----+------+ +------+-----+
| | |
| +-------------+-------------+
| | | |
| +------v----+ +-----v-----+ +----v------+
| | | | | | |
| | worker-01 | | worker-02 | | worker-03 |
| | | | | | |
| +-----------+ +-----------+ +-----------+
|
| +-------+ +-------+
| | | | |
+-------------------> db-01 +--+ db-02 |
| | | |
+-------+ +-------+
```
### Required Components
- Load balancer
- 2 control plane nodes
- 1 or more worker nodes
- PostgreSQL Database (replicated, or Linux HA Cluster).
## Configuration
For your control nodes, you will need to point the control plane at the
PostgreSQL datastore endpoint and set `k3s_registration_address` to be the
hostname or IP of your load balancer or VIP.
Below is the example for PostgreSQL; it is also possible to use MySQL or an
Etcd cluster. Consult the guide below for using alternative datastore
endpoints.
https://rancher.com/docs/k3s/latest/en/installation/datastore/#datastore-endpoint-format-and-functionality
```yaml
---
k3s_server:
datastore-endpoint: postgres://postgres:verybadpass@database:5432/postgres?sslmode=disable
node-taint:
- "k3s-controlplane=true:NoExecute"
```
Your worker nodes need to know how to connect to the control plane; this is
done by setting `k3s_registration_address` to the hostname or IP address of
the load balancer.
```yaml
---
k3s_registration_address: control.examplek3s.com
```


@ -0,0 +1,21 @@
# IPv4 and IPv6 Dual-stack config
If you need to run your K3S cluster with both IPv4 and IPv6 address ranges
you will need to configure the `k3s_server.cluster-cidr` and
`k3s_server.service-cidr` values specifying both ranges.
:hand: If you are using `k3s<1.23` you will need to use a different CNI, as
dual-stack support is not available in Flannel.
Below is a simple example:
```yaml
---
k3s_server:
# Using Calico on k3s<1.23 so Flannel needs to be disabled.
flannel-backend: 'none'
# Format: ipv4/cidr,ipv6/cidr
cluster-cidr: 10.42.0.0/16,fc00:a0::/64
service-cidr: 10.43.0.0/16,fc00:a1::/64
```


@ -0,0 +1,71 @@
# Multiple standalone K3s nodes
This is an example of when you might want to configure multiple standalone
k3s nodes simultaneously. For this we will assume a hypothetical situation
where we are configuring 200 Raspberry Pis to deploy to our shop floors.
Each Raspberry Pi will be configured as a standalone IoT device hosting an
application that will push data to head office.
## Architecture
```text
+-------------+
| |
| Node-01 +-+
| | |
+--+----------+ +-+
| | |
+--+---------+ +-+
| | |
+--+--------+ |
| | Node-N
+----------+
```
## Configuration
Below is our example inventory of 200 nodes (Truncated):
```yaml
---
k3s_workers:
hosts:
kube-0:
ansible_user: ansible
ansible_host: 10.10.9.2
ansible_python_interpreter: /usr/bin/python3
kube-1:
ansible_user: ansible
ansible_host: 10.10.9.3
ansible_python_interpreter: /usr/bin/python3
kube-2:
ansible_user: ansible
ansible_host: 10.10.9.4
ansible_python_interpreter: /usr/bin/python3
# ..... SNIP .....
kube-199:
ansible_user: ansible
ansible_host: 10.10.9.201
ansible_python_interpreter: /usr/bin/python3
kube-200:
ansible_user: ansible
ansible_host: 10.10.9.202
ansible_python_interpreter: /usr/bin/python3
```
In our `group_vars/` (or as `vars:` in our playbook), we will need to set the
`k3s_build_cluster` variable to `false`. This will stop the role from
attempting to cluster all 200 nodes; instead, it will install k3s on each
node as a standalone server.
```yaml
---
k3s_build_cluster: false
```


@ -0,0 +1,39 @@
# Configure node labels and component arguments
The following command line arguments can be specified multiple times with
`key=value` pairs:
- `--kube-kubelet-arg`
- `--kube-proxy-arg`
- `--kube-apiserver-arg`
- `--kube-scheduler-arg`
- `--kube-controller-manager-arg`
- `--kube-cloud-controller-manager-arg`
- `--node-label`
- `--node-taint`
In the config file, this is done by defining a list of values for each
command line argument, for example:
```yaml
---
k3s_server:
# Set the plugins registry directory
kubelet-arg:
- "volume-plugin-dir=/var/lib/rancher/k3s/agent/kubelet/plugins_registry"
# Set the pod eviction timeout and node monitor grace period
kube-controller-manager-arg:
- "pod-eviction-timeout=2m"
- "node-monitor-grace-period=30s"
# Set API server feature gate
kube-apiserver-arg:
- "feature-gates=RemoveSelfLink=false"
# Labels to apply to a node
node-label:
- "NodeTier=development"
- "NodeLocation=eu-west-2a"
# Stop k3s control plane having workloads scheduled on them
node-taint:
- "k3s-controlplane=true:NoExecute"
```


@ -0,0 +1,19 @@
# systemd config
Below are examples to tweak how and when K3S starts up.
## Wanted service units
In this example, we're going to start K3S after Wireguard. Our example server
has a Wireguard connection `wg0`. We are using "wants" rather than "requires"
as it places a weaker requirement on Wireguard being up. We then want
K3S to start after Wireguard has started.
```yaml
---
k3s_service_wants:
- wg-quick@wg0.service
k3s_service_after:
- wg-quick@wg0.service
```


@ -0,0 +1,63 @@
# Use an alternate CNI
K3S ships with Flannel; however, sometimes you want a different CNI such as
Calico, Canal or Weave Net. To do this you will need to disable Flannel with
`flannel-backend: "none"`, specify a `cluster-cidr` and add your CNI manifests
to the `k3s_server_manifests_templates`.
## Calico example
The below is based on the
[Calico quickstart documentation](https://docs.projectcalico.org/getting-started/kubernetes/quickstart).
Steps:
1. Download `tigera-operator.yaml` to the manifests directory.
1. Download `custom-resources.yaml` to the manifests directory.
1. Choose a `cluster-cidr` (we are using 192.168.0.0/16)
1. Set `k3s_server` and `k3s_server_manifests_templates` as per the below,
ensure the paths to manifests are correct for your project repo.
```yaml
---
# K3S Server config, don't deploy flannel and set cluster pod CIDR.
k3s_server:
cluster-cidr: 192.168.0.0/16
flannel-backend: "none"
# Deploy the following k3s server templates.
k3s_server_manifests_templates:
- "manifests/calico/tigera-operator.yaml"
- "manifests/calico/custom-resources.yaml"
```
All nodes should come up as "Ready"; below is a 3-node cluster:
```text
$ kubectl get nodes -o wide -w
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
kube-0 Ready control-plane,etcd,master 114s v1.20.2+k3s1 10.10.9.2 10.10.9.2 Ubuntu 20.04.1 LTS 5.4.0-56-generic containerd://1.4.3-k3s1
kube-1 Ready control-plane,etcd,master 80s v1.20.2+k3s1 10.10.9.3 10.10.9.3 Ubuntu 20.04.1 LTS 5.4.0-56-generic containerd://1.4.3-k3s1
kube-2 Ready control-plane,etcd,master 73s v1.20.2+k3s1 10.10.9.4 10.10.9.4 Ubuntu 20.04.1 LTS 5.4.0-56-generic containerd://1.4.3-k3s1
```
Pods should be deployed within the CIDR specified in our config file.
```text
$ kubectl get pods -o wide -A
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-system calico-kube-controllers-cfb4ff54b-8rp8r 1/1 Running 0 5m4s 192.168.145.65 kube-0 <none> <none>
calico-system calico-node-2cm2m 1/1 Running 0 5m4s 10.10.9.2 kube-0 <none> <none>
calico-system calico-node-2s6lx 1/1 Running 0 4m42s 10.10.9.4 kube-2 <none> <none>
calico-system calico-node-zwqjz 1/1 Running 0 4m49s 10.10.9.3 kube-1 <none> <none>
calico-system calico-typha-7b6747d665-78swq 1/1 Running 0 3m5s 10.10.9.4 kube-2 <none> <none>
calico-system calico-typha-7b6747d665-8ff66 1/1 Running 0 3m5s 10.10.9.3 kube-1 <none> <none>
calico-system calico-typha-7b6747d665-hgplx 1/1 Running 0 5m5s 10.10.9.2 kube-0 <none> <none>
kube-system coredns-854c77959c-6qhgt 1/1 Running 0 5m20s 192.168.145.66 kube-0 <none> <none>
kube-system helm-install-traefik-4czr9 0/1 Completed 0 5m20s 192.168.145.67 kube-0 <none> <none>
kube-system metrics-server-86cbb8457f-qcxf5 1/1 Running 0 5m20s 192.168.145.68 kube-0 <none> <none>
kube-system traefik-6f9cbd9bd4-7h4rl 1/1 Running 0 2m50s 192.168.126.65 kube-1 <none> <none>
tigera-operator tigera-operator-b6c4bfdd9-29hhr 1/1 Running 0 5m20s 10.10.9.2 kube-0 <none> <none>
```

View File

@ -0,0 +1,69 @@
# Extending a cluster
This document describes the method for extending a cluster with new worker
nodes.
## Assumptions
It is assumed that you have already deployed a k3s cluster using this role,
and that you have an appropriately configured inventory and playbook to
create the cluster.
Below, our example inventory and playbook are as follows:
- inventory: `inventory.yml`
- playbook: `cluster.yml`
Currently your `inventory.yml` looks like this; it has two nodes defined:
`kube-0` (control node) and `kube-1` (worker node).
```yaml
---
k3s_cluster:
hosts:
kube-0:
ansible_user: ansible
ansible_host: 10.10.9.2
ansible_python_interpreter: /usr/bin/python3
kube-1:
ansible_user: ansible
ansible_host: 10.10.9.3
ansible_python_interpreter: /usr/bin/python3
```
## Method
We have our two nodes: one control, one worker. The goal is to extend the
cluster by adding a new worker node, `kube-2`, for extra capacity. To do this
we will add the new node to our inventory.
```yaml
---
k3s_cluster:
hosts:
kube-0:
ansible_user: ansible
ansible_host: 10.10.9.2
ansible_python_interpreter: /usr/bin/python3
kube-1:
ansible_user: ansible
ansible_host: 10.10.9.3
ansible_python_interpreter: /usr/bin/python3
kube-2:
ansible_user: ansible
ansible_host: 10.10.9.4
ansible_python_interpreter: /usr/bin/python3
```
Once the new node has been added, you can re-run the automation to join it to
the cluster. You should expect the majority of changes to apply to the worker
node being introduced to the cluster.
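Using the example inventory and playbook from the assumptions above:
`ansible-playbook -i inventory.yml cluster.yml`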
```text
PLAY RECAP *******************************************************************************************************
kube-0 : ok=53 changed=1 unreachable=0 failed=0 skipped=30 rescued=0 ignored=0
kube-1 : ok=40 changed=1 unreachable=0 failed=0 skipped=35 rescued=0 ignored=0
kube-2 : ok=42 changed=10 unreachable=0 failed=0 skipped=35 rescued=0 ignored=0
```

View File

@ -0,0 +1,74 @@
# Shrinking a cluster
This document describes the method for shrinking a cluster by removing a
worker node.
## Assumptions
It is assumed that you have already deployed a k3s cluster using this role,
and that you have an appropriately configured inventory and playbook to
create the cluster.
Below, our example inventory and playbook are as follows:
- inventory: `inventory.yml`
- playbook: `cluster.yml`
Currently your `inventory.yml` looks like this; it has three nodes defined:
`kube-0` (control node) and `kube-1`, `kube-2` (worker nodes).
```yaml
---
k3s_cluster:
hosts:
kube-0:
ansible_user: ansible
ansible_host: 10.10.9.2
ansible_python_interpreter: /usr/bin/python3
kube-1:
ansible_user: ansible
ansible_host: 10.10.9.3
ansible_python_interpreter: /usr/bin/python3
kube-2:
ansible_user: ansible
ansible_host: 10.10.9.4
ansible_python_interpreter: /usr/bin/python3
```
## Method
We have our three nodes: one control, two workers. The goal is to shrink the
cluster by offboarding the worker node `kube-2` to remove excess capacity. To
do this we will set the `kube-2` node to `k3s_state: uninstalled` in our
inventory.
```yaml
---
k3s_cluster:
hosts:
kube-0:
ansible_user: ansible
ansible_host: 10.10.9.2
ansible_python_interpreter: /usr/bin/python3
kube-1:
ansible_user: ansible
ansible_host: 10.10.9.3
ansible_python_interpreter: /usr/bin/python3
kube-2:
ansible_user: ansible
ansible_host: 10.10.9.4
ansible_python_interpreter: /usr/bin/python3
k3s_state: uninstalled
```
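Once the inventory is updated, re-run the automation to apply the change,
using the example inventory and playbook from the assumptions above:
`ansible-playbook -i inventory.yml cluster.yml`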
What you will typically see is changes to your control plane (`kube-0`) and the
node being removed (`kube-2`). The role will register the removal of the node
with the cluster by draining the node and removing it from the cluster.
```text
PLAY RECAP *******************************************************************************************************
kube-0 : ok=55 changed=2 unreachable=0 failed=0 skipped=28 rescued=0 ignored=0
kube-1 : ok=40 changed=0 unreachable=0 failed=0 skipped=35 rescued=0 ignored=0
kube-2 : ok=23 changed=2 unreachable=0 failed=0 skipped=17 rescued=0 ignored=1
```

View File

@ -0,0 +1,93 @@
# Stopping and Starting a cluster
This document describes the Ansible method for restarting a k3s cluster
deployed by this role.
## Assumptions
It is assumed that you have already deployed a k3s cluster using this role,
and that you have an appropriately configured inventory and playbook to
create the cluster.
Below, our example inventory and playbook are as follows:
- inventory: `inventory.yml`
- playbook: `cluster.yml`
## Method
### Start cluster
You can start the cluster using either of the following commands:
- Using the playbook: `ansible-playbook -i inventory.yml cluster.yml --become -e 'k3s_state=started'`
- Using an ad-hoc command: `ansible -i inventory.yml -m service -a 'name=k3s state=started' --become all`
Below is example output; remember that Ansible is idempotent, so re-running a
command may not necessarily change the state.
**Playbook method output**:
```text
PLAY RECAP *******************************************************************************************************
kube-0 : ok=6 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
kube-1 : ok=6 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
kube-2 : ok=6 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
```
### Stop cluster
You can stop the cluster using either of the following commands:
- Using the playbook: `ansible-playbook -i inventory.yml cluster.yml --become -e 'k3s_state=stopped'`
- Using an ad-hoc command: `ansible -i inventory.yml -m service -a 'name=k3s state=stopped' --become all`
Below is example output; remember that Ansible is idempotent, so re-running a
command may not necessarily change the state.
**Playbook method output**:
```text
PLAY RECAP *******************************************************************************************************
kube-0 : ok=6 changed=1 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
kube-1 : ok=6 changed=1 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
kube-2 : ok=6 changed=1 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
```
### Restart cluster
Just like the `service` module, you can also specify `restarted` as a state.
This will perform a `stop` followed by a `start`.
- Using the playbook: `ansible-playbook -i inventory.yml cluster.yml --become -e 'k3s_state=restarted'`
- Using an ad-hoc command: `ansible -i inventory.yml -m service -a 'name=k3s state=restarted' --become all`
```text
PLAY RECAP *******************************************************************************************************
kube-0 : ok=7 changed=1 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
kube-1 : ok=7 changed=1 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
kube-2 : ok=7 changed=1 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
```
## Tips
You can limit the targets by adding the `-l` flag to your `ansible-playbook`
command, or by targeting specific hosts in your ad-hoc commands. For example,
in a 3 node cluster (called `kube-0`, `kube-1` and `kube-2`) we can limit the
restart to `kube-1` and `kube-2` with the following:
- Using the playbook: `ansible-playbook -i inventory.yml cluster.yml --become -e 'k3s_state=restarted' -l "kube-1,kube-2"`
- Using an ad-hoc command: `ansible -i inventory.yml -m service -a 'name=k3s state=restarted' --become "kube-1,kube-2"`
```text
PLAY RECAP ********************************************************************************************************
kube-1 : ok=7 changed=2 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
kube-2 : ok=7 changed=2 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
```
## FAQ
1. _Why might I use the `ansible-playbook` command over an ad-hoc command?_
- The stop/start tasks will be aware of configuration. As the role
develops, there might be some pre-tasks added to change how a cluster
is stopped or started.

View File

@ -0,0 +1,52 @@
# Updating k3s
## Before you start!
Ensure you back up your k3s cluster. This is particularly important if you use
an external datastore or embedded Etcd. Please refer to the guide below on
backing up your k3s datastore:
https://rancher.com/docs/k3s/latest/en/backup-restore/
Also, check that your volume backups are working!
## Procedure
### Updates using Ansible
To update via Ansible, set `k3s_release_version` to the target version you wish
to go to. For example, from your `v1.19.3+k3s1` playbook:
```yaml
---
# BEFORE
- name: Provision k3s cluster
hosts: k3s_cluster
vars:
k3s_release_version: v1.19.3+k3s1
roles:
- name: xanmanning.k3s
```
Updating to `v1.20.2+k3s1`:
```yaml
---
# AFTER
- name: Provision k3s cluster
hosts: k3s_cluster
vars:
k3s_release_version: v1.20.2+k3s1
roles:
- name: xanmanning.k3s
```
### Automatic updates
For automatic updates, consider installing Rancher's
[system-upgrade-controller](https://rancher.com/docs/k3s/latest/en/upgrades/automated/).
**Please note**, to be able to update using the system-upgrade-controller you
will need to set `k3s_install_hard_links` to `true`.
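For example, in `group_vars/` or your playbook `vars:`:
```yaml
---
k3s_install_hard_links: true
```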

View File

@ -0,0 +1,147 @@
# Quickstart: K3s cluster with a single control node
This is the quickstart guide to creating your own k3s cluster with one control
plane node. This control plane node will also be a worker.
:hand: This example requires your Ansible user to be able to connect to the
servers over SSH using key-based authentication. The user also has an entry
in a sudoers file that allows privilege escalation without requiring a
password.
To test this is the case, run the following check, replacing `<ansible_user>`
and `<server_name>`. The expected output is `Works`:
`ssh <ansible_user>@<server_name> 'sudo cat /etc/shadow >/dev/null && echo "Works"'`
For example:
```text
[ xmanning@dreadfort:~/git/kubernetes-playground ] (master) $ ssh ansible@kube-0 'sudo cat /etc/shadow >/dev/null && echo "Works"'
Works
[ xmanning@dreadfort:~/git/kubernetes-playground ] (master) $
```
## Directory structure
Our working directory will have the following files:
```text
kubernetes-playground/
|_ inventory.yml
|_ cluster.yml
```
## Inventory
Here's a YAML based example inventory for our servers called `inventory.yml`:
```yaml
---
k3s_cluster:
hosts:
kube-0:
ansible_user: ansible
ansible_host: 10.10.9.2
ansible_python_interpreter: /usr/bin/python3
kube-1:
ansible_user: ansible
ansible_host: 10.10.9.3
ansible_python_interpreter: /usr/bin/python3
kube-2:
ansible_user: ansible
ansible_host: 10.10.9.4
ansible_python_interpreter: /usr/bin/python3
```
We can test this works with `ansible -i inventory.yml -m ping all`, expected
result:
```text
kube-0 | SUCCESS => {
"changed": false,
"ping": "pong"
}
kube-1 | SUCCESS => {
"changed": false,
"ping": "pong"
}
kube-2 | SUCCESS => {
"changed": false,
"ping": "pong"
}
```
## Playbook
Here is our playbook for the k3s cluster (`cluster.yml`):
```yaml
---
- name: Build a cluster with a single control node
hosts: k3s_cluster
vars:
k3s_become: true
roles:
- role: xanmanning.k3s
```
## Execution
To execute the playbook against our inventory file, we will run the following
command:
`ansible-playbook -i inventory.yml cluster.yml`
The output we can expect is similar to the below, with no failed or unreachable
nodes. The default behavior of this role is to delegate the first play host as
the control node, so kube-0 will have more changed tasks than others:
```text
PLAY RECAP *******************************************************************************************************
kube-0 : ok=56 changed=11 unreachable=0 failed=0 skipped=28 rescued=0 ignored=0
kube-1 : ok=43 changed=10 unreachable=0 failed=0 skipped=32 rescued=0 ignored=0
kube-2 : ok=43 changed=10 unreachable=0 failed=0 skipped=32 rescued=0 ignored=0
```
## Testing
After logging into kube-0, we can test that k3s is running across the cluster,
that all nodes are ready and that everything is ready to execute our Kubernetes
workloads by running the following:
- `sudo kubectl get nodes -o wide`
- `sudo kubectl get pods -o wide --all-namespaces`
:hand: Note we are using `sudo` because we need to be root to access the
kube config for this node. This behavior can be changed by specifying
`write-kubeconfig-mode: '0644'` in `k3s_server`.
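For example, in `group_vars/` or your playbook `vars:`:
```yaml
---
k3s_server:
  write-kubeconfig-mode: '0644'
```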
**Get Nodes**:
```text
ansible@kube-0:~$ sudo kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
kube-0 Ready master 34s v1.19.4+k3s1 10.0.2.15 <none> Ubuntu 20.04.1 LTS 5.4.0-56-generic containerd://1.4.1-k3s1
kube-2 Ready <none> 14s v1.19.4+k3s1 10.0.2.17 <none> Ubuntu 20.04.1 LTS 5.4.0-56-generic containerd://1.4.1-k3s1
kube-1 Ready <none> 14s v1.19.4+k3s1 10.0.2.16 <none> Ubuntu 20.04.1 LTS 5.4.0-56-generic containerd://1.4.1-k3s1
ansible@kube-0:~$
```
**Get Pods**:
```text
ansible@kube-0:~$ sudo kubectl get pods -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system local-path-provisioner-7ff9579c6-72j8x 1/1 Running 0 55s 10.42.2.2 kube-1 <none> <none>
kube-system metrics-server-7b4f8b595-lkspj 1/1 Running 0 55s 10.42.1.2 kube-2 <none> <none>
kube-system helm-install-traefik-b6vnt 0/1 Completed 0 55s 10.42.0.3 kube-0 <none> <none>
kube-system coredns-66c464876b-llsh7 1/1 Running 0 55s 10.42.0.2 kube-0 <none> <none>
kube-system svclb-traefik-jrqg7 2/2 Running 0 27s 10.42.1.3 kube-2 <none> <none>
kube-system svclb-traefik-gh65q 2/2 Running 0 27s 10.42.0.4 kube-0 <none> <none>
kube-system svclb-traefik-5z7zp 2/2 Running 0 27s 10.42.2.3 kube-1 <none> <none>
kube-system traefik-5dd496474-l2k74 1/1 Running 0 27s 10.42.1.4 kube-2 <none> <none>
```

View File

@ -0,0 +1,154 @@
# Quickstart: K3s cluster with a HA control plane using embedded etcd
This is the quickstart guide to creating your own 3 node k3s cluster with a
highly available control plane using the embedded etcd datastore.
The control plane nodes will all be workers as well.
:hand: This example requires your Ansible user to be able to connect to the
servers over SSH using key-based authentication. The user also has an entry
in a sudoers file that allows privilege escalation without requiring a
password.
To test this is the case, run the following check, replacing `<ansible_user>`
and `<server_name>`. The expected output is `Works`:
`ssh <ansible_user>@<server_name> 'sudo cat /etc/shadow >/dev/null && echo "Works"'`
For example:
```text
[ xmanning@dreadfort:~/git/kubernetes-playground ] (master) $ ssh ansible@kube-0 'sudo cat /etc/shadow >/dev/null && echo "Works"'
Works
[ xmanning@dreadfort:~/git/kubernetes-playground ] (master) $
```
## Directory structure
Our working directory will have the following files:
```text
kubernetes-playground/
|_ inventory.yml
|_ ha_cluster.yml
```
## Inventory
Here's a YAML based example inventory for our servers called `inventory.yml`:
```yaml
---
# We're adding k3s_control_node to each host, this can be done in host_vars/
# or group_vars/ as well - but for simplicity we are setting it here.
k3s_cluster:
hosts:
kube-0:
ansible_user: ansible
ansible_host: 10.10.9.2
ansible_python_interpreter: /usr/bin/python3
k3s_control_node: true
kube-1:
ansible_user: ansible
ansible_host: 10.10.9.3
ansible_python_interpreter: /usr/bin/python3
k3s_control_node: true
kube-2:
ansible_user: ansible
ansible_host: 10.10.9.4
ansible_python_interpreter: /usr/bin/python3
k3s_control_node: true
```
We can test this works with `ansible -i inventory.yml -m ping all`, expected
result:
```text
kube-0 | SUCCESS => {
"changed": false,
"ping": "pong"
}
kube-1 | SUCCESS => {
"changed": false,
"ping": "pong"
}
kube-2 | SUCCESS => {
"changed": false,
"ping": "pong"
}
```
## Playbook
Here is our playbook for the k3s cluster (`ha_cluster.yml`):
```yaml
---
- name: Build a cluster with HA control plane
hosts: k3s_cluster
vars:
k3s_become: true
k3s_etcd_datastore: true
k3s_use_experimental: true # Note this is required for k3s < v1.19.5+k3s1
roles:
- role: xanmanning.k3s
```
## Execution
To execute the playbook against our inventory file, we will run the following
command:
`ansible-playbook -i inventory.yml ha_cluster.yml`
The output we can expect is similar to the below, with no failed or unreachable
nodes. The default behavior of this role is to delegate the first play host as
the primary control node, so kube-0 will have more changed tasks than others:
```text
PLAY RECAP *******************************************************************************************************
kube-0 : ok=53 changed=8 unreachable=0 failed=0 skipped=30 rescued=0 ignored=0
kube-1 : ok=47 changed=10 unreachable=0 failed=0 skipped=28 rescued=0 ignored=0
kube-2 : ok=47 changed=9 unreachable=0 failed=0 skipped=28 rescued=0 ignored=0
```
## Testing
After logging into any of the servers (it doesn't matter), we can test that k3s
is running across the cluster, that all nodes are ready and that everything is
ready to execute our Kubernetes workloads by running the following:
- `sudo kubectl get nodes -o wide`
- `sudo kubectl get pods -o wide --all-namespaces`
:hand: Note we are using `sudo` because we need to be root to access the
kube config for this node. This behavior can be changed by specifying
`write-kubeconfig-mode: '0644'` in `k3s_server`.
**Get Nodes**:
```text
ansible@kube-0:~$ sudo kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
kube-0 Ready etcd,master 2m58s v1.19.4+k3s1 10.10.9.2 10.10.9.2 Ubuntu 20.04.1 LTS 5.4.0-56-generic containerd://1.4.1-k3s1
kube-1 Ready etcd,master 2m22s v1.19.4+k3s1 10.10.9.3 10.10.9.3 Ubuntu 20.04.1 LTS 5.4.0-56-generic containerd://1.4.1-k3s1
kube-2 Ready etcd,master 2m10s v1.19.4+k3s1 10.10.9.4 10.10.9.4 Ubuntu 20.04.1 LTS 5.4.0-56-generic containerd://1.4.1-k3s1
```
**Get Pods**:
```text
ansible@kube-0:~$ sudo kubectl get pods -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-66c464876b-rhgn6 1/1 Running 0 3m38s 10.42.0.2 kube-0 <none> <none>
kube-system helm-install-traefik-vwglv 0/1 Completed 0 3m39s 10.42.0.3 kube-0 <none> <none>
kube-system local-path-provisioner-7ff9579c6-d5xpb 1/1 Running 0 3m38s 10.42.0.5 kube-0 <none> <none>
kube-system metrics-server-7b4f8b595-nhbt8 1/1 Running 0 3m38s 10.42.0.4 kube-0 <none> <none>
kube-system svclb-traefik-9lzcq 2/2 Running 0 2m56s 10.42.1.2 kube-1 <none> <none>
kube-system svclb-traefik-vq487 2/2 Running 0 2m45s 10.42.2.2 kube-2 <none> <none>
kube-system svclb-traefik-wkwkk 2/2 Running 0 3m1s 10.42.0.7 kube-0 <none> <none>
kube-system traefik-5dd496474-lw6x8 1/1 Running 0 3m1s 10.42.0.6 kube-0 <none> <none>
```

View File

@ -0,0 +1,121 @@
# Quickstart: K3s single node
This is the quickstart guide to creating your own single-node k3s "cluster".
:hand: This example requires your Ansible user to be able to connect to the
server over SSH using key-based authentication. The user also has an entry
in a sudoers file that allows privilege escalation without requiring a
password.
To test this is the case, run the following check, replacing `<ansible_user>`
and `<server_name>`. The expected output is `Works`:
`ssh <ansible_user>@<server_name> 'sudo cat /etc/shadow >/dev/null && echo "Works"'`
For example:
```text
[ xmanning@dreadfort:~/git/kubernetes-playground ] (master) $ ssh ansible@kube-0 'sudo cat /etc/shadow >/dev/null && echo "Works"'
Works
[ xmanning@dreadfort:~/git/kubernetes-playground ] (master) $
```
## Directory structure
Our working directory will have the following files:
```text
kubernetes-playground/
|_ inventory.yml
|_ single_node.yml
```
## Inventory
Here's a YAML based example inventory for our server called `inventory.yml`:
```yaml
---
k3s_cluster:
hosts:
kube-0:
ansible_user: ansible
ansible_host: 10.10.9.2
ansible_python_interpreter: /usr/bin/python3
```
We can test this works with `ansible -i inventory.yml -m ping all`, expected
result:
```text
kube-0 | SUCCESS => {
"changed": false,
"ping": "pong"
}
```
## Playbook
Here is our playbook for a single node k3s cluster (`single_node.yml`):
```yaml
---
- name: Build a single node k3s cluster
hosts: kube-0
vars:
k3s_become: true
roles:
- role: xanmanning.k3s
```
## Execution
To execute the playbook against our inventory file, we will run the following
command:
`ansible-playbook -i inventory.yml single_node.yml`
The output we can expect is similar to the below, with no failed or unreachable
nodes:
```text
PLAY RECAP *******************************************************************************************************
kube-0 : ok=39 changed=8 unreachable=0 failed=0 skipped=39 rescued=0 ignored=0
```
## Testing
After logging into the server, we can test that k3s is running and that it is
ready to execute our Kubernetes workloads by running the following:
- `sudo kubectl get nodes`
- `sudo kubectl get pods -o wide --all-namespaces`
:hand: Note we are using `sudo` because we need to be root to access the
kube config for this node. This behavior can be changed by specifying
`write-kubeconfig-mode: '0644'` in `k3s_server`.
**Get Nodes**:
```text
ansible@kube-0:~$ sudo kubectl get nodes
NAME STATUS ROLES AGE VERSION
kube-0 Ready master 5m27s v1.19.4+k3s
ansible@kube-0:~$
```
**Get Pods**:
```text
ansible@kube-0:~$ sudo kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system metrics-server-7b4f8b595-k692h 1/1 Running 0 9m38s 10.42.0.2 kube-0 <none> <none>
kube-system local-path-provisioner-7ff9579c6-5lgzb 1/1 Running 0 9m38s 10.42.0.3 kube-0 <none> <none>
kube-system coredns-66c464876b-xg42q 1/1 Running 0 9m38s 10.42.0.5 kube-0 <none> <none>
kube-system helm-install-traefik-tdpcs 0/1 Completed 0 9m38s 10.42.0.4 kube-0 <none> <none>
kube-system svclb-traefik-hk248 2/2 Running 0 9m4s 10.42.0.7 kube-0 <none> <none>
kube-system traefik-5dd496474-bf4kv 1/1 Running 0 9m4s 10.42.0.6 kube-0 <none> <none>
```

View File

@ -1,18 +1,39 @@
---
- name: reload systemctl
command: systemctl daemon-reload
args:
warn: false
- name: Reload systemd
ansible.builtin.systemd:
daemon_reload: true
scope: "{{ k3s_systemd_context }}"
become: "{{ k3s_become }}"
- name: restart k3s
service:
- name: Reload service
ansible.builtin.set_fact:
k3s_service_reloaded: true
become: "{{ k3s_become }}"
- name: Restart k3s systemd
ansible.builtin.systemd:
name: k3s
state: restarted
enabled: true
scope: "{{ k3s_systemd_context }}"
enabled: "{{ k3s_start_on_boot }}"
retries: 3
delay: 3
register: k3s_systemd_restart_k3s
failed_when:
- k3s_systemd_restart_k3s is not success
- not ansible_check_mode
become: "{{ k3s_become }}"
- name: restart docker
service:
name: docker
- name: Restart k3s service
ansible.builtin.service:
name: k3s
state: restarted
enabled: true
enabled: "{{ k3s_start_on_boot }}"
retries: 3
delay: 3
register: k3s_service_restart_k3s
failed_when:
- k3s_service_restart_k3s is not success
- not ansible_check_mode
become: "{{ k3s_become }}"

View File

@ -1,7 +1,12 @@
---
galaxy_info:
role_name: k3s
namespace: xanmanning
author: Xan Manning
description: Ansible role for installing k3s as either a standalone server or cluster
description: Ansible role for installing k3s as either a standalone server or HA cluster
company: Pyrat Ltd.
github_branch: main
# If the issue tracker for your role is not on github, uncomment the
# next line and provide a value
@ -16,7 +21,7 @@ galaxy_info:
# - CC-BY
license: BSD
min_ansible_version: 2.6
min_ansible_version: '2.9'
# If this a Container Enabled role, provide the minimum Ansible Container version.
# min_ansible_container_version:
@ -26,20 +31,37 @@ galaxy_info:
# Galaxy will use this branch. During import Galaxy will access files on
# this branch. If Travis integration is configured, only notifications for this
# branch will be accepted. Otherwise, in all cases, the repo's default branch
# (usually master) will be used.
#github_branch:
# (usually main) will be used.
# github_branch:
#
# platforms is a list of platforms, and each platform has a name and a list of versions.
#
platforms:
- name: Alpine
versions:
- all
- name: Archlinux
versions:
- all
- name: EL
versions:
- 7
- 8
- name: Amazon
- name: Fedora
versions:
- 29
- 30
- 31
- name: Debian
versions:
- buster
- jessie
- stretch
- name: SLES
versions:
- 15
- name: Ubuntu
versions:
- xenial
@ -47,10 +69,11 @@ galaxy_info:
galaxy_tags:
- k3s
- k8s
- kubernetes
- docker
- containerd
- cluster
- lightweight
# List tags for your role here, one per line. A tag is a keyword that describes
# and categorizes the role. Users find roles by searching for tags. Be sure to
# remove the '[]' above, if you add tags to this list.
@ -59,5 +82,5 @@ galaxy_info:
# Maximum 20 tags per role.
dependencies: []
# List your role dependencies here, one per line. Be sure to remove the '[]' above,
# if you add dependencies to this list.
# List your role dependencies here, one per line. Be sure to remove the '[]' above,
# if you add dependencies to this list.

View File

@ -0,0 +1,28 @@
---
- name: Converge
hosts: node*
become: true
vars:
molecule_is_test: true
k3s_release_version: v1.22
k3s_build_cluster: false
k3s_control_token: 55ba04e5-e17d-4535-9170-3e4245453f4d
k3s_install_dir: /opt/k3s/bin
k3s_config_file: /opt/k3s/etc/k3s_config.yaml
k3s_server:
data-dir: /var/lib/k3s-io
default-local-storage-path: /var/lib/k3s-io/local-storage
disable:
- metrics-server
- traefik
# k3s_agent:
# snapshotter: native
k3s_server_manifests_templates:
- "molecule/autodeploy/templates/00-ns-monitoring.yml.j2"
k3s_server_manifests_urls:
- url: https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/namespace.yaml
filename: 05-metallb-namespace.yml
k3s_service_env_vars:
K3S_TEST_VAR: "Hello world!"
roles:
- role: "{{ lookup('env', 'MOLECULE_PROJECT_DIRECTORY') | basename }}"

View File

@ -0,0 +1,55 @@
---
dependency:
name: galaxy
driver:
name: docker
scenario:
test_sequence:
- dependency
- cleanup
- destroy
- syntax
- create
- prepare
- check
- converge
- idempotence
- side_effect
- verify
- cleanup
- destroy
platforms:
- name: node1
image: ${MOLECULE_DISTRO:-"geerlingguy/docker-rockylinux8-ansible:latest"}
command: ${MOLECULE_DOCKER_COMMAND:-""}
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
privileged: true
pre_build_image: ${MOLECULE_PREBUILT:-true}
networks:
- name: k3snet
- name: node2
image: ${MOLECULE_DISTRO:-"geerlingguy/docker-rockylinux8-ansible:latest"}
command: ${MOLECULE_DOCKER_COMMAND:-""}
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
privileged: true
pre_build_image: ${MOLECULE_PREBUILT:-true}
networks:
- name: k3snet
- name: node3
image: ${MOLECULE_DISTRO:-"geerlingguy/docker-rockylinux8-ansible:latest"}
command: ${MOLECULE_DOCKER_COMMAND:-""}
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
privileged: true
pre_build_image: ${MOLECULE_PREBUILT:-true}
networks:
- name: k3snet
provisioner:
name: ansible
options:
verbose: true
verifier:
name: ansible

View File

@ -0,0 +1,26 @@
---
- name: Prepare
hosts: node*
become: true
tasks:
- name: Ensure apt cache is updated and iptables is installed
ansible.builtin.apt:
name: iptables
state: present
update_cache: true
when: ansible_pkg_mgr == 'apt'
- name: Ensure install directory and configuration directory exists
ansible.builtin.file:
path: "/opt/k3s/{{ item }}"
state: directory
mode: 0755
loop:
- bin
- etc
- name: Ensure data directory exists
ansible.builtin.file:
path: "/var/lib/k3s-io"
state: directory
mode: 0755

View File

@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
name: monitoring

View File

@ -0,0 +1,9 @@
---
# This is an example playbook to execute Ansible tests.
- name: Verify
hosts: all
tasks:
- name: Example assertion
ansible.builtin.assert:
that: true

View File

@ -0,0 +1,14 @@
---
- name: Converge
hosts: all
become: true
vars:
pyratlabs_issue_controller_dump: true
# k3s_agent:
# snapshotter: native
pre_tasks:
- name: Ensure k3s_debug is set
ansible.builtin.set_fact:
k3s_debug: true
roles:
- xanmanning.k3s

View File

@ -0,0 +1,55 @@
---
dependency:
name: galaxy
driver:
name: docker
scenario:
test_sequence:
- dependency
- cleanup
- destroy
- syntax
- create
- prepare
- check
- converge
- idempotence
- side_effect
- verify
- cleanup
- destroy
platforms:
- name: node1
image: ${MOLECULE_DISTRO:-"geerlingguy/docker-rockylinux8-ansible:latest"}
command: ${MOLECULE_DOCKER_COMMAND:-""}
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
privileged: true
pre_build_image: ${MOLECULE_PREBUILT:-true}
networks:
- name: k3snet
- name: node2
image: ${MOLECULE_DISTRO:-"geerlingguy/docker-rockylinux8-ansible:latest"}
command: ${MOLECULE_DOCKER_COMMAND:-""}
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
privileged: true
pre_build_image: ${MOLECULE_PREBUILT:-true}
networks:
- name: k3snet
- name: node3
image: ${MOLECULE_DISTRO:-"geerlingguy/docker-rockylinux8-ansible:latest"}
command: ${MOLECULE_DOCKER_COMMAND:-""}
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
privileged: true
pre_build_image: ${MOLECULE_PREBUILT:-true}
networks:
- name: k3snet
provisioner:
name: ansible
options:
verbose: true
verifier:
name: ansible

View File

@ -0,0 +1,10 @@
---
- name: Prepare
hosts: all
tasks:
- name: Ensure apt cache is updated and iptables is installed
ansible.builtin.apt:
name: iptables
state: present
update_cache: true
when: ansible_pkg_mgr == 'apt'

View File

@ -0,0 +1,9 @@
---
# This is an example playbook to execute Ansible tests.
- name: Verify
hosts: all
tasks:
- name: Example assertion
ansible.builtin.assert:
that: true

View File

@ -0,0 +1,26 @@
# Molecule managed
{% if item.registry is defined %}
FROM {{ item.registry.url }}/{{ item.image }}
{% else %}
FROM {{ item.image }}
{% endif %}
RUN if [ $(command -v apt-get) ]; then apt-get update && apt-get install -y python systemd sudo bash ca-certificates && apt-get clean; \
elif [ $(command -v dnf) ]; then dnf makecache && dnf --assumeyes install python systemd sudo python-devel python*-dnf bash && dnf clean all; \
elif [ $(command -v yum) ]; then yum makecache fast && yum install -y python systemd sudo yum-plugin-ovl bash && sed -i 's/plugins=0/plugins=1/g' /etc/yum.conf && yum clean all; \
elif [ $(command -v zypper) ]; then zypper refresh && zypper install -y python systemd sudo bash python-xml && zypper clean -a; \
elif [ $(command -v apk) ]; then apk update && apk add --no-cache python sudo systemd bash ca-certificates; \
elif [ $(command -v xbps-install) ]; then xbps-install -Syu && xbps-install -y python systemd sudo bash ca-certificates && xbps-remove -O; fi
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*; \
rm -f /etc/systemd/system/*.wants/*; \
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*; \
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [/sys/fs/cgroup]
CMD [/usr/sbin/init]

View File

@ -0,0 +1,22 @@
********************************
Docker driver installation guide
********************************
Requirements
============
* Docker Engine
Install
=======
Please refer to the `Virtual environment`_ documentation for installation best
practices. If not using a virtual environment, please consider passing the
widely recommended `'--user' flag`_ when invoking ``pip``.
.. _Virtual environment: https://virtualenv.pypa.io/en/latest/
.. _'--user' flag: https://packaging.python.org/tutorials/installing-packages/#installing-to-the-user-site
.. code-block:: bash

   $ pip install 'molecule[docker]'

View File

@ -0,0 +1,12 @@
---
- name: Converge
hosts: all
become: true
roles:
- role: "{{ lookup('env', 'MOLECULE_PROJECT_DIRECTORY') | basename }}"
vars:
molecule_is_test: true
k3s_install_hard_links: true
k3s_release_version: stable
# k3s_agent:
# snapshotter: native

View File

@ -0,0 +1,55 @@
---
dependency:
name: galaxy
driver:
name: docker
scenario:
test_sequence:
- dependency
- cleanup
- destroy
- syntax
- create
- prepare
- check
- converge
- idempotence
- side_effect
- verify
- cleanup
- destroy
platforms:
- name: node1
image: ${MOLECULE_DISTRO:-"geerlingguy/docker-rockylinux8-ansible:latest"}
command: ${MOLECULE_DOCKER_COMMAND:-""}
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
privileged: true
pre_build_image: ${MOLECULE_PREBUILT:-true}
networks:
- name: k3snet
- name: node2
image: ${MOLECULE_DISTRO:-"geerlingguy/docker-rockylinux8-ansible:latest"}
command: ${MOLECULE_DOCKER_COMMAND:-""}
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
privileged: true
pre_build_image: ${MOLECULE_PREBUILT:-true}
networks:
- name: k3snet
- name: node3
image: ${MOLECULE_DISTRO:-"geerlingguy/docker-rockylinux8-ansible:latest"}
command: ${MOLECULE_DOCKER_COMMAND:-""}
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
privileged: true
pre_build_image: ${MOLECULE_PREBUILT:-true}
networks:
- name: k3snet
provisioner:
name: ansible
options:
verbose: true
verifier:
name: ansible

View File

@ -0,0 +1,9 @@
---
- name: Converge
hosts: all
become: true
vars:
molecule_is_test: true
k3s_state: downloaded
roles:
- role: "{{ lookup('env', 'MOLECULE_PROJECT_DIRECTORY') | basename }}"

View File

@ -0,0 +1,9 @@
---
- name: Converge
hosts: all
become: true
vars:
molecule_is_test: true
k3s_state: restarted
roles:
- role: "{{ lookup('env', 'MOLECULE_PROJECT_DIRECTORY') | basename }}"

View File

@ -0,0 +1,15 @@
---
- name: Converge
hosts: node1
become: true
become_user: k3suser
vars:
molecule_is_test: true
k3s_use_experimental: true
k3s_server:
rootless: true
k3s_agent:
rootless: true
k3s_install_dir: "/home/{{ ansible_user_id }}/bin"
roles:
- role: "{{ lookup('env', 'MOLECULE_PROJECT_DIRECTORY') | basename }}"

View File

@ -0,0 +1,9 @@
---
- name: Converge
hosts: all
become: true
vars:
molecule_is_test: true
k3s_build_cluster: false
roles:
- role: "{{ lookup('env', 'MOLECULE_PROJECT_DIRECTORY') | basename }}"

View File

@ -0,0 +1,9 @@
---
- name: Converge
hosts: all
become: true
vars:
molecule_is_test: true
k3s_state: started
roles:
- role: "{{ lookup('env', 'MOLECULE_PROJECT_DIRECTORY') | basename }}"

View File

@ -0,0 +1,9 @@
---
- name: Converge
hosts: all
become: true
vars:
molecule_is_test: true
k3s_state: stopped
roles:
- role: "{{ lookup('env', 'MOLECULE_PROJECT_DIRECTORY') | basename }}"

View File

@ -0,0 +1,9 @@
---
- name: Converge
hosts: all
become: true
vars:
molecule_is_test: true
k3s_state: uninstalled
roles:
- role: "{{ lookup('env', 'MOLECULE_PROJECT_DIRECTORY') | basename }}"

View File

@ -0,0 +1,23 @@
---
- name: Prepare
hosts: node1
become: true
tasks:
- name: Ensure a user group exists
ansible.builtin.group:
name: user
state: present
- name: Ensure a normal user exists
ansible.builtin.user:
name: k3suser
group: user
state: present
- name: Ensure a normal user has bin directory
ansible.builtin.file:
path: /home/k3suser/bin
state: directory
owner: k3suser
group: user
mode: 0700

View File

@ -0,0 +1,10 @@
---
- name: Prepare
hosts: all
tasks:
- name: Ensure apt cache is updated and iptables is installed
ansible.builtin.apt:
name: iptables
state: present
update_cache: true
when: ansible_pkg_mgr == 'apt'

View File

@ -0,0 +1,14 @@
import os
import testinfra.utils.ansible_runner
testinfra_hosts = testinfra.utils.ansible_runner.AnsibleRunner(
os.environ['MOLECULE_INVENTORY_FILE']).get_hosts('all')
def test_hosts_file(host):
f = host.file('/etc/hosts')
assert f.exists
assert f.user == 'root'
assert f.group == 'root'

Binary file not shown.

View File

@ -0,0 +1,7 @@
# Molecule managed
{% if item.registry is defined %}
FROM {{ item.registry.url }}/{{ item.image }}
{% else %}
FROM {{ item.image }}
{% endif %}

View File

@ -0,0 +1,22 @@
********************************
Docker driver installation guide
********************************
Requirements
============
* Docker Engine
Install
=======
Please refer to the `Virtual environment`_ documentation for installation best
practices. If not using a virtual environment, please consider passing the
widely recommended `'--user' flag`_ when invoking ``pip``.
.. _Virtual environment: https://virtualenv.pypa.io/en/latest/
.. _'--user' flag: https://packaging.python.org/tutorials/installing-packages/#installing-to-the-user-site
.. code-block:: bash

   $ pip install 'molecule[docker]'

View File

@ -0,0 +1,21 @@
---
- name: Converge
hosts: node*
become: true
vars:
molecule_is_test: true
k3s_registration_address: loadbalancer
k3s_control_token: 55ba04e5-e17d-4535-9170-3e4245453f4d
k3s_server:
datastore-endpoint: "postgres://postgres:verybadpass@database:5432/postgres?sslmode=disable"
# k3s_agent:
# snapshotter: native
k3s_service_env_file: /tmp/k3s.env
pre_tasks:
- name: Set each node to be a control node
ansible.builtin.set_fact:
k3s_control_node: true
when: inventory_hostname in ['node2', 'node3']
roles:
- role: "{{ lookup('env', 'MOLECULE_PROJECT_DIRECTORY') | basename }}"

View File

@ -0,0 +1,13 @@
frontend loadbalancer
bind *:6443
mode tcp
default_backend control_nodes
timeout client 1m
backend control_nodes
mode tcp
balance roundrobin
server node2 node2:6443
server node3 node3:6443
timeout connect 30s
timeout server 30m

View File

@ -0,0 +1,68 @@
---
dependency:
name: galaxy
driver:
name: docker
scenario:
test_sequence:
- dependency
- cleanup
- destroy
- syntax
- create
- prepare
- check
- converge
- idempotence
- side_effect
- verify
- cleanup
- destroy
platforms:
- name: node1
image: ${MOLECULE_DISTRO:-"geerlingguy/docker-rockylinux8-ansible:latest"}
command: ${MOLECULE_DOCKER_COMMAND:-""}
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
privileged: true
pre_build_image: ${MOLECULE_PREBUILT:-true}
networks:
- name: k3snet
- name: node2
image: ${MOLECULE_DISTRO:-"geerlingguy/docker-rockylinux8-ansible:latest"}
command: ${MOLECULE_DOCKER_COMMAND:-""}
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
privileged: true
pre_build_image: ${MOLECULE_PREBUILT:-true}
networks:
- name: k3snet
- name: node3
image: ${MOLECULE_DISTRO:-"geerlingguy/docker-rockylinux8-ansible:latest"}
command: ${MOLECULE_DOCKER_COMMAND:-""}
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
privileged: true
pre_build_image: ${MOLECULE_PREBUILT:-true}
networks:
- name: k3snet
- name: database
image: postgres:11-alpine
pre_build_image: true
command: "postgres"
env:
POSTGRES_PASSWORD: "verybadpass"
networks:
- name: k3snet
- name: loadbalancer
image: geerlingguy/docker-rockylinux8-ansible:latest
pre_build_image: true
ports:
- "6443:6443"
networks:
- name: k3snet
provisioner:
name: ansible
options:
verbose: true

View File

@ -0,0 +1,48 @@
---
- name: Prepare Load Balancer
hosts: loadbalancer
tasks:
- name: Ensure apt cache is updated
ansible.builtin.apt:
update_cache: true
when: ansible_pkg_mgr == 'apt'
- name: Ensure HAProxy is installed
ansible.builtin.package:
name: haproxy
state: present
- name: Ensure HAProxy config directory exists
ansible.builtin.file:
path: /usr/local/etc/haproxy
state: directory
mode: 0755
- name: Ensure HAProxy is configured
ansible.builtin.template:
src: haproxy-loadbalancer.conf.j2
dest: /usr/local/etc/haproxy/haproxy.cfg
mode: 0644
- name: Ensure HAProxy service is started
ansible.builtin.command:
cmd: haproxy -D -f /usr/local/etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid
args:
creates: /var/run/haproxy.pid
- name: Prepare nodes
hosts: node*
tasks:
- name: Ensure apt cache is updated and iptables is installed
ansible.builtin.apt:
name: iptables
state: present
update_cache: true
when: ansible_pkg_mgr == 'apt'
- name: Ensure environment file exists for k3s_service_env_file
ansible.builtin.lineinfile:
path: /tmp/k3s.env
line: "THISHOST={{ ansible_hostname }}"
mode: 0644
create: true

View File

@ -0,0 +1,14 @@
import os
import testinfra.utils.ansible_runner
testinfra_hosts = testinfra.utils.ansible_runner.AnsibleRunner(
os.environ['MOLECULE_INVENTORY_FILE']).get_hosts('all')
def test_hosts_file(host):
f = host.file('/etc/hosts')
assert f.exists
assert f.user == 'root'
assert f.group == 'root'

Binary file not shown.

View File

@ -0,0 +1,24 @@
---
- name: Converge
hosts: node*
become: true
vars:
molecule_is_test: true
k3s_release_version: "v1.21"
k3s_use_experimental: true
k3s_etcd_datastore: true
k3s_server:
secrets-encryption: true
k3s_agent:
node-ip: "{{ ansible_default_ipv4.address }}"
snapshotter: native
selinux: "{{ ansible_os_family | lower == 'redhat' }}"
k3s_skip_validation: "{{ k3s_service_handler[ansible_service_mgr] == 'service' }}"
# k3s_skip_post_checks: "{{ ansible_os_family | lower == 'redhat' }}"
pre_tasks:
- name: Set each node to be a control node
ansible.builtin.set_fact:
k3s_control_node: true
roles:
- role: "{{ lookup('env', 'MOLECULE_PROJECT_DIRECTORY') | basename }}"

View File

@ -0,0 +1,13 @@
frontend loadbalancer
bind *:6443
mode tcp
default_backend control_nodes
timeout client 1m
backend control_nodes
mode tcp
balance roundrobin
server node2 node2:6443
server node3 node3:6443
timeout connect 30s
timeout server 30m

View File

@ -0,0 +1,60 @@
---
dependency:
name: galaxy
driver:
name: docker
scenario:
test_sequence:
- dependency
- cleanup
- destroy
- syntax
- create
- prepare
- check
- converge
- idempotence
- side_effect
- verify
- cleanup
- destroy
platforms:
- name: node1
image: ${MOLECULE_DISTRO:-"geerlingguy/docker-rockylinux8-ansible:latest"}
command: ${MOLECULE_DOCKER_COMMAND:-""}
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
privileged: true
pre_build_image: ${MOLECULE_PREBUILT:-true}
networks:
- name: k3snet
- name: node2
image: ${MOLECULE_DISTRO:-"geerlingguy/docker-rockylinux8-ansible:latest"}
command: ${MOLECULE_DOCKER_COMMAND:-""}
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
privileged: true
pre_build_image: ${MOLECULE_PREBUILT:-true}
networks:
- name: k3snet
- name: node3
image: ${MOLECULE_DISTRO:-"geerlingguy/docker-rockylinux8-ansible:latest"}
command: ${MOLECULE_DOCKER_COMMAND:-""}
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
privileged: true
pre_build_image: ${MOLECULE_PREBUILT:-true}
networks:
- name: k3snet
- name: loadbalancer
image: geerlingguy/docker-rockylinux8-ansible:latest
pre_build_image: true
ports:
- "6443:6443"
networks:
- name: k3snet
provisioner:
name: ansible
options:
verbose: true

View File

@ -0,0 +1,59 @@
---
- name: Prepare all nodes
hosts: all
tasks:
- name: Ensure apt cache is updated
ansible.builtin.apt:
update_cache: true
when: ansible_pkg_mgr == 'apt'
- name: Ensure sudo is installed
community.general.apk:
name: sudo
state: present
update_cache: true
when: ansible_pkg_mgr == 'apk'
- name: Prepare Load Balancer
hosts: loadbalancer
tasks:
- name: Ensure HAProxy is installed
ansible.builtin.package:
name: haproxy
state: present
- name: Ensure HAProxy config directory exists
ansible.builtin.file:
path: /usr/local/etc/haproxy
state: directory
mode: 0755
- name: Ensure HAProxy is configured
ansible.builtin.template:
src: haproxy-loadbalancer.conf.j2
dest: /usr/local/etc/haproxy/haproxy.cfg
mode: 0644
- name: Ensure HAProxy service is started
ansible.builtin.command:
cmd: haproxy -D -f /usr/local/etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid
args:
creates: /var/run/haproxy.pid
- name: Prepare nodes
hosts: node*
tasks:
- name: Ensure apt cache is updated and iptables is installed
ansible.builtin.apt:
name: iptables
state: present
update_cache: true
when: ansible_pkg_mgr == 'apt'
- name: Ensure iproute is installed
ansible.builtin.dnf:
name: iproute
state: present
update_cache: true
when: ansible_pkg_mgr == 'dnf'

View File

@ -0,0 +1,4 @@
-r ../requirements.txt
yamllint>=1.25.0
ansible-lint>=4.3.5

molecule/nodeploy/.gitignore vendored Normal file
View File

@ -0,0 +1 @@
files/*

View File

@ -0,0 +1,12 @@
---
- name: Converge
hosts: all
become: true
vars:
molecule_is_test: true
k3s_server: "{{ lookup('file', 'k3s_server.yml') | from_yaml }}"
k3s_agent: "{{ lookup('file', 'k3s_agent.yml') | from_yaml }}"
k3s_airgap: true
k3s_release_version: latest
roles:
- role: "{{ lookup('env', 'MOLECULE_PROJECT_DIRECTORY') | basename }}"

View File

@ -0,0 +1,9 @@
---
node-label:
- "foo=bar"
- "hello=world"
kubelet-arg:
- "cloud-provider=external"
- "provider-id=azure"
# snapshotter: native

View File

@ -0,0 +1,14 @@
---
flannel-backend: 'none'
disable-scheduler: true
disable-cloud-controller: true
disable-network-policy: true
disable:
- coredns
- traefik
- servicelb
- local-storage
- metrics-server
node-taint:
- "k3s-controlplane=true:NoExecute"

View File

@ -0,0 +1,55 @@
---
dependency:
name: galaxy
driver:
name: docker
scenario:
test_sequence:
- dependency
- cleanup
- destroy
- syntax
- create
- prepare
- check
- converge
- idempotence
- side_effect
- verify
- cleanup
- destroy
platforms:
- name: node1
image: ${MOLECULE_DISTRO:-"geerlingguy/docker-rockylinux8-ansible:latest"}
command: ${MOLECULE_DOCKER_COMMAND:-""}
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
privileged: true
pre_build_image: ${MOLECULE_PREBUILT:-true}
networks:
- name: k3snet
- name: node2
image: ${MOLECULE_DISTRO:-"geerlingguy/docker-rockylinux8-ansible:latest"}
command: ${MOLECULE_DOCKER_COMMAND:-""}
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
privileged: true
pre_build_image: ${MOLECULE_PREBUILT:-true}
networks:
- name: k3snet
- name: node3
image: ${MOLECULE_DISTRO:-"geerlingguy/docker-rockylinux8-ansible:latest"}
command: ${MOLECULE_DOCKER_COMMAND:-""}
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
privileged: true
pre_build_image: ${MOLECULE_PREBUILT:-true}
networks:
- name: k3snet
provisioner:
name: ansible
options:
verbose: true
verifier:
name: ansible

View File

@ -0,0 +1,27 @@
---
- name: Prepare
hosts: all
tasks:
- name: Ensure apt cache is updated and iptables is installed
ansible.builtin.apt:
name: iptables
state: present
update_cache: true
when: ansible_pkg_mgr == 'apt'
- name: Prepare air-gapped installation
delegate_to: localhost
run_once: true
block:
- name: Ensure files directory exists
ansible.builtin.file:
path: ./files
state: directory
mode: 0750
- name: Ensure k3s is downloaded for air-gap installation
ansible.builtin.get_url:
url: https://github.com/k3s-io/k3s/releases/download/v1.22.5%2Bk3s1/k3s
dest: ./files/k3s
mode: 0755

View File

@ -0,0 +1,9 @@
---
# This is an example playbook to execute Ansible tests.
- name: Verify
hosts: all
tasks:
- name: Example assertion
ansible.builtin.assert:
that: true

View File

@ -0,0 +1,4 @@
-r ../requirements.txt
molecule-plugins[docker]
docker>=4.3.1

requirements.txt Normal file
View File

@ -0,0 +1 @@
ansible>=2.10.7

View File

@ -1,46 +0,0 @@
---
- name: Ensure ansible_host is mapped to inventory_hostname
lineinfile:
path: /tmp/inventory.txt
line: "{{ item }}@@@{{ hostvars[item].ansible_host }}@@@{{ hostvars[item].k3s_control_node }}"
create: true
loop: "{{ play_hosts }}"
- name: Lookup control node from file
command: "grep 'True' /tmp/inventory.txt"
changed_when: false
register: k3s_control_delegate_raw
- name: Ensure control node is delegated to for obtaining a token
set_fact:
k3s_control_delegate: "{{ k3s_control_delegate_raw.stdout.split('@@@')[0] }}"
- name: Ensure the control node address is registered in Ansible
set_fact:
k3s_control_node_address: "{{ hostvars[k3s_control_delegate].ansible_host }}"
- name: Ensure NODE_TOKEN is captured from control node
slurp:
path: "/var/lib/rancher/k3s/server/node-token"
register: k3s_control_token
delegate_to: "{{ k3s_control_delegate }}"
- name: Ensure k3s service unit file is present
template:
src: k3s.service.j2
dest: /etc/systemd/system/k3s.service
notify:
- reload systemctl
- restart k3s
- meta: flush_handlers
- name: Wait for all nodes to be ready
command: "{{ k3s_install_dir }}/kubectl get nodes"
changed_when: false
register: kubectl_get_nodes_result
until: kubectl_get_nodes_result.stdout.find("NotReady") == -1
retries: 30
delay: 20
when: k3s_control_node

View File

@ -0,0 +1,10 @@
---
- name: Ensure systemd context is correct if we are running k3s rootless
ansible.builtin.set_fact:
k3s_systemd_context: user
k3s_systemd_unit_dir: "{{ ansible_user_dir }}/.config/systemd/user"
when:
- k3s_runtime_config is defined
- k3s_runtime_config.rootless is defined
- k3s_runtime_config.rootless

View File

@ -1,33 +0,0 @@
---
- name: Ensure target host architecture information is set as a fact
set_fact:
k3s_arch: "{{ k3s_arch_lookup[ansible_architecture].arch }}"
k3s_arch_suffix: "{{ k3s_arch_lookup[ansible_architecture].suffix }}"
- name: Ensure URLs are set as facts for downloading binaries
set_fact:
k3s_binary_url: "{{ k3s_github_download_url }}/{{ k3s_release_version }}/k3s{{ k3s_arch_suffix }}"
k3s_hash_url: "{{ k3s_github_download_url }}/{{ k3s_release_version }}/sha256sum-{{ k3s_arch }}.txt"
- name: Ensure the k3s hashsum is downloaded
uri:
url: "{{ k3s_hash_url }}"
return_content: true
register: k3s_hash_sum_raw
- name: Ensure sha256sum is set from hashsum variable
shell: >
set -o pipefail && \
echo "{{ k3s_hash_sum_raw.content }}" | grep 'k3s' | awk '{ print $1 }'
changed_when: false
args:
executable: /bin/bash
register: k3s_hash_sum
- name: Ensure k3s binary is downloaded
get_url:
url: "{{ k3s_binary_url }}"
dest: "{{ k3s_install_dir }}/k3s-{{ k3s_release_version }}"
checksum: "sha256:{{ k3s_hash_sum.stdout }}"
mode: 0755

tasks/ensure_cluster.yml Normal file
View File

@ -0,0 +1,108 @@
---
- name: "Ensure cluster token is captured from {{ k3s_control_delegate }}"
ansible.builtin.slurp:
path: "{{ k3s_runtime_config['data-dir'] | default(k3s_data_dir) }}/server/token"
register: k3s_slurped_cluster_token
delegate_to: "{{ k3s_control_delegate }}"
when:
- k3s_control_token is not defined
- not ansible_check_mode
become: "{{ k3s_become }}"
- name: Ensure cluster token is formatted correctly for use in templates
ansible.builtin.set_fact:
k3s_control_token_content: "{{ k3s_control_token | default(k3s_slurped_cluster_token.content | b64decode) }}"
when:
- k3s_control_token is not defined
- not ansible_check_mode
- name: Ensure dummy cluster token is defined for ansible_check_mode
ansible.builtin.set_fact:
k3s_control_token_content: "{{ k3s_control_delegate | to_uuid }}"
check_mode: false
when:
- ansible_check_mode
- name: Ensure the cluster token file location exists
ansible.builtin.file:
path: "{{ k3s_token_location | dirname }}"
state: directory
mode: 0755
become: "{{ k3s_become }}"
- name: Ensure k3s cluster token file is present
ansible.builtin.template:
src: cluster-token.j2
dest: "{{ k3s_token_location }}"
mode: 0600
become: "{{ k3s_become }}"
notify:
- "Restart k3s {{ k3s_service_handler[ansible_service_mgr] }}"
- name: Ensure k3s service unit file is present
ansible.builtin.template:
src: k3s.service.j2
dest: "{{ k3s_systemd_unit_dir }}/k3s.service"
mode: 0644
become: "{{ k3s_become }}"
when:
- k3s_service_handler[ansible_service_mgr] == 'systemd'
notify:
- "Reload {{ k3s_service_handler[ansible_service_mgr] }}"
- "Restart k3s {{ k3s_service_handler[ansible_service_mgr] }}"
- name: Ensure k3s service file is present
ansible.builtin.template:
src: k3s.openrc.j2
dest: "{{ k3s_openrc_service_dir }}/k3s"
mode: 0744
when:
- k3s_service_handler[ansible_service_mgr] == 'service'
notify:
- "Reload {{ k3s_service_handler[ansible_service_mgr] }}"
- "Restart k3s {{ k3s_service_handler[ansible_service_mgr] }}"
become: "{{ k3s_become }}"
- name: Ensure k3s logrotate file is present
ansible.builtin.template:
src: k3s.logrotate.j2
dest: "{{ k3s_logrotate_dir }}/k3s"
mode: 0640
when:
- k3s_service_handler[ansible_service_mgr] == 'service'
notify:
- "Reload {{ k3s_service_handler[ansible_service_mgr] }}"
- "Restart k3s {{ k3s_service_handler[ansible_service_mgr] }}"
become: "{{ k3s_become }}"
- name: Ensure k3s config file exists
ansible.builtin.template:
src: config.yaml.j2
dest: "{{ k3s_config_file }}"
mode: 0644
notify:
- "Reload {{ k3s_service_handler[ansible_service_mgr] }}"
- "Restart k3s {{ k3s_service_handler[ansible_service_mgr] }}"
become: "{{ k3s_become }}"
- name: Ensure secondary controllers are started
ansible.builtin.include_tasks: ensure_control_plane_started_{{ ansible_service_mgr }}.yml
when:
- k3s_control_node
- not k3s_primary_control_node
- name: Run control plane post checks
ansible.builtin.import_tasks: post_checks_control_plane.yml
when:
- not k3s_skip_validation
- not k3s_skip_post_checks
- name: Flush Handlers
ansible.builtin.meta: flush_handlers
- name: Run node post checks
ansible.builtin.import_tasks: post_checks_nodes.yml
when:
- not k3s_skip_validation
- not k3s_skip_post_checks

View File

@ -0,0 +1,11 @@
---
- name: Ensure containerd registries file exists
ansible.builtin.template:
src: registries.yaml.j2
dest: "{{ k3s_config_dir }}/registries.yaml"
mode: 0600
notify:
- "Reload {{ k3s_service_handler[ansible_service_mgr] }}"
- "Restart k3s {{ k3s_service_handler[ansible_service_mgr] }}"
become: "{{ k3s_become }}"

View File

@ -0,0 +1,15 @@
---
- name: Ensure k3s control plane server is started
ansible.builtin.service:
name: k3s
state: started
enabled: "{{ k3s_start_on_boot }}"
register: k3s_service_start_k3s
until: k3s_service_start_k3s is succeeded
retries: 3
delay: 3
failed_when:
- k3s_service_start_k3s is not succeeded
- not ansible_check_mode
become: "{{ k3s_become }}"

View File

@ -0,0 +1,16 @@
---
- name: Ensure k3s control plane server is started
ansible.builtin.systemd:
name: k3s
state: started
enabled: "{{ k3s_start_on_boot }}"
scope: "{{ k3s_systemd_context }}"
register: k3s_systemd_start_k3s
until: k3s_systemd_start_k3s is succeeded
retries: 3
delay: 3
failed_when:
- k3s_systemd_start_k3s is not succeeded
- not ansible_check_mode
become: "{{ k3s_become }}"

View File

@ -0,0 +1,12 @@
---
- name: Ensure {{ directory.name }} exists
ansible.builtin.file:
path: "{{ directory.path }}"
state: directory
mode: "{{ directory.mode | default(755) }}"
become: "{{ k3s_become }}"
when:
- directory.path is defined
- directory.path | length > 0
- directory.path != omit

View File

@ -0,0 +1,51 @@
---
- name: Ensure target host architecture information is set as a fact
ansible.builtin.set_fact:
k3s_arch: "{{ k3s_arch_lookup[ansible_architecture].arch }}"
k3s_arch_suffix: "{{ k3s_arch_lookup[ansible_architecture].suffix }}"
check_mode: false
- name: Ensure URLs are set as facts for downloading binaries
ansible.builtin.set_fact:
k3s_binary_url: "{{ k3s_github_download_url }}/{{ k3s_release_version }}/k3s{{ k3s_arch_suffix }}"
k3s_hash_url: "{{ k3s_github_download_url }}/{{ k3s_release_version }}/sha256sum-{{ k3s_arch }}.txt"
check_mode: false
- name: Override k3s_binary_url and k3s_hash_url facts for testing specific commit
ansible.builtin.set_fact:
k3s_binary_url: "https://storage.googleapis.com/k3s-ci-builds/k3s{{ k3s_arch_suffix }}-{{ k3s_release_version }}"
k3s_hash_url: "https://storage.googleapis.com/k3s-ci-builds/k3s{{ k3s_arch_suffix }}-{{ k3s_release_version }}.sha256sum"
when:
- k3s_release_version | regex_search("^[a-z0-9]{40}$")
check_mode: false
- name: Ensure the k3s hashsum is downloaded
ansible.builtin.uri:
url: "{{ k3s_hash_url }}"
return_content: true
register: k3s_hash_sum_raw
check_mode: false
- name: Ensure sha256sum is set from hashsum variable
ansible.builtin.set_fact:
k3s_hash_sum: "{{ (k3s_hash_sum_raw.content.split('\n') |
select('search', 'k3s' + k3s_arch_suffix) |
reject('search', 'images') |
first).split() | first }}"
changed_when: false
check_mode: false
- name: Ensure installation directory exists
ansible.builtin.file:
path: "{{ k3s_install_dir }}"
state: directory
mode: 0755
- name: Ensure k3s binary is downloaded
ansible.builtin.get_url:
url: "{{ k3s_binary_url }}"
dest: "{{ k3s_install_dir }}/k3s-{{ k3s_release_version }}"
checksum: "sha256:{{ k3s_hash_sum }}"
mode: 0755
become: "{{ k3s_become }}"

View File

@@ -0,0 +1,54 @@
---
- name: Check if kubectl exists
  ansible.builtin.stat:
    path: "{{ k3s_install_dir }}/kubectl"
  register: k3s_check_kubectl
  become: "{{ k3s_become }}"

- name: Clean up nodes that are in an uninstalled state
  when:
    - k3s_check_kubectl.stat.exists is defined
    - k3s_check_kubectl.stat.exists
    - k3s_control_delegate is defined
    - not ansible_check_mode
  block:
    - name: Gather a list of nodes
      ansible.builtin.command:
        cmd: "{{ k3s_install_dir }}/kubectl get nodes"
      changed_when: false
      failed_when: false
      delegate_to: "{{ k3s_control_delegate }}"
      run_once: true
      register: kubectl_get_nodes_result
      become: "{{ k3s_become }}"

    - name: Ensure uninstalled nodes are drained  # noqa no-changed-when
      ansible.builtin.command:
        cmd: >-
          {{ k3s_install_dir }}/kubectl drain {{ hostvars[item].ansible_hostname }}
          --ignore-daemonsets
          --{{ k3s_drain_command[ansible_version.string is version_compare('1.22', '>=')] }}
          --force
      delegate_to: "{{ k3s_control_delegate }}"
      run_once: true
      when:
        - kubectl_get_nodes_result.stdout is defined
        - hostvars[item].ansible_hostname in kubectl_get_nodes_result.stdout
        - hostvars[item].k3s_state is defined
        - hostvars[item].k3s_state == 'uninstalled'
      loop: "{{ ansible_play_hosts }}"
      become: "{{ k3s_become }}"

    - name: Ensure uninstalled nodes are removed  # noqa no-changed-when
      ansible.builtin.command:
        cmd: "{{ k3s_install_dir }}/kubectl delete node {{ hostvars[item].ansible_hostname }}"
      delegate_to: "{{ k3s_control_delegate }}"
      run_once: true
      when:
        - kubectl_get_nodes_result.stdout is defined
        - hostvars[item].ansible_hostname in kubectl_get_nodes_result.stdout
        - hostvars[item].k3s_state is defined
        - hostvars[item].k3s_state == 'uninstalled'
      loop: "{{ ansible_play_hosts }}"
      become: "{{ k3s_become }}"
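
Usage sketch: setting k3s_state on a host is what drives the drain and delete tasks above on the next run (hostname illustrative):

# host_vars/worker03.yml
k3s_state: uninstalled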


@@ -0,0 +1,32 @@
---
- name: Ensure directories exist
  ansible.builtin.include_tasks: ensure_directories.yml
  loop: "{{ k3s_ensure_directories_exist }}"
  loop_control:
    loop_var: directory

- name: Ensure installed node
  ansible.builtin.include_tasks: ensure_installed_node.yml
  when:
    - ((k3s_control_node and k3s_controller_list | length == 1)
      or (k3s_primary_control_node and k3s_controller_list | length > 1))
    - not ansible_check_mode

- name: Flush Handlers
  ansible.builtin.meta: flush_handlers

- name: Ensure installed node | k3s_build_cluster
  ansible.builtin.include_tasks: ensure_installed_node.yml
  when: k3s_build_cluster

- name: Determine if the systems are already clustered
  ansible.builtin.stat:
    path: "{{ k3s_token_location }}"
  register: k3s_token_cluster_check

- name: Ensure control plane started with {{ ansible_service_mgr }}
  ansible.builtin.include_tasks: ensure_control_plane_started_{{ ansible_service_mgr }}.yml
  when: (k3s_control_node and k3s_controller_list | length == 1)
    or (k3s_primary_control_node and k3s_controller_list | length > 1)
    or k3s_token_cluster_check.stat.exists


@@ -0,0 +1,103 @@
---
- name: Ensure k3s is linked into the installation destination
  ansible.builtin.file:
    src: "{{ k3s_install_dir }}/k3s-{{ k3s_release_version }}"
    dest: "{{ k3s_install_dir }}/{{ item }}"
    state: "{{ 'hard' if k3s_install_hard_links else 'link' }}"
    force: "{{ k3s_install_hard_links }}"
    mode: 0755
  loop:
    - k3s
    - kubectl
    - crictl
    - ctr
  when: not ansible_check_mode
  notify:
    - "Restart k3s {{ k3s_service_handler[ansible_service_mgr] }}"
  become: "{{ k3s_become }}"

- name: Ensure k3s config file exists
  ansible.builtin.template:
    src: config.yaml.j2
    dest: "{{ k3s_config_file }}"
    mode: 0644
  notify:
    - "Reload {{ k3s_service_handler[ansible_service_mgr] }}"
    - "Restart k3s {{ k3s_service_handler[ansible_service_mgr] }}"
  become: "{{ k3s_become }}"

- name: Ensure cluster token is present when pre-defined
  when: k3s_control_token is defined
  block:
    - name: Ensure the cluster token file location exists
      ansible.builtin.file:
        path: "{{ k3s_token_location | dirname }}"
        state: directory
        mode: 0755
      become: "{{ k3s_become }}"

    - name: Ensure k3s cluster token file is present
      ansible.builtin.template:
        src: cluster-token.j2
        dest: "{{ k3s_token_location }}"
        mode: 0600
      become: "{{ k3s_become }}"
      notify:
        - "Restart k3s {{ k3s_service_handler[ansible_service_mgr] }}"

- name: Ensure k3s service unit file is present
  ansible.builtin.template:
    src: k3s.service.j2
    dest: "{{ k3s_systemd_unit_dir }}/k3s.service"
    mode: 0644
  when:
    - k3s_service_handler[ansible_service_mgr] == 'systemd'
  notify:
    - "Reload {{ k3s_service_handler[ansible_service_mgr] }}"
    - "Restart k3s {{ k3s_service_handler[ansible_service_mgr] }}"
  become: "{{ k3s_become }}"

- name: Ensure k3s service file is present
  ansible.builtin.template:
    src: k3s.openrc.j2
    dest: "{{ k3s_openrc_service_dir }}/k3s"
    mode: 0744
  when:
    - k3s_service_handler[ansible_service_mgr] == 'service'
  notify:
    - "Reload {{ k3s_service_handler[ansible_service_mgr] }}"
    - "Restart k3s {{ k3s_service_handler[ansible_service_mgr] }}"
  become: "{{ k3s_become }}"

- name: Ensure k3s logrotate file is present
  ansible.builtin.template:
    src: k3s.logrotate.j2
    dest: "{{ k3s_logrotate_dir }}/k3s"
    mode: 0640
  when:
    - k3s_service_handler[ansible_service_mgr] == 'service'
  notify:
    - "Reload {{ k3s_service_handler[ansible_service_mgr] }}"
    - "Restart k3s {{ k3s_service_handler[ansible_service_mgr] }}"
  become: "{{ k3s_become }}"

- name: Ensure k3s killall script is present
  ansible.builtin.template:
    src: k3s-killall.sh.j2
    dest: "/usr/local/bin/k3s-killall.sh"
    mode: 0700
  become: "{{ k3s_become }}"
  when:
    - k3s_runtime_config is defined
    - ("rootless" not in k3s_runtime_config or not k3s_runtime_config.rootless)

- name: Ensure k3s uninstall script is present
  ansible.builtin.template:
    src: k3s-uninstall.sh.j2
    dest: "/usr/local/bin/k3s-uninstall.sh"
    mode: 0700
  become: "{{ k3s_become }}"
  when:
    - k3s_runtime_config is defined
    - ("rootless" not in k3s_runtime_config or not k3s_runtime_config.rootless)


@@ -0,0 +1,70 @@
---
- name: Ensure that the manifests directory exists
  ansible.builtin.file:
    state: directory
    path: "{{ k3s_server_manifests_dir }}"
    mode: 0755
  when: >-
    k3s_primary_control_node and
    (k3s_server_manifests_templates | length > 0
    or k3s_server_manifests_urls | length > 0)
  become: "{{ k3s_become }}"

- name: Ensure that the pod-manifests directory exists
  ansible.builtin.file:
    state: directory
    path: "{{ k3s_server_pod_manifests_dir }}"
    mode: 0755
  when: >-
    k3s_control_node and
    (k3s_server_pod_manifests_templates | length > 0
    or k3s_server_pod_manifests_urls | length > 0)
  become: "{{ k3s_become }}"

# https://rancher.com/docs/k3s/latest/en/advanced/#auto-deploying-manifests
- name: Ensure auto-deploying manifests are copied to the primary controller
  ansible.builtin.template:
    src: "{{ item }}"
    dest: "{{ k3s_server_manifests_dir }}/{{ item | basename | replace('.j2', '') }}"
    mode: 0644
  loop: "{{ k3s_server_manifests_templates }}"
  become: "{{ k3s_become }}"
  when:
    - k3s_primary_control_node
    - k3s_server_manifests_templates | length > 0

- name: Ensure auto-deploying manifests are downloaded to the primary controller
  ansible.builtin.get_url:
    url: "{{ item.url }}"
    dest: "{{ k3s_server_manifests_dir }}/{{ item.filename | default(item.url | basename) }}"
    mode: 0644
  loop: "{{ k3s_server_manifests_urls }}"
  become: "{{ k3s_become }}"
  when:
    - k3s_primary_control_node
    - not ansible_check_mode
    - k3s_server_manifests_urls | length > 0

# https://github.com/k3s-io/k3s/pull/1691
- name: Ensure static pod manifests are copied to controllers
  ansible.builtin.template:
    src: "{{ item }}"
    dest: "{{ k3s_server_pod_manifests_dir }}/{{ item | basename | replace('.j2', '') }}"
    mode: 0644
  loop: "{{ k3s_server_pod_manifests_templates }}"
  become: "{{ k3s_become }}"
  when:
    - k3s_control_node

# https://rancher.com/docs/k3s/latest/en/advanced/#auto-deploying-manifests
- name: Ensure static pod manifests are downloaded to controllers
  ansible.builtin.get_url:
    url: "{{ item.url }}"
    dest: "{{ k3s_server_pod_manifests_dir }}/{{ item.filename | default(item.url | basename) }}"
    mode: 0644
  loop: "{{ k3s_server_pod_manifests_urls }}"
  become: "{{ k3s_become }}"
  when:
    - k3s_control_node
    - not ansible_check_mode
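
A hedged sketch of the variables that drive these four tasks (paths and URLs are illustrative):

k3s_server_manifests_templates:
  - manifests/metallb-namespace.yml.j2
k3s_server_manifests_urls:
  - url: https://example.com/manifests/cert-manager.yaml
    filename: cert-manager.yaml
k3s_server_pod_manifests_templates:
  - manifests/static-etcd-backup.yaml.j2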


@@ -0,0 +1,31 @@
---
- name: Ensure that the config.yaml.d directory exists
  ansible.builtin.file:
    state: directory
    path: "{{ k3s_config_yaml_d_dir }}"
    mode: 0755
  when: >-
    k3s_server_config_yaml_d_files | length > 0
    or k3s_agent_config_yaml_d_files | length > 0
  become: "{{ k3s_become }}"

# https://github.com/k3s-io/k3s/pull/3162
- name: Ensure configuration files are copied to controllers
  ansible.builtin.template:
    src: "{{ item }}"
    dest: "{{ k3s_config_yaml_d_dir }}/{{ item | basename | replace('.j2', '') }}"
    mode: 0644
  loop: "{{ k3s_server_config_yaml_d_files }}"
  become: "{{ k3s_become }}"
  when: k3s_control_node

# https://github.com/k3s-io/k3s/pull/3162
- name: Ensure configuration files are copied to agents
  ansible.builtin.template:
    src: "{{ item }}"
    dest: "{{ k3s_config_yaml_d_dir }}/{{ item | basename | replace('.j2', '') }}"
    mode: 0644
  loop: "{{ k3s_agent_config_yaml_d_files }}"
  become: "{{ k3s_become }}"
  when: not k3s_control_node
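
Correspondingly, a sketch of the file lists consumed here, split by node role (filenames illustrative):

k3s_server_config_yaml_d_files:
  - config.yaml.d/audit.yaml.j2
k3s_agent_config_yaml_d_files:
  - config.yaml.d/kubelet-args.yaml.j2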


@@ -0,0 +1,163 @@
---
- name: Ensure k3s_build_cluster is false if running against a single node.
  ansible.builtin.set_fact:
    k3s_build_cluster: false
  when:
    - ansible_play_hosts | length < 2
    - k3s_registration_address is not defined

- name: Ensure k3s control node fact is set
  ansible.builtin.set_fact:
    k3s_control_node: "{{ not k3s_build_cluster }}"
  when: k3s_control_node is not defined

- name: Ensure k3s primary control node fact is set
  ansible.builtin.set_fact:
    k3s_primary_control_node: "{{ not k3s_build_cluster }}"
  when: k3s_primary_control_node is not defined

- name: Ensure k3s control plane port is captured
  ansible.builtin.set_fact:
    k3s_control_plane_port: "{{ k3s_runtime_config['https-listen-port'] | default(6443) }}"
  delegate_to: k3s_primary_control_node

- name: Ensure k3s node IP is configured when node-ip is defined
  ansible.builtin.set_fact:
    k3s_node_ip: "{{ k3s_runtime_config['node-ip'] }}"
  when:
    - k3s_runtime_config['node-ip'] is defined

- name: Ensure a count of control nodes is generated from ansible_play_hosts
  ansible.builtin.set_fact:
    k3s_controller_list: "{{ k3s_controller_list + [item] }}"
  when:
    - hostvars[item].k3s_control_node is defined
    - hostvars[item].k3s_control_node
  loop: "{{ ansible_play_hosts }}"

- name: Ensure a k3s control node is defined if none are found in ansible_play_hosts
  when:
    - k3s_controller_list | length < 1
    - k3s_build_cluster is defined
    - k3s_build_cluster
  block:
    - name: Set the control host
      ansible.builtin.set_fact:
        k3s_control_node: true
      when: inventory_hostname == ansible_play_hosts[0]

    - name: Ensure a count of control nodes is generated
      ansible.builtin.set_fact:
        k3s_controller_list: "{{ k3s_controller_list + [item] }}"
      when:
        - hostvars[item].k3s_control_node is defined
        - hostvars[item].k3s_control_node
      loop: "{{ ansible_play_hosts }}"

- name: Ensure an existing primary k3s control node is defined if multiple are found and at least one is running
  when:
    - k3s_controller_list | length >= 1
    - k3s_build_cluster is defined
    - k3s_build_cluster
    - k3s_control_delegate is not defined
  block:
    - name: Test if control plane is running
      ansible.builtin.wait_for:
        port: "{{ k3s_runtime_config['https-listen-port'] | default('6443') }}"
        host: "{{ k3s_runtime_config['bind-address'] | default('127.0.0.1') }}"
        timeout: 5
      register: k3s_control_node_running
      ignore_errors: true
      when: k3s_control_node

    - name: List running control planes
      ansible.builtin.set_fact:
        k3s_running_controller_list: "{{ k3s_running_controller_list + [item] }}"
      when:
        - hostvars[item].k3s_control_node_running is not skipped
        - hostvars[item].k3s_control_node_running is succeeded
      loop: "{{ ansible_play_hosts }}"

    - name: Choose first running node as delegate
      ansible.builtin.set_fact:
        k3s_control_delegate: "{{ k3s_running_controller_list[0] }}"
      when: k3s_running_controller_list | length >= 1

    - name: Ensure k3s_primary_control_node is set on the delegate
      ansible.builtin.set_fact:
        k3s_primary_control_node: true
      when:
        - k3s_control_delegate is defined
        - inventory_hostname == k3s_control_delegate

- name: Ensure a primary k3s control node is defined if multiple are found in ansible_play_hosts
  ansible.builtin.set_fact:
    k3s_primary_control_node: true
  when:
    - k3s_controller_list is defined
    - inventory_hostname == k3s_controller_list[0]
    - k3s_build_cluster is defined
    - k3s_build_cluster
    - k3s_control_delegate is not defined

- name: Ensure ansible_host is mapped to inventory_hostname
  ansible.builtin.blockinfile:
    path: /tmp/inventory.txt
    block: |
      {% for host in ansible_play_hosts %}
      {% filter replace('\n', ' ') %}
      {{ host }}
      @@@
      {{ hostvars[host].ansible_host | default(hostvars[host].ansible_fqdn) | string }}
      @@@
      C_{{ hostvars[host].k3s_control_node | string }}
      @@@
      P_{{ hostvars[host].k3s_primary_control_node | default(False) | string }}
      {% endfilter %}
      @@@ END:{{ host }}
      {% endfor %}
    create: true
    mode: 0600
  check_mode: false
  when: k3s_control_node is defined

- name: Delegate an initializing control plane node
  when: k3s_registration_address is not defined
    or k3s_control_delegate is not defined
  block:
    - name: Lookup control node from file
      ansible.builtin.command:
        cmd: "grep -i '{{ 'P_True' if (k3s_controller_list | length > 1) else 'C_True' }}' /tmp/inventory.txt"
      changed_when: false
      check_mode: false
      register: k3s_control_delegate_raw

    - name: Ensure control node is delegated for obtaining a cluster token
      ansible.builtin.set_fact:
        k3s_control_delegate: "{{ k3s_control_delegate_raw.stdout.split(' @@@ ')[0] }}"
      check_mode: false
      when: k3s_control_delegate is not defined

    - name: Ensure the node registration address is defined from k3s_control_node_address
      ansible.builtin.set_fact:
        k3s_registration_address: "{{ k3s_control_node_address }}"
      check_mode: false
      when: k3s_control_node_address is defined

    - name: Ensure the node registration address is defined from node-ip
      ansible.builtin.set_fact:
        k3s_registration_address: "{{ hostvars[k3s_control_delegate].k3s_node_ip }}"
      check_mode: false
      when:
        - k3s_registration_address is not defined
        - k3s_control_node_address is not defined
        - hostvars[k3s_control_delegate].k3s_node_ip is defined

    - name: Ensure the node registration address is defined
      ansible.builtin.set_fact:
        k3s_registration_address: "{{ hostvars[k3s_control_delegate].ansible_host | default(hostvars[k3s_control_delegate].ansible_fqdn) }}"
      check_mode: false
      when:
        - k3s_registration_address is not defined
        - k3s_control_node_address is not defined
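
To see how these facts are seeded, a minimal inventory sketch for a cluster with three controllers and two workers (hostnames illustrative):

k3s_cluster:
  hosts:
    kube01:
      k3s_control_node: true
    kube02:
      k3s_control_node: true
    kube03:
      k3s_control_node: true
    kube04: {}
    kube05: {}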

tasks/ensure_started.yml Normal file

@@ -0,0 +1,20 @@
---
- name: Ensure k3s service is started
  ansible.builtin.systemd:
    name: k3s
    state: started
    enabled: "{{ k3s_start_on_boot }}"
  when: k3s_non_root is not defined or not k3s_non_root
  become: "{{ k3s_become }}"

- name: Ensure k3s service is started
  ansible.builtin.systemd:
    name: k3s
    state: started
    enabled: "{{ k3s_start_on_boot }}"
    scope: user
  when:
    - k3s_non_root is defined
    - k3s_non_root
  become: "{{ k3s_become }}"

tasks/ensure_stopped.yml Normal file

@@ -0,0 +1,20 @@
---
- name: Ensure k3s service is stopped
  ansible.builtin.systemd:
    name: k3s
    state: stopped
    enabled: "{{ k3s_start_on_boot }}"
  when: k3s_non_root is not defined or not k3s_non_root
  become: "{{ k3s_become }}"

- name: Ensure k3s service is stopped
  ansible.builtin.systemd:
    name: k3s
    state: stopped
    enabled: "{{ k3s_start_on_boot }}"
    scope: user
  when:
    - k3s_non_root is defined
    - k3s_non_root
  become: "{{ k3s_become }}"
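
These two files appear to be dispatched from the role's k3s_state variable; a usage sketch, assuming k3s_state accepts values such as started and stopped and that the role's Galaxy name is xanmanning.k3s:

- hosts: k3s_cluster
  roles:
    - role: xanmanning.k3s
      vars:
        k3s_state: stopped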


@@ -0,0 +1,42 @@
---
- name: Check to see if k3s-killall.sh exists
  ansible.builtin.stat:
    path: /usr/local/bin/k3s-killall.sh
  register: check_k3s_killall_script

- name: Check to see if k3s-uninstall.sh exists
  ansible.builtin.stat:
    path: /usr/local/bin/k3s-uninstall.sh
  register: check_k3s_uninstall_script

- name: Run k3s-killall.sh
  ansible.builtin.command:
    cmd: /usr/local/bin/k3s-killall.sh
  register: k3s_killall
  changed_when: k3s_killall.rc == 0
  when: check_k3s_killall_script.stat.exists
  become: "{{ k3s_become }}"

- name: Run k3s-uninstall.sh
  ansible.builtin.command:
    cmd: /usr/local/bin/k3s-uninstall.sh
  args:
    removes: /usr/local/bin/k3s-uninstall.sh
  register: k3s_uninstall
  changed_when: k3s_uninstall.rc == 0
  when: check_k3s_uninstall_script.stat.exists
  become: "{{ k3s_become }}"

- name: Ensure hard links are removed
  ansible.builtin.file:
    path: "{{ k3s_install_dir }}/{{ item }}"
    state: absent
  loop:
    - kubectl
    - crictl
    - ctr
  when:
    - k3s_install_hard_links
    - not ansible_check_mode
  become: "{{ k3s_become }}"

tasks/ensure_uploads.yml Normal file

@@ -0,0 +1,15 @@
---
- name: Ensure installation directory exists
  ansible.builtin.file:
    path: "{{ k3s_install_dir }}"
    state: directory
    mode: 0755

- name: Ensure k3s binary is copied from controller to target host
  ansible.builtin.copy:
    src: k3s
    # TODO: allow airgap to bypass version post-fix
    dest: "{{ k3s_install_dir }}/k3s-{{ k3s_release_version }}"
    mode: 0755
  become: "{{ k3s_become }}"
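
Because src: k3s is a bare relative path, ansible.builtin.copy resolves it from the role's files/ directory (falling back to the playbook's). An air-gapped usage sketch, where k3s_airgap is an assumed name for the toggle that routes installation through this upload path:

# files/k3s must already exist on the Ansible controller before the play runs.
k3s_airgap: true
k3s_release_version: v1.25.9+k3s1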


@@ -1,10 +0,0 @@
---
- name: Get the latest release version from GitHub
  uri:
    url: https://github.com/rancher/k3s/releases/latest
  register: k3s_latest_release

- name: Ensure the release version is set as a fact
  set_fact:
    k3s_release_version: "{{ k3s_latest_release.url.split('/')[-1] }}"


@@ -1,27 +0,0 @@
---
- name: Ensure Docker prerequisites are installed
  apt:
    name: "{{ item }}"
    state: present
  register: ensure_docker_prerequisites_installed
  until: ensure_docker_prerequisites_installed is succeeded
  retries: 3
  delay: 10
  loop:
    - apt-transport-https
    - ca-certificates
    - curl
    - "{{ 'gnupg2' if ansible_distribution == 'Debian' else 'gnupg-agent' }}"
    - software-properties-common

- name: Ensure Docker APT key is present
  apt_key:
    url: https://download.docker.com/linux/{{ ansible_distribution | lower }}/gpg
    state: present

- name: Ensure Docker repository is installed and configured
  apt_repository:
    filename: docker-ce
    repo: "deb https://download.docker.com/linux/{{ ansible_distribution | lower }} {{ ansible_distribution_release }} stable"
    update_cache: true

Some files were not shown because too many files have changed in this diff.