```
FAILED! => {"attempts": 3, "changed": false, "msg": "Unable to enable service k3s: Failed to enable unit: Access denied\n"}
```
The task never sets `become: true`, so it fails because the default connection user lacks the permissions to enable the systemd unit.
Fixes #17
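A minimal sketch of the fix, assuming the unit is managed with `ansible.builtin.systemd` (the task shape here is illustrative, not the role's exact task):

```yaml
- name: Ensure k3s service is enabled and started
  ansible.builtin.systemd:
    name: k3s
    enabled: true
    state: started
  become: true  # privilege escalation; without it systemd answers "Access denied"
```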
There appeared to be a race condition where starting all secondary
masters at once would cause the k3s service to fail on some of them.
A retry has been added to the task so it keeps attempting to bring
them all up until they stop failing.
Fixes #16
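A sketch of the retry pattern, again assuming the service is started via `ansible.builtin.systemd`; the retry counts are illustrative, not the role's exact numbers:

```yaml
- name: Start k3s on secondary masters, retrying until the service settles
  ansible.builtin.systemd:
    name: k3s
    state: started
  become: true
  register: k3s_service
  until: k3s_service is succeeded  # keep retrying while the startup race causes failures
  retries: 3
  delay: 10
```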
This is because, without a CNI, nodes will never become Ready and the
task will fail. You need to deploy your choice of CNI manually (such as
Calico) and then check the state of the cluster using `kubectl get nodes`.
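For example, deploying Calico and then waiting for the nodes could look like this outside the role (the manifest URL and timings are illustrative; use the manifest for your chosen CNI and version):

```yaml
- name: Deploy Calico CNI manually
  ansible.builtin.command:
    cmd: kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
  run_once: true

- name: Wait until no node reports NotReady
  ansible.builtin.command:
    cmd: kubectl get nodes --no-headers
  register: node_status
  until: "'NotReady' not in node_status.stdout"
  retries: 30
  delay: 10
  changed_when: false
  run_once: true
```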
1. No longer removes prerequisite packages; lvm2 was included in
these packages (not good when you use LVM2 for real).
2. Added a bit more idempotency to the shell scripts: only delete a
file if it exists (see the sketch after this list).
3. Check that the k3s process isn't running and the binaries are gone.
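The same guards the shell scripts gained in items 2 and 3, expressed here as Ansible tasks for illustration (the binary path is hypothetical):

```yaml
- name: Check whether the k3s binary is still present
  ansible.builtin.stat:
    path: /usr/local/bin/k3s          # hypothetical install path
  register: k3s_binary

- name: Remove the binary only if it exists
  ansible.builtin.file:
    path: /usr/local/bin/k3s
    state: absent
  when: k3s_binary.stat.exists

- name: Verify the k3s process is no longer running
  ansible.builtin.command:
    cmd: pgrep -x k3s
  register: k3s_proc
  failed_when: k3s_proc.rc == 0       # a surviving process means uninstall failed
  changed_when: false
```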
I attempted to install on arm64 and armhf. Both fail because the
[checksum filter](e07903a5cf/tasks/build/download-k3s.yml (L21))
selects the first line containing "k3s". In the arm checksum files,
the first such lines are for "k3s-airgap-images-arm64.tar" and "k3s-airgap-images-arm.tar",
so the wrong checksum is grabbed.
I attempted to fix this with a more specific filter:
`select('search', 'k3s'+k3s_arch_suffix)`.
This works for both arm architectures,
but fails for amd64 because the key is simply "k3s" and not "k3s-amd64".
The solution I settled on is not ideal for future-proofing,
but it works for now at least.
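One shape such a filter could take (not necessarily the fix that landed) is to anchor the match to the end of the line, so the airgap-image entries never match first. Here `k3s_checksums` and `k3s_arch_suffix` are illustrative names, with the suffix empty on amd64 and e.g. `-arm64` on arm64:

```yaml
- name: Extract the k3s checksum for the current architecture
  ansible.builtin.set_fact:
    k3s_hash: >-
      {{ ((k3s_checksums.content | b64decode).splitlines()
          | select('search', 'k3s' ~ k3s_arch_suffix ~ '$')
          | first).split() | first }}
```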
1. Ability to specify the control host address, for connecting to a control plane
provisioned outside of the role.
2. Ability to specify the control host token, again for connecting to
an externally provisioned control plane.
3. Included upstream changes from @nolte to define KubeConfig file
permissions.
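Taken together, these might appear in inventory as variables like the following; the names and values are an illustrative sketch, not necessarily the role's exact interface:

```yaml
# group_vars/k3s_nodes.yml -- illustrative only
k3s_control_node_address: 192.168.1.10   # control plane provisioned outside the role
k3s_control_token: mysupersecrettoken    # token issued by that external control plane
k3s_kubeconfig_mode: "0644"              # KubeConfig file permissions (upstream change)
```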