Add documentation for CNI and custom args

This commit is contained in:
Xan Manning 2021-02-16 18:02:30 +00:00
parent 7e4a16e167
commit f6e009f1fd
7 changed files with 109 additions and 3 deletions

View File

@ -31,6 +31,8 @@ minimum configuration.
- [Setting up 2-node HA control plane with external datastore](configuration/2-node-ha-ext-datastore.md)
- [Provision multiple standalone k3s nodes](configuration/multiple-standalone-k3s-nodes.md)
- [Set node labels and component arguments](configuration/node-labels-and-component-args.md)
- [Use an alternate CNI](configuration/use-an-alternate-cni.md)
### Operations

View File

@ -64,6 +64,8 @@ https://rancher.com/docs/k3s/latest/en/installation/datastore/#datastore-endpoin
k3s_server:
datastore-endpoint: postgres://postgres:verybadpass@database:5432/postgres?sslmode=disable
node-taint:
- "k3s-controlplane=true:NoExecute"
```
Your worker nodes need to know how to connect to the control plane; this is

View File

@ -0,0 +1,39 @@
# Configure node labels and component arguments
The following command line arguments can be specified multiple times with
`key=value` pairs:
- `--kube-kubelet-arg`
- `--kube-proxy-arg`
- `--kube-apiserver-arg`
- `--kube-scheduler-arg`
- `--kube-controller-manager-arg`
- `--kube-cloud-controller-manager-arg`
- `--node-label`
- `--node-taint`
In the config file, this is done by defining a list of values for each
command line argument, for example:
```yaml
---
k3s_server:
# Set the plugins registry directory
kubelet-arg:
- "volume-plugin-dir=/var/lib/rancher/k3s/agent/kubelet/plugins_registry"
# Set the pod eviction timeout and node monitor grace period
kube-controller-manager-arg:
- "pod-eviction-timeout=2m"
- "node-monitor-grace-period=30s"
# Set API server feature gate
kube-apiserver-arg:
- "feature-gates=RemoveSelfLink=false"
  # Labels to apply to a node
node-label:
- "NodeTier=development"
- "NodeLocation=eu-west-2a"
  # Stop the k3s control plane nodes from having workloads scheduled on them
node-taint:
- "k3s-controlplane=true:NoExecute"
```
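The same pattern applies to agent (worker) nodes through the role's `k3s_agent`
dictionary. A minimal sketch, assuming your workers should carry a label and a
raised pod limit (the values below are illustrative, not requirements):
```yaml
---
k3s_agent:
  # Example label for worker nodes (illustrative value)
  node-label:
    - "NodeTier=development"
  # Raise the kubelet pod limit on workers (illustrative value)
  kubelet-arg:
    - "max-pods=200"
```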

View File

@ -0,0 +1,63 @@
# Use an alternate CNI
K3S ships with Flannel; however, sometimes you want a different CNI such as
Calico, Canal or Weave Net. To do this you will need to disable Flannel with
`flannel-backend: "none"`, specify a `cluster-cidr` and add your CNI manifests
to `k3s_server_manifests_templates`.
## Calico example
The below is based on the
[Calico quickstart documentation](https://docs.projectcalico.org/getting-started/kubernetes/quickstart).
Steps:
1. Download `tigera-operator.yaml` to the manifests directory.
1. Download `custom-resources.yaml` to the manifests directory.
1. Choose a `cluster-cidr` (we are using `192.168.0.0/16`).
1. Set `k3s_server` and `k3s_server_manifests_templates` as per the below,
ensuring the paths to the manifests are correct for your project repo (an
example layout is shown after the config below).
```yaml
---
# K3S server config: don't deploy Flannel and set the cluster pod CIDR.
k3s_server:
cluster-cidr: 192.168.0.0/16
flannel-backend: "none"
# Deploy the following k3s server templates.
k3s_server_manifests_templates:
- "manifests/calico/tigera-operator.yaml"
- "manifests/calico/custom-resources.yaml"
```
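For reference, the relative paths above assume a project layout along these
lines (the playbook and directory names are illustrative):
```text
.
├── cluster.yml
└── manifests
    └── calico
        ├── tigera-operator.yaml
        └── custom-resources.yaml
```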
All nodes should come up as "Ready"; below is the output from a 3-node cluster:
```text
$ kubectl get nodes -o wide -w
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
kube-0 Ready control-plane,etcd,master 114s v1.20.2+k3s1 10.10.9.2 10.10.9.2 Ubuntu 20.04.1 LTS 5.4.0-56-generic containerd://1.4.3-k3s1
kube-1 Ready control-plane,etcd,master 80s v1.20.2+k3s1 10.10.9.3 10.10.9.3 Ubuntu 20.04.1 LTS 5.4.0-56-generic containerd://1.4.3-k3s1
kube-2 Ready control-plane,etcd,master 73s v1.20.2+k3s1 10.10.9.4 10.10.9.4 Ubuntu 20.04.1 LTS 5.4.0-56-generic containerd://1.4.3-k3s1
```
Pods should be deployed with IP addresses from within the CIDR specified in our
config file.
```text
$ kubectl get pods -o wide -A
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-system calico-kube-controllers-cfb4ff54b-8rp8r 1/1 Running 0 5m4s 192.168.145.65 kube-0 <none> <none>
calico-system calico-node-2cm2m 1/1 Running 0 5m4s 10.10.9.2 kube-0 <none> <none>
calico-system calico-node-2s6lx 1/1 Running 0 4m42s 10.10.9.4 kube-2 <none> <none>
calico-system calico-node-zwqjz 1/1 Running 0 4m49s 10.10.9.3 kube-1 <none> <none>
calico-system calico-typha-7b6747d665-78swq 1/1 Running 0 3m5s 10.10.9.4 kube-2 <none> <none>
calico-system calico-typha-7b6747d665-8ff66 1/1 Running 0 3m5s 10.10.9.3 kube-1 <none> <none>
calico-system calico-typha-7b6747d665-hgplx 1/1 Running 0 5m5s 10.10.9.2 kube-0 <none> <none>
kube-system coredns-854c77959c-6qhgt 1/1 Running 0 5m20s 192.168.145.66 kube-0 <none> <none>
kube-system helm-install-traefik-4czr9 0/1 Completed 0 5m20s 192.168.145.67 kube-0 <none> <none>
kube-system metrics-server-86cbb8457f-qcxf5 1/1 Running 0 5m20s 192.168.145.68 kube-0 <none> <none>
kube-system traefik-6f9cbd9bd4-7h4rl 1/1 Running 0 2m50s 192.168.126.65 kube-1 <none> <none>
tigera-operator tigera-operator-b6c4bfdd9-29hhr 1/1 Running 0 5m20s 10.10.9.2 kube-0 <none> <none>
```
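To further confirm that the CNI is handing out addresses from the chosen
`cluster-cidr`, you could start a throwaway pod and check that the `IP` column
reported for it falls inside `192.168.0.0/16` (a quick manual check, assuming
`kubectl` is already pointed at the cluster):
```text
$ kubectl run cni-test --image=busybox --restart=Never -- sleep 3600
$ kubectl get pod cni-test -o wide
$ kubectl delete pod cni-test
```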

View File

@ -86,7 +86,7 @@ Here is our playbook for the k3s cluster (`cluster.yml`):
vars:
k3s_become_for_all: true
roles:
- xanmanning.k3s
- role: xanmanning.k3s
```
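Using the long-form `- role:` entry (rather than the bare role name) also lets
you attach keywords such as `vars:` or `when:` directly to the role. A minimal
sketch, reusing a variable from this playbook purely as an illustration:
```yaml
roles:
  - role: xanmanning.k3s
    vars:
      k3s_become_for_all: true
```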
## Execution

View File

@ -94,7 +94,7 @@ Here is our playbook for the k3s cluster (`ha_cluster.yml`):
k3s_etcd_datastore: true
k3s_use_experimental: true # Note this is required for k3s < v1.19.5+k3s1
roles:
- xanmanning.k3s
- role: xanmanning.k3s
```
## Execution

View File

@ -68,7 +68,7 @@ Here is our playbook for a single node k3s cluster (`single_node.yml`):
vars:
k3s_become_for_all: true
roles:
- xanmanning.k3s
- role: xanmanning.k3s
```
## Execution