Abstract

Demonstrates a quick and dirty control plane deployment with kubeadm.

Quick and Dirty control plane with kubeadm#

For this demonstration we will be using kubeadm to deploy and configure a control plane node, the Lenovo ThinkCentre M910q that was acquired for this purpose.

Q&D Control Plane Requirements#

The remainder of this guide assumes an Arch Linux system on a locally-accessible network that accepts ssh connections from a non-root user with sudo access. That system should also have a functional installation of yay.
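A quick way to confirm those prerequisites before continuing is sketched below; it only assumes the host is reachable under the name kcp01, which is the name used for the control plane node throughout this guide.

check the prerequisites#
# confirm the host is reachable, runs Arch Linux, and has yay installed
ssh kcp01 'grep "^ID=" /etc/os-release && command -v yay'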

Prepare for kubeadm#

In the interests of keeping things in relatively standard locations, all of the configuration files used by kubeadm will be kept in /etc/kubeadm, which we can create now.

Prepare the node and create the kubeadm configuration directory#
ssh kcp01
yay -S containerd kubeadm kubelet
sudo su -
systemctl enable --now kubelet
mkdir /etc/kubeadm
cd /etc/kubeadm
modprobe br_netfilter
sysctl net.ipv4.ip_forward=1
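The kubeadm init step later in this guide expects a running container runtime. If containerd is not already active on the node, enabling it now avoids a preflight failure; this is a sketch that assumes the standard containerd.service unit shipped with the Arch package.

enable containerd#
# start containerd immediately and at every boot
systemctl enable --now containerd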

List of commands#

The remainder of this guide assumes that the commands listed above have been successfully executed. So far you should have:

  1. created an ssh connection#
    ssh kcp01
    
  2. installed containerd, kubeadm, and kubelet with yay#
    yay -S containerd kubeadm kubelet
    
  3. used sudo to become the root user#
    sudo su -
    
  4. enabled the kubelet service#
    systemctl enable --now kubelet
    

    Tip

    It is safe to ignore any errors that the command above may throw; they will be resolved by kubeadm later.

  5. created the kubeadm configuration directory#
    mkdir -pv /etc/kubeadm
    
  6. updated your working directory#
    cd /etc/kubeadm
    
  7. enabled the br_netfilter kernel module#
    modprobe br_netfilter
    
  8. enabled IP forwarding#
    sysctl net.ipv4.ip_forward=1
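Note that the modprobe and sysctl commands only affect the running system and will not survive a reboot. A minimal sketch for making them permanent, assuming the standard systemd drop-in directories (the file name k8s.conf is arbitrary):

persist the kernel settings#
# load br_netfilter at every boot
echo 'br_netfilter' > /etc/modules-load.d/k8s.conf
# enable IPv4 forwarding at every boot
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/k8s.conf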
    

Create an initial configuration#

While it is possible to write a kubeadm initial configuration file from scratch, you are much better off letting kubeadm do that for you.

create default kubeadm init configuration#
kubeadm config print init-defaults \
  --component-configs KubeProxyConfiguration,KubeletConfiguration > init.yaml

This should leave you with something similar to the output below:

[root@kcp01:/etc/kubeadm]{0} # ls -l
total 4
-rw-r--r-- 1 root root 1139 Jun 29 18:53 init.yaml
default init.yaml#
  1  apiVersion: kubeadm.k8s.io/v1beta4
  2  bootstrapTokens:
  3  - groups:
  4    - system:bootstrappers:kubeadm:default-node-token
  5    token: abcdef.0123456789abcdef
  6    ttl: 24h0m0s
  7    usages:
  8    - signing
  9    - authentication
 10  kind: InitConfiguration
 11  localAPIEndpoint:
 12    advertiseAddress: 1.2.3.4
 13    bindPort: 6443
 14  nodeRegistration:
 15    criSocket: unix:///var/run/containerd/containerd.sock
 16    imagePullPolicy: IfNotPresent
 17    imagePullSerial: true
 18    name: node
 19    taints: null
 20  timeouts:
 21    controlPlaneComponentHealthCheck: 4m0s
 22    discovery: 5m0s
 23    etcdAPICall: 2m0s
 24    kubeletHealthCheck: 4m0s
 25    kubernetesAPICall: 1m0s
 26    tlsBootstrap: 5m0s
 27    upgradeManifests: 5m0s
 28  ---
 29  apiServer: {}
 30  apiVersion: kubeadm.k8s.io/v1beta4
 31  caCertificateValidityPeriod: 87600h0m0s
 32  certificateValidityPeriod: 8760h0m0s
 33  certificatesDir: /etc/kubernetes/pki
 34  clusterName: kubernetes
 35  controllerManager: {}
 36  dns: {}
 37  encryptionAlgorithm: RSA-2048
 38  etcd:
 39    local:
 40      dataDir: /var/lib/etcd
 41  imageRepository: registry.k8s.io
 42  kind: ClusterConfiguration
 43  kubernetesVersion: 1.33.0
 44  networking:
 45    dnsDomain: cluster.local
 46    serviceSubnet: 10.96.0.0/12
 47  proxy: {}
 48  scheduler: {}
 49  ---
 50  apiVersion: kubeproxy.config.k8s.io/v1alpha1
 51  bindAddress: 0.0.0.0
 52  bindAddressHardFail: false
 53  clientConnection:
 54    acceptContentTypes: ""
 55    burst: 0
 56    contentType: ""
 57    kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
 58    qps: 0
 59  clusterCIDR: ""
 60  configSyncPeriod: 0s
 61  conntrack:
 62    maxPerCore: null
 63    min: null
 64    tcpBeLiberal: false
 65    tcpCloseWaitTimeout: null
 66    tcpEstablishedTimeout: null
 67    udpStreamTimeout: 0s
 68    udpTimeout: 0s
 69  detectLocal:
 70    bridgeInterface: ""
 71    interfaceNamePrefix: ""
 72  detectLocalMode: ""
 73  enableProfiling: false
 74  healthzBindAddress: ""
 75  hostnameOverride: ""
 76  iptables:
 77    localhostNodePorts: null
 78    masqueradeAll: false
 79    masqueradeBit: null
 80    minSyncPeriod: 0s
 81    syncPeriod: 0s
 82  ipvs:
 83    excludeCIDRs: null
 84    minSyncPeriod: 0s
 85    scheduler: ""
 86    strictARP: false
 87    syncPeriod: 0s
 88    tcpFinTimeout: 0s
 89    tcpTimeout: 0s
 90    udpTimeout: 0s
 91  kind: KubeProxyConfiguration
 92  logging:
 93    flushFrequency: 0
 94    options:
 95      json:
 96        infoBufferSize: "0"
 97      text:
 98        infoBufferSize: "0"
 99    verbosity: 0
100  metricsBindAddress: ""
101  mode: ""
102  nftables:
103    masqueradeAll: false
104    masqueradeBit: null
105    minSyncPeriod: 0s
106    syncPeriod: 0s
107  nodePortAddresses: null
108  oomScoreAdj: null
109  portRange: ""
110  showHiddenMetricsForVersion: ""
111  winkernel:
112    enableDSR: false
113    forwardHealthCheckVip: false
114    networkName: ""
115    rootHnsEndpointName: ""
116    sourceVip: ""
117  ---
118  apiVersion: kubelet.config.k8s.io/v1beta1
119  authentication:
120    anonymous:
121      enabled: false
122    webhook:
123      cacheTTL: 0s
124      enabled: true
125    x509:
126      clientCAFile: /etc/kubernetes/pki/ca.crt
127  authorization:
128    mode: Webhook
129    webhook:
130      cacheAuthorizedTTL: 0s
131      cacheUnauthorizedTTL: 0s
132  cgroupDriver: systemd
133  clusterDNS:
134  - 10.96.0.10
135  clusterDomain: cluster.local
136  containerRuntimeEndpoint: ""
137  cpuManagerReconcilePeriod: 0s
138  crashLoopBackOff: {}
139  evictionPressureTransitionPeriod: 0s
140  fileCheckFrequency: 0s
141  healthzBindAddress: 127.0.0.1
142  healthzPort: 10248
143  httpCheckFrequency: 0s
144  imageMaximumGCAge: 0s
145  imageMinimumGCAge: 0s
146  kind: KubeletConfiguration
147  logging:
148    flushFrequency: 0
149    options:
150      json:
151        infoBufferSize: "0"
152      text:
153        infoBufferSize: "0"
154    verbosity: 0
155  memorySwap: {}
156  nodeStatusReportFrequency: 0s
157  nodeStatusUpdateFrequency: 0s
158  rotateCertificates: true
159  runtimeRequestTimeout: 0s
160  shutdownGracePeriod: 0s
161  shutdownGracePeriodCriticalPods: 0s
162  staticPodPath: /etc/kubernetes/manifests
163  streamingConnectionIdleTimeout: 0s
164  syncFrequency: 0s
165  volumeStatsAggPeriod: 0s

Most of the default values are acceptable, but a number of them will need to be updated.
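Before editing, it can be useful to keep a copy of the defaults so the changes are easy to review afterwards.

keep a copy of the defaults#
# keep the untouched defaults for later comparison
cp init.yaml init.yaml.orig
# after editing, review exactly what changed
diff -u init.yaml.orig init.yaml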

Configure and initialize the control plane#

The lines that must be updated are listed in the table below, together with the values to use; the configured init.yaml that follows reflects these changes.

line  attribute           value                       description
   5  token               $(kubeadm token generate)   a basic authentication token for bootstrapping the cluster
  12  advertiseAddress    172.17.0.100                the IPv4 address the control plane node will advertise and listen on
  18  name                kcp01                       an arbitrary DNS-compatible name that describes the control plane node
  34  clusterName         the-hard-way                an arbitrary DNS-compatible name that describes the cluster
  47  podSubnet           10.244.0.0/16               the default CIDR block for the flannel networking plugin
 101  metricsBindAddress  0.0.0.0:9090                the address and port on which kube-proxy serves its metrics
 157  swapBehavior        LimitedSwap                 enable experimental swap support in the kubelet
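The values above can be changed with any editor. If you prefer to script the simple substitutions, a sketch like the one below works against the defaults shown earlier; kubeadm token generate prints a random token without needing a running cluster, and the remaining additions (podSubnet, metricsBindAddress, swapBehavior, clusterName) are easiest to make by hand.

patch the obvious defaults#
# generate a random bootstrap token and substitute the placeholder values
TOKEN="$(kubeadm token generate)"
sed -i \
  -e "s/token: abcdef.0123456789abcdef/token: ${TOKEN}/" \
  -e "s/advertiseAddress: 1.2.3.4/advertiseAddress: 172.17.0.100/" \
  -e "s/name: node/name: kcp01/" \
  init.yaml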

Configured kubeadm init.yaml#

updated init.yaml#
  1  apiVersion: kubeadm.k8s.io/v1beta4
  2  bootstrapTokens:
  3  - groups:
  4    - system:bootstrappers:kubeadm:default-node-token
  5    token: 0fa83j.zskjs5g0h1lqz8mi
  6    ttl: 24h0m0s
  7    usages:
  8    - signing
  9    - authentication
 10  kind: InitConfiguration
 11  localAPIEndpoint:
 12    advertiseAddress: 172.17.0.100
 13    bindPort: 6443
 14  nodeRegistration:
 15    criSocket: unix:///var/run/containerd/containerd.sock
 16    imagePullPolicy: IfNotPresent
 17    imagePullSerial: true
 18    name: kcp01
 19    taints: null
 20  timeouts:
 21    controlPlaneComponentHealthCheck: 4m0s
 22    discovery: 5m0s
 23    etcdAPICall: 2m0s
 24    kubeletHealthCheck: 4m0s
 25    kubernetesAPICall: 1m0s
 26    tlsBootstrap: 5m0s
 27    upgradeManifests: 5m0s
 28  ---
 29  apiServer: {}
 30  apiVersion: kubeadm.k8s.io/v1beta4
 31  caCertificateValidityPeriod: 87600h0m0s
 32  certificateValidityPeriod: 8760h0m0s
 33  certificatesDir: /etc/kubernetes/pki
 34  clusterName: the-hard-way
 35  controllerManager: {}
 36  dns: {}
 37  encryptionAlgorithm: RSA-2048
 38  etcd:
 39    local:
 40      dataDir: /var/lib/etcd
 41  imageRepository: registry.k8s.io
 42  kind: ClusterConfiguration
 43  kubernetesVersion: 1.33.0
 44  networking:
 45    dnsDomain: cluster.local
 46    serviceSubnet: 10.96.0.0/12
 47    podSubnet: 10.244.0.0/16
 48  proxy: {}
 49  scheduler: {}
 50  ---
 51  apiVersion: kubeproxy.config.k8s.io/v1alpha1
 52  bindAddress: 0.0.0.0
 53  bindAddressHardFail: false
 54  clientConnection:
 55    acceptContentTypes: ""
 56    burst: 0
 57    contentType: ""
 58    kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
 59    qps: 0
 60  clusterCIDR: ""
 61  configSyncPeriod: 0s
 62  conntrack:
 63    maxPerCore: null
 64    min: null
 65    tcpBeLiberal: false
 66    tcpCloseWaitTimeout: null
 67    tcpEstablishedTimeout: null
 68    udpStreamTimeout: 0s
 69    udpTimeout: 0s
 70  detectLocal:
 71    bridgeInterface: ""
 72    interfaceNamePrefix: ""
 73  detectLocalMode: ""
 74  enableProfiling: false
 75  healthzBindAddress: ""
 76  hostnameOverride: ""
 77  iptables:
 78    localhostNodePorts: null
 79    masqueradeAll: false
 80    masqueradeBit: null
 81    minSyncPeriod: 0s
 82    syncPeriod: 0s
 83  ipvs:
 84    excludeCIDRs: null
 85    minSyncPeriod: 0s
 86    scheduler: ""
 87    strictARP: false
 88    syncPeriod: 0s
 89    tcpFinTimeout: 0s
 90    tcpTimeout: 0s
 91    udpTimeout: 0s
 92  kind: KubeProxyConfiguration
 93  logging:
 94    flushFrequency: 0
 95    options:
 96      json:
 97        infoBufferSize: "0"
 98      text:
 99        infoBufferSize: "0"
100    verbosity: 0
101  metricsBindAddress: "0.0.0.0:9090"
102  mode: ""
103  nftables:
104    masqueradeAll: false
105    masqueradeBit: null
106    minSyncPeriod: 0s
107    syncPeriod: 0s
108  nodePortAddresses: null
109  oomScoreAdj: null
110  portRange: ""
111  showHiddenMetricsForVersion: ""
112  winkernel:
113    enableDSR: false
114    forwardHealthCheckVip: false
115    networkName: ""
116    rootHnsEndpointName: ""
117    sourceVip: ""
118  ---
119  apiVersion: kubelet.config.k8s.io/v1beta1
120  authentication:
121    anonymous:
122      enabled: false
123    webhook:
124      cacheTTL: 0s
125      enabled: true
126    x509:
127      clientCAFile: /etc/kubernetes/pki/ca.crt
128  authorization:
129    mode: Webhook
130    webhook:
131      cacheAuthorizedTTL: 0s
132      cacheUnauthorizedTTL: 0s
133  cgroupDriver: systemd
134  clusterDNS:
135  - 10.96.0.10
136  clusterDomain: cluster.local
137  containerRuntimeEndpoint: ""
138  cpuManagerReconcilePeriod: 0s
139  crashLoopBackOff: {}
140  evictionPressureTransitionPeriod: 0s
141  fileCheckFrequency: 0s
142  healthzBindAddress: 127.0.0.1
143  healthzPort: 10248
144  httpCheckFrequency: 0s
145  imageMaximumGCAge: 0s
146  imageMinimumGCAge: 0s
147  kind: KubeletConfiguration
148  logging:
149    flushFrequency: 0
150    options:
151      json:
152        infoBufferSize: "0"
153      text:
154        infoBufferSize: "0"
155    verbosity: 0
156  memorySwap:
157    swapBehavior: LimitedSwap
158  nodeStatusReportFrequency: 0s
159  nodeStatusUpdateFrequency: 0s
160  rotateCertificates: true
161  runtimeRequestTimeout: 0s
162  shutdownGracePeriod: 0s
163  shutdownGracePeriodCriticalPods: 0s
164  staticPodPath: /etc/kubernetes/manifests
165  streamingConnectionIdleTimeout: 0s
166  syncFrequency: 0s
167  volumeStatsAggPeriod: 0s

The changes to lines 156-157 can be skipped if you disable any active swap[1] file systems on the control plane.
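If you would rather disable swap than enable the kubelet's swap support, something like the following will do it; this sketch assumes swap is configured through /etc/fstab rather than a zram device or a systemd swap unit.

disable swap instead#
# turn off all active swap devices immediately
swapoff -a
# comment out fstab swap entries so they stay off after a reboot
sed -i '/\sswap\s/ s/^/#/' /etc/fstab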

Initialize the control plane#

Now that we’ve got an initial configuration that resembles the intended control plane setup, we can go ahead and spin it up.
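Optionally, you can pre-pull the control plane images and ask kubeadm for a dry run first; the first command only downloads images, and the second makes no changes to the host.

optional preflight#
# pull the control plane images ahead of time
kubeadm config images pull --config init.yaml
# show what init would do without actually doing it
kubeadm init --config init.yaml --dry-run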

kubeadm init --config init.yaml

If things go well, your shell should produce something like the following output.

control plane init results#
[init] Using Kubernetes version: v1.33.0
[preflight] Running pre-flight checks
        [WARNING Swap]: swap is supported for cgroup v2 only. \
          The kubelet must be properly configured to use swap. \
          Please refer to https://shorturl.at/BKSgm \
            or disable swap on the node
        [WARNING Service-Kubelet]: kubelet service is not enabled, \
          please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of \
  your internet connection
[preflight] You can also perform this action beforehand using \
  'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names \
  [kubernetes kubernetes.default kubernetes.default.svc \
  kubernetes.default.svc.cluster.local kcp01] and IPs \
  [10.96.0.1 172.16.0.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names
  [localhost kcp01] and IPs [172.16.0.100 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names
  [localhost kcp01] and IPs [172.16.0.100 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest
  for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod
  manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file
  with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration
  to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot
  up the control plane as static Pods
  from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at
  http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 3.001790154s
[control-plane-check] Waiting for healthy control
  plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver
  at https://172.16.0.100:6443/livez
[control-plane-check] Checking
  kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking
  kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check]
  kube-controller-manager is healthy after 1.506311953s
[control-plane-check]
  kube-scheduler is healthy after 2.300444114s
[control-plane-check]
  kube-apiserver is healthy after 4.00284659s
[upload-config] Storing the configuration
  used in ConfigMap "kubeadm-config"
  in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config"
  in namespace kube-system with the configuration for
  the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kcp01 as control-plane by
  adding the labels: [
    node-role.kubernetes.io/control-plane
    node.kubernetes.io/exclude-from-external-load-balancers
  ]
[mark-control-plane] Marking the node kcp01 as control-plane by
  adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: r51n8g.qj8psxn8wszh915u
[bootstrap-token] Configuring bootstrap tokens,
  cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow
  Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node
  Bootstrap tokens to post CSRs in order for nodes to
  get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover
  controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate
  rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap
  in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf"
  to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one \
  of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes \
  by running the following on each as root:

kubeadm join 172.16.0.100:6443 --token r51n8g.qj8psxn8wszh915u \
  --discovery-token-ca-cert-hash sha256:012345678

Check the initialized control plane#

We can verify that this worked with the kubectl command once we have configured authentication for the new cluster.

add authentication#
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes

The last command should produce output similar to this.

list of active nodes#
NAME     STATUS     ROLES           AGE   VERSION
kcp01    NotReady   control-plane   19m   v1.33.2

Note

    The node status is NotReady. This is because we still need to install the flannel network layer; fortunately, this is simple.
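If you want to confirm the reason, the node's Ready condition spells it out; the message should mention that the CNI network plugin is not yet initialized.

inspect the node condition#
# full details, including conditions and recent events
kubectl describe node kcp01
# just the Ready condition message
kubectl get node kcp01 -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'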

Install flannel networking layer#

apply flannel network layer#
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
flannel application result#
namespace/kube-flannel created
serviceaccount/flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
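If you want to watch flannel come up before re-checking the node, the daemonset created above can be monitored directly.

wait for flannel#
# blocks until the flannel daemonset has rolled out on every node
kubectl -n kube-flannel rollout status daemonset/kube-flannel-ds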

Now, after a few moments you should be able to obtain a different result from the node list request.

Verify the control-plane is ready#

get nodes again#
kubectl get nodes
and we have a working control plane#
NAME     STATUS   ROLES           AGE   VERSION
kcp01   Ready    control-plane   21m   v1.33.2
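As a final check, the CoreDNS pods, which stay Pending until a pod network exists, should now be Running; k8s-app=kube-dns is the label that kubeadm's CoreDNS deployment ships with.

check CoreDNS#
# CoreDNS only gets scheduled once a pod network is available
kubectl -n kube-system get pods -l k8s-app=kube-dns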