Kubernetes The Easy Way

This post is a tutorial about creating a Kubernetes cluster from scratch using kubeadm. The kubeadm tool helps you bootstrap a minimum viable Kubernetes cluster that conforms to best practices. In fact, you can use kubeadm to set up a cluster that will pass the Kubernetes Conformance tests.

Being Google Cloud Partners, we will use Google Cloud VMs, but you can also choose any other VM provider. We will use kubeadm instead of Google Kubernetes Engine (GKE), a platform that lets you start working with Kubernetes clusters quickly. To give a better understanding of how Kubernetes works, this tutorial will not be too easy: the GKE way is basically a single click.

This “easy way” was inspired by Kelsey Hightower’s amazing tutorial, Kubernetes The Hard Way.

Setup a GCP account

Before you can build your cluster you need to have a GCP environment. If you don’t have one ready, you need to set it up.

Roughly, the whole procedure includes:

  1. Signing up for a Google account, in the unlikely case you don’t have one already.
  2. Creating a project or using the default one in the Google Cloud Platform Console. Check: https://cloud.google.com/.
  3. Enabling billing. This requires your credit card info, but Google provides a 12-month free trial.
  4. Setting up your client tools for interacting with Google Cloud products and services. Visit https://cloud.google.com/sdk/ to get the client binaries for your platform and follow the “Quick start” guide to configure them.

Once your GCP account is ready and the client tools are configured for your project, you can use the shell to easily create your Kubernetes cluster. We are going to use only the command line, not the web interface, for all the tasks. It is also recommended to configure a default compute region and zone, so you don’t need to select one with each command.

For example,

gcloud config set compute/region europe-north1

gcloud config set compute/zone europe-north1-b
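As a quick sanity check, you can ask gcloud to print the defaults it will use (the values should match what you just set):

```shell
# Show the default region and zone gcloud will use for compute commands
gcloud config get-value compute/region
gcloud config get-value compute/zone
```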

Next, we are going to create compute instance resources with an Ubuntu image on them. After that, we will log in via ssh to the future master node and install the needed packages. The same steps will be repeated for each of the worker nodes.

Create Virtual Instances

Let’s start! Open your console. We are going to create a control plane node which we will call master-0 and two minions that we will call worker-0 and worker-1 respectively.
For the control plane node, we need at least 2 CPUs and 2 GB RAM, so we will use the standard machine type n1-standard-2. For the worker nodes a n1-standard-1 machine type fits well. For other types of N1 machines check the following table: https://cloud.google.com/compute/docs/machine-types#n1_standard_machine_types.
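If you want to compare your options before choosing, gcloud can list the machine types available in your zone. The filter below is just an example, matching the n1-standard family in the zone we configured earlier:

```shell
# List the n1-standard machine types available in our default zone
gcloud compute machine-types list \
    --filter="zone:europe-north1-b AND name~'n1-standard'"
```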

gcloud compute instances create --help gives the explanation for the options that we are going to use.

Next, execute the following commands in your terminal:

gcloud compute instances create master-0 \
    --async \
    --boot-disk-size 200GB \
    --can-ip-forward \
    --image-family ubuntu-1604-lts \
    --image-project ubuntu-os-cloud \
    --machine-type n1-standard-2 \
    --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring

To create the two worker instances:

for i in 0 1; do
  gcloud compute instances create worker-${i} \
    --async \
    --boot-disk-size 200GB \
    --can-ip-forward \
    --image-family ubuntu-1604-lts \
    --image-project ubuntu-os-cloud \
    --machine-type n1-standard-1 \
    --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring
done

To verify the newly created instances, execute the following: gcloud compute instances list.

Install the kube tools

The installation was already covered by my colleague Ville in his blog post Wedding Day Kubernetes, but let’s quickly refresh the procedure. We are going to install kubelet, kubeadm and kubectl. You don’t need any coffee or mate breaks for these steps, as they are really easy and quick to accomplish. You only need about 15 minutes (or less). Just don't make typing errors when writing gcloud. (I wrote glcoud instead several times.)

To access the new instance via ssh, run:
gcloud compute ssh master-0

Once logged in we are going to install the necessary packages, the container runtime (Docker) and the Kubernetes tools:

sudo apt update && sudo apt install docker.io -y
sudo apt update && sudo apt install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt update
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
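To make sure everything landed correctly, you can check the installed versions before moving on (the exact version numbers will depend on when you run this):

```shell
# Verify the container runtime and the Kubernetes tools are installed
docker --version
kubeadm version -o short
kubectl version --client
```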

Once we have the tools installed on our fresh new server, the remaining steps are initializing the cluster, installing a networking addon and joining the minions to the cluster.
For simplicity, we are going to use the Flannel network. For flannel to work correctly, you must pass --pod-network-cidr=10.244.0.0/16 (flannel's default pod subnet) to kubeadm init.

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

After some pre-flight checks, the kubeadm init command will generate the corresponding certificates and keys, write the config files and, of course, configure our control plane node. Pay attention to the output of the command, because the following steps are already there:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run kubectl apply -f [podnetwork].yaml with one of the options listed in: https://kubernetes.io/docs/concepts/cluster-administration/addons/.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join <master-ip>:6443 --token yfv8vo.oryxq2bnh7um70q9 \
    --discovery-token-ca-cert-hash sha256:694dde4a548359821608ad305e0eefbf629c6caaa3dbeeed22a516abda4e76a2

A good tip is to save this output somewhere for further reference, in case our console window closes or we need to scroll up a lot to find the commands again. So, go ahead! Create your $HOME/.kube directory and copy the admin.conf file there with the right permissions, as the previous output explains.
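If you do lose that output, don't panic: the token can be regenerated. Running the following on the master prints a fresh, complete join command:

```shell
# Generate a new bootstrap token and print the full kubeadm join command
sudo kubeadm token create --print-join-command
```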

Our first kubectl command will be to check the status of the nodes.

kubectl get nodes

NAME       STATUS     ROLES    AGE   VERSION
master-0   NotReady   master   3m    v1.18.1

kubectl get nodes should show you only one node, in NotReady status. We are going to check the status of our pods before and after deploying the network addon to see how it works. Notice that CoreDNS is still in Pending status.

kubectl get pods --all-namespaces

NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   coredns-5644d7b6d9-4nwwd   0/1     Pending   0          13m
kube-system   coredns-5644d7b6d9-fj2r4   0/1     Pending   0          13m

We have to deploy our chosen network addon (flannel) by running kubectl apply -f [podnetwork].yaml. The options for network addons are listed at https://kubernetes.io/docs/concepts/cluster-administration/addons/. Since we chose flannel, the right command is:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml

Once a Pod network has been installed, you can confirm that it is working by checking that the CoreDNS Pod is running in the output of kubectl get pods --all-namespaces. And once the CoreDNS Pod is up and running, you can continue by joining your nodes.
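For example, you can watch the CoreDNS pods switch from Pending to Running (press Ctrl-C to stop watching):

```shell
# CoreDNS pods carry the k8s-app=kube-dns label (kept for historical reasons)
kubectl -n kube-system get pods -l k8s-app=kube-dns --watch
```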

Prepare the Worker Nodes

Let’s now prepare our worker nodes. A convenient way to bootstrap them is to use tmux with a split window and the synchronized panes option, which sends the same keyboard input to each pane simultaneously. Of course this is optional: you can simply repeat the steps on each node if you are not using tmux.

Run tmux. To split the window into two panes, press the key combination Ctrl b + ".

Run gcloud compute ssh worker-0 in one pane and gcloud compute ssh worker-1 in the other, so you are logged in to both workers side by side.

Now synchronize the panes: press Ctrl b + : and type setw synchronize-panes on at the prompt. The steps here are exactly the same as on the master node, so we could have done this with the master node as well. The real difference is in the last step, the kubeadm command.

sudo apt update && sudo apt install docker.io -y
sudo apt update && sudo apt install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt update
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

Now it is time to join our nodes with the kubeadm join command that the kubeadm init script suggested to us.

sudo kubeadm join <master-ip>:6443 --token yfv8vo.oryxq2bnh7um70q9 \
    --discovery-token-ca-cert-hash sha256:694dde4a548359821608ad305e0eefbf629c6caaa3dbeeed22a516abda4e76a2

Now we have our cluster ready. Let’s just do some final tests before cleaning up everything.

We’ll check that our nodes are Ready, and then we are going to deploy an nginx server to our new cluster.

kubectl create deployment nginx --image=nginx
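To check that the deployment actually serves traffic, one option is to expose it as a NodePort service and curl it from the master. This is just a quick smoke test; the service name nginx comes from the deployment we created above, and the port is assigned by Kubernetes:

```shell
# Check that all three nodes are Ready and the nginx pod is running
kubectl get nodes
kubectl get pods -l app=nginx

# Expose the nginx deployment on a random NodePort
kubectl expose deployment nginx --port 80 --type NodePort

# Look up the assigned port and hit it on this node's IP
curl http://localhost:$(kubectl get service nginx \
    -o jsonpath='{.spec.ports[0].nodePort}')
```

You should get nginx's default "Welcome to nginx!" page back.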

Clean up

This step is optional, but if you want to clean up the resources that were created during this tutorial, you can execute the following command:

gcloud -q compute instances delete \
  master-0 \
  worker-0 worker-1 \
  --zone $(gcloud config get-value compute/zone)

That’s all folks! I hope you have enjoyed this easy way to create your own cluster. I have used this procedure several times when preparing for my CKA exam. When in the exam I was asked about bootstrapping a cluster, the steps were very similar, and I think I completed the whole exercise in three minutes. You can check other tips for the CKA exam in my previous blog post Become a Certified Kubernetes Administrator.

If you have any questions, don’t hesitate to contact me at mario@montel.fi or leave your comments in the comments section below.

Also, if you need help with your Kubernetes setup and management, we are happy to help. Contact Tuba at tuba@montel.fi to hear more about our offerings.

Mario Moya

Full-Stack Software Señor & Certified Kubernetes Administrator

Mario is a senior full-stack developer working for Montel in Patagonia, Argentina. He is a python enthusiast and a great fisherman who will drink you under the table with mate.
