Create a Kubernetes cluster with MicroK8s and Terraform on Hetzner Cloud

In this tutorial I will show you how to create a Kubernetes cluster on Hetzner Cloud using MicroK8s and Terraform.

To follow the steps in this tutorial, you will need the following:

- A Hetzner Cloud account
- Terraform installed on your local machine
- An SSH key pair (run ssh-keygen to generate one if you don't have it yet)

Step 1 - Create terraform project

Let's create our Terraform project directory by typing the following commands into your terminal:

# create directory
mkdir hetzner_terraform

# get into directory
cd hetzner_terraform

Next, let's create the Terraform configuration file inside the project directory. I'll name it main.tf in this tutorial, but any file name ending in .tf works:

touch main.tf


Step 2 - Setup Hetzner project requirements

Log in to your Hetzner Cloud account and create a new project if you don't have one yet.

In the Projects tab, click on NEW PROJECT, type your project name (I used kubernetes) and then press the red ADD PROJECT button.

Once you have created the project, click on the project thumbnail to open its configuration pages.

Generate API token

In the left panel, you will see a key icon. Click on it, then in the top navigation menu open the API TOKENS tab and click on GENERATE API TOKEN.

Let's name the token k8s-terraform and then click on GENERATE API TOKEN button.

Once the token is created, copy it and add the following code to your Terraform file:

provider "hcloud" {
  token = "-paste here your generated token-"
}

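As a side note: hardcoding the token works, but the hcloud provider can also read it from an environment variable, which keeps the secret out of version control. A sketch (the token value below is a placeholder):

```shell
# The hcloud provider falls back to the HCLOUD_TOKEN environment
# variable when no token is set in the provider block.
# Replace the placeholder with your generated token.
export HCLOUD_TOKEN="paste-your-generated-token-here"
```

With this variable set, the token line in the provider block can be omitted.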
Now that we have our Hetzner Cloud setup done, let's install the Terraform provider dependencies by typing the following command in your terminal window:

terraform init
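Note: on Terraform 0.13 and newer, terraform init can only download the hcloud plugin if the provider's source is declared. If init fails with a provider-not-found error, add a terraform block like this sketch at the top of your file:

```hcl
terraform {
  required_providers {
    hcloud = {
      # tells Terraform where to download the Hetzner Cloud provider from
      source = "hetznercloud/hcloud"
    }
  }
}
```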

Then, let's add the following resource to the Terraform file to upload our public key:

resource "hcloud_ssh_key" "access-ssh-key" {
  name = "access-ssh-key"
  public_key = file(pathexpand("~/.ssh/id_rsa.pub")) # adjust if your public key file has a different name
}

The above resource will copy the content of your public key to the Hetzner Cloud SSH keys list. If you don't have a key pair yet, run ssh-keygen to generate one.

Now, let's create our first resource in Hetzner Cloud: the SSH key.

Run the following command to preview the changes that will be executed:

terraform plan

Then, to execute the plan and create the changes, run:

terraform apply -auto-approve

Step 3 - Create network

Next, we'll create a private network that our servers will use to communicate with each other. Add the following resource to the Terraform file:

resource "hcloud_network" "k8s-network" {
  ip_range = "10.0.0.0/16" # example private range - adjust to your needs
  name = "k8s-network"
}

Then, let's add a subnet for our network:

resource "hcloud_network_subnet" "k8s-subnet" {
  ip_range = "10.0.1.0/24" # example subnet inside the network's range
  network_id = hcloud_network.k8s-network.id
  network_zone = "eu-central"
  type = "server"
}
And set up a network route:

resource "hcloud_network_route" "k8s-route" {
  destination = "0.0.0.0/0" # example: route all traffic through the gateway
  gateway = "10.0.0.1" # example: the first IP of the network range
  network_id = hcloud_network.k8s-network.id
}
Once you have all those resources defined in your file, review the Terraform plan and then apply it:

# see the changes that will be made to the infrastructure
terraform plan

# apply changes to create the resources
terraform apply -auto-approve

Step 4 - Create Volume to store data

We'll need to create a volume resource to store our Kubernetes cluster data.

Let's create a new resource definition in the file:

resource "hcloud_volume" "k8s-data" {
  name = "k8s-data"
  size = 50
  format = "xfs"
  automount = true
  server_id = hcloud_server.master.id
}

Since on Hetzner you can assign a volume to only one server, we'll assign it to the master node and then use NFS to share it with the node servers.

The name of the volume will be k8s-data, with a size of 50 GB, and it will be located in the Helsinki data center alongside the server it's attached to. You can increase the size of the volume if you want.

If you apply the changes now, Terraform will output an error because hcloud_server.master is not defined. We'll add it in the next step and then apply the changes.

Step 5 - Create master node server

We'll start with the master node. Add the following resource to our Terraform file:

resource "hcloud_server" "master" {
  image = "ubuntu-20.04"
  name = "k8s-master"
  server_type = "cx11"
  location = "hel1"

  ssh_keys = [hcloud_ssh_key.access-ssh-key.id]
}

As you can see, we set up an Ubuntu 20.04 server of type CX11 (1 vCPU, 2 GB RAM) located in the Helsinki data center. The SSH key will be added to the server's authorized_keys file, so we won't need a password to SSH into the servers (no password is generated for them anyway).

Let's attach the network to the server. Add the following resource to the file:

resource "hcloud_server_network" "master-net" {
  network_id = hcloud_network.k8s-network.id
  server_id = hcloud_server.master.id
  ip = cidrhost(hcloud_network_subnet.k8s-subnet.ip_range, 10) # example: pick a host index the nodes won't use
}

Run terraform apply -auto-approve to create the resources on Hetzner Cloud.

Step 6 - Create kubernetes nodes

For the nodes, we'll use a single resource and specify the number of servers to create (you can override the default at apply time, for example with terraform apply -var node_count=3):

variable "node_count" {
  default = 2
}
Then, add the server definitions:

resource "hcloud_server" "nodes" {
  count = var.node_count
  image = "ubuntu-20.04"
  name = "k8s-node-${count.index+1}"
  server_type = "cx11"
  location = "hel1"

  ssh_keys = [hcloud_ssh_key.access-ssh-key.id]
}

The OS and resources on the nodes will be the same as on the master. Feel free to give the nodes more resources if you want; for example, cx21 as server_type doubles them.

And then, attach private IPs to the nodes:

resource "hcloud_server_network" "nodes-network" {
  count = var.node_count
  network_id = hcloud_network.k8s-network.id
  server_id = element(hcloud_server.nodes.*.id, count.index)
  ip = cidrhost(hcloud_network_subnet.k8s-subnet.ip_range, count.index+2)
}
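Terraform's cidrhost() function returns the nth host address inside a CIDR range, so with count.index+2 the nodes get the 2nd, 3rd, … host addresses. A small shell sketch of the same arithmetic, assuming the example subnet 10.0.1.0/24 (use your own range):

```shell
# Mimics Terraform's cidrhost() for a /24 range: strip everything after
# the last dot (the final octet plus prefix length), append the host number.
cidrhost24() {
  base="${1%.*}"            # "10.0.1.0/24" -> "10.0.1"
  echo "${base}.${2}"
}

cidrhost24 "10.0.1.0/24" 2  # -> 10.0.1.2 (k8s-node-1, count.index 0 + 2)
cidrhost24 "10.0.1.0/24" 3  # -> 10.0.1.3 (k8s-node-2, count.index 1 + 2)
```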

Add the following code to the file to output the volume id after Terraform runs:

output "volume_id" {
  value = hcloud_volume.k8s-data.id
}
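You'll also need the servers' public IPs in the next step. Instead of looking them up in the Hetzner Cloud console, you can have Terraform print them after apply. A sketch using the hcloud provider's ipv4_address attribute:

```hcl
# Public IPv4 address of the master node
output "master_ip" {
  value = hcloud_server.master.ipv4_address
}

# Public IPv4 addresses of all nodes
output "node_ips" {
  value = hcloud_server.nodes.*.ipv4_address
}
```

After the next apply, the addresses appear in the output (or run terraform output at any time).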

Now, run terraform apply -auto-approve again to create the new resources.

Step 7 - Install microk8s on all servers

For this step, you'll have to SSH into each machine using the servers' public IPs.

On the Hetzner Cloud page, click on the Servers link in the left sidebar menu and you'll see the public IP of each server you have created.

Run the following command in a terminal, replacing <server-public-ip> with the public IP of the server you want to connect to.

ssh root@<server-public-ip>

Then run the following command to create an installation file inside each server (I'll name it install-microk8s.sh):

vi install-microk8s.sh

and add the following code inside (press i to enter -INSERT- mode, then paste it):

#!/bin/bash
# update the system and install microk8s through snap
apt-get update && apt-get -y upgrade
apt-get install -y snapd
snap install microk8s --classic
echo 'PATH="/snap/bin:${PATH}"' >> ~/.bashrc
/snap/bin/microk8s.enable dns ingress storage metrics-server

# some handy kubectl aliases
echo 'alias kubectl="microk8s.kubectl"' >> ~/.bashrc
echo 'alias k="microk8s.kubectl"' >> ~/.bashrc
echo 'alias kd="microk8s.kubectl describe"' >> ~/.bashrc
echo 'alias kal="clear && k get po,svc,ing,deploy,cm,secret,pv,no -o wide"' >> ~/.bashrc

To save and exit VIM, press ESCAPE key, type :x and then press Enter.

To execute the file (here named install-microk8s.sh), run:

bash install-microk8s.sh

Make sure you run the above commands on all servers (master and nodes).

Now, press CTRL+D on all servers to log out, and then SSH into each of them again (logging back in reloads ~/.bashrc, so the aliases start working). I recommend using separate terminal windows so you can be logged in to all servers at the same time.

Step 8 - Assign nodes to master

On the master server, run the following command:

microk8s.add-node

This command will output the commands you have to execute on the node servers to join them to the master node and become part of the cluster.

We'll use the version of the command that contains the master's private IP from the network we created earlier. The command will look like this:

microk8s join <master-private-ip>:25000/<generated-token> # don't copy this one. Copy it from the terminal

Make sure you run the microk8s join ... command ONLY ON THE NODE SERVERS!

Run the command from the output on node-1 and then do the same for node-2, but first run the microk8s.add-node command on the master again, because each generated join token can only be used once.

After you do this for all nodes, go to the master node and run the following command to see the available nodes:

k get nodes -o wide

k is one of the aliases defined in the installation script you created earlier.

Now that you have all the nodes registered, I recommend removing the master node from the cluster.

Run the following command on the master node:

microk8s.remove-node k8s-master

Removing the master node from the cluster just ensures that no Kubernetes pods will be scheduled on it.

If you don't remove the master node, files written to the NFS share by pods running on it will have root permissions.

Step 9 - Configure NFS on master node

Install nfs-kernel-server on the master node:

apt-get install -y nfs-kernel-server

Create the shared directory:

mkdir /mnt/nfs

Mount the volume to /mnt/nfs by typing the following command in the terminal on the master server:

mount -o discard,defaults /dev/disk/by-id/scsi-0HC_Volume_<your-volume-id> /mnt/nfs

Make sure you replace <your-volume-id> in the above command with the volume id from the Terraform output.

Add the volume to fstab, so it's mounted automatically at boot, by running the following command in your terminal window on the master server:

echo "/dev/disk/by-id/scsi-0HC_Volume_<your-volume-id> /mnt/nfs xfs discard,nofail,defaults 0 0" >> /etc/fstab

Make sure you replace <your-volume-id> in the above command with the volume id from the Terraform output.

Change the shared directory's ownership:

chown nobody:nogroup -R /mnt/nfs

Add a line for each node's private IP in your /etc/exports file (example values, assuming the nodes were assigned 10.0.1.2 and 10.0.1.3; use your own IPs):

/mnt/nfs 10.0.1.2(rw,sync,no_subtree_check)
/mnt/nfs 10.0.1.3(rw,sync,no_subtree_check)

If you add more servers, you'll need to add an entry for each of them.

Create the NFS table that holds the exports of your shares by using the following command:

exportfs -a

And now let's start the NFS server:

service nfs-kernel-server start

Step 10 - Configure NFS on nodes servers

The following instructions must be executed on each node (NOT on the master).

Connect to the node servers and install the nfs-common package:

apt-get install -y nfs-common

Create the nfs directory:

mkdir -p /mnt/nfs

Mount the shared volume:

mount <master-private-ip>:/mnt/nfs /mnt/nfs

Make sure you replace <master-private-ip> with the master's address in the private network. If the master server doesn't allow you to mount the directory, run exportfs -a on the master server again.
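Optionally, to remount the share automatically after a reboot, you can add a matching entry to /etc/fstab on each node as well. A sketch; <master-private-ip> is, again, the master's address in the private network:

```
<master-private-ip>:/mnt/nfs /mnt/nfs nfs defaults 0 0
```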

Step 11 - Test NFS configuration

On the master server, run the following command to create an empty file inside the shared directory:

touch /mnt/nfs/master.txt

Change permissions on the file

chown nobody:nogroup /mnt/nfs/master.txt

On the node servers, run the following commands:

Node 1

touch /mnt/nfs/node-1.txt

Node 2

touch /mnt/nfs/node-2.txt

Now, on each server, if you run the following command you should see the files:

ls -la /mnt/nfs/

On the master server, if you run the following command, you should see the files in both /mnt/nfs and /mnt/HC_Volume_<id> directories:

ls -laR /mnt


At this point, you should have a working microk8s cluster running on Hetzner Cloud that you can use for development or testing.

If you want to destroy the resources on Hetzner that you've created for this tutorial, run:

terraform destroy