If you have followed the Budget Kubernetes on Hetzner Cloud with Terraform tutorial, you should have two servers available: one for the database and one for Kubernetes.
Open your Hetzner account in the browser, select the project that contains the servers, then go to the Servers page and copy the public IP of the database server.
If you register on Hetzner with this url https://hetzner.cloud/?ref=Q8kG7vzgBaP0 you'll get a bonus of 20 euros.
Once you have copied the IP, run the following command in your terminal to connect to the server:
ssh root@<database-server-public-ip>
Now that you are connected to your database server, let's follow the official PostgreSQL documentation and run the following commands to install the database server:
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
sudo apt-get update
sudo apt-get -y install postgresql
Once the database is installed, you can check that it is working by running:
/etc/init.d/postgresql status
# or, a more elegant version
systemctl status postgresql
The output should be like this:
● postgresql.service - PostgreSQL RDBMS
Loaded: loaded (/lib/systemd/system/postgresql.service; enabled; vendor preset: enabled)
Active: active (exited) since Sat 2021-10-30 23:39:21 CEST; 11min ago
Main PID: 3545 (code=exited, status=0/SUCCESS)
Tasks: 0 (limit: 2280)
Memory: 0B
CGroup: /system.slice/postgresql.service
Oct 30 23:39:21 database systemd
Oct 30 23:39:21 database systemd
Once the database is installed and running, let's connect to it:
sudo -i -u postgres psql
If you run the \du command while connected to the Postgres server, you should see only one role available:
postgres=# \du
List of roles
Role name | Attributes | Member of
-----------+------------------------------------------------------------+-----------
postgres | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
Let's create another user with a password for our application:
CREATE ROLE kisphp_user WITH SUPERUSER CREATEDB CREATEROLE LOGIN ENCRYPTED PASSWORD 'kisphp_password';
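A role alone is usually not enough; most applications also want their own database owned by that role. The database name below is a hypothetical example (the tutorial does not define one), and the line that actually executes the SQL is commented out so you can review the statement before running it on the database server:

```shell
# Hypothetical names for illustration; adjust them to your application.
DB_USER="kisphp_user"
DB_NAME="kisphp_db"   # assumed name, not defined in this tutorial

# Build the statement first so you can inspect it:
SQL="CREATE DATABASE ${DB_NAME} OWNER ${DB_USER};"
echo "$SQL"

# On the database server, you would execute it as the postgres system user:
# echo "$SQL" | sudo -i -u postgres psql
```

Giving the application its own database keeps it from working inside the default postgres database.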
Now, if you run \du again, you should see the new user added to the server:
postgres=# \du
List of roles
Role name | Attributes | Member of
-------------+------------------------------------------------------------+-----------
kisphp_user | Superuser, Create role, Create DB | {}
postgres | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
For the moment, we're done with the database server. Log out of the psql prompt by pressing CTRL + D or typing \q, then press CTRL + D again to log out of the database server and return to your local terminal.
Now, let's focus on installing and configuring Kubernetes with MicroK8s.
Copy the public IP of the MicroK8s server from your Hetzner servers page and connect to it via SSH:
ssh root@<kubernetes-server-public-ip>
Once you are connected to the server, let's update APT
apt-get update
apt-get upgrade -y
And then, let's install some useful tools that we'll need:
apt-get install -y vim jq ncdu zip unzip snapd wget curl make apache2-utils
Here, we install the following tools:
- vim — text editor
- jq — command-line JSON processor
- ncdu — interactive disk usage analyzer
- zip / unzip — archive utilities
- snapd — the snap package manager (needed to install microk8s and kubectl)
- wget / curl — download tools
- make — build automation tool
- apache2-utils — provides htpasswd, handy for basic-auth credentials
Install microk8s:
snap install microk8s --classic
Once the command finishes, wait a few seconds until the microk8s starts and then run the following command:
microk8s status
You should have an output similar to this:
microk8s is running
high-availability: no
datastore master nodes: 127.0.0.1:19001
datastore standby nodes: none
addons:
enabled:
ha-cluster # Configure high availability on the current node
disabled:
ambassador # Ambassador API Gateway and Ingress
cilium # SDN, fast with full network policy
dashboard # The Kubernetes dashboard
dns # CoreDNS
fluentd # Elasticsearch-Fluentd-Kibana logging and monitoring
gpu # Automatic enablement of Nvidia CUDA
helm # Helm 2 - the package manager for Kubernetes
helm3 # Helm 3 - Kubernetes package manager
host-access # Allow Pods connecting to Host services smoothly
ingress # Ingress controller for external access
istio # Core Istio service mesh services
jaeger # Kubernetes Jaeger operator with its simple config
kata # Kata Containers is a secure runtime with lightweight VMS
keda # Kubernetes-based Event Driven Autoscaling
knative # The Knative framework on Kubernetes.
kubeflow # Kubeflow for easy ML deployments
linkerd # Linkerd is a service mesh for Kubernetes and other frameworks
metallb # Loadbalancer for your Kubernetes cluster
metrics-server # K8s Metrics Server for API access to service metrics
multus # Multus CNI enables attaching multiple network interfaces to pods
openebs # OpenEBS is the open-source storage solution for Kubernetes
openfaas # openfaas serverless framework
portainer # Portainer UI for your Kubernetes cluster
prometheus # Prometheus operator for monitoring and logging
rbac # Role-Based Access Control for authorisation
registry # Private image registry exposed on localhost:32000
storage # Storage class; allocates storage from host directory
traefik # traefik Ingress controller for external access
As you can see, there are quite a few addons you can enable on your microk8s server. Keep in mind that the more of them you enable, the more resources you'll need, and therefore the bigger and more expensive the server required to support them all.
Let's start by enabling a few of them:
microk8s enable dns ingress metrics-server rbac storage
Run microk8s status again to make sure the addons are enabled. You should see output like this:
microk8s is running
high-availability: no
datastore master nodes: 127.0.0.1:19001
datastore standby nodes: none
addons:
enabled:
dns # CoreDNS
ha-cluster # Configure high availability on the current node
ingress # Ingress controller for external access
metrics-server # K8s Metrics Server for API access to service metrics
rbac # Role-Based Access Control for authorisation
storage # Storage class; allocates storage from host directory
disabled:
ambassador # Ambassador API Gateway and Ingress
cilium # SDN, fast with full network policy
dashboard # The Kubernetes dashboard
fluentd # Elasticsearch-Fluentd-Kibana logging and monitoring
gpu # Automatic enablement of Nvidia CUDA
helm # Helm 2 - the package manager for Kubernetes
helm3 # Helm 3 - Kubernetes package manager
host-access # Allow Pods connecting to Host services smoothly
istio # Core Istio service mesh services
jaeger # Kubernetes Jaeger operator with its simple config
kata # Kata Containers is a secure runtime with lightweight VMS
keda # Kubernetes-based Event Driven Autoscaling
knative # The Knative framework on Kubernetes.
kubeflow # Kubeflow for easy ML deployments
linkerd # Linkerd is a service mesh for Kubernetes and other frameworks
metallb # Loadbalancer for your Kubernetes cluster
multus # Multus CNI enables attaching multiple network interfaces to pods
openebs # OpenEBS is the open-source storage solution for Kubernetes
openfaas # openfaas serverless framework
portainer # Portainer UI for your Kubernetes cluster
prometheus # Prometheus operator for monitoring and logging
registry # Private image registry exposed on localhost:32000
traefik # traefik Ingress controller for external access
If the addons are not shown as enabled, wait a couple more minutes (you can also run microk8s status --wait-ready, which blocks until the cluster reports ready). If they are still not listed, restart microk8s:
microk8s stop
# then
microk8s start
# there is no restart command
microk8s status
Once the ingress addon is enabled and running, open your browser and point it to the server's IP: you should see a 404 Not Found error. This is expected, and it happens because there is no default backend configured for the ingress controller yet.
Let's generate our kubeconfig file:
microk8s config > ~/.kube/config
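Note that the redirect above fails if the ~/.kube directory does not exist yet. Here is a small sketch that creates the directory and backs up any existing config first; the microk8s line is commented out because it only works on the microk8s server itself:

```shell
# Create the kubeconfig directory if missing and keep a backup of any old config.
KUBE_DIR="$HOME/.kube"
mkdir -p "$KUBE_DIR"
if [ -f "$KUBE_DIR/config" ]; then
    cp "$KUBE_DIR/config" "$KUBE_DIR/config.bak"
fi
# microk8s config > "$KUBE_DIR/config"   # run this on the microk8s server
```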
Let's install helm 3 on our server because we'll use it in the near future.
wget https://get.helm.sh/helm-v3.7.1-linux-amd64.tar.gz
tar -xzvf helm-v3.7.1-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/
MicroK8s comes with kubectl included, but it has to be called through the microk8s command itself:
microk8s kubectl get po
Let's install kubectl on the system with the following command:
snap install kubectl --classic
Now, you can access kubectl as normal:
kubectl get pods -A
The -A flag means all namespaces.
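As an alternative to installing a second kubectl via snap, a common pattern is a shell alias pointing at the bundled binary. This is a convenience sketch, not something the tutorial requires; add the line to ~/.bashrc to make it permanent:

```shell
# Point the kubectl name at the binary bundled with microk8s
# instead of installing a separate copy.
alias kubectl='microk8s kubectl'
```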
In the output of the previous command, if you see that the calico-kube-controllers pod does not start (its status is CrashLoopBackOff), just delete the pod so it gets recreated:
root@microk8s:~# kubectl get pods -A --show-kind
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/coredns-7f9c69c78c-pphb7 1/1 Running 1 (14m ago) 17m
kube-system pod/calico-node-tcnpm 1/1 Running 1 (14m ago) 20m
kube-system pod/hostpath-provisioner-5c65fbdb4f-fk9wd 1/1 Running 0 13m
ingress pod/nginx-ingress-microk8s-controller-wxmkk 1/1 Running 0 13m
kube-system pod/metrics-server-85df567dd8-ltcr9 1/1 Running 0 13m
kube-system pod/calico-kube-controllers-6759bf5b94-85w6p 0/1 CrashLoopBackOff 7 (2m43s ago) 20m
kubectl delete -n kube-system pod/calico-kube-controllers-6759bf5b94-85w6p
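The generated suffix in the pod name (6759bf5b94-85w6p above) differs on every cluster, so copy-pasting the name from this page won't work. A sketch that selects the pod by label instead; the k8s-app=calico-kube-controllers label is assumed from the upstream Calico manifests, and the execution line is commented out so you can verify the label on your own cluster first (kubectl -n kube-system get pods --show-labels):

```shell
# The exact pod name is generated; selecting by label is more robust.
# Label assumed from upstream Calico manifests - verify it on your cluster first.
CMD="kubectl -n kube-system delete pod -l k8s-app=calico-kube-controllers"
echo "$CMD"
# eval "$CMD"   # run on the microk8s server
```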
Then, you should see it running:
root@microk8s:~# kubectl get pods -A --show-kind
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/coredns-7f9c69c78c-pphb7 1/1 Running 1 (15m ago) 18m
kube-system pod/calico-node-tcnpm 1/1 Running 1 (15m ago) 22m
kube-system pod/hostpath-provisioner-5c65fbdb4f-fk9wd 1/1 Running 0 14m
ingress pod/nginx-ingress-microk8s-controller-wxmkk 1/1 Running 0 14m
kube-system pod/metrics-server-85df567dd8-ltcr9 1/1 Running 0 14m
kube-system pod/calico-kube-controllers-6759bf5b94-sdch5 1/1 Running 0 7s
As I said earlier, if you open your browser and paste the server's IP into the URL bar, you'll get a 404 error page. Let's do something about that and deploy a default backend for the cluster.
On the server, create a file called default-ingress-backend.yaml
and add the following code:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: default-backend
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  defaultBackend:
    service:
      name: default-backend
      port:
        number: 80
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: default-backend
  namespace: default
data:
  index.html: |
    <!doctype html>
    <html>
    <head><title>default index file</title></head>
    <style>
      body { color: #fff; font-size: 18px; text-align: center; padding: 5% 0; }
    </style>
    <body id="body">
      <h1>Hello world from default nginx backend</h1>
      <script>
        function randomItem(items) {
          return items[Math.floor(Math.random()*items.length)]
        }
        var items = ["#369", "#c48", "#4aa", "#1192a0"]
        document.getElementById('body').style.background = randomItem(items);
      </script>
    </body>
    </html>
---
apiVersion: v1
kind: Service
metadata:
  name: default-backend
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: default-backend
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-backend
  namespace: default
  labels:
    app.kubernetes.io/name: default-backend
spec:
  revisionHistoryLimit: 0
  selector:
    matchLabels:
      app: default-backend
  template:
    metadata:
      labels:
        app: default-backend
        app.kubernetes.io/name: default-backend
    spec:
      containers:
        - name: nginx
          image: nginx
          imagePullPolicy: Always
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /usr/share/nginx/html
              name: html-files
      volumes:
        - name: html-files
          configMap:
            name: default-backend
If you are using vim for the first time: open the file with
vim default-ingress-backend.yaml
press i to enter INSERT mode, paste the YAML content using your terminal's paste shortcut (usually CTRL + SHIFT + V), press ESC to return to NORMAL mode, then type :x to save the file and exit. An alternative command to save and exit is :wq.
Once you have the file with the content inside, run the following command to deploy the default backend:
kubectl apply -f default-ingress-backend.yaml
The output of the command should be like this:
root@microk8s:~# kubectl apply -f default-ingress-backend.yaml
ingress.networking.k8s.io/default-backend created
configmap/default-backend created
service/default-backend created
deployment.apps/default-backend created
Now, if you refresh the browser page pointing to the server's IP, you should see a simple HTML page whose background color changes on every refresh.
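You can also check this from the command line. The IP placeholder below is the same one used earlier in the tutorial; the curl line is commented out because it only works against your actual server:

```shell
# Replace the placeholder with your Kubernetes server's public IP.
SERVER_IP="<kubernetes-server-public-ip>"
URL="http://${SERVER_IP}/"
echo "$URL"
# curl -s "$URL" | grep "Hello world from default nginx backend"
```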
In this YAML file, we have created:
- an Ingress that sends all unmatched traffic to the default-backend service
- a ConfigMap containing the index.html page
- a Service of type ClusterIP exposing the deployment on port 80
- a Deployment running an nginx container that mounts the ConfigMap as its web root