Cost-effective Kubernetes-based development environment on Hetzner

Introduction

Hetzner Cloud offers robust VPS and dedicated server solutions at a fraction of the cost of major cloud providers like AWS, GCP, or Azure. This guide outlines the steps to configure a fully functional development environment on Hetzner Cloud, incorporating the following services:

  1. Secure Virtual Private Cloud (VPC) using Hetzner Cloud Networks for isolated networking.
  2. WireGuard VPN for secure access to the VPC.
  3. Hetzner Cloud Load Balancers (public and internal) to manage access to services.
  4. Kubernetes Cluster to orchestrate and run containerized applications.
  5. Flannel as a basic option for Container Network Interface (CNI).
  6. Hetzner Cloud Controller to enable Kubernetes to provision and manage Hetzner Cloud Load Balancers.
  7. Hetzner CSI Driver for Kubernetes to dynamically provision and manage Hetzner Cloud Volumes.
  8. Kubernetes Node Autoscaler for Hetzner to dynamically scale cluster capacity based on workload demands.
  9. Ingress Nginx Controller to provide access to the services.
  10. Cert-Manager with Cloudflare Integration to automate valid TLS certificates for public and internal services.
  11. Gitea Git Hosting Service with Gitea Actions for version control and CI/CD workflows.
  12. ArgoCD for GitOps-driven deployments, ensuring continuous delivery and infrastructure consistency.

This setup leverages Hetzner Cloud’s cost-effective infrastructure to create a secure, scalable, and automated development environment tailored for modern application deployment.

Hetzner overview

Hetzner provides Virtual Cloud Servers with flexible configurations featuring AMD or Intel CPUs. For instance, the CPX41 shared VPS instance, powered by AMD EPYC™ 7002, offers 8 vCPUs, 16GB RAM, and a 240GB NVMe SSD, delivering a Geekbench 6 single-core score of ~1500 and a multi-core score of ~8000. It is available at €28-50/month in data centers located in the USA, Germany, Finland, and Singapore.

| Instance Name | vCPUs | RAM | SSD | Single-Core GeekBench | Multi-Core GeekBench | Price USA/mo | Price GER,FIN/mo | Price SGP/mo |
|---------------|-------|-----|-----|-----------------------|----------------------|--------------|------------------|--------------|
| CPX51 (Shared) | 16 AMD | 32GB | 360GB | ~1500 | ~11000 | €71.39 | €64.74 | €91.51 |
| CPX41 (Shared) | 8 AMD | 16GB | 240GB | ~1500 | ~8000 | €35.69 | €29.39 | €50.69 |
| CX42 (Shared) | 8 Intel | 16GB | 160GB | ~600 | ~3200 | - | €18.92 | - |
| CPX31 (Shared) | 4 AMD | 8GB | 160GB | ~1500 | ~4500 | €19.03 | €15.59 | €29.51 |

Hetzner Cloud Server prices in the Ashburn, VA datacenter

Create Hetzner Cloud Project

  1. A new project can be created in the Hetzner Cloud Console.
  2. Find or create your SSH public key
cat ~/.ssh/id_rsa.pub

If you don’t have an SSH key, follow the guide or generate one as shown below.
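A minimal sketch, assuming a modern OpenSSH client (note the resulting public key will be ~/.ssh/id_ed25519.pub rather than id_rsa.pub):

# Generate a new Ed25519 key pair (accept the default path, optionally set a passphrase)
ssh-keygen -t ed25519 -C "hi@yourcompany.com"
# Print the public key so it can be uploaded to Hetzner
cat ~/.ssh/id_ed25519.pub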

  3. Upload your SSH public key in the Hetzner Console Security/SSH keys menu for passwordless access to servers.

Create Private network

Let’s start setting up a Virtual Private Cloud (VPC) by creating a new private network with CIDR 10.0.0.0/24. Go to the Networks menu and click the Create Network button, name the network internal-network, select the desired zone, and use 10.0.0.0/24 as the IP address range. All our servers will be connected to this private network, and external access to internal resources will only be possible through VPN or Load Balancers.
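If you prefer the command line over the Console, the same network can be created with the hcloud CLI (a sketch, assuming the CLI is installed and authenticated with a project token; the network zone is an example, pick the one matching your location):

hcloud network create --name internal-network --ip-range 10.0.0.0/24
hcloud network add-subnet internal-network --type cloud --network-zone eu-central --ip-range 10.0.0.0/24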

Provision VPN server

  1. To access the VPC we will need a VPN server. For this purpose a CPX11 instance with 2 vCPUs and 2GB RAM (€5/month) is sufficient. Go to the Server menu, click the “Add Server” button, select Location, Image (Ubuntu 24.04), Shared vCPU as the Type, CPX11.
  2. The server should have public and private IP addresses, so do not forget to enable the “Private networks” checkbox in the server configuration and choose the previously created private network.
  3. Select the previously uploaded SSH key for passwordless SSH access to avoid additional SSH service configuration later.
  4. For simplicity, we will also use this server as a gateway to provide Internet access to the internal network servers, so let’s name the server “gateway”.

This is the first server in our network, so once provisioned the internal network interface will be assigned the IP address 10.0.0.2.
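For reference, the equivalent provisioning step with the hcloud CLI might look roughly like this (a sketch; the SSH key name and location are assumptions, adjust them to your project):

hcloud server create \
  --name gateway \
  --type cpx11 \
  --image ubuntu-24.04 \
  --location nbg1 \
  --ssh-key "hi@yourcompany.com" \
  --network internal-network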

Configure VPN access on the gateway server

We will use WireGuard as the VPN solution. I have already described the installation and configuration of WireGuard in another post, and the same approach works for Hetzner. The only difference is that Hetzner servers use a different layout for the network interfaces (eth0 for the public network and enp7s0 for the private network):

[gateway] ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether NN:NN:NN:NN:NN:NN brd ff:ff:ff:ff:ff:ff
    inet XXX.XXX.XXX.XXX/32 metric 100 scope global dynamic eth0
       valid_lft 85572sec preferred_lft 85572sec
    inet6 NNNN:NNNN:NNNN:NNNN::1/64 scope global
       valid_lft forever preferred_lft forever
    inet6 NNNN::NNNN:NNNN:NNNN:NNNN/64 scope link
       valid_lft forever preferred_lft forever
3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc fq_codel state UP group default qlen 1000
    link/ether NN:NN:NN:NN:NN:NN brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.2/32 brd 10.0.0.2 scope global dynamic enp7s0
       valid_lft 85576sec preferred_lft 74776sec
    inet6 NNNN::NNNN:NN:NNNN:NNNN/64 scope link
       valid_lft forever preferred_lft forever

Let’s describe major steps:

  1. Find out and copy Gateway Public IP address in Hetzner Cloud Console
  2. ssh to the Gateway Server using Public IP address
ssh root@GATEWAY_PUBLIC_IP
  3. Install WireGuard
[gateway] sudo apt update && sudo apt install wireguard
  4. Enable routing between public and private network interfaces
[gateway] cat <<EOF | sudo tee /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
EOF
# Apply sysctl params without reboot
[gateway] sudo sysctl --system
  5. Download and untar WireGuard UI
[gateway] curl -sL https://github.com/ngoduykhanh/wireguard-ui/releases/download/v0.6.2/wireguard-ui-v0.6.2-linux-amd64.tar.gz | tar xz
  6. Run WireGuard UI on the gateway
[gateway] BIND_ADDRESS=127.0.0.1:5000 ./wireguard-ui
  7. Bind the remote port locally using SSH port-forwarding (run on your workstation)
ssh -L 5000:localhost:5000 root@GATEWAY_PUBLIC_IP
  8. Open WireGuard UI locally at http://localhost:5000/ and log in as admin:admin (on your workstation)
  9. In WireGuard Server Settings set the Post Up and Post Down Script using the private network interface name (enp7s0)
PostUp = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o enp7s0 -j MASQUERADE;
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o enp7s0 -j MASQUERADE;

Post Up and Post Down Script in Wireguard Server Settings
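For orientation, the server configuration that WireGuard UI generates in /etc/wireguard/wg0.conf should end up looking roughly like this (a sketch with placeholder values; the address range and listen port depend on your WireGuard UI settings):

[Interface]
Address = 10.252.1.0/24
ListenPort = 51820
PrivateKey = <server-private-key>
PostUp = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o enp7s0 -j MASQUERADE
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o enp7s0 -j MASQUERADE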

  10. Create a WireGuard Client in WireGuard UI; the IP Allocation for the client should be 10.252.1.1/32, 10.0.0.0/24

  11. Apply configuration in WireGuard UI

  12. Start and enable the WireGuard service on the gateway

[gateway] sudo systemctl start wg-quick@wg0
[gateway] sudo systemctl status wg-quick@wg0
[gateway] sudo systemctl enable wg-quick@wg0
  13. Install the latest version of WireGuard on your workstation, download the client configuration file from WireGuard UI, and add it to your local WireGuard client
  14. Establish the VPN connection from your local workstation and ping the gateway's internal IP:
ping 10.0.0.2

From this point on, it is assumed that the VPN connection is always active.

Configure private network routes for the gateway

To route traffic from the internal network to the public Internet, we need to define a default route for the internal network that points to the IP address of the gateway server. You can add routes in the Routes submenu of internal-network. Add a route for 0.0.0.0/0 to 10.0.0.2.
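Note that the WireGuard PostUp rule above only masquerades VPN traffic into the private network. For the internal-only servers to actually reach the Internet through this gateway (for example, to download packages), the gateway typically also needs to NAT their traffic out of the public interface. A minimal sketch, assuming eth0 is the public interface and iptables-persistent is acceptable for persistence:

# On the gateway: NAT traffic originating from the private network out of eth0
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE
# Persist the rule across reboots
apt install -y iptables-persistent && netfilter-persistent save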

Provision Load Balancers

We need two Load Balancers: the Public Load Balancer distributes incoming traffic from external sources across internal services, while the Internal Load Balancer serves as the entry point for private services. In this guide we will create the Load Balancers manually and then configure Kubernetes to manage them.

  1. To create the Public Load Balancer, open the Load Balancers menu in the Cloud Console, click the “Create Load Balancer” button, select Location, the LB11 instance as the Type (€6.41/mo), connect it to internal-network, remove the Services definitions, and name it “public-lb”. The public network will be connected automatically and the internal IP will be 10.0.0.3.
  2. To create the Internal Load Balancer do the same again, but name it “internal-lb”. For this load balancer we have to disable the public network; you can do it in the Overview menu/Options. Its internal IP will be 10.0.0.4.

In fact, the Hetzner operators for Kubernetes allow you to dynamically create load balancers from Ingress configurations, but in that case the assigned internal IP addresses are not predictable. I prefer to provision the Load Balancers first and have internal IP 10.0.0.3 for the Public and 10.0.0.4 for the Internal Load Balancer.

Provision main Kubernetes node

  1. A Hetzner Cloud CPX41 server with 8 vCPUs, 16GB RAM, and a 240GB NVMe SSD is a good option for the main Kubernetes node. Select the same location, Ubuntu 24.04, Shared CPX41 instance.
  2. All Kubernetes nodes including the main one will be private and will have only the internal-network interface, so uncheck all “Public network” checkboxes in the server configuration and enable the “Private networks” checkbox. For the private network choose the previously created internal-network.
  3. We will configure and install the Kubernetes main node automatically using the following cloud-init script. Paste it into the cloud-init section of the server configuration, making the necessary changes to the KUBERNETES_VERSION, POD_CIDR, and CRICTL_VERSION values first.
#cloud-config
write_files:
  - path: /run/scripts/node-setup.sh
    content: |
      #!/bin/bash
      # CHANGEME!
      export KUBERNETES_VERSION=v1.33
      POD_CIDR="10.244.0.0/16"
      CRICTL_VERSION=v1.33.0

      # enp7s0 is the internal network interface name, we define a persistent configuration for enp7s0 with default gateway 10.0.0.1
      echo "cloud-init: configure enp7s0 default gateway"
      cat <<EOF > /etc/systemd/network/10-enp7s0.network
      # Custom network configuration added by cloud-init
      [Match]
      Name=enp7s0
      [Network]
      DHCP=yes
      Gateway=10.0.0.1
      EOF

      # sometimes assigning an internal IP takes time, we have to wait
      echo "cloud-init: waiting for enp7s0 network interface"
      /lib/systemd/systemd-networkd-wait-online --any -i enp7s0 --timeout=60

      # apply enp7s0 configuration
      echo "cloud-init: restart systemd-networkd"
      systemctl restart systemd-networkd

      # configure DNS
      echo "cloud-init: configure DNS"
      cat <<EOF > /etc/systemd/resolved.conf
      # Custom network configuration added by cloud-init
      [Resolve]
      DNS=185.12.64.2 185.12.64.1
      FallbackDNS=8.8.8.8
      EOF

      # apply DNS changes
      echo "cloud-init: restart systemd-resolved"
      systemctl restart systemd-resolved

      # Find IP of node
      export NODE_IP=$(ifconfig enp7s0 | grep 'inet ' | awk '{ print $2}')
      echo "cloud-init: NODE_IP=$NODE_IP"

      # Enable iptables Bridged Traffic
      echo "cloud-init: enable iptables Bridged Traffic"
      cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
      overlay
      br_netfilter
      EOF
      sudo modprobe overlay
      sudo modprobe br_netfilter

      # sysctl params required by setup, params persist across reboots
      cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
      net.bridge.bridge-nf-call-iptables = 1
      net.bridge.bridge-nf-call-ip6tables = 1
      net.ipv4.ip_forward = 1
      EOF

      # Apply sysctl params without reboot
      sudo sysctl --system

      # Install CRI-O Runtime
      echo "cloud-init: install CRI-O Runtime"
      sudo apt-get update -y
      sudo apt-get install -y software-properties-common gpg curl apt-transport-https ca-certificates
      curl -fsSL https://pkgs.k8s.io/addons:/cri-o:/prerelease:/main/deb/Release.key | sudo gpg --yes --dearmor -o /etc/apt/keyrings/cri-o-apt-keyring.gpg
      echo "deb [signed-by=/etc/apt/keyrings/cri-o-apt-keyring.gpg] https://pkgs.k8s.io/addons:/cri-o:/prerelease:/main/deb/ /" | sudo tee /etc/apt/sources.list.d/cri-o.list
      sudo apt-get update -y
      sudo apt-get install -y cri-o
      wget https://github.com/kubernetes-sigs/cri-tools/releases/download/$CRICTL_VERSION/crictl-$CRICTL_VERSION-linux-amd64.tar.gz
      sudo tar zxvf crictl-$CRICTL_VERSION-linux-amd64.tar.gz -C /usr/local/bin
      rm -f crictl-$CRICTL_VERSION-linux-amd64.tar.gz

      # Start CRI-O Daemon
      echo "cloud-init: start CRI-O daemon"
      sudo systemctl daemon-reload
      sudo systemctl enable crio --now
      sudo systemctl start crio.service

      # Install Kubeadm & Kubelet & Kubectl
      echo "cloud-init: install Kubeadm & Kubelet & Kubectl"
      sudo mkdir -p /etc/apt/keyrings
      curl -fsSL https://pkgs.k8s.io/core:/stable:/$KUBERNETES_VERSION/deb/Release.key | sudo gpg --yes --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
      echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/$KUBERNETES_VERSION/deb/ /" |\
        sudo tee /etc/apt/sources.list.d/kubernetes.list
      sudo apt-get update -y
      sudo apt-get install -y kubelet kubeadm kubectl
      sudo apt-mark hold kubelet kubeadm kubectl

      # Configure Kubelet extra args to support Hetzner Cloud, read more https://github.com/hetznercloud/hcloud-cloud-controller-manager/issues/856#issuecomment-2631043985
      echo "cloud-init: configure Kubelet extra args to support Hetzner Cloud"
      export PROVIDER_ID=$(dmidecode -t system | awk '/Serial Number/ {print $3}')
      echo "KUBELET_EXTRA_ARGS=--cloud-provider=external --provider-id=hcloud://$PROVIDER_ID --node-ip=$NODE_IP" | tee /etc/default/kubelet

      # Install Kubernetes
      echo "cloud-init: install Kubernetes"
      export NODENAME=$(hostname -s)
      kubeadm init \
        --apiserver-advertise-address=$NODE_IP \
        --apiserver-cert-extra-sans=$NODE_IP \
        --pod-network-cidr=$POD_CIDR \
        --node-name $NODENAME \
        --ignore-preflight-errors Swap \
        --cri-socket unix:///var/run/crio/crio.sock
    permissions: '0755'
runcmd:
  - [ sh, "/run/scripts/node-setup.sh" ]
  4. Name the node “k8s-main-1”.

The internal IP of the main node should be 10.0.0.5; verify it in the Cloud Console.

Troubleshooting of main Kubernetes node provision

If everything goes well, the main Kubernetes node will be available within 30 seconds.

ping 10.0.0.5

If not, here are the troubleshooting steps:

  1. The node has only an internal IP address. If the network setup in cloud-init fails, the node will not be directly accessible via the VPN connection. In that case we can use SSH jump-host access via the gateway to reach the main node
ssh -J root@10.0.0.2 root@10.0.0.5
  2. Check cloud-init logs to identify the issue
[k8s-main-1] cat /var/log/cloud-init-output.log
  3. Fix the issue and rerun the cloud-init script
[k8s-main-1] sh /run/scripts/node-setup.sh

Configure local environment to work with Kubernetes

To access the cluster and deploy applications, we need to install kubectl and helm locally. The cluster configuration is stored in the /etc/kubernetes/admin.conf file on the main node; let's fetch it.

  1. Log in to the Kubernetes main node using SSH
ssh root@10.0.0.5
  2. Prepare kubeconfig file
[k8s-main-1] export KUBECONFIG=/etc/kubernetes/admin.conf
[k8s-main-1] mkdir -p $HOME/.kube
[k8s-main-1] kubectl config view --flatten >> $HOME/.kube/config
  3. Copy kubeconfig to your local workstation (run on your local workstation)
scp root@10.0.0.5:/root/.kube/config $HOME/.kube/hetzner-dev.yaml
sudo chown $(id -u):$(id -g) $HOME/.kube/hetzner-dev.yaml
  4. Configure kubectl
export KUBECONFIG=$HOME/.kube/hetzner-dev.yaml
  5. Check kubectl configuration
kubectl get nodes
NAME         STATUS     ROLES           AGE   VERSION
k8s-main-1   NotReady   control-plane   17m   v1.33.1
  6. Install and configure additional tools like OpenLens (with OpenLens Node/Pod Menu Extension) or k9s

Install Container Network Interface (CNI)

In this guide we will use Flannel as a basic option for the CNI.

  1. Add Flannel Helm repository
helm repo add flannel https://flannel-io.github.io/flannel/ && helm repo update flannel && helm search repo flannel
  2. Create a namespace for Flannel
kubectl create ns kube-flannel
kubectl label --overwrite ns kube-flannel pod-security.kubernetes.io/enforce=privileged
  3. Deploy Flannel CNI
cat <<EOF | helm upgrade flannel flannel/flannel --install --create-namespace -n kube-flannel --version 0.26.7 -f -
podCidr: "10.244.0.0/16"
EOF
  4. Patch CoreDNS after the Flannel installation to avoid problems with CoreDNS Pod initialization
kubectl -n kube-system patch deployment coredns --type json -p '[{"op":"add","path":"/spec/template/spec/tolerations/-","value":{"key":"node.cloudprovider.kubernetes.io/uninitialized","value":"true","effect":"NoSchedule"}}]'
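A quick way to verify the CNI rollout (a sketch; pod names will differ in your cluster):

# Flannel should be running on every node
kubectl get pods -n kube-flannel -o wide
# The main node should move from NotReady to Ready once the CNI is up
kubectl get nodes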

Install Hetzner Cloud Controller

This controller allows Kubernetes to manage Hetzner Cloud resources such as load balancers.

  1. Create a Hetzner Cloud API token (HETZNER_API_TOKEN).
  2. Identify the HETZNER_NETWORK_ID of internal-network from the Cloud Console URL.
  3. Create a secret with the Hetzner Cloud API token and network ID
kubectl -n kube-system create secret generic hcloud --from-literal=token=<HETZNER_API_TOKEN> --from-literal=network=<HETZNER_NETWORK_ID>
  4. Add and update the Hetzner Cloud Helm repository
helm repo add hcloud https://charts.hetzner.cloud && helm repo update hcloud && helm search repo hcloud
  5. Install Hetzner Cloud Controller
cat <<EOF | helm upgrade hcloud-cloud-controller-manager hcloud/hcloud-cloud-controller-manager --install --create-namespace -n kube-system --version 1.25.1 -f -
env:
  HCLOUD_TOKEN:
    valueFrom:
      secretKeyRef:
        name: hcloud
        key: token
  HCLOUD_NETWORK:
    valueFrom:
      secretKeyRef:
        name: hcloud
        key: network
  HCLOUD_NETWORK_ROUTES_ENABLED:
    value: "false"
EOF
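To confirm the cloud controller picked up the node, you can check that the provider ID is set and the uninitialized taint is gone (a sketch):

kubectl get pods -n kube-system | grep hcloud-cloud-controller
# Should print hcloud://<server-id> once the node has been initialized
kubectl get node k8s-main-1 -o jsonpath='{.spec.providerID}'; echo
kubectl describe node k8s-main-1 | grep -i taints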

Install Hetzner Container Storage Interface (CSI) driver

This driver allows Kubernetes to use Hetzner Cloud Volumes.

  1. Install Hetzner CSI driver
cat <<EOF | helm upgrade hcloud-csi hcloud/hcloud-csi --install --create-namespace -n kube-system --version 2.14.0 -f -
EOF
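To sanity-check the CSI driver you can create a throwaway PersistentVolumeClaim against the hcloud-volumes storage class (a sketch; note Hetzner volumes start at 10Gi, and with WaitForFirstConsumer binding the claim may stay Pending until a Pod uses it):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-driver-test
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
  storageClassName: hcloud-volumes
EOF
kubectl get pvc csi-driver-test
# Clean up afterwards
kubectl delete pvc csi-driver-test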

Enable the main Kubernetes node for scheduling (optional)

If you want to use the Kubernetes main node for your workload, you need to remove the default settings for the control plane:

kubectl taint nodes k8s-main-1 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule-
kubectl taint nodes k8s-main-1 node-role.kubernetes.io/control-plane:NoSchedule-
kubectl label node k8s-main-1 node.kubernetes.io/exclude-from-external-load-balancers-

Install and configure Node Autoscaler

This operator dynamically provisions new Kubernetes nodes when existing nodes have insufficient resources for the workload.

  1. SSH to the main Kubernetes node
ssh root@10.0.0.5
  2. Create a long-lived (1 year = 8760h) kubeadm join token on the Kubernetes main node (kubeadm init creates an initial token with a 24-hour TTL only)
[k8s-main-1] kubeadm token create --ttl 8760h --description "token for autoscaler" --print-join-command

output:
kubeadm join 10.0.0.5:6443 --token xxxxxxx.xxxxxxxxxxxxxxx --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
  3. Copy the kubeadm join command and export it as the KUBERNETES_JOIN_CMD environment variable on your workstation
export KUBERNETES_JOIN_CMD="kubeadm join 10.0.0.5:6443 --token xxxxxxx.xxxxxxxxxxxxxxx --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
  4. Run the following command to prepare the base64-encoded cloud-init script for Kubernetes worker nodes, and save it to the environment variable HCLOUD_CLOUD_INIT:
cat <<EOFEOF | base64 -w 0 | read -r HCLOUD_CLOUD_INIT
#!/bin/bash
# CHANGEME!
export KUBERNETES_VERSION=v1.33
CRICTL_VERSION=v1.33.0
export KUBERNETES_JOIN_CMD="$KUBERNETES_JOIN_CMD"

echo "cloud-init: configure enp7s0 default gateway"
cat <<EOF > /etc/systemd/network/10-enp7s0.network
# Custom network configuration added by cloud-init
[Match]
Name=enp7s0
[Network]
DHCP=yes
Gateway=10.0.0.1
EOF

echo "cloud-init: waiting for enp7s0 network interface"
/lib/systemd/systemd-networkd-wait-online --any -i enp7s0 --timeout=60

echo "cloud-init: restart systemd-networkd"
systemctl restart systemd-networkd

echo "cloud-init: configure DNS"
cat <<EOF > /etc/systemd/resolved.conf
# Custom network configuration added by cloud-init
[Resolve]
DNS=185.12.64.2 185.12.64.1
FallbackDNS=8.8.8.8
EOF

echo "cloud-init: restart systemd-resolved"
systemctl restart systemd-resolved

export NODE_IP=\$(ifconfig enp7s0 | grep 'inet ' | awk '{ print \$2}')
echo "cloud-init: NODE_IP=\$NODE_IP"

echo "cloud-init: enable iptables Bridged Traffic"
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

# Apply sysctl params without reboot
sudo sysctl --system

echo "cloud-init: install CRI-O Runtime"
sudo apt-get update -y
sudo apt-get install -y software-properties-common gpg curl apt-transport-https ca-certificates
curl -fsSL https://pkgs.k8s.io/addons:/cri-o:/prerelease:/main/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/cri-o-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/cri-o-apt-keyring.gpg] https://pkgs.k8s.io/addons:/cri-o:/prerelease:/main/deb/ /" | sudo tee /etc/apt/sources.list.d/cri-o.list
sudo apt-get update -y
sudo apt-get install -y cri-o
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/\$CRICTL_VERSION/crictl-\$CRICTL_VERSION-linux-amd64.tar.gz
sudo tar zxvf crictl-\$CRICTL_VERSION-linux-amd64.tar.gz -C /usr/local/bin
rm -f crictl-\$CRICTL_VERSION-linux-amd64.tar.gz

echo "cloud-init: start CRI-O daemon"
sudo systemctl daemon-reload
sudo systemctl enable crio --now
sudo systemctl start crio.service

echo "cloud-init: install Kubeadm & Kubelet & Kubectl"
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/\$KUBERNETES_VERSION/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/\$KUBERNETES_VERSION/deb/ /" |\
  sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update -y
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

echo "cloud-init: configure Kubelet extra args to support Hetzner Cloud (https://github.com/hetznercloud/hcloud-cloud-controller-manager/issues/856#issuecomment-2631043985)"
export PROVIDER_ID=\$(dmidecode -t system | awk '/Serial Number/ {print \$3}')
echo "KUBELET_EXTRA_ARGS=--cloud-provider=external --provider-id=hcloud://\$PROVIDER_ID --node-ip=\$NODE_IP" | tee /etc/default/kubelet

echo "cloud-init: join Kubernetes cluster"
\$KUBERNETES_JOIN_CMD

echo "cloud-init: done"
EOFEOF
  5. Define the HETZNER_NETWORK_ID variable (we identified it earlier)
export HETZNER_NETWORK_ID=<HETZNER_NETWORK_ID>
  6. Define in HCLOUD_SSH_KEY the name of your SSH key in the Hetzner Console (Security section/SSH keys). Note that this is the name of the SSH key you uploaded at the beginning, not the key itself. In this guide it was defined as “hi@yourcompany.com”
export HCLOUD_SSH_KEY=<NAME_OF_YOUR_SSH_KEY_IN_HETZNER_CONSOLE>
  7. Add autoscaler Helm repository
helm repo add autoscaler https://kubernetes.github.io/autoscaler && helm repo update autoscaler && helm search repo autoscaler/cluster-autoscaler
  8. Install the cluster autoscaler and automatically provision a worker node
cat <<EOF | helm upgrade cluster-autoscaler autoscaler/cluster-autoscaler --install --create-namespace -n cluster-autoscaler --version 9.46.6 -f -
cloudProvider: hetzner
autoscalingGroups:
  - name: pool1
    minSize: 1 ## CHANGEME!!
    maxSize: 3 ## CHANGEME!!
    instanceType: CPX41 # Uppercase!
    region: NBG1 # Uppercase!
extraEnv:
  HCLOUD_TOKEN: $(kubectl get secret hcloud -n kube-system -o jsonpath='{.data.token}' | base64 -d)
  HCLOUD_CLOUD_INIT: $HCLOUD_CLOUD_INIT
  HCLOUD_NETWORK: "$HETZNER_NETWORK_ID"
  HCLOUD_SSH_KEY: "$HCLOUD_SSH_KEY"
  HCLOUD_PUBLIC_IPV4: "false"
  HCLOUD_PUBLIC_IPV6: "false"
  HCLOUD_IMAGE: "ubuntu-24.04"
extraArgs:
  scale-down-enabled: true
  enforce-node-group-min-size: true
EOF

Once this configuration is deployed, an additional Kubernetes worker node will be provisioned to meet the specified minSize: 1 requirement for pool1.

kubectl get nodes
NAME                     STATUS   ROLES           AGE     VERSION
k8s-main-1               Ready    control-plane   117m    v1.33.1
pool1-36cea4ff75252677   Ready    <none>          4m33s   v1.33.1

If you want to change the pool configuration, simply make changes to the command in step 8 and run the deployment again.
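A rough way to see the autoscaler in action is to request more CPU than the current nodes can provide and watch a new pool1 node appear (a sketch; the replica count and CPU request are arbitrary):

kubectl create deployment scale-test --image=nginx --replicas=6
kubectl set resources deployment scale-test --requests=cpu=2
# Within a few minutes a new pool1 node should join the cluster
kubectl get nodes -w
# Clean up; scale-down happens after the autoscaler's idle timeout
kubectl delete deployment scale-test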

Install Public and Internal Ingress controllers

  1. Add Ingress NGINX Helm repository
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx && helm repo update ingress-nginx && helm search repo ingress-nginx
  2. Install Public Ingress NGINX controller
cat <<EOF | helm upgrade ingress-nginx-public ingress-nginx/ingress-nginx --install --create-namespace -n ingress-nginx-public --version 4.12.2 -f -
controller:
  electionID: ingress-public-controller
  #kind: DaemonSet
  #dnsPolicy: ClusterFirstWithHostNet
  #hostNetwork: true
  service:
    annotations:
      load-balancer.hetzner.cloud/name: "public-lb"
      load-balancer.hetzner.cloud/location: "nbg1"
      load-balancer.hetzner.cloud/type: "lb11"
      load-balancer.hetzner.cloud/ipv6-disabled: "true"
      load-balancer.hetzner.cloud/use-private-ip: "true"
      # load-balancer.hetzner.cloud/protocol: "https"
      # load-balancer.hetzner.cloud/http-redirect-http: "true"
    enableHttp: true
    #targetPorts:
    #  https: http
EOF
  3. Install Internal Ingress NGINX controller
cat <<EOF | helm upgrade ingress-nginx-internal ingress-nginx/ingress-nginx --install --create-namespace -n ingress-nginx-internal --version 4.12.2 -f -
controller:
  electionID: ingress-internal-controller
  #dnsPolicy: ClusterFirstWithHostNet
  #hostNetwork: true
  #kind: DaemonSet
  ingressClass: internal-nginx
  ingressClassResource:
    name: internal-nginx
    enabled: true
    default: false
    controllerValue: "k8s.io/internal-ingress-nginx"
  service:
    annotations:
      load-balancer.hetzner.cloud/name: "internal-lb"
      load-balancer.hetzner.cloud/location: "nbg1"
      load-balancer.hetzner.cloud/type: "lb11"
      load-balancer.hetzner.cloud/ipv6-disabled: "true"
      load-balancer.hetzner.cloud/use-private-ip: "true"
      load-balancer.hetzner.cloud/disable-public-network: "true"
      # load-balancer.hetzner.cloud/protocol: "https"
      # load-balancer.hetzner.cloud/http-redirect-http: "true"
    enableHttp: true
    #targetPorts:
    #  https: http
EOF

More about multiple Ingress controllers can be found in Ingress-nginx documentation.

  4. Check that the Hetzner Cloud Public and Internal Load Balancers became healthy.
  5. Note the PUBLIC_LB_PUBLIC_IP address of the Public Load Balancer (one way to find it is shown below).
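One way to find PUBLIC_LB_PUBLIC_IP without the Console is to read the EXTERNAL-IP of the public ingress controller's Service (a sketch; the Service name follows the Helm release name, so verify it in the output):

kubectl get svc -n ingress-nginx-public
# The LoadBalancer Service (e.g. ingress-nginx-public-controller) shows the public IP under EXTERNAL-IP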

Deploy example application externally (without TLS)

  1. Add example application Helm repository
helm repo add bitnami https://charts.bitnami.com/bitnami && helm repo update bitnami && helm search repo bitnami/nginx
  2. Install example application
cat <<EOF | helm upgrade helloworld bitnami/nginx --install --create-namespace -n helloworld --version 20.0.3 -f -
ingress:
  enabled: true
  ingressClassName: nginx
  hostname: hello.yourcompany.com
EOF
  3. Check that the example application is available at PUBLIC_LB_PUBLIC_IP; you should see the “Welcome to nginx!” HTML page.
curl http://hello.yourcompany.com --connect-to hello.yourcompany.com:80:<PUBLIC_LB_PUBLIC_IP>

Install cert-manager

cert-manager allows you to automatically obtain free, valid TLS certificates from Let’s Encrypt.

  1. Add cert-manager Helm repository
helm repo add jetstack https://charts.jetstack.io --force-update && helm repo update jetstack && helm search repo jetstack/cert-manager
  2. Install cert-manager
cat <<EOF | helm upgrade cert-manager jetstack/cert-manager --install --create-namespace -n cert-manager --version v1.17.2 -f -
crds:
  enabled: true
EOF

Configure DNS in Cloudflare

To obtain valid TLS certificates for private services hosted internally (in the 10.0.0.0/24 network) and published via the Internal Load Balancer, we have to use the Let’s Encrypt DNS-01 challenge for certificate validation. This challenge asks you to prove that you control the DNS for your domain name by putting a specific value in a TXT record under that domain name. The most efficient and automated way to leverage the DNS-01 challenge is to use API-based DNS providers. Cert-manager supports various API-driven DNS providers, and in this guide we will use Cloudflare DNS.

  1. Create Cloudflare account and add your domain yourcompany.com to it.
  2. Configure Cloudflare DNS for the domain yourcompany.com and create an A record for hello.yourcompany.com pointing to the external IP address of the Public Load Balancer (public-lb).
Type: A
Name: hello
IPv4 address: <PUBLIC_LB_PUBLIC_IP>
Proxy status: Proxied
TTL: Auto
  3. Create an A record for *.int.yourcompany.com pointing to the internal(!) IP address of the Internal Load Balancer (internal-lb).
Type: A
Name: *.int
IPv4 address: 10.0.0.4
Proxy status: DNS only
TTL: Auto


  4. Create an API token in Cloudflare with permissions to manage DNS records for the domain yourcompany.com.
  5. Create a secret in Kubernetes with the Cloudflare API token:
kubectl -n cert-manager create secret generic cloudflare-dns --from-literal=api-token=<YOUR_CLOUDFLARE_API_TOKEN>
  6. Create a ClusterIssuer for Let’s Encrypt using Cloudflare DNS:
cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-cloudflare
spec:
  acme:
    email: hi@yourcompany.com # CHANGEME!
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-cloudflare
    solvers:
      - dns01:
          cloudflare:
            email: hi@yourcompany.com # CHANGEME!
            apiTokenSecretRef:
              name: cloudflare-dns
              key: api-token
EOF
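Before moving on, it is worth checking that the ClusterIssuer has registered with Let’s Encrypt (a sketch):

kubectl get clusterissuer letsencrypt-cloudflare
# READY should be True; if not, describe it to inspect the ACME registration status
kubectl describe clusterissuer letsencrypt-cloudflare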

Redeploy public example application with TLS configuration

  1. To enable a TLS certificate for the example application, we need to update the Helm chart values to use the ClusterIssuer we just created.
cat <<EOF | helm upgrade helloworld bitnami/nginx --install --create-namespace -n helloworld --version 20.0.3 -f -
ingress:
  enabled: true
  ingressClassName: nginx
  hostname: hello.yourcompany.com
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-cloudflare
  tls: true
  selfSigned: false
EOF
  2. Check the cert-manager logs
kubectl get pods -n cert-manager
cert-manager-xxxxx-xyzum               1/1   Running   0   31m
cert-manager-cainjector-xxxxxx-xzp7f   1/1   Running   0   4h35m
cert-manager-webhook-xxxxxx-78tvv      1/1   Running   0   4h35m

kubectl logs -f cert-manager-xxxxx-xyzum -n cert-manager
19:19:06.145704 1 acme.go:236] "certificate issued" logger="cert-manager.controller.sign" resource_name="hello.yourcompany.com-tls-1" resource_namespace="helloworld" resource_kind="CertificateRequest" resource_version="v1" related_resource_name="hello.yourcompany.com-tls-1-2578369879" related_resource_namespace="helloworld" related_resource_kind="Order" related_resource_version="v1"
  3. Check that the example application is available with a valid TLS certificate
curl https://hello.yourcompany.com

Deploy example application with TLS configuration internally

  1. Deploy example application with ingressClassName: internal-nginx
cat <<EOF | helm upgrade helloworld-internal bitnami/nginx --install --create-namespace -n helloworld-internal --version 20.0.3 -f -
ingress:
  enabled: true
  ingressClassName: internal-nginx
  hostname: hello.int.yourcompany.com
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-cloudflare
  tls: true
  selfSigned: false
EOF
  2. Check the cert-manager logs
kubectl get pods -n cert-manager
cert-manager-xxxxx-xyzum               1/1   Running   0   31m
cert-manager-cainjector-xxxxxx-xzp7f   1/1   Running   0   4h35m
cert-manager-webhook-xxxxxx-78tvv      1/1   Running   0   4h35m

kubectl logs -f cert-manager-xxxxx-xyzum -n cert-manager
19:19:06.145704 1 acme.go:236] "certificate issued" logger="cert-manager.controller.sign" resource_name="hello.int.yourcompany.com-tls-1" resource_namespace="helloworld" resource_kind="CertificateRequest" resource_version="v1" related_resource_name="hello.int.yourcompany.com-tls-1-2578369879" related_resource_namespace="helloworld" related_resource_kind="Order" related_resource_version="v1"
  3. Check that the private example application is available with a valid TLS certificate
curl https://hello.int.yourcompany.com

Install Gitea internally

  1. Add Gitea Helm repository
helm repo add gitea-charts https://dl.gitea.io/charts && helm repo update gitea-charts && helm search repo gitea-charts
  2. Install Gitea
cat <<EOF | helm upgrade gitea gitea-charts/gitea --install --create-namespace -n gitea --version 12.0.0 -f -
gitea:
  config:
    APP_NAME: "Gitea"
    repository:
      ROOT: "~/gitea-repositories"
    repository.pull-request:
      WORK_IN_PROGRESS_PREFIXES: "WIP:,[WIP]:"
ingress:
  enabled: true
  className: internal-nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-cloudflare
  hosts:
    - host: git.int.yourcompany.com ## CHANGEME!
      paths:
        - path: /
  tls:
    - hosts:
        - git.int.yourcompany.com ## CHANGEME!
      secretName: git.int.yourcompany.com-tls ## CHANGEME!
persistence:
  enabled: true
  # https://github.com/hetznercloud/csi-driver/blob/main/docs/kubernetes/README.md#getting-started
  storageClass: hcloud-volumes
  accessModes:
    - ReadWriteOnce
postgresql:
  enabled: true
postgresql-ha:
  enabled: false
memcached:
  enabled: false
EOF
  3. Log in to Gitea, create a new account, upload your SSH public key, and create a new repository (a first push is sketched below).
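A first push over HTTPS could look roughly like this (a sketch; the user and repository names are placeholders for what you created in the Gitea UI):

git clone https://git.int.yourcompany.com/<your-user>/<your-repo>.git
cd <your-repo>
echo "# hello" > README.md
git add README.md
git commit -m "initial commit"
git push origin main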

Install Gitea runner internally

Unfortunately, a Helm chart for the Gitea Actions runner has not been released yet, so we will deploy the runner from a custom manifest.

  1. Get a registration token for Gitea.
  2. Define it as the GITEA_ACTIONS_TOKEN variable:
export GITEA_ACTIONS_TOKEN=$(echo <registration token from gitea UI> | base64)
  3. Deploy Gitea runner
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Namespace
metadata:
  name: gitea-runner
  labels:
    name: development
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: act-runner-vol
  namespace: gitea-runner
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: hcloud-volumes
---
apiVersion: v1
kind: Secret
metadata:
  name: runner-secret
  namespace: gitea-runner
type: Opaque
data:
  # The registration token can be obtained from the web UI, API or command-line.
  token: $GITEA_ACTIONS_TOKEN
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: act-runner
  namespace: gitea-runner
  labels:
    app: act-runner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: act-runner
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: act-runner
    spec:
      restartPolicy: Always
      volumes:
        - name: runner-data
          persistentVolumeClaim:
            claimName: act-runner-vol
      securityContext:
        fsGroup: 1000
      containers:
        - name: runner
          image: gitea/act_runner:nightly-dind-rootless
          imagePullPolicy: Always
          # command: ["sh", "-c", "while ! nc -z localhost 2376 </dev/null; do echo 'waiting for docker daemon...'; sleep 5; done; /sbin/tini -- /opt/act/run.sh"]
          env:
            - name: DOCKER_HOST
              value: tcp://localhost:2376
            - name: DOCKER_CERT_PATH
              value: /certs/client
            - name: DOCKER_TLS_VERIFY
              value: "1"
            - name: GITEA_INSTANCE_URL
              value: https://git.int.yourcompany.com ## CHANGEME!
            - name: GITEA_RUNNER_REGISTRATION_TOKEN
              valueFrom:
                secretKeyRef:
                  name: runner-secret
                  key: token
          securityContext:
            privileged: true
          volumeMounts:
            - name: runner-data
              mountPath: /data
EOF
  4. Check that the Gitea runner is registered. A minimal test workflow is sketched below.
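Once the runner is registered (and Actions is enabled for the repository in its settings, if needed), a small workflow can be committed to verify CI runs end to end. This is a sketch: the file path follows the Gitea Actions convention, and the ubuntu-latest label is assumed to be provided by the default act_runner label mapping:

mkdir -p .gitea/workflows
cat > .gitea/workflows/demo.yaml <<'EOF'
name: demo
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: echo "Hello from Gitea Actions"
EOF
git add .gitea && git commit -m "add demo workflow" && git push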

Install ArgoCD internally

  1. Add ArgoCD Helm repository
helm repo add argo https://argoproj.github.io/argo-helm && helm repo update argo && helm search repo argo/argo-cd
  2. Install ArgoCD
cat <<EOF | helm upgrade argocd argo/argo-cd --install --create-namespace -n argocd --version 8.0.9 -f -
global:
  domain: argocd.int.yourcompany.com ## CHANGEME!
configs:
  params:
    server.insecure: true
server:
  ingress:
    enabled: true
    ingressClassName: internal-nginx
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-cloudflare
      nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
    extraTls:
      - hosts:
          - argocd.int.yourcompany.com ## CHANGEME!
        # Based on the ingress controller used secret might be optional
        secretName: argocd.int.yourcompany.com-tls
EOF
  3. Check that ArgoCD is available at https://argocd.int.yourcompany.com
  4. Get admin password
kubectl get secret argocd-initial-admin-secret -n argocd -o jsonpath="{.data.password}" | base64 -d; echo
  5. Log in to ArgoCD using the username admin and the password from the previous step. An example Application manifest is sketched below.
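To tie ArgoCD to the Gitea repository, an Application manifest along these lines can be applied (a sketch; the repository URL, path, and target namespace are placeholders, and a private repository additionally needs credentials configured in ArgoCD):

cat <<EOF | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: helloworld-gitops
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.int.yourcompany.com/<your-user>/<your-repo>.git
    targetRevision: main
    path: manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: helloworld-gitops
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
EOF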

Install Kubernetes Dashboard internally

  1. Add the Kubernetes Dashboard Helm repository
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/ && helm repo update kubernetes-dashboard && helm search repo kubernetes-dashboard/kubernetes-dashboard
  2. Install Kubernetes Dashboard
cat <<EOF | helm upgrade kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --install --create-namespace -n kubernetes-dashboard --version 7.12.0 -f -
app:
  ingress:
    enabled: true
    hosts:
      - k8s-dashboard.int.yourcompany.com ## CHANGEME!
    ingressClassName: internal-nginx
    issuer:
      name: letsencrypt-cloudflare
      scope: cluster
EOF
  3. Generate bearer token
kubectl create serviceaccount k8s-dashboard
kubectl create token k8s-dashboard
  4. Use the token to log in to the Kubernetes Dashboard at https://k8s-dashboard.int.yourcompany.com (see the RBAC note below).
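Note that the token above authenticates, but the service account has no permissions by default, so the dashboard will show almost nothing. For a private development cluster you could grant it broad rights (a sketch; cluster-admin is deliberately permissive, scope it down for anything shared):

# The service account was created in the default namespace above
kubectl create clusterrolebinding k8s-dashboard-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=default:k8s-dashboard
# Generate a fresh token after adding the binding
kubectl create token k8s-dashboard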

Conclusion

This guide illustrates how to deploy a scalable, cost-effective development environment for cloud-native, Kubernetes-based workloads using Hetzner Cloud services. While advanced infrastructure provisioning tools like Terraform are not used here, this approach offers an approachable introduction to configuring Hetzner Cloud. Future guides can cover additional topics, including hybrid environments with cloud and dedicated servers, enhanced secret management, implementing single sign-on (SSO) across all services, etc.
