Self-hosted Blog Guide for Engineers
A comprehensive guide to setting up a self-hosted blog on the Ghost platform with MySQL, using Terraform, Ansible, and Kubernetes.
I'd been searching lately for a comprehensive guide on setting up a cheap and easily maintainable personal site or blog and couldn't find a good enough one, so I decided to do it myself and create a step-by-step guide for others to reuse. Welcome to a self-hosted blog development tutorial utilizing a heck of a technology stack: Terraform, Ansible, and Kubernetes! As the first tutorial on Humble Thoughts, it's available to all-tier members, but it serves as an example of exclusive content that will be available in the future only to Exclusive subscription members. You can sign up for a trial period and decide later if you want to continue supporting me.
Yeah, you've got this right; we will be using a stack that might look like a bit of over-engineering for this type of solution, but for a good reason — it's always great to learn and practice something new. Besides, I was impressed by how easy the resulting solution is to maintain. And even though the stack may seem like overkill for the problem, I keep it as simple as possible while staying easy to maintain, ready to scale further, and relatively cheap as well (only around 5€/month according to Hetzner Cloud prices as of June 2023).
I'll guide you through the process of setting up servers (single-node or even a cluster) on the cheapest cloud platform provider (Hetzner) I know, with the use of Terraform, Ansible, and k3s (the Lightweight Kubernetes). The project we will be deploying is a Ghost blog, similar to the one you're reading this tutorial on.
My goal in this tutorial is to guide you through the process of setting up a web service using the mentioned tech stack. I will dive into some of the details, but in general I'll cover only what's necessary to keep the tutorial short and focused on practicalities. You won't see much theory on how Kubernetes or Ansible work.
Having some experience with tools and services such as the CLI terminal, SSH keys, Git, Docker, the Python dependency manager (pip), AWS, and GitLab will help you get results faster, but I will leave notes so you can do your own research on your way to the final setup. Besides, I've prepared a Git repository with all the code snippets you will need for the tutorial.
Here's the plan for this tutorial:
- Setup tools and services
- Design the target solution architecture to better understand the end goal
- Initialize the project on Hetzner and provision infrastructure
- Configure the provisioned server with Ansible setting up K3s
- Configure and deploy blog components using k3s
Let's jump in and get our hands on the tech without further ado!
Prerequisites
The tutorial was created on macOS, so it should be possible to repeat all the steps without a difference on Linux; I can't say how easy it will be to complete on Windows. All the tools are available on Windows, but the way to set them up might differ, so keep that in mind.
I will use several tools and services during the tutorial, so it's better to prepare them in advance so they don't block further steps.
We will use the following services, so make sure you have accounts on all of these:
- Gitlab for storing our infrastructure code and Terraform state. Create an account there and leave it as it is for now.
- Hetzner Cloud is a cheap (probably the most affordable) cloud provider with a decent provider selection for Terraform, Ansible, and K8s. Create an account there and leave it for now. We will get back there when the time to create a new project comes.
- You will need a domain name for this tutorial for the webserver to work with Let's Encrypt. You can skip this if you can reconfigure Nginx Ingress to proxy requests to the blog service by IP instead of the domain name. I recommend using Namecheap, one of the cheapest options, if you need a spare domain name for experiments.
- (optional) AWS can provide you a wide range of cloud services. You might need an SES service to send transactional emails from the blog.
Got them? Well done! We're almost ready to start; let's set up the necessary tools on your machine.
- We will use Terraform for cloud infrastructure provisioning and management, so you will need their CLI tool to apply changes to the infrastructure on Hetzner Cloud right from your machine (CLI terminal). Make sure to install it following the official documentation. I suggest going with the Homebrew installation if you're using macOS.
- The next necessary tool is Ansible. It will help us configure the provisioned servers in an automated way, so we won't need to manually log in to every single machine and set up all the necessary tools on the servers. It saves a lot of time by automating routines such as configuring firewalls and web servers (e.g., nginx), updating apt packages, and much more. Ansible is a Python-based tool, so it requires a Python installation on your local machine; it's all covered in the official installation guide, so please follow that and install Ansible. By the way, it's also available in Homebrew, so if you went for the Homebrew option for Terraform, I suggest doing the same with Ansible.
- There's an optional but excellent addition to Python-driven projects (which ours is, because of Ansible) — the virtualenv toolset. Generally speaking, it's a way to split the global Python dependency workspace into many separate, isolated spaces per project. It allows you to create isolated virtual environments for Python dependencies and ensures they never collide with the global ones. I suggest installing virtualenvwrapper and acquiring new CLI commands such as mkvirtualenv and workon (see the sketch after this list).
- Kubernetes will allow us to manage the cluster (even if it will be a single-node cluster). To use it, you will also need a local CLI tool installation (kubectl, to be precise), which is also available in the official documentation.
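For reference, here is a rough sketch of what the virtualenvwrapper workflow looks like once installed; the installation method and script location depend on your setup, so treat the paths below as assumptions and follow the official docs if they differ:
# install virtualenvwrapper into your global Python (pip shown here; Homebrew also works)
pip install virtualenvwrapper
# load its shell functions; the script location varies between installations
source "$(which virtualenvwrapper.sh)"
# create an isolated environment and switch to it
mkvirtualenv k3s-tutorial
workon k3s-tutorial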
That's it! Now we're ready to move forward and experiment with the cloud!
Action!
Solution Architecture
Let's start from the point where every software project should begin — solution architecture. Every solution architecture starts with requirements, so let's keep ours simple for a smooth start. For now, we need a single-node server on Hetzner Cloud running a k3s server and services such as the Ghost blog and a MySQL database for it, plus a persistent volume from Hetzner to make sure our data is stored durably and won't be gone if the Ghost or MySQL pod is replaced. We also want to allow only specific ports: 80/443 for HTTP(S), 6443 for the K8s API server, and 22 for SSH.
Here is a diagram of what such a solution could look like:
Let's briefly go through the main components of the solution. The big red box represents Hetzner Cloud, and the orange blocks represent particular Hetzner services (Firewall, Server, Volume). The Server is the most exciting part for us because it contains the main high-level logical aspects of the system — a webserver (nginx), the Ghost blog, and its database. We will primarily focus on the green components in our configuration, but we'll also touch the K8s API server just a little bit. We will set up an automated Let's Encrypt certificate issuer to provide an SSL certificate for our service. We will also need a K8s Ingress service to route all the incoming requests and make sure the HTTP(S) requests are proxied correctly to the right services — the certificate issuer and the Ghost app. You can think of Ingress as a routing service with a compelling set of configuration tools, some of which we will save for the future. For now, it will only help us expose the mentioned services and apps.
Let's code
We will start by setting up the project locally and configuring all the necessary connections to the services I listed above. To simplify the process, I've prepared a template repository on GitHub and GitLab containing all the files we will work with during this tutorial. Fork it to a private repository and continue with your copy to keep all the changes you make versioned. Check out the repository locally and open it in your favorite code editor. Note that the repository files contain places marked with TODO comments that you have to change to your own settings; otherwise, it won't work at all.
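A quick way to find all of those places at once (assuming the TODO markers are kept as in the template repository):
grep -rn "TODO" .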
Infrastructure Provisioning
TF + Gitlab
As you already know, we will use Terraform for infrastructure provisioning. We must configure Terraform to communicate with the Hetzner API. But before that, we need to initialize Terraform, pointing its backend storage to the GitLab project you're using for the tutorial; Terraform will use GitLab's storage for the TF state file. To do this, we must provide Terraform with a GitLab API token and the GitLab Project ID.
So, make sure to create a new GitLab API token in the Access Tokens section in the settings, marking API in the Selected scopes section:

Create a .env copy of the .env.example file and save the generated token to the .env file as the value for the GITLAB_TOKEN variable. Also, change GITLAB_USERNAME to your username and make sure the GITLAB_PROJECT variable equals the Project ID of the forked repo.
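Just to illustrate, a filled-in .env might look roughly like this — the exact format follows the .env.example in the template repo, and the values below are placeholders, not real credentials:
# hypothetical example values; replace with your own token, username, and project ID
export GITLAB_TOKEN="glpat-xxxxxxxxxxxxxxxxxxxx"
export GITLAB_USERNAME="yourname"
export GITLAB_PROJECT="12345678"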
These variables are necessary for running the first command, which initializes the Terraform state for our infrastructure. Save the .env file changes and source the file into your CLI terminal session by typing the following command with the repository as the session's working directory:
source .env
This exposes the variables to the session, so they will also be available to further commands. Now we're ready to initialize Terraform using the init command, with the terraform subdirectory as the working directory:
cd terraform
terraform init \
-backend-config="address=https://gitlab.com/api/v4/projects/${GITLAB_PROJECT}/terraform/state/default" \
-backend-config="lock_address=https://gitlab.com/api/v4/projects/${GITLAB_PROJECT}/terraform/state/default/lock" \
-backend-config="unlock_address=https://gitlab.com/api/v4/projects/${GITLAB_PROJECT}/terraform/state/default/lock" \
-backend-config="username=${GITLAB_USERNAME}" \
-backend-config="password=${GITLAB_TOKEN}" \
-backend-config="lock_method=POST" \
-backend-config="unlock_method=DELETE" \
-backend-config="retry_wait_min=5"
The result should be successful and show that Terraform installed the hetznercloud/hcloud provider plugin. This plugin allows Terraform to use the Hetzner Cloud API for infrastructure provisioning. As a result of the command, you will also see a Terraform lock file (.terraform.lock.hcl) and a .terraform folder containing the state and the provider plugin. The state hasn't been uploaded to GitLab yet because it's empty, but it will be as soon as we change the infrastructure state in Hetzner.
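If you want to double-check that the initialization produced those artifacts, a simple listing will do:
ls -la .terraform.lock.hcl .terraform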
TF + Hetzner
Now we're ready to connect Terraform to Hetzner and provision infrastructure for our project. For that, we will need a Hetzner Project.
- Log in to the Hetzner Cloud
- Create a new project
- Go to the Security → API Keys section and generate an API token with Read & Write access:

Copy the token and paste it into the .env file as the value for the HCLOUD_TOKEN variable. Repeat the .env variables sourcing with the source .env command, and now we're ready to provision our Hetzner resources.
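As an optional sanity check, you can verify the token against the public Hetzner Cloud API before letting Terraform use it; an empty server list is the expected answer at this point:
curl -s -H "Authorization: Bearer ${HCLOUD_TOKEN}" \
  https://api.hetzner.cloud/v1/servers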
Now it's time to look into the terraform/main.tf file, go through its content, and make adjustments.
resource "hcloud_server" "server" {
  # ...
  ssh_keys = [hcloud_ssh_key.user-ssh.id]
  labels = {
    "k8s/server" = "true",
    "k8s/agent"  = "true"
  }
  firewall_ids = [hcloud_firewall.base.id, hcloud_firewall.k3s-tf-tutorial-server.id]
}

resource "hcloud_network" "k3s-tf-tutorial" {
  name     = "k3s-tf-tutorial"
  ip_range = "10.29.0.0/16"
}

resource "hcloud_network_subnet" "k3s-tf-tutorial" {
  network_id   = hcloud_network.k3s-tf-tutorial.id
  type         = "cloud"
  network_zone = "eu-central"
  ip_range     = "10.29.0.0/24"
}

resource "hcloud_server_network" "server" {
  server_id  = hcloud_server.server.id
  network_id = hcloud_network.k3s-tf-tutorial.id
}
The code above is responsible for provisioning a Hetzner Cloud network and the first server in this network. The network's purpose is to make it possible to interconnect all the future resources if we need them later when scaling the service to a clustered solution. I'm going to create a part II of this tutorial, showing how to scale the resulting solution horizontally, so I'm not touching it in this post. The hcloud_server resource configuration has a few interesting options; let's go through them:
- ssh_keys is responsible for adding the configured public SSH key to the server so that you can connect to it
- firewall_ids is a set of firewalls applied to the server. It's possible to have only one firewall there, but for the sake of future improvements, we might want to split our firewall rules into a couple of separate ones right away — the basic one (base) and the one allowing access to the Kubernetes API so we can control the cluster with kubectl
- labels is simply a set of tags we put on our resources to mark them for further use. It's wise to use labels that help you differentiate resources from one another. For example, at this point, we create only a single-node k8s cluster, which will serve as both the server and the agent. However, in a cluster with two or more nodes, you'd ideally keep the server and agents separate and configure them differently. In that case, labels would help you automate further configuration with Ansible, ensuring Ansible applies the proper configuration based on the node type.
Make sure to update the SSH key configuration in main.tf, providing your public_key:
# TODO: provide your public SSH key here
resource "hcloud_ssh_key" "user-ssh" {
  name       = "yourname"
  public_key = "ssh-ed25519 AAAAxxxxksdjfweiuwefiw username"
}
This instruction registers the provided public key as an authorized key for the server's root user.
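If you're not sure what your public key looks like, you can print it (or generate a new pair) from the terminal; the path below assumes an ed25519 key in the default location:
# print an existing public key
cat ~/.ssh/id_ed25519.pub
# or generate a fresh key pair first
ssh-keygen -t ed25519 -C "yourname"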
Let's test the Hetzner plugin configuration with terraform plan. This command plans the provisioning and shows the pending changes:
terraform plan
Acquiring state lock. This may take a few moments...
data.hcloud_location.location: Reading...
data.hcloud_location.location: Read complete after 1s [name=fsn1]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
...
Plan: 7 to add, 0 to change, 0 to destroy.
As you can see, Terraform plans to create 7 resources for us according to the main.tf configuration. Let's apply these changes (approve them by typing yes when asked):
terraform apply
....
hcloud_ssh_key.user-ssh: Creating...
hcloud_ssh_key.user-ssh: Creation complete after 0s [id=10764191]
hcloud_server.server: Creating...
hcloud_server.server: Still creating... [10s elapsed]
hcloud_server.server: Creation complete after 11s [id=31007119]
hcloud_server_network.server: Creating...
hcloud_server_network.server: Still creating... [10s elapsed]
hcloud_server_network.server: Creation complete after 10s [id=31007119-2749286]
Apply complete! Resources: 7 added, 0 changed, 0 destroyed.
As a result, we've provisioned the network, the firewall settings, and the server we will further configure and run the blog on. You should be able to see the server and the other resources in the Hetzner Cloud console:

Also, the TF state should now be on GitLab:

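At this point, you can also make a quick manual check that the registered SSH key works, using the server's public IP from the Hetzner console (the IP below is just an example):
ssh -i ~/.ssh/id_ed25519 root@167.235.230.119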
Configure the server
The server we provisioned is a plain Ubuntu server with no custom configuration; we've yet to set it up, and that's what we will use Ansible for.
First of all, let's tune the tooling a bit — this is where virtualenv for Python dependency management comes in handy. You can skip this step if you've decided not to use virtualenv. Otherwise, go to the ansible folder in the repository and create a new virtual environment for the project:
$ cd ../ansible
$ mkvirtualenv k3s-tutorial && workon k3s-tutorial
Now you're ready to install the dependencies to the newly created virtual env. Let's install them:
$ pip install -r requirements.txt
$ ansible-galaxy install -r requirements.yml
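If you want to confirm everything landed in the right place, these commands should show the Ansible version from the virtualenv and the installed Galaxy content:
ansible --version
# depending on what requirements.yml pulls in, one of these will list it
ansible-galaxy collection list
ansible-galaxy role list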
The Ansible inventory configuration is located at ansible/inventory/group_vars/all.yml — make sure to provide valid values for the SSH key settings there (ansible_ssh_private_key_file and root_account__authorized_keys).
You should now have a working set of tools for running further Ansible commands. The first command we will use is ansible-inventory --list, to check whether Ansible has access to our Hetzner resources and reads their labels correctly to form the inventory:
ansible-inventory --list
The output should show you the server meta description, something like this:
{
  "_meta": {
    "hostvars": {
      "server-0": {
        "ansible_host": "167.235.230.119",
        "ansible_ssh_private_key_file": "~/.ssh/id_ed25519",
        "datacenter": "fsn1-dc14",
        "id": "31007119",
        "image_id": "67794396",
        "image_name": "ubuntu-22.04",
        "image_os_flavor": "ubuntu",
        "ipv4": "167.235.230.119",
        "ipv6_network": "2a01:4f8:c012:acf4::",
        "ipv6_network_mask": "64",
        "labels": {
          "k8s/agent": "true",
          "k8s/server": "true"
        },
        "location": "fsn1",
        "name": "server-0",
        "root_account__authorized_keys": [
          "ssh-ed25519 AAAAC3Nza****"
        ],
        "server_type": "cx11",
        "status": "running",
        "type": "cx11"
      }
    }
  },
  "agents": {
    "hosts": [
      "server-0"
    ]
  },
  "all": {
    "children": [
      "ungrouped",
      "hcloud",
      "servers",
      "agents"
    ]
  },
  "hcloud": {
    "hosts": [
      "server-0"
    ]
  },
  "servers": {
    "hosts": [
      "server-0"
    ]
  }
}
We can also ping our server with Ansible using this command to make sure the connection is set up properly:
ansible -m ping all
server-0 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
Great! The connection works, and we're ready to configure the server. As stated, our server will work as a K8s server and agent simultaneously. We will use a lightweight version of Kubernetes for it — k3s. Thus, our Ansible playbook includes roles that configure k3s on the server (the k3s role) and import its config file locally, so you can access cluster resources with kubectl from your local machine (the kubeconfig role). The first part is pretty simple, and you're unlikely to catch a problem there, as it runs on the remote server. The second part, however, depends on the local environment and the presence of local folders for Kubernetes configuration. So, I recommend ensuring you have the configuration folder in advance: ~/.kube/configs. It's okay if it's empty; what's essential is that it exists. After applying the playbook, we should have our configuration located there.
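Creating the folder takes a single command, in case it doesn't exist yet:
mkdir -p ~/.kube/configs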
But first, let's run an Ansible check to see what changes it's going to apply to our server:
ansible-playbook main.yml --check
The command execution will take some time to run all the Ansible role checks. In the end, you should see how many changes will be applied on the actual run, similar to this:
PLAY RECAP **************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=1 skipped=1 rescued=0 ignored=0
server-0 : ok=22 changed=9 unreachable=0 failed=0 skipped=6 rescued=0 ignored=2
Now, let's apply the changes:
ansible-playbook main.yml
It should succeed, and you should get a result similar to this one:
PLAY RECAP ***************************************************************************
localhost : ok=5 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
server-0 : ok=23 changed=10 unreachable=0 failed=0 skipped=5 rescued=0 ignored=1
If it's all good, we can continue with setting up the Kubernetes server and configuring its components, which I describe in a separate article. To ensure everything's fine, check that the kube config file now exists on your local machine (~/.kube/configs/k3s-tutorial, or your filename if you modified it in the kubeconfig role previously).
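As a final sanity check, you can point kubectl at the imported config and make sure the node responds; the filename below assumes the tutorial's default:
export KUBECONFIG=~/.kube/configs/k3s-tutorial
kubectl get nodes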