How I Turned a Living Room Computer Into a Production Blog — With Kubernetes
Self-hosting used to mean a $5 VPS and crossed fingers. This setup is different.
There is a machine sitting in my lounge that would be unremarkable in any other context — a personal computer, humming quietly, connected to a domestic internet router. But traffic arriving at jawadtahir.de does not end up in some gleaming data center in Frankfurt or Virginia. It ends at that machine. Every page load, every newsletter, every ActivityPub handshake from a Mastodon server somewhere — all of it terminates in a living room.
What makes this more than a hobbyist experiment is the layer of engineering beneath it. This is not a blog running on a shared host. It is running inside Kubernetes, behind a production-grade TLS-terminating gateway, with federated social capabilities, real-time analytics, and a persistent storage strategy designed to survive a pod restart without losing a single image upload.
This is a story about what it looks like when someone decides not to pay a hosting company — and does it properly.
The Machine at the Edge
The foundation is MicroK8s, a lightweight Kubernetes distribution built by Canonical that installs on a single Ubuntu machine in minutes. MicroK8s provides a fully conformant Kubernetes API — the same primitives that power deployments at massive scale — on hardware that draws less power than a light bulb.
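Getting to that point takes only a couple of commands; a minimal sketch on a fresh Ubuntu install (the DNS add-on name follows the MicroK8s documentation):

```shell
# Install MicroK8s from Canonical's snap store
sudo snap install microk8s --classic

# Enable in-cluster DNS and wait for the node to come up
microk8s enable dns
microk8s status --wait-ready
```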
The cluster runs one node. One machine, which means every pod, every service, every volume lands on the same host. For a personal blog this is entirely acceptable. The constraints become design decisions rather than failures.
The first problem any home Kubernetes operator faces is the same one that makes bare-metal Kubernetes awkward in general: there is no cloud provider to hand out IP addresses. In a managed cluster on AWS or GCP, a LoadBalancer service gets a real IP automatically. On bare metal, that request sits unanswered.
The answer here is MetalLB, a load balancer implementation for Kubernetes that speaks the same networking primitives as cloud providers — but runs entirely on your own hardware. MetalLB watches for LoadBalancer services and assigns IP addresses from a pool you define. In this setup, that pool is a single IP on the local network. The Envoy Gateway picks it up, and suddenly the cluster has a real, addressable front door.
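In MetalLB's layer-2 mode, the pool and its advertisement are two small resources. A sketch, with a hypothetical single-address pool (the actual LAN IP will differ):

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: gateway-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240/32   # one reserved LAN address; illustrative value
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: gateway-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - gateway-pool
```

With this in place, any `LoadBalancer` service in the cluster receives the pool's address instead of sitting in `Pending` forever.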
The Router in the Middle
The gap between a LAN IP and the public internet is bridged the old-fashioned way: port forwarding.
The router is configured to forward inbound traffic on ports 80 and 443 to the MetalLB IP. This is the only piece of the architecture that lives outside Kubernetes, and it is deliberately thin — a static NAT rule that requires no maintenance once it is set.
The notable design consequence is that the cluster's external threat surface is limited to those two ports. Nothing else is reachable from the internet by default.
The Gateway Layer
Inbound HTTP and HTTPS traffic is handled by Envoy Gateway, an implementation of the Kubernetes Gateway API — a modern, more expressive successor to the older Ingress resource. Where Ingress was a single flat list of routing rules, the Gateway API separates concerns cleanly: a Gateway resource defines the network listener and TLS configuration, and HTTPRoute resources define the routing logic independently.
This separation matters architecturally. TLS termination happens once, at the Gateway, using a wildcard certificate issued by Let's Encrypt via cert-manager and validated through DNS-01 challenges against an AWS Route 53 hosted zone. No traffic enters the cluster unencrypted.
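The listener side of that arrangement looks roughly like this — resource names (`ghost-gateway`, `wildcard-tls`, `letsencrypt-dns01`) are illustrative, and the ClusterIssuer referenced here would carry the Route 53 DNS-01 solver configuration:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: ghost-gateway
spec:
  gatewayClassName: envoy-gateway
  listeners:
    - name: https
      port: 443
      protocol: HTTPS
      hostname: "*.jawadtahir.de"
      tls:
        mode: Terminate
        certificateRefs:
          - name: wildcard-tls   # Secret kept current by cert-manager
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: wildcard-tls
spec:
  secretName: wildcard-tls
  dnsNames:
    - jawadtahir.de        # apex covered alongside the wildcard
    - "*.jawadtahir.de"
  issuerRef:
    name: letsencrypt-dns01
    kind: ClusterIssuer
```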
The routing layer handles several distinct behaviors simultaneously:
- jawadtahir.de serves the public Ghost blog
- admin.jawadtahir.de routes exclusively to the Ghost admin panel, keeping editorial tooling on a separate hostname
- www.jawadtahir.de and blog.jawadtahir.de issue permanent redirects to the canonical apex domain
- Paths beneath /.ghost/analytics/ are transparently rewritten and proxied to the analytics service
- Paths beneath /.ghost/activitypub/ and the /.well-known/ discovery endpoints are routed to the ActivityPub service
All of this runs as declarative YAML. The gateway does not know or care what software sits behind it.
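Two of the behaviors above, sketched as HTTPRoutes against a hypothetical `ghost-gateway` parent (backend names and ports are illustrative):

```yaml
# Permanent redirect from www to the canonical apex domain
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: www-redirect
spec:
  parentRefs:
    - name: ghost-gateway
  hostnames:
    - www.jawadtahir.de
  rules:
    - filters:
        - type: RequestRedirect
          requestRedirect:
            hostname: jawadtahir.de
            statusCode: 301
---
# Strip the analytics prefix and proxy to the analytics service
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: analytics
spec:
  parentRefs:
    - name: ghost-gateway
  hostnames:
    - jawadtahir.de
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /.ghost/analytics
      filters:
        - type: URLRewrite
          urlRewrite:
            path:
              type: ReplacePrefixMatch
              replacePrefixMatch: /
      backendRefs:
        - name: traffic-analytics
          port: 3000
```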
The Application Stack
Behind the gateway, four services run as Kubernetes workloads:
Ghost is the blog engine — an open-source Node.js publishing platform with a clean editor, member subscriptions, and a rich API. It runs as a single-replica Deployment pinned to the storage node via a node affinity rule. Ghost reads its configuration entirely from Kubernetes ConfigMaps and Secrets injected as environment variables, following the twelve-factor model. No config files are baked into the container image.
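A trimmed sketch of that Deployment — node name, resource names, and image tag are assumptions, but the shape (single replica, node affinity, `envFrom` for ConfigMap and Secret) matches the description:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ghost
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ghost
  template:
    metadata:
      labels:
        app: ghost
    spec:
      affinity:
        nodeAffinity:   # pin to the node that holds the hostPath volumes
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: In
                    values:
                      - home-node   # hypothetical node name
      containers:
        - name: ghost
          image: ghost:5
          envFrom:   # all configuration arrives as environment variables
            - configMapRef:
                name: ghost-config
            - secretRef:
                name: ghost-secrets
```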
MySQL 8 runs as a StatefulSet rather than a plain Deployment. The distinction is meaningful: a StatefulSet guarantees stable network identity and ordered pod lifecycle, which a database expects. A custom init script creates multiple databases on first startup — one for Ghost, one for ActivityPub — using a ConfigMap-mounted shell script that MySQL's entrypoint executes automatically.
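The init mechanism relies on a convention of the official MySQL image: anything mounted into `/docker-entrypoint-initdb.d` runs once, on first startup, against the fresh data directory. A sketch with illustrative database names:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-init
data:
  create-databases.sh: |
    #!/bin/bash
    set -e
    # Executed by the MySQL entrypoint only when the data dir is empty
    mysql -u root -p"${MYSQL_ROOT_PASSWORD}" -e \
      "CREATE DATABASE IF NOT EXISTS ghost;
       CREATE DATABASE IF NOT EXISTS activitypub;"
```

Mounted as a volume at `/docker-entrypoint-initdb.d` in the StatefulSet's pod template, the script needs no operator intervention and is a no-op on every subsequent restart.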
ActivityPub is the component that makes the blog a citizen of the fediverse. Based on Ghost's official ActivityPub service, it allows the blog to send and receive activities in the ActivityPub protocol — meaning a post published at jawadtahir.de can be followed from Mastodon, Pixelfed, or any other federated platform. The service manages its own database schema via a dedicated migration init container that runs to completion before the main container starts. Media uploaded via ActivityPub is stored in a shared volume alongside Ghost's own content.
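The ordering guarantee comes from Kubernetes itself: init containers always run, in sequence, to successful completion before the main containers start. A pod-template fragment sketching the pattern — the image name and migration command here are assumptions, not the service's actual entrypoints:

```yaml
spec:
  initContainers:
    - name: migrate
      image: ghost/activitypub:latest        # assumed image name
      command: ["node", "migrate.js"]        # hypothetical migration entrypoint
      envFrom:
        - secretRef:
            name: activitypub-secrets
  containers:
    - name: activitypub
      image: ghost/activitypub:latest
      ports:
        - containerPort: 8080                # illustrative port
```

If the migration exits non-zero, the pod restarts and retries; the main container never serves traffic against an unmigrated schema.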
Traffic Analytics is a lightweight proxy service — ghost/traffic-analytics — that receives page-hit events from the Ghost tracker script, hashes visitor IPs for privacy, and forwards anonymized events to Tinybird, a real-time analytics data platform. Ghost's admin dashboard surfaces these analytics natively. The proxy sits between the browser and the Tinybird API so that no raw visitor data is ever sent directly from a reader's browser to a third party.
Persistence Without a Cloud Block Store
Storing data on a Kubernetes cluster without a cloud provider requires deliberate choices. The solution here is hostPath persistent volumes — a Kubernetes volume type that maps a directory on the host machine's filesystem directly into a pod.
Two volumes are provisioned: one for MySQL data and the other for Ghost content — themes, images, and media uploads. Both are declared with a Retain reclaim policy, meaning that even if the PersistentVolumeClaim is deleted, the underlying data remains on disk until explicitly removed.
A custom StorageClass named ghost-hostpath ties the volumes together without using any dynamic provisioner. PersistentVolumes and PersistentVolumeClaims bind by name, making the relationship explicit and immune to unexpected rebinding.
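The whole static-binding pattern fits in three small resources — class, volume, claim. Capacity, paths, and names below are illustrative:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ghost-hostpath
provisioner: kubernetes.io/no-provisioner   # no dynamic provisioning
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ghost-content-pv
spec:
  storageClassName: ghost-hostpath
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain     # data survives PVC deletion
  hostPath:
    path: /srv/ghost/content                # hypothetical host directory
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ghost-content
spec:
  storageClassName: ghost-hostpath
  volumeName: ghost-content-pv              # explicit binding by name
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
```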
The operational trade-off is well understood: hostPath volumes are tied to a single node. This is acceptable for a single-node cluster, and the node affinity rules on every storage-dependent workload make the constraint explicit rather than implicit.
Secrets Management
No credentials, tokens, or passwords live in any file tracked by version control. Application secrets — database passwords, mail provider credentials, Tinybird API tokens, AWS access keys — are created imperatively in the cluster via a shell script that reads from a local, untracked environment file. A checked-in template documents every required variable without containing any actual values.
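That imperative step is a short script along these lines — file names, secret names, and the particular keys are illustrative, though the double-underscore variable names follow Ghost's environment-variable configuration convention:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Untracked locally; a checked-in template documents every required key
source ./secrets.env

# Create or update the secret idempotently
kubectl create secret generic ghost-secrets \
  --from-literal=database__connection__password="${DB_PASSWORD}" \
  --from-literal=mail__options__auth__pass="${MAIL_PASSWORD}" \
  --dry-run=client -o yaml | kubectl apply -f -
```

The `--dry-run=client -o yaml | kubectl apply` pipeline makes the script safe to re-run: it updates the secret in place rather than failing because it already exists.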
The result is a clean separation: the repository contains the complete structural definition of the infrastructure, while the secret material lives only in the cluster's etcd and on the operator's local machine.
What This Proves
The architecture described here is not exotic. Every component — cert-manager, Envoy Gateway, MetalLB, Ghost, ActivityPub — is open source, well-documented, and free of licensing costs. The total cloud spend is a Route 53 hosted zone at fifty-three cents per month, and the occasional DNS query charge.
What it demonstrates is that the primitives of production infrastructure — TLS automation, declarative configuration, health probes, rolling restarts, secrets management — are not the exclusive province of cloud providers. They are available to anyone willing to run a cluster in their home, configure a port forward on their router, and write a few hundred lines of YAML.
The blog at jawadtahir.de is not a proof of concept. It is the production system.