How to Deploy Ghost CMS on Google Kubernetes Engine (GKE) Using Google Cloud SQL
You want Ghost.
You have GKE.
You have Cloud SQL.
This guide gives you a clean and practical deployment of Ghost CMS on Google Kubernetes Engine (GKE) using:
- Cloud SQL (MySQL) as the database
- Cloud SQL Auth proxy sidecar for secure DB connectivity
- PersistentVolumeClaim for Ghost content
- BackendConfig + health-sidecar to keep the GCP load balancer happy
- Gateway API (HTTPRoute) for routing
Everything below is structured so you can follow it in order.
⚠️ All values inside code blocks are placeholders. Replace them before applying.
⚠️ Do NOT commit secrets to Git.
⚠️ Prefer Workload Identity over JSON keys in production.
Before You Start
Replace these placeholders throughout the YAML:
- YOUR_NAMESPACE
- APP_NAME
- DOMAIN
- GATEWAY_NAME
- BACKENDCONFIG_NAME
- PVC_NAME
- IMAGE_GHOST
- HEALTH_PORT
- GHOST_PORT
- PROJECT_ID
- CLOUDSQL_INSTANCE_CONN
- CLOUDSQL_DB_NAME
- CLOUDSQL_DB_USER
- CLOUDSQL_DB_PASSWORD
- CLOUDSQL_SA_KEY_PATH
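If you keep the manifests as templates, a quick `sed` pass can fill in the placeholders before applying. This is a minimal sketch; the template file and the concrete values (`ghost`, `blog`) are hypothetical examples, not part of this guide:

```shell
# Hypothetical template using two of the placeholders from the list above
cat > /tmp/ghost-template.yaml <<'EOF'
metadata:
  name: APP_NAME
  namespace: YOUR_NAMESPACE
EOF

# Substitute example values (swap in your own)
sed -e 's/APP_NAME/ghost/g' \
    -e 's/YOUR_NAMESPACE/blog/g' \
    /tmp/ghost-template.yaml > /tmp/ghost.yaml

cat /tmp/ghost.yaml
```

For larger sets of values, `envsubst` or a templating tool like Kustomize does the same job more maintainably.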
1. Create Local Secrets (Do Not Commit These)
We first create a local .env.ghost file.
This file must stay local. Do not commit it.
.env.ghost (LOCAL ONLY):

```
database__connection__user=CLOUDSQL_DB_USER
database__connection__password=CLOUDSQL_DB_PASSWORD
url=https://DOMAIN
mail__options__auth__user=EMAIL_USER
mail__options__auth__pass=EMAIL_PASS
```
Now create Kubernetes secrets.
```shell
# Create Cloud SQL service-account secret (LOCAL key file)
kubectl create secret generic cloudsql-instance-credentials \
  --from-file=credentials.json=/path/to/CLOUDSQL_SA_KEY_PATH \
  -n YOUR_NAMESPACE

# Create Ghost environment secret from .env.ghost
kubectl create secret generic ghost-env \
  --from-env-file=.env.ghost \
  -n YOUR_NAMESPACE
```
💡 If you’re using Workload Identity, skip mounting JSON keys and follow GKE Workload Identity best practices instead.
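With Workload Identity, the Pod authenticates as a Google service account through an annotated Kubernetes ServiceAccount, so no key file is mounted at all. A rough sketch of the ServiceAccount side (GSA_NAME is a placeholder I'm introducing here; the IAM binding on the Google side is a separate step):

```yaml
# Hypothetical ServiceAccount for Workload Identity; replace GSA_NAME.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: APP_NAME-ksa
  namespace: YOUR_NAMESPACE
  annotations:
    iam.gke.io/gcp-service-account: GSA_NAME@PROJECT_ID.iam.gserviceaccount.com
```

Reference it from the Deployment via `spec.template.spec.serviceAccountName`, and drop the `-credential_file` flag from the proxy.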
2. Persistent Volume Claim (Ghost Content Storage)
Ghost needs persistent storage for images and content.
Save as ghost-pvc.yaml:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: PVC_NAME
  namespace: YOUR_NAMESPACE
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard-rwo
  resources:
    requests:
      storage: 20Gi
```
Apply it:
```shell
kubectl apply -f ghost-pvc.yaml
```
3. BackendConfig (Load Balancer Health Check)
GKE needs a stable health endpoint.
Because Ghost redirects HTTP → HTTPS, health checks can fail.
We fix this using a health-sidecar and point BackendConfig to it.
Save as backendconfig.yaml:
```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: BACKENDCONFIG_NAME
  namespace: YOUR_NAMESPACE
spec:
  healthCheck:
    type: HTTP
    port: HEALTH_PORT
    requestPath: /
    checkIntervalSec: 10
    timeoutSec: 5
    healthyThreshold: 1
    unhealthyThreshold: 3
```
Apply:
```shell
kubectl apply -f backendconfig.yaml
```
4. Deployment (Ghost + Proxy + Health Sidecar + InitContainer)
This is the heart of the setup.
Each Pod runs three long-running containers plus one initContainer:
- Ghost
- Cloud SQL Auth Proxy
- health-sidecar
- An initContainer (runs once before the others to fix file permissions on the PVC)
Save as deployment.yaml:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: APP_NAME
  namespace: YOUR_NAMESPACE
spec:
  replicas: 2
  selector:
    matchLabels:
      app: APP_NAME
  template:
    metadata:
      labels:
        app: APP_NAME
    spec:
      # initContainers: run one-off pre-start tasks (e.g., set PVC ownership)
      initContainers:
        - name: init-chown
          image: busybox
          command: ["sh", "-c", "chown -R 1000:1000 /var/lib/ghost/content"]
          volumeMounts:
            - name: content
              mountPath: /var/lib/ghost/content
      containers:
        - name: ghost
          image: IMAGE_GHOST
          ports:
            - containerPort: GHOST_PORT
              name: http
          env:
            - name: database__client
              value: "mysql"
            - name: database__connection__host
              value: "127.0.0.1"
            - name: database__connection__port
              value: "3306"
            - name: database__connection__database
              value: "CLOUDSQL_DB_NAME"
          envFrom:
            - secretRef:
                name: ghost-env
          volumeMounts:
            - name: content
              mountPath: /var/lib/ghost/content
          readinessProbe:
            tcpSocket:
              port: GHOST_PORT
            initialDelaySeconds: 10
            periodSeconds: 10
        # Only the proxy needs the SA key; don't mount it into Ghost
        - name: cloud-sql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.33.1
          args:
            - "-instances=CLOUDSQL_INSTANCE_CONN=tcp:3306"
            - "-credential_file=/secrets/cloudsql/credentials.json"
          volumeMounts:
            - name: cloudsql-creds
              mountPath: /secrets/cloudsql
              readOnly: true
        # Plain HTTP 200 responder for the load balancer health check
        - name: health-sidecar
          image: hashicorp/http-echo:0.2.3
          args: ["-text=ok", "-listen=:HEALTH_PORT"]
          ports:
            - containerPort: HEALTH_PORT
              name: health
      volumes:
        - name: content
          persistentVolumeClaim:
            claimName: PVC_NAME
        - name: cloudsql-creds
          secret:
            secretName: cloudsql-instance-credentials
```
Apply:
```shell
kubectl apply -f deployment.yaml
```
5. Service (NEG + BackendConfig)
Save as service.yaml:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: APP_NAME-svc
  namespace: YOUR_NAMESPACE
  annotations:
    cloud.google.com/backend-config: '{"default":"BACKENDCONFIG_NAME"}'
    # exposed_ports keys are SERVICE ports (80 and HEALTH_PORT), not targetPorts
    cloud.google.com/neg: '{"exposed_ports":{"80":{},"HEALTH_PORT":{}}}'
spec:
  selector:
    app: APP_NAME
  ports:
    - name: http
      port: 80
      targetPort: GHOST_PORT
    - name: health
      port: HEALTH_PORT
      targetPort: HEALTH_PORT
  type: ClusterIP
```
Apply:
```shell
kubectl apply -f service.yaml
```
6. HTTPRoute (Gateway API)
Save as httproute.yaml:
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: APP_NAME-route
  namespace: YOUR_NAMESPACE
spec:
  parentRefs:
    - name: GATEWAY_NAME
      kind: Gateway
  hostnames:
    - "DOMAIN"
  rules:
    - backendRefs:
        - name: APP_NAME-svc
          port: 80
```
Apply:
```shell
kubectl apply -f httproute.yaml
```
If using Ingress instead of Gateway API, create an Ingress mapping host → APP_NAME-svc.
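For the Ingress route, the equivalent manifest looks roughly like this. A sketch only; verify it against your cluster's Ingress controller before relying on it:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: APP_NAME-ingress
  namespace: YOUR_NAMESPACE
spec:
  rules:
    - host: DOMAIN
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: APP_NAME-svc
                port:
                  number: 80
```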
7. Recommended Deploy Order
```shell
# 1. Create secrets (done earlier)

# 2. PVC
kubectl apply -f ghost-pvc.yaml

# 3. BackendConfig
kubectl apply -f backendconfig.yaml

# 4. Deployment
kubectl apply -f deployment.yaml

# 5. Service
kubectl apply -f service.yaml

# 6. HTTPRoute / Ingress
kubectl apply -f httproute.yaml

# 7. Wait for rollout
kubectl rollout status deployment/APP_NAME -n YOUR_NAMESPACE
```
8. Quick Verification Commands
Health Check
```shell
kubectl run --rm -it --image=curlimages/curl curltest --restart=Never -n YOUR_NAMESPACE -- \
  curl -v http://APP_NAME-svc.YOUR_NAMESPACE.svc.cluster.local:HEALTH_PORT/
```
Internal Ghost Test
```shell
kubectl run --rm -it --image=curlimages/curl curltest --restart=Never -n YOUR_NAMESPACE -- \
  curl -v http://APP_NAME-svc.YOUR_NAMESPACE.svc.cluster.local/
```
Logs
```shell
kubectl logs -l app=APP_NAME -c cloud-sql-proxy -n YOUR_NAMESPACE --tail=200
kubectl logs -l app=APP_NAME -c ghost -n YOUR_NAMESPACE --tail=200
```
External Test
Visit:
https://DOMAIN
9. Cleanup
```shell
kubectl delete -f httproute.yaml
kubectl delete -f service.yaml
kubectl delete -f deployment.yaml
kubectl delete -f backendconfig.yaml
kubectl delete -f ghost-pvc.yaml
kubectl delete secret ghost-env -n YOUR_NAMESPACE
kubectl delete secret cloudsql-instance-credentials -n YOUR_NAMESPACE
```
🔍 Why This Architecture Works
Cloud SQL Auth Proxy Sidecar
- Secure IAM-based authentication
- No public DB exposure
- Runs continuously
- Exposes `127.0.0.1:3306` inside the Pod
Ghost connects locally, securely.
initContainers
- Run once before main containers
- Fix PVC permissions
- Prevent “permission denied” issues
- Exit after completing
Not suitable for long-running services.
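An alternative worth considering: skip the chown initContainer and let Kubernetes set group ownership on the volume at mount time via a Pod-level `fsGroup`. A sketch, assuming the Ghost image runs as UID/GID 1000:

```yaml
# In the Pod template spec, replacing the init-chown container
spec:
  securityContext:
    fsGroup: 1000
```

Whether this suffices depends on how the image writes to `/var/lib/ghost/content`; test it before dropping the initContainer.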
health-sidecar
This is the secret sauce.
Ghost redirects HTTP → HTTPS.
GCP load balancers expect HTTP 200.
Without the sidecar:
- Health checks fail
- Backend marked unhealthy
- Traffic drops
With the sidecar:
- LB probes HEALTH_PORT
- Always gets HTTP 200
- Traffic still routes to Ghost
Stable. Predictable. Production-ready.
Final Thoughts
This setup gives you:
- Secure DB connectivity
- Persistent content
- Load balancer compatibility
- Horizontal scaling (replicas: 2+)
- Clean separation of concerns
It’s production-grade, Kubernetes-native, and avoids the classic Ghost + GCP health check headache.
Deploy it once properly — and you won’t have to debug mysterious “backend unhealthy” errors again.
Happy publishing 🚀