Module 12: Kubernetes Deployment¶
This module covers deploying the Solana DApps stack to Kubernetes, from local development with minikube to production-ready configurations with Helm charts.
Learning Objectives¶
By the end of this module, you will be able to:
- Set up a local Kubernetes cluster with minikube
- Create Helm charts for each service
- Configure Traefik ingress for routing
- Deploy the complete stack locally and to production
- Implement observability with Prometheus and Grafana
Prerequisites¶
- Docker Desktop or Docker Engine installed
- kubectl CLI installed
- Helm 3.x installed
- Basic understanding of containerization
Part A: Local Development with Minikube¶
Installing Minikube¶
Minikube creates a single-node Kubernetes cluster on your local machine, perfect for development and testing.
macOS (Homebrew):
brew install minikube
Windows (Chocolatey):
choco install minikube
Linux:
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
Starting the Cluster¶
Configure minikube with sufficient resources for our stack:
# Start with recommended resources
minikube start \
--cpus=4 \
--memory=8192 \
--disk-size=50g \
--driver=docker
# Enable required addons
minikube addons enable ingress
minikube addons enable metrics-server
minikube addons enable dashboard
Verifying the Installation¶
# Check cluster status
minikube status
# Verify kubectl context
kubectl config current-context
# List running pods
kubectl get pods -A
Expected output shows system pods running:
NAMESPACE       NAME                               READY   STATUS
ingress-nginx   ingress-nginx-controller-xxx       1/1     Running
kube-system     coredns-xxx                        1/1     Running
kube-system     etcd-minikube                      1/1     Running
kube-system     kube-apiserver-minikube            1/1     Running
kube-system     kube-controller-manager-minikube   1/1     Running
kube-system     kube-scheduler-minikube            1/1     Running
Local Container Registry¶
For development, use minikube's built-in Docker daemon:
# Point your shell to minikube's Docker daemon
eval $(minikube docker-env)
# Build images directly in minikube
docker build -t solana-dapps/api:dev ./api
docker build -t solana-dapps/app:dev ./app
docker build -t solana-dapps/indexer:dev ./services/indexer
docker build -t solana-dapps/relay:dev ./services/relay
Accessing Services¶
Minikube provides several ways to access services:
# Get minikube IP
minikube ip
# Create a tunnel for LoadBalancer services
minikube tunnel
# Access a specific service
minikube service <service-name> -n <namespace>
# Open Kubernetes dashboard
minikube dashboard
Part B: Kubernetes Fundamentals¶
Core Concepts¶
Before diving into our deployment, let's review essential Kubernetes concepts:
| Concept | Description |
|---|---|
| Pod | Smallest deployable unit, contains one or more containers |
| Deployment | Manages pod replicas and rolling updates |
| Service | Stable network endpoint for accessing pods |
| ConfigMap | External configuration data |
| Secret | Sensitive data (passwords, tokens) |
| Ingress | HTTP/HTTPS routing rules |
| PersistentVolume | Durable storage |
| Namespace | Virtual cluster for resource isolation |
Namespace Strategy¶
We'll use separate namespaces for isolation:
# k8s/namespaces.yaml
apiVersion: v1
kind: Namespace
metadata:
name: solana-dapps
labels:
app.kubernetes.io/name: solana-dapps
environment: development
---
apiVersion: v1
kind: Namespace
metadata:
name: monitoring
labels:
app.kubernetes.io/name: monitoring
Apply the namespaces:
kubectl apply -f k8s/namespaces.yaml
Resource Quotas¶
Limit resource consumption per namespace:
# k8s/resource-quota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
name: solana-dapps-quota
namespace: solana-dapps
spec:
hard:
requests.cpu: "4"
requests.memory: 8Gi
limits.cpu: "8"
limits.memory: 16Gi
pods: "20"
services: "10"
secrets: "20"
configmaps: "20"
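A ResourceQuota caps aggregate consumption, but once one is active, pods whose containers omit resource requests are rejected at admission. A LimitRange supplies per-container defaults so that unannotated pods still schedule; the default values below are illustrative and should be tuned to your workloads:

```yaml
# k8s/limit-range.yaml (illustrative defaults)
apiVersion: v1
kind: LimitRange
metadata:
  name: solana-dapps-defaults
  namespace: solana-dapps
spec:
  limits:
    - type: Container
      defaultRequest:    # applied when a container omits requests
        cpu: 100m
        memory: 128Mi
      default:           # applied when a container omits limits
        cpu: 500m
        memory: 512Mi
```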
Part C: Helm Charts¶
Why Helm?¶
Helm is the package manager for Kubernetes. It provides:
- Templating: Generate Kubernetes manifests from templates
- Versioning: Track chart versions and rollback
- Dependencies: Manage complex application dependencies
- Values: Override configuration per environment
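Templating can be exercised without a cluster: `helm template` renders a chart's manifests locally, which is handy for reviewing exactly what an install would apply. The paths below assume the chart layout shown in the next section:

```shell
# Render the umbrella chart locally without installing anything
helm template solana-dapps k8s/helm/solana-dapps \
  --values k8s/helm/solana-dapps/values-local.yaml \
  > rendered.yaml

# Lint the chart for common template mistakes
helm lint k8s/helm/solana-dapps
```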
Chart Structure¶
Our Helm chart structure:
k8s/helm/
├── solana-dapps/ # Umbrella chart
│ ├── Chart.yaml
│ ├── values.yaml # Default values
│ ├── values-local.yaml # Local overrides
│ ├── values-dev.yaml # Devnet overrides
│ ├── values-prod.yaml # Production overrides
│ └── templates/
│ ├── _helpers.tpl # Template helpers
│ ├── namespace.yaml
│ └── NOTES.txt
├── api/ # FastAPI subchart
│ ├── Chart.yaml
│ ├── values.yaml
│ └── templates/
│ ├── deployment.yaml
│ ├── service.yaml
│ ├── configmap.yaml
│ ├── secret.yaml
│ └── hpa.yaml
├── app/ # NextJS subchart
│ ├── Chart.yaml
│ ├── values.yaml
│ └── templates/
│ ├── deployment.yaml
│ ├── service.yaml
│ └── configmap.yaml
├── indexer/ # Indexer subchart
│ └── ...
└── relay/ # Relay subchart
└── ...
Umbrella Chart¶
The main chart that orchestrates all subcharts:
# k8s/helm/solana-dapps/Chart.yaml
apiVersion: v2
name: solana-dapps
description: Complete Solana DApps deployment
type: application
version: 0.1.0
appVersion: "1.0.0"
dependencies:
- name: api
version: "0.1.0"
repository: "file://../api"
condition: api.enabled
- name: app
version: "0.1.0"
repository: "file://../app"
condition: app.enabled
- name: indexer
version: "0.1.0"
repository: "file://../indexer"
condition: indexer.enabled
- name: relay
version: "0.1.0"
repository: "file://../relay"
condition: relay.enabled
- name: postgresql
version: "12.x.x"
repository: "https://charts.bitnami.com/bitnami"
condition: postgresql.enabled
- name: redis
version: "17.x.x"
repository: "https://charts.bitnami.com/bitnami"
condition: redis.enabled
Default Values¶
# k8s/helm/solana-dapps/values.yaml
global:
# Solana network configuration
solana:
rpcUrl: "https://api.devnet.solana.com"
wsUrl: "wss://api.devnet.solana.com"
commitment: "confirmed"
# Program IDs
programs:
tokenEscrow: "Escr1111111111111111111111111111111111111111"
nftMarketplace: "NFTM1111111111111111111111111111111111111111"
defiAmm: "AMM11111111111111111111111111111111111111111"
daoGovernance: "Gov11111111111111111111111111111111111111111"
# Image settings
image:
registry: ""
pullPolicy: IfNotPresent
# Ingress settings
ingress:
enabled: true
className: nginx
annotations: {}
hosts:
- host: solana-dapps.local
paths:
- path: /
pathType: Prefix
# API service configuration
api:
enabled: true
replicaCount: 2
image:
repository: solana-dapps/api
tag: "latest"
resources:
requests:
cpu: 100m
memory: 256Mi
limits:
cpu: 500m
memory: 512Mi
autoscaling:
enabled: true
minReplicas: 2
maxReplicas: 10
targetCPUUtilization: 70
# Frontend app configuration
app:
enabled: true
replicaCount: 2
image:
repository: solana-dapps/app
tag: "latest"
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 200m
memory: 256Mi
# Indexer service configuration
indexer:
enabled: true
replicaCount: 1
image:
repository: solana-dapps/indexer
tag: "latest"
resources:
requests:
cpu: 200m
memory: 512Mi
limits:
cpu: 1000m
memory: 1Gi
# Relay service configuration
relay:
enabled: true
replicaCount: 2
image:
repository: solana-dapps/relay
tag: "latest"
resources:
requests:
cpu: 100m
memory: 256Mi
limits:
cpu: 500m
memory: 512Mi
# PostgreSQL configuration
postgresql:
enabled: true
auth:
postgresPassword: "" # Set via secret
database: solana_dapps
primary:
persistence:
size: 10Gi
# Redis configuration
redis:
enabled: true
auth:
enabled: false
master:
persistence:
size: 1Gi
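Per-environment files only need to override what differs from values.yaml; Helm deep-merges them at install time. A minimal values-local.yaml might look like this (the specific keys overridden here are illustrative):

```yaml
# k8s/helm/solana-dapps/values-local.yaml (illustrative overrides)
global:
  solana:
    rpcUrl: "http://solana-validator:8899"   # local test validator
    wsUrl: "ws://solana-validator:8900"
api:
  replicaCount: 1
  autoscaling:
    enabled: false    # no HPA needed locally
app:
  replicaCount: 1
relay:
  replicaCount: 1
postgresql:
  primary:
    persistence:
      size: 1Gi
```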
API Deployment Template¶
# k8s/helm/api/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "api.fullname" . }}
labels:
{{- include "api.labels" . | nindent 4 }}
spec:
{{- if not .Values.autoscaling.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}
selector:
matchLabels:
{{- include "api.selectorLabels" . | nindent 6 }}
template:
metadata:
annotations:
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
checksum/secret: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}
labels:
{{- include "api.selectorLabels" . | nindent 8 }}
spec:
serviceAccountName: {{ include "api.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: 8000
protocol: TCP
envFrom:
- configMapRef:
name: {{ include "api.fullname" . }}-config
- secretRef:
name: {{ include "api.fullname" . }}-secret
livenessProbe:
httpGet:
path: /health
port: http
initialDelaySeconds: 10
periodSeconds: 30
readinessProbe:
httpGet:
path: /health
port: http
initialDelaySeconds: 5
periodSeconds: 10
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
Service Template¶
# k8s/helm/api/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: {{ include "api.fullname" . }}
labels:
{{- include "api.labels" . | nindent 4 }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port }}
targetPort: http
protocol: TCP
name: http
selector:
{{- include "api.selectorLabels" . | nindent 4 }}
ConfigMap Template¶
# k8s/helm/api/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "api.fullname" . }}-config
labels:
{{- include "api.labels" . | nindent 4 }}
data:
APP_NAME: "Solana DApps API"
DEBUG: "{{ .Values.debug }}"
API_PREFIX: "/api/v1"
SOLANA_RPC_URL: "{{ .Values.global.solana.rpcUrl }}"
SOLANA_WS_URL: "{{ .Values.global.solana.wsUrl }}"
COMMITMENT: "{{ .Values.global.solana.commitment }}"
TOKEN_ESCROW_PROGRAM_ID: "{{ .Values.global.programs.tokenEscrow }}"
NFT_MARKETPLACE_PROGRAM_ID: "{{ .Values.global.programs.nftMarketplace }}"
DEFI_AMM_PROGRAM_ID: "{{ .Values.global.programs.defiAmm }}"
DAO_GOVERNANCE_PROGRAM_ID: "{{ .Values.global.programs.daoGovernance }}"
REDIS_URL: "redis://{{ .Release.Name }}-redis-master:6379"
CACHE_TTL: "60"
RATE_LIMIT_REQUESTS: "100"
RATE_LIMIT_PERIOD: "60"
Horizontal Pod Autoscaler¶
# k8s/helm/api/templates/hpa.yaml
{{- if .Values.autoscaling.enabled }}
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: {{ include "api.fullname" . }}
labels:
{{- include "api.labels" . | nindent 4 }}
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: {{ include "api.fullname" . }}
minReplicas: {{ .Values.autoscaling.minReplicas }}
maxReplicas: {{ .Values.autoscaling.maxReplicas }}
metrics:
{{- if .Values.autoscaling.targetCPUUtilization }}
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: {{ .Values.autoscaling.targetCPUUtilization }}
{{- end }}
{{- if .Values.autoscaling.targetMemoryUtilization }}
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: {{ .Values.autoscaling.targetMemoryUtilization }}
{{- end }}
behavior:
scaleDown:
stabilizationWindowSeconds: 300
policies:
- type: Percent
value: 10
periodSeconds: 60
scaleUp:
stabilizationWindowSeconds: 0
policies:
- type: Percent
value: 100
periodSeconds: 15
- type: Pods
value: 4
periodSeconds: 15
selectPolicy: Max
{{- end }}
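Once the HPA is deployed, you can watch it react to load. The commands below are a sketch: the service name and `/health` path assume the default release naming and the probe endpoint configured in the deployment template:

```shell
# Watch HPA targets and replica counts update live
kubectl get hpa -n solana-dapps -w

# Generate load against the API from inside the cluster
kubectl run load-test --rm -it --image=busybox -n solana-dapps -- \
  /bin/sh -c 'while true; do wget -q -O- http://solana-dapps-api:8000/health; done'
```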
Part D: Traefik Ingress¶
Why Traefik?¶
Traefik is a modern reverse proxy and load balancer:
- Auto-discovery: Automatically discovers services
- Let's Encrypt: Built-in TLS certificate management
- Middleware: Rate limiting, authentication, headers
- Dashboard: Visual traffic monitoring
Installing Traefik¶
Using Helm:
helm repo add traefik https://traefik.github.io/charts
helm repo update
helm install traefik traefik/traefik \
--namespace traefik \
--create-namespace \
--values k8s/traefik/values.yaml
Traefik values:
# k8s/traefik/values.yaml
deployment:
replicas: 2
ingressRoute:
dashboard:
enabled: true
matchRule: Host(`traefik.solana-dapps.local`)
entryPoints:
- web
providers:
kubernetesCRD:
enabled: true
namespaces: []
kubernetesIngress:
enabled: true
namespaces: []
ports:
web:
port: 8000
exposedPort: 80
expose: true
websecure:
port: 8443
exposedPort: 443
expose: true
tls:
enabled: true
service:
type: LoadBalancer
logs:
general:
level: INFO
access:
enabled: true
metrics:
prometheus:
entryPoint: metrics
addEntryPointsLabels: true
addServicesLabels: true
IngressRoute Configuration¶
Traefik-specific routing with IngressRoute CRD:
# k8s/helm/solana-dapps/templates/ingressroute.yaml
{{- if .Values.traefik.enabled }}
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: {{ .Release.Name }}-web
namespace: {{ .Release.Namespace }}
spec:
entryPoints:
- web
- websecure
routes:
# API routes
- match: Host(`{{ .Values.global.ingress.host }}`) && PathPrefix(`/api`)
kind: Rule
services:
- name: {{ .Release.Name }}-api
port: 8000
middlewares:
- name: {{ .Release.Name }}-ratelimit
- name: {{ .Release.Name }}-cors
# WebSocket routes for relay
- match: Host(`{{ .Values.global.ingress.host }}`) && PathPrefix(`/ws`)
kind: Rule
services:
- name: {{ .Release.Name }}-relay
port: 3001
# Indexer WebSocket
- match: Host(`{{ .Values.global.ingress.host }}`) && PathPrefix(`/indexer`)
kind: Rule
services:
- name: {{ .Release.Name }}-indexer
port: 3002
# Frontend catch-all
- match: Host(`{{ .Values.global.ingress.host }}`)
kind: Rule
services:
- name: {{ .Release.Name }}-app
port: 3000
tls:
secretName: {{ .Release.Name }}-tls
{{- end }}
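You can verify each route from the host machine by sending the expected Host header to the cluster's ingress IP. This sketch assumes `minikube ip` is reachable (or `minikube tunnel` is running) and that the API exposes a health endpoint; adjust the paths to match your services:

```shell
INGRESS_IP=$(minikube ip)

# API route (should be handled by the FastAPI service)
curl -H "Host: solana-dapps.local" "http://${INGRESS_IP}/api/v1/health"

# Frontend catch-all (should return the NextJS app)
curl -H "Host: solana-dapps.local" "http://${INGRESS_IP}/"
```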
Middleware Configuration¶
# k8s/helm/solana-dapps/templates/middleware.yaml
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
name: {{ .Release.Name }}-ratelimit
namespace: {{ .Release.Namespace }}
spec:
rateLimit:
average: 100
burst: 200
period: 1m
sourceCriterion:
ipStrategy:
depth: 1
---
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
name: {{ .Release.Name }}-cors
namespace: {{ .Release.Namespace }}
spec:
headers:
accessControlAllowMethods:
- GET
- POST
- PUT
- DELETE
- OPTIONS
accessControlAllowHeaders:
- Content-Type
- Authorization
- X-Requested-With
accessControlAllowOriginList:
- "https://{{ .Values.global.ingress.host }}"
accessControlMaxAge: 100
addVaryHeader: true
---
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
name: {{ .Release.Name }}-security-headers
namespace: {{ .Release.Namespace }}
spec:
headers:
frameDeny: true
contentTypeNosniff: true
browserXssFilter: true
referrerPolicy: "strict-origin-when-cross-origin"
contentSecurityPolicy: "default-src 'self'"
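A Middleware only takes effect when a route references it. The security-headers middleware defined above, for example, can be attached to the frontend route by extending the IngressRoute's routes section (fragment):

```yaml
# Fragment: attaching the security-headers middleware to the frontend route
- match: Host(`{{ .Values.global.ingress.host }}`)
  kind: Rule
  services:
    - name: {{ .Release.Name }}-app
      port: 3000
  middlewares:
    - name: {{ .Release.Name }}-security-headers
```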
TLS with cert-manager¶
Install cert-manager for automatic certificate management:
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--set installCRDs=true
Create a ClusterIssuer for Let's Encrypt:
# k8s/cert-manager/cluster-issuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-prod
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: admin@example.com
privateKeySecretRef:
name: letsencrypt-prod-key
solvers:
- http01:
ingress:
class: traefik
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-staging
spec:
acme:
server: https://acme-staging-v02.api.letsencrypt.org/directory
email: admin@example.com
privateKeySecretRef:
name: letsencrypt-staging-key
solvers:
- http01:
ingress:
class: traefik
Certificate request:
# k8s/helm/solana-dapps/templates/certificate.yaml
{{- if .Values.tls.enabled }}
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: {{ .Release.Name }}-tls
namespace: {{ .Release.Namespace }}
spec:
secretName: {{ .Release.Name }}-tls
issuerRef:
name: {{ .Values.tls.issuer | default "letsencrypt-prod" }}
kind: ClusterIssuer
commonName: {{ .Values.global.ingress.host }}
dnsNames:
- {{ .Values.global.ingress.host }}
- "*.{{ .Values.global.ingress.host }}"
{{- end }}
Part E: Secrets Management¶
Kubernetes Secrets¶
Basic secret creation:
# k8s/helm/api/templates/secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: {{ include "api.fullname" . }}-secret
labels:
{{- include "api.labels" . | nindent 4 }}
type: Opaque
data:
{{- range $key, $value := .Values.secrets }}
{{ $key }}: {{ $value | b64enc }}
{{- end }}
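Secret `data` values are base64-encoded, which is exactly what `b64enc` does in the template above. You can reproduce the encoding from the shell; note that base64 is an encoding, not encryption, so it provides no confidentiality on its own:

```shell
# Encode a value the way b64enc does (printf avoids a trailing newline)
printf '%s' 'secret' | base64
# -> c2VjcmV0

# Decode it back
printf '%s' 'c2VjcmV0' | base64 -d
# -> secret
```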
Sealed Secrets¶
For GitOps workflows, use sealed-secrets to encrypt secrets that can be safely stored in Git:
# Install sealed-secrets controller
helm repo add sealed-secrets https://bitnami-labs.github.io/sealed-secrets
helm install sealed-secrets sealed-secrets/sealed-secrets \
--namespace kube-system
# Install kubeseal CLI
brew install kubeseal # macOS
Create a sealed secret:
# Create a regular secret
kubectl create secret generic api-secrets \
--namespace solana-dapps \
--from-literal=DATABASE_URL='postgresql://user:pass@host:5432/db' \
--from-literal=REDIS_PASSWORD='secret' \
--dry-run=client -o yaml > secret.yaml
# Seal the secret
kubeseal --format yaml < secret.yaml > sealed-secret.yaml
# Apply sealed secret (safe to commit to Git)
kubectl apply -f sealed-secret.yaml
External Secrets Operator¶
For production, use External Secrets Operator with AWS Secrets Manager, HashiCorp Vault, or similar:
# k8s/external-secrets/secret-store.yaml
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
name: aws-secrets
namespace: solana-dapps
spec:
provider:
aws:
service: SecretsManager
region: us-east-1
auth:
jwt:
serviceAccountRef:
name: external-secrets-sa
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
name: api-secrets
namespace: solana-dapps
spec:
refreshInterval: 1h
secretStoreRef:
name: aws-secrets
kind: SecretStore
target:
name: api-secrets
creationPolicy: Owner
data:
- secretKey: DATABASE_URL
remoteRef:
key: solana-dapps/prod/database
property: url
- secretKey: REDIS_PASSWORD
remoteRef:
key: solana-dapps/prod/redis
property: password
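After applying the manifests, you can confirm the operator has synced the remote values into a native Secret. These are standard kubectl checks against the resources defined above:

```shell
# The ExternalSecret should report a Ready/SecretSynced status
kubectl get externalsecret api-secrets -n solana-dapps

# The operator creates this Secret as its target
kubectl get secret api-secrets -n solana-dapps

# Inspect sync events if the status is not Ready
kubectl describe externalsecret api-secrets -n solana-dapps
```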
Part F: Observability Stack¶
Prometheus¶
Install Prometheus using the kube-prometheus-stack:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus prometheus-community/kube-prometheus-stack \
--namespace monitoring \
--create-namespace \
--values k8s/monitoring/prometheus-values.yaml
Prometheus values:
# k8s/monitoring/prometheus-values.yaml
prometheus:
prometheusSpec:
retention: 15d
storageSpec:
volumeClaimTemplate:
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 50Gi
serviceMonitorSelectorNilUsesHelmValues: false
podMonitorSelectorNilUsesHelmValues: false
alertmanager:
alertmanagerSpec:
storage:
volumeClaimTemplate:
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 10Gi
grafana:
enabled: true
adminPassword: "admin" # Change in production
persistence:
enabled: true
size: 10Gi
dashboardProviders:
dashboardproviders.yaml:
apiVersion: 1
providers:
- name: 'default'
orgId: 1
folder: ''
type: file
disableDeletion: false
editable: true
options:
path: /var/lib/grafana/dashboards/default
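To reach Grafana locally, port-forward its service. The service and secret names below assume the chart's default naming for a release called "prometheus"; adjust if you used a different release name:

```shell
# Port-forward Grafana to http://localhost:3000
kubectl port-forward svc/prometheus-grafana 3000:80 -n monitoring

# Retrieve the admin password from the chart-managed secret
kubectl get secret prometheus-grafana -n monitoring \
  -o jsonpath='{.data.admin-password}' | base64 -d
```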
ServiceMonitor for API¶
# k8s/helm/api/templates/servicemonitor.yaml
{{- if .Values.serviceMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ include "api.fullname" . }}
labels:
{{- include "api.labels" . | nindent 4 }}
spec:
selector:
matchLabels:
{{- include "api.selectorLabels" . | nindent 6 }}
endpoints:
- port: http
path: /metrics
interval: 30s
scrapeTimeout: 10s
namespaceSelector:
matchNames:
- {{ .Release.Namespace }}
{{- end }}
Grafana Dashboard¶
Create a custom dashboard for Solana DApps:
{
"dashboard": {
"title": "Solana DApps Overview",
"panels": [
{
"title": "API Request Rate",
"type": "graph",
"targets": [
{
"expr": "sum(rate(http_requests_total{service=\"api\"}[5m]))",
"legendFormat": "Requests/s"
}
]
},
{
"title": "API Response Time (p95)",
"type": "graph",
"targets": [
{
"expr": "histogram_quantile(0.95, rate(http_request_duration_seconds_bucket{service=\"api\"}[5m]))",
"legendFormat": "p95 latency"
}
]
},
{
"title": "Transaction Success Rate",
"type": "stat",
"targets": [
{
"expr": "sum(rate(solana_tx_success_total[5m])) / sum(rate(solana_tx_total[5m])) * 100",
"legendFormat": "Success %"
}
]
},
{
"title": "Active WebSocket Connections",
"type": "gauge",
"targets": [
{
"expr": "sum(websocket_connections_active)",
"legendFormat": "Connections"
}
]
}
]
}
}
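With kube-prometheus-stack, Grafana's dashboard sidecar (enabled by default) auto-loads any ConfigMap labeled `grafana_dashboard`, so the JSON above can be shipped declaratively rather than imported by hand. A sketch, with the ConfigMap name chosen here for illustration:

```yaml
# k8s/monitoring/dashboard-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: solana-dapps-dashboard
  namespace: monitoring
  labels:
    grafana_dashboard: "1"    # picked up by the Grafana sidecar
data:
  solana-dapps.json: |
    # paste the dashboard JSON shown above here, indented under the key
```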
Loki for Logs¶
helm repo add grafana https://grafana.github.io/helm-charts
helm install loki grafana/loki-stack \
--namespace monitoring \
--set grafana.enabled=false \
--set prometheus.enabled=false \
--set loki.persistence.enabled=true \
--set loki.persistence.size=50Gi
Part G: Deployment Workflow¶
Local Deployment¶
# Build dependencies
helm dependency build k8s/helm/solana-dapps
# Install locally
helm install solana-dapps k8s/helm/solana-dapps \
--namespace solana-dapps \
--create-namespace \
--values k8s/helm/solana-dapps/values-local.yaml
# Check status
kubectl get pods -n solana-dapps
kubectl get svc -n solana-dapps
kubectl get ingress -n solana-dapps
# Access locally
echo "$(minikube ip) solana-dapps.local" | sudo tee -a /etc/hosts
open http://solana-dapps.local
Upgrade Deployment¶
# Upgrade with new values
helm upgrade solana-dapps k8s/helm/solana-dapps \
--namespace solana-dapps \
--values k8s/helm/solana-dapps/values-local.yaml \
--set api.image.tag=v1.1.0
# Check rollout status
kubectl rollout status deployment/solana-dapps-api -n solana-dapps
Rollback¶
# View history
helm history solana-dapps -n solana-dapps
# Rollback to previous version
helm rollback solana-dapps 1 -n solana-dapps
Production Deployment¶
# Dry run first
helm upgrade --install solana-dapps k8s/helm/solana-dapps \
--namespace solana-dapps \
--values k8s/helm/solana-dapps/values-prod.yaml \
--dry-run
# Deploy to production
helm upgrade --install solana-dapps k8s/helm/solana-dapps \
--namespace solana-dapps \
--values k8s/helm/solana-dapps/values-prod.yaml \
--wait \
--timeout 10m
Part H: Development Workflow with Skaffold¶
Skaffold automates the build-push-deploy cycle for development:
# skaffold.yaml
apiVersion: skaffold/v4beta5
kind: Config
metadata:
name: solana-dapps
build:
local:
push: false
artifacts:
- image: solana-dapps/api
context: api
docker:
dockerfile: Dockerfile
- image: solana-dapps/app
context: app
docker:
dockerfile: Dockerfile
- image: solana-dapps/indexer
context: services/indexer
docker:
dockerfile: Dockerfile
- image: solana-dapps/relay
context: services/relay
docker:
dockerfile: Dockerfile
manifests:
helm:
releases:
- name: solana-dapps
chartPath: k8s/helm/solana-dapps
namespace: solana-dapps
createNamespace: true
valuesFiles:
- k8s/helm/solana-dapps/values-local.yaml
setValueTemplates:
api.image.tag: "{{.IMAGE_TAG_solana_dapps_api}}"
app.image.tag: "{{.IMAGE_TAG_solana_dapps_app}}"
indexer.image.tag: "{{.IMAGE_TAG_solana_dapps_indexer}}"
relay.image.tag: "{{.IMAGE_TAG_solana_dapps_relay}}"
deploy:
helm: {}
portForward:
- resourceType: service
resourceName: solana-dapps-api
namespace: solana-dapps
port: 8000
localPort: 8000
- resourceType: service
resourceName: solana-dapps-app
namespace: solana-dapps
port: 3000
localPort: 3000
profiles:
- name: dev
activation:
- kubeContext: minikube
build:
local:
push: false
- name: prod
activation:
- kubeContext: prod-cluster
build:
googleCloudBuild:
projectId: my-project
Usage:
# Development with hot reload
skaffold dev
# One-time build and deploy
skaffold run
# Build only
skaffold build
Summary¶
In this module, you learned:
- Minikube Setup: Creating a local Kubernetes cluster for development
- Helm Charts: Packaging applications for Kubernetes deployment
- Traefik Ingress: Configuring HTTP routing and middleware
- TLS Certificates: Automatic certificate management with cert-manager
- Secrets Management: Secure handling of sensitive configuration
- Observability: Monitoring with Prometheus and Grafana
- Development Workflow: Using Skaffold for rapid iteration
Key Takeaways¶
- Use Helm charts for reproducible deployments across environments
- Separate configuration per environment (local, dev, prod)
- Implement horizontal pod autoscaling for production workloads
- Use sealed-secrets or external secrets for GitOps workflows
- Monitor everything with Prometheus and create meaningful dashboards
Next Steps¶
In Module 13: Production Patterns, we'll cover:
- CI/CD pipelines with GitHub Actions
- Blue-green and canary deployments
- Security best practices for smart contracts
- Incident response and disaster recovery