# Node Pools

## Load characteristics
Before the migration to Kubernetes, memory consumption for the two production containers was:

- Laravel application: 152 MB
- PostgreSQL database: 160 MB
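These figures can seed the pods' resource requests. A minimal sketch, assuming roughly 2x headroom over the measured footprints; the deployment name, labels, and image below are placeholders, not taken from the actual manifests:

```yaml
# Hypothetical sizing sketch: requests based on the measured
# pre-migration footprint (~152 MB for the Laravel app), with
# roughly 2x headroom on the limit. Names/image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cats
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cats
  template:
    metadata:
      labels:
        app: cats
    spec:
      containers:
        - name: laravel
          image: cats:latest   # placeholder image
          resources:
            requests:
              memory: 192Mi    # measured ~152 MB plus headroom
            limits:
              memory: 384Mi
```

The PostgreSQL container (~160 MB measured) could be sized the same way.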
## Node allocations

| Node Pool  | Plan       | Nodes | Pool ID |
|------------|------------|-------|---------|
| Production | Linode 4GB | 2     | 41824   |
| Staging    | Linode 4GB | 2     | 41825   |
| Sandbox    | Linode 8GB | 1     | 41826   |
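To confirm which nodes belong to which pool, the LKE-applied pool label can be shown as a column:

```sh
# List nodes with their LKE pool ID as an extra column
kubectl get nodes -L lke.linode.com/pool-id
```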
## Layout

```mermaid
flowchart LR
    subgraph Sandbox
        PR1_CATS[CATS PR1]
        PR2_CATS[CATS PR2]
        PRN_CATS[CATS PR*]
    end
    subgraph Staging
        GRAF[Grafana]
        STG_CATS[CATS]
        PROM[Prometheus]
        PGREP[PG Replica]
        GRAF --> PROM
    end
    subgraph Production
        PRD_CATS[CATS]
        INGRESS[Ingress]
        INGRESS --> PRD_CATS
    end
    PGREP --> PRD_CATS
    INGRESS --> STG_CATS
    INGRESS --> GRAF
    INGRESS --> PR1_CATS
    INGRESS --> PR2_CATS
    INGRESS --> PRN_CATS
```
## Labels

```sh
kubectl label nodes -l lke.linode.com/pool-id=41824 environment=production
kubectl label nodes -l lke.linode.com/pool-id=41825 environment=staging
kubectl label nodes -l lke.linode.com/pool-id=41826 environment=sandbox
```
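Workloads can then be pinned to a pool with a `nodeSelector` on the `environment` label. An illustrative pod spec fragment, not an actual manifest from the cluster:

```yaml
# Illustrative fragment: schedule only onto nodes
# labeled environment=production.
spec:
  nodeSelector:
    environment: production
```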
## Taints

Taints must be applied manually to all the nodes in the pools, and will need to be re-applied any time nodes are added or recycled:

- Production node pool taint:

  ```sh
  kubectl taint nodes -l lke.linode.com/pool-id=41824 environment=production:NoSchedule
  ```

- Sandbox node pool taint:

  ```sh
  kubectl taint nodes -l lke.linode.com/pool-id=41826 environment=sandbox:NoSchedule
  ```
These taints prevent workloads without explicit tolerations from running on either the production node pool (because it must be kept stable) or the sandbox node pool (because it is inherently unstable). Therefore, cluster services with no explicit tolerations should end up running on the staging node pool, which is both stable and non-critical.
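A workload that is meant to run on a tainted pool needs a matching toleration alongside its `nodeSelector`. A sketch for a production workload; the fragment is illustrative, not a real manifest:

```yaml
# Illustrative fragment: tolerate the production taint and
# target production-labeled nodes at the same time.
spec:
  nodeSelector:
    environment: production
  tolerations:
    - key: environment
      operator: Equal
      value: production
      effect: NoSchedule
```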