Complete Guide to Kubernetes Production Setup with TypeScript
A comprehensive guide to running TypeScript services on Kubernetes, covering architecture, code examples, and production-ready patterns.
Muneer Puthiya Purayil 15 min read
TypeScript has become a viable choice for Kubernetes-native applications through the Node.js runtime, particularly for teams already invested in the TypeScript ecosystem. This guide covers containerization, deployment patterns, and production concerns specific to running TypeScript services on Kubernetes.
Container Image Optimization
Node.js images tend toward bloat. A disciplined multi-stage build keeps production images lean:
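A sketch of such a multi-stage Dockerfile, assuming a standard `npm run build` that compiles TypeScript to `dist/` (stage names and paths are illustrative):

```dockerfile
# Build stage: full toolchain, compiles TypeScript to dist/
FROM node:22-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Deps stage: production dependencies only
FROM node:22-alpine AS deps
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev

# Runtime stage: compiled output plus production node_modules
FROM node:22-alpine
WORKDIR /app
ENV NODE_ENV=production
USER node
COPY --from=deps /app/node_modules ./node_modules
COPY --from=build /app/dist ./dist
COPY package.json ./
EXPOSE 8080
CMD ["node", "dist/server.js"]
```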
The separate deps stage installs only production dependencies, excluding TypeScript, testing frameworks, and build tools from the runtime image. This typically reduces node_modules size by 40-60%.
Node.js's single-threaded event loop handles concurrent connections efficiently for I/O-bound workloads. A single Node.js process can handle thousands of concurrent connections because it never blocks on I/O operations.
Using Fastify for Production
```typescript
import Fastify from "fastify";
import { Pool } from "pg";
import metricsPlugin from "fastify-metrics";

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 20,
  min: 5,
});

const app = Fastify({
  logger: {
    level: "info",
    transport: undefined, // JSON output by default in production
  },
});

await app.register(metricsPlugin, { endpoint: "/metrics" });

// Probe endpoints matching the Deployment's readiness/liveness checks.
app.get("/healthz", async () => ({ status: "ok" }));
app.get("/readyz", async () => {
  await pool.query("SELECT 1");
  return { status: "ready" };
});

await app.listen({ port: 8080, host: "0.0.0.0" });
```
In JSON serialization benchmarks, Fastify is typically 2-3x faster than Express, and it includes schema validation, structured logging, and a plugin system. The fastify-metrics plugin exposes Prometheus-compatible metrics automatically.
Kubernetes Deployment
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api-service
  template:
    metadata:
      labels:
        app: api-service
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
        prometheus.io/path: "/metrics"
    spec:
      terminationGracePeriodSeconds: 30
      containers:
        - name: api
          image: registry.example.com/api-service:v1.5.0
          ports:
            - containerPort: 8080
              name: http
          env:
            - name: NODE_ENV
              value: "production"
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: api-secrets
                  key: database-url
            - name: NODE_OPTIONS
              value: "--max-old-space-size=384"
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
            limits:
              memory: 512Mi
          readinessProbe:
            httpGet:
              path: /readyz
              port: http
            initialDelaySeconds: 3
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /healthz
              port: http
            initialDelaySeconds: 5
            periodSeconds: 15
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            runAsUser: 1001
            capabilities:
              drop: ["ALL"]
          volumeMounts:
            - name: tmp
              mountPath: /tmp
      volumes:
        - name: tmp
          emptyDir: {}
```
Memory Configuration
--max-old-space-size=384 sets the V8 heap limit to 384MB inside a 512Mi container. The remaining 128MB covers V8 overhead (new space, code space, external allocations) and the Node.js runtime itself. Without this flag, V8 defaults to a heap limit based on the host machine's memory, not the container's limit, which leads to OOM kills.
In Kubernetes, you have two options for horizontal scaling:
Single-process pods with more replicas. Simpler, and Kubernetes manages the distribution. Each pod runs one Node.js process.
Cluster mode within pods with fewer replicas. More memory-efficient because workers share the V8 code cache. Each pod runs N Node.js processes.
For most services, option 1 is simpler and sufficient. Option 2 helps when you need to reduce the number of pods (e.g., to reduce database connection count) while maintaining throughput.
Deploy workers as a separate Kubernetes Deployment from the API:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: email-worker
spec:
  replicas: 2
  selector:
    matchLabels:
      app: email-worker
  template:
    metadata:
      labels:
        app: email-worker
    spec:
      containers:
        - name: worker
          image: registry.example.com/api-service:v1.5.0
          command: ["node", "dist/workers/email.js"]
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
            limits:
              memory: 512Mi
```
Anti-Patterns to Avoid
Using npm start as the container command. npm adds a process wrapper that doesn't forward SIGTERM correctly. Always use node dist/server.js directly.
Not setting NODE_ENV=production. Express (and many npm packages) run in development mode by default, which enables verbose logging, disables response caching, and includes stack traces in error responses.
Ignoring event loop blocking. A synchronous operation (JSON.parse on a 100MB payload, RegExp backtracking, synchronous file I/O) blocks the entire Node.js event loop. Run Node with --report-on-signal and send SIGUSR2 to the process to generate a diagnostic report when requests are slow.
Running npm install at container startup. All dependencies must be baked into the image. Container startup should be deterministic and fast — executing npm install adds 10-60 seconds of unpredictable startup time and fails when the npm registry is unreachable.
Using latest or lts Node.js base images. Pin to a specific major version (node:22-alpine) to prevent surprise runtime changes during image rebuilds.
Conclusion
TypeScript on Kubernetes works well for I/O-bound services where the team's existing TypeScript expertise outweighs Node.js's single-threaded limitation. The key operational requirements are explicit V8 heap sizing via --max-old-space-size, proper SIGTERM handling for graceful shutdown, and production-only dependency installation for lean images.
For compute-bound workloads, Node.js is not the optimal choice — Go or Rust provide better per-core performance. But for API services, webhook handlers, queue consumers, and BFF (Backend for Frontend) layers, TypeScript on Kubernetes delivers productive development with acceptable operational characteristics.