
Complete Guide to Zero-Downtime Deployments with TypeScript

A comprehensive guide to implementing zero-downtime deployments with TypeScript, covering architecture, code examples, and production-ready patterns.

Muneer Puthiya Purayil · 16 min read

TypeScript applications on Node.js and Next.js handle zero-downtime deployments differently than compiled languages. The event loop model means graceful shutdown is straightforward — but container orchestration, health checks, and database migrations still require careful implementation. This guide covers production patterns for deploying TypeScript services with zero user-visible downtime.

Graceful Server Shutdown

Node.js receives SIGTERM from container orchestrators. Handle it properly:

```typescript
// server.ts
import http from 'http';
import { app } from './app';

let isShuttingDown = false;
let activeConnections = 0;

const server = http.createServer(app);

// Track open sockets so we can report how many remain at force-close
server.on('connection', (socket) => {
  activeConnections++;
  socket.on('close', () => activeConnections--);
});

// Graceful shutdown handler
async function shutdown(signal: string) {
  if (isShuttingDown) return; // ignore repeated signals
  console.log(`Received ${signal}, starting graceful shutdown`);
  isShuttingDown = true;

  // Phase 1: the readiness endpoint now returns 503
  // (the load balancer stops sending traffic)

  // Phase 2: wait for the LB to deregister this instance
  await new Promise((r) => setTimeout(r, 15000));

  // Phase 3: close the server (stops accepting new connections,
  // waits for in-flight requests to finish)
  server.close(() => {
    console.log('All connections closed');
    process.exit(0);
  });

  // Phase 4: force-close after a hard timeout; unref() so this timer
  // doesn't keep the process alive if shutdown completes first
  setTimeout(() => {
    console.log(`Forcing shutdown with ${activeConnections} connections`);
    process.exit(1);
  }, 30000).unref();
}

process.on('SIGTERM', () => shutdown('SIGTERM'));
process.on('SIGINT', () => shutdown('SIGINT'));

// Health endpoint
app.get('/health/ready', (req, res) => {
  if (isShuttingDown) {
    return res.status(503).json({ status: 'shutting_down' });
  }
  res.json({ status: 'healthy' });
});

server.listen(8080, () => console.log('Server ready on :8080'));
```
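
One subtlety: `server.close()` refuses new connections but waits for idle keep-alive sockets to close on their own, which can stall shutdown until the force timeout fires. Node 18.2+ exposes helpers for exactly this. A sketch (the 25-second delay is an assumption, tuned to land just before a 30-second hard timeout):

```typescript
import http from 'http';

const srv = http.createServer((req, res) => res.end('ok'));

// Close while actively draining keep-alive sockets (Node >= 18.2)
function closeWithKeepAliveDrain(server: http.Server, onClosed: () => void) {
  server.close(onClosed);          // stop accepting new connections
  server.closeIdleConnections();   // drop sockets with no request in flight
  // Just before the hard timeout, sever anything still open
  setTimeout(() => server.closeAllConnections(), 25000).unref();
}
```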

Next.js App Router Health Checks

```typescript
// app/api/health/ready/route.ts
import { NextResponse } from 'next/server';
import { prisma } from '@/lib/prisma';

export async function GET() {
  try {
    // Verify database connectivity
    await prisma.$queryRaw`SELECT 1`;

    return NextResponse.json({
      status: 'healthy',
      timestamp: new Date().toISOString(),
    });
  } catch {
    return NextResponse.json(
      { status: 'unhealthy' },
      { status: 503 }
    );
  }
}
```

```typescript
// app/api/health/live/route.ts
import { NextResponse } from 'next/server';

export async function GET() {
  return NextResponse.json({ status: 'alive' });
}
```

Database Migrations with Prisma

Separate migrations from application deployment:

```typescript
// scripts/migrate.ts
import { execSync } from 'child_process';

function runMigrations() {
  console.log('Running database migrations...');

  try {
    execSync('npx prisma migrate deploy', {
      stdio: 'inherit',
      env: { ...process.env },
    });
    console.log('Migrations completed successfully');
  } catch (error) {
    console.error('Migration failed:', error);
    process.exit(1);
  }
}

runMigrations();
```
```yaml
# GitHub Actions: run migrations before deploy
jobs:
  migrate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npx prisma migrate deploy
        env:
          DATABASE_URL: ${{ secrets.DATABASE_URL }}

  deploy:
    needs: migrate
    runs-on: ubuntu-latest
    steps:
      - name: Deploy new version
        run: |
          # Platform-specific deploy command
          fly deploy --strategy rolling
```

Safe Prisma migration rules:

- ✅ `model User { emailVerified Boolean? }` // nullable column add
- ✅ `CREATE INDEX CONCURRENTLY` // non-blocking index
- ❌ `model User { email String @unique }` // adding a unique constraint locks the table
- ❌ Dropping columns // must be a separate deploy
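
The unsafe changes can still ship with zero downtime by splitting them across deploys. A sketch for the unique-constraint case on Postgres (index and constraint names are illustrative; note that `CREATE INDEX CONCURRENTLY` cannot run inside a transaction, so the migration must be marked to run outside one):

```sql
-- Deploy 1: build the unique index without blocking writes
CREATE UNIQUE INDEX CONCURRENTLY "User_email_key" ON "User" (email);

-- Deploy 2: attach the constraint using the prebuilt index (brief lock only)
ALTER TABLE "User"
  ADD CONSTRAINT "User_email_unique" UNIQUE USING INDEX "User_email_key";
```

Dropping a column works the same way in reverse: first deploy code that no longer reads or writes the column, then drop it in a later migration.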

Request Tracking Middleware

```typescript
// middleware/tracking.ts
import { Request, Response, NextFunction } from 'express';

let activeRequests = 0;
let totalRequests = 0;

export function requestTracker(
  req: Request,
  res: Response,
  next: NextFunction
) {
  activeRequests++;
  totalRequests++;

  // 'close' fires on both normal completion and aborted connections,
  // so the counter can't leak when clients disconnect mid-request
  res.on('close', () => {
    activeRequests--;
  });

  next();
}

export function getActiveRequests() {
  return activeRequests;
}

export async function waitForDrain(timeoutMs = 30000): Promise<boolean> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (activeRequests === 0) return true;
    await new Promise((r) => setTimeout(r, 100));
  }
  return activeRequests === 0;
}
```


Feature Flags

```typescript
// lib/features.ts
import { createHash } from 'crypto';
import { redis } from './redis';

interface Flag {
  key: string;
  enabled: boolean;
  rolloutPercent: number;
  allowedTenants?: string[];
}

const flags = new Map<string, Flag>();

export function isEnabled(key: string, tenantId: string): boolean {
  const flag = flags.get(key);
  if (!flag?.enabled) return false;

  if (flag.allowedTenants?.includes(tenantId)) return true;

  // Deterministic hash bucket: the same tenant always gets the same decision
  const hash = createHash('md5')
    .update(`${key}:${tenantId}`)
    .digest('hex');
  const percent = parseInt(hash.slice(0, 8), 16) % 100;
  return percent < flag.rolloutPercent;
}

// Refresh from Redis every 10 seconds
setInterval(async () => {
  const keys = await redis.keys('flag:*');
  for (const k of keys) {
    const raw = await redis.get(k);
    if (raw) {
      const flag: Flag = JSON.parse(raw);
      flags.set(flag.key, flag);
    }
  }
}, 10000);
```
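
Because the bucket is a pure hash of the flag key and tenant id, rollout decisions are stable across requests and instances, and raising `rolloutPercent` only ever adds tenants without flipping already-enabled ones off. A standalone sketch of the same hashing (flag and tenant names are made up):

```typescript
import { createHash } from 'crypto';

// Same bucketing as isEnabled() above, isolated for illustration
function rolloutBucket(key: string, tenantId: string): number {
  const hash = createHash('md5').update(`${key}:${tenantId}`).digest('hex');
  return parseInt(hash.slice(0, 8), 16) % 100;
}

// A tenant is enabled once rolloutPercent exceeds its bucket,
// and stays enabled at every higher percentage.
```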

Docker and Kubernetes Configuration

```dockerfile
FROM node:20-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./

HEALTHCHECK --interval=10s --timeout=5s --retries=3 \
  CMD wget -q --spider http://localhost:8080/health/ready || exit 1

USER node
CMD ["node", "dist/server.js"]
```
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api-server
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: api-server
    spec:
      terminationGracePeriodSeconds: 45
      containers:
        - name: api
          image: api-server:latest
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /health/ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
          livenessProbe:
            httpGet:
              path: /health/live
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 10
          lifecycle:
            preStop:
              exec:
                command: ["/bin/sh", "-c", "sleep 15"]
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"
```
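
Rolling updates cover deploys, but node drains can still evict pods. A PodDisruptionBudget complements the strategy above by keeping a floor of healthy replicas during voluntary disruptions (the name and label shown are assumptions; the selector must match your pod template's labels):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: api-server-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: api-server
```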

Cache Warming

```typescript
// lib/warmup.ts
import { prisma } from './prisma';
import { redis } from './redis';

export async function warmCaches(): Promise<void> {
  const tasks = [
    warmPlanCache(),
    warmConfigCache(),
    warmTopTenants(),
  ];

  await Promise.allSettled(tasks);
  console.log('Cache warming complete');
}

async function warmPlanCache() {
  const plans = await prisma.plan.findMany({
    where: { active: true },
  });
  const pipe = redis.pipeline();
  for (const plan of plans) {
    pipe.set(`plan:${plan.id}`, JSON.stringify(plan), 'EX', 3600);
  }
  await pipe.exec();
}

async function warmConfigCache() {
  // Adjust to your own config model; shown here as a single global row
  const config = await prisma.appConfig.findFirst();
  if (config) {
    await redis.set('config:global', JSON.stringify(config), 'EX', 3600);
  }
}

async function warmTopTenants() {
  const tenants = await prisma.tenant.findMany({
    orderBy: { requestCount: 'desc' },
    take: 100,
  });
  const pipe = redis.pipeline();
  for (const t of tenants) {
    pipe.set(`tenant:${t.id}:config`, JSON.stringify(t.settings), 'EX', 3600);
  }
  await pipe.exec();
}
```
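
Warming only helps if traffic arrives afterwards. One way to sequence it (assumed wiring, with a stub standing in for the real `warmCaches`): gate the readiness endpoint on a flag that flips once warmup finishes, so a new pod passes its readiness probe only with hot caches.

```typescript
// Gate readiness on warmup: flip a flag once caches are hot
let cachesWarm = false;

async function warmCachesStub(): Promise<void> {
  // stands in for warmCaches() from lib/warmup.ts
  cachesWarm = true;
}

// The /health/ready handler would return this status code
function readinessStatus(): number {
  return cachesWarm ? 200 : 503;
}

async function start(): Promise<void> {
  await warmCachesStub(); // warm before listen(), so the first probe sees 200
  // server.listen(8080) would follow here
}
```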

Platform-Specific Deployment

Vercel (Next.js)

Zero-downtime is automatic. Each deployment is immutable and traffic switches atomically.

Fly.io

```bash
# Zero-downtime deploy with rolling strategy
fly deploy --strategy rolling
```

Railway

Configure health check path in Railway dashboard. With 2+ instances, Railway performs rolling updates automatically.
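
If you prefer configuration over dashboard clicks, Railway can also read these settings from a `railway.json` at the repo root (field names follow Railway's config-as-code schema as I understand it; verify against the current docs):

```json
{
  "deploy": {
    "healthcheckPath": "/health/ready",
    "healthcheckTimeout": 300
  }
}
```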



Muneer Puthiya Purayil

SaaS Architect & AI Systems Engineer. 10+ years shipping production infrastructure across fintech, automotive, e-commerce, and healthcare.
