
Complete Guide to CI/CD Pipeline Design with TypeScript

A comprehensive guide to designing CI/CD pipelines for TypeScript projects, covering architecture, code examples, and production-ready patterns.

Muneer Puthiya Purayil · 15 min read

Introduction

Why This Matters

TypeScript has become the default language for Node.js backend services, CLI tooling, and full-stack applications — yet CI/CD pipelines for TypeScript projects are often an afterthought. Teams glue together shell scripts, miss type errors in CI configuration code, and ship slow pipelines that erode developer trust.

Getting CI/CD right for TypeScript means more than running npm test. It means: fast feedback loops (under 5 minutes for gate checks), reproducible builds across developer machines and runners, artifact promotion strategies that prevent "it works on staging" surprises, and pipeline code that's as well-typed and tested as your application.

TypeScript's ecosystem has matured significantly. tsc --noEmit catches type errors before runtime. ts-node and tsx enable direct execution without a build step for scripts. Turborepo and Nx provide monorepo-aware build orchestration. The tooling exists — this guide shows you how to assemble it into a production-grade pipeline.

Who This Is For

This guide is for engineers who:

  • Maintain TypeScript backends (Express, Fastify, NestJS) or full-stack applications (Next.js, Remix) and want to professionalize their pipeline
  • Are building custom CI/CD tooling in TypeScript — deployment scripts, artifact managers, environment validators
  • Have working pipelines that are too slow (over 10 minutes for gate checks) or too fragile (flaky tests, non-deterministic builds)

You should be familiar with npm/pnpm/bun, TypeScript basics, and GitHub Actions YAML. Monorepo experience is helpful but not required.

What You Will Learn

By the end of this guide you will be able to:

  • Design a multi-stage CI/CD pipeline for TypeScript projects with proper caching
  • Implement type-safe CI scripts using tsx and zod for environment validation
  • Configure matrix builds for cross-platform and multi-Node-version testing
  • Write production-grade deployment workflows with health checks and rollback
  • Apply observability patterns to make pipeline failures fast to diagnose

Core Concepts

Key Terminology

Type checking vs compilation: tsc --noEmit validates types without producing output. It's fast and catches errors that Jest/Vitest won't. Separate it from the build step that produces deployable JS.

Module resolution: TypeScript supports multiple module resolution strategies (node, bundler, node16, nodenext). Mismatches between development and CI cause "works locally, fails in CI" bugs. Pin moduleResolution in tsconfig.json.

Build cache: Tools like Turborepo maintain a content-addressed cache of build outputs. If nothing changed in a package, outputs are restored from cache without recompilation.

Lockfile integrity: package-lock.json, pnpm-lock.yaml, or bun.lockb must be committed. CI should always install from the lockfile (npm ci, pnpm install --frozen-lockfile, bun install --frozen-lockfile) — never npm install which can silently update dependencies.

Artifact: For TypeScript services, artifacts are typically Docker images (not raw JS files). Build once in CI, push to a registry, deploy the immutable image to each environment.

OIDC authentication: Use OpenID Connect to authenticate GitHub Actions runners to cloud providers (AWS, GCP, Azure) without storing long-lived credentials as secrets. Supported natively by all major cloud providers.

Mental Models

The pipeline as typed code: Treat your GitHub Actions workflows and CI scripts like application code. Use TypeScript for scripts, validate inputs with Zod, handle errors explicitly. A pipeline that crashes with an unhandled promise rejection is just as bad as a production bug.

Cache invalidation as a dependency graph: Your cache key should encode everything that affects outputs. For a TypeScript project: Node.js version + lockfile hash + tsconfig hash. If any of these change, the cache is invalid. Over-broad cache keys waste storage; under-broad ones cause stale cache hits.
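The dependency-graph view of cache keys can be sketched directly: hash every input that affects the build output into one composite key. A minimal sketch, assuming the lockfile and tsconfig paths shown are your project's actual files:

```typescript
// Composite cache key: changes whenever any build input changes.
import { createHash } from "crypto";
import { readFileSync } from "fs";

function hashFile(path: string): string {
  return createHash("sha256").update(readFileSync(path)).digest("hex");
}

export function cacheKey(nodeVersion: string, files: string[]): string {
  const h = createHash("sha256");
  h.update(nodeVersion); // Node version is a build input too
  for (const f of files) h.update(hashFile(f));
  return `build-${h.digest("hex").slice(0, 16)}`;
}

// Example usage (file names illustrative):
// cacheKey(process.version, ["pnpm-lock.yaml", "tsconfig.json"]);
```

The same idea underlies `actions/setup-node`'s `cache: pnpm`, which keys on the lockfile hash for you.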

The three-environment contract: Local development, CI, and production should all resolve dependencies identically. This means the same Node.js version (use .nvmrc or engines field), the same lockfile, and the same environment variables (validated at startup).
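One half of that contract can be enforced mechanically: compare the running Node.js version against the `.nvmrc` pin at the top of every CI script. A sketch (the helper names are mine, not from the source):

```typescript
// Fail fast if the running Node.js version differs from the .nvmrc pin.
import { readFileSync } from "fs";

export function assertNodeVersion(nvmrcContents: string, current: string): void {
  const pinned = nvmrcContents.trim().replace(/^v/, "");
  const running = current.trim().replace(/^v/, "");
  if (pinned !== running) {
    throw new Error(`Node ${running} does not match pinned ${pinned} from .nvmrc`);
  }
}

export function checkAgainstNvmrc(path = ".nvmrc"): void {
  // process.version is "v20.14.0"-style; .nvmrc is usually bare "20.14.0"
  assertNodeVersion(readFileSync(path, "utf8"), process.version);
}
```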

Foundational Principles

  1. Type everything: CI scripts written in TypeScript with proper types catch misconfiguration before the script runs. A string | undefined environment variable validated with Zod is safer than a raw process.env.FOO.

  2. Fail fast: Run the cheapest checks first. Type checking (tsc --noEmit, ~30s) before tests (~5-10m). Lint (eslint, ~15s) before type check. Order jobs by cost, cheapest first.

  3. Build artifacts once: The Docker image built by CI is the exact image that runs in production. No rebuilding per environment. Tag images with the git SHA and promote them through environments.

  4. Measure everything: Gate on metrics: test coverage threshold (e.g., 80%), bundle size limit, build time. Regressions caught automatically don't become production incidents.


Architecture Overview

High-Level Design

A production TypeScript CI/CD pipeline has four phases:

```text
[Push/PR] → [Gate] → [Build & Package] → [Deploy]
               ↓            ↓                ↓
            (lint,       (Docker        (staging →
            typecheck,    image,         prod with
            test,         SBOM)          approval)
            coverage)
```

Gate phase runs on every push and PR. Must complete in under 8 minutes. Blocks merges if any check fails.

Build & Package phase runs on main merges. Produces a Docker image tagged with the git SHA and pushed to a container registry.

Deploy phase promotes the image. Staging is automatic after build; production requires a merge to a release tag or manual approval.

Component Breakdown

```text
.github/
  workflows/
    gate.yml            # PR checks: lint, typecheck, test, coverage
    build.yml           # Docker build + push
    deploy.yml          # Environment promotion
.nvmrc                  # Node.js version pin
tsconfig.json           # Shared TypeScript config
tsconfig.build.json     # Build-specific config (excludes test files)
scripts/
  validate-env.ts       # Environment variable validation
  check-deploy.ts       # Pre-deployment checks
  seed.ts               # Database seeding for test environments
```

Runner choice: GitHub-hosted ubuntu-latest for most jobs. Self-hosted only if you need persistent npm/pnpm cache on disk, which is rarely worth the operational overhead.

Package manager: pnpm is recommended for TypeScript projects. Faster than npm for installs, strict about phantom dependencies, and excellent monorepo support. bun is a viable alternative if your team is comfortable with it.

Data Flow

```text
Git Push
  │
  ├─ gate.yml triggered
  │    ├─ pnpm install --frozen-lockfile   (cached)
  │    ├─ tsc --noEmit                     (type check)
  │    ├─ eslint src/                      (lint)
  │    ├─ vitest run --coverage            (tests + coverage gate)
  │    └─ Upload coverage report
  │
  └─ (on merge to main) build.yml triggered
       ├─ pnpm install --frozen-lockfile
       ├─ pnpm build   (tsc → dist/)
       ├─ docker build --platform linux/amd64,linux/arm64
       ├─ Push image: registry/app:${GITHUB_SHA}
       └─ Trigger deploy.yml (staging auto-deploy)
```

Implementation Steps

Step 1: Project Setup

Pin the Node.js version with .nvmrc:

```text
20.14.0
```

And enforce it in package.json:

```json
{
  "engines": {
    "node": ">=20.14.0",
    "pnpm": ">=9.0.0"
  }
}
```

Configure TypeScript for strictness. Start with a shared base config:

```json
// tsconfig.json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "Node16",
    "moduleResolution": "Node16",
    "lib": ["ES2022"],
    "strict": true,
    "noUncheckedIndexedAccess": true,
    "exactOptionalPropertyTypes": true,
    "noImplicitReturns": true,
    "noFallthroughCasesInSwitch": true,
    "paths": {
      "@/*": ["./src/*"]
    }
  },
  "include": ["src/**/*", "scripts/**/*"],
  "exclude": ["node_modules", "dist"]
}
```

For the build step, use a separate config that excludes test files:

```json
// tsconfig.build.json
{
  "extends": "./tsconfig.json",
  "compilerOptions": {
    "outDir": "dist",
    "noEmit": false
  },
  "exclude": ["node_modules", "dist", "**/*.test.ts", "**/*.spec.ts"]
}
```

Step 2: Core Logic

The gate workflow is where most of the value lives. Here's a production-grade gate.yml:

```yaml
name: Gate

on:
  pull_request:
    branches: [main, "release/**"]
  push:
    branches: [main]

env:
  NODE_VERSION: "20.14.0"
  PNPM_VERSION: "9.4.0"

jobs:
  lint-typecheck:
    name: Lint & Type Check
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: pnpm/action-setup@v4
        with:
          version: ${{ env.PNPM_VERSION }}

      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: pnpm

      - name: Install dependencies
        run: pnpm install --frozen-lockfile

      - name: Type check
        run: pnpm tsc --noEmit

      - name: Lint
        run: pnpm eslint src/ --max-warnings 0

  test:
    name: Test (Node ${{ matrix.node }})
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        node: ["18.20.0", "20.14.0", "22.3.0"]
    steps:
      - uses: actions/checkout@v4

      - uses: pnpm/action-setup@v4
        with:
          version: ${{ env.PNPM_VERSION }}

      - uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node }}
          cache: pnpm

      - name: Install dependencies
        run: pnpm install --frozen-lockfile

      - name: Run tests with coverage
        run: pnpm vitest run --coverage --coverage.thresholds.lines=80

      - name: Upload coverage
        if: matrix.node == '20.14.0'
        uses: codecov/codecov-action@v4
        with:
          token: ${{ secrets.CODECOV_TOKEN }}

  required:
    name: Required Gates
    runs-on: ubuntu-latest
    needs: [lint-typecheck, test]
    if: always()
    steps:
      - name: Check all gates passed
        run: |
          if [[ "${{ needs.lint-typecheck.result }}" != "success" || \
                "${{ needs.test.result }}" != "success" ]]; then
            echo "Required gates failed"
            exit 1
          fi
```

Step 3: Integration

The build workflow produces Docker images:

```yaml
name: Build & Package

on:
  push:
    branches: [main]
    tags: ["v*"]

jobs:
  build:
    name: Build Docker Image
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
      packages: write
    outputs:
      image: ${{ steps.meta.outputs.tags }}
      digest: ${{ steps.build.outputs.digest }}
    steps:
      - uses: actions/checkout@v4

      - uses: pnpm/action-setup@v4
        with:
          version: "9.4.0"

      - uses: actions/setup-node@v4
        with:
          node-version: "20.14.0"
          cache: pnpm

      - name: Install and build
        run: |
          pnpm install --frozen-lockfile
          pnpm build

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Login to GHCR
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ghcr.io/${{ github.repository }}
          tags: |
            type=sha,prefix=,format=long
            type=ref,event=branch
            type=semver,pattern={{version}}

      - name: Build and push
        id: build
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          platforms: linux/amd64,linux/arm64
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
```


Code Examples

Basic Implementation

A type-safe environment validator using Zod — run this at the start of every pipeline script:

```typescript
// scripts/validate-env.ts
import { fileURLToPath } from "url";
import { z } from "zod";

const EnvSchema = z.object({
  NODE_ENV: z.enum(["development", "test", "production"]),
  DATABASE_URL: z.string().url("DATABASE_URL must be a valid URL"),
  REDIS_URL: z.string().url("REDIS_URL must be a valid URL"),
  PORT: z.coerce.number().int().min(1024).max(65535).default(3000),
  LOG_LEVEL: z.enum(["debug", "info", "warn", "error"]).default("info"),
  SENTRY_DSN: z.string().url().optional(),
});

type Env = z.infer<typeof EnvSchema>;

export function validateEnv(): Env {
  const result = EnvSchema.safeParse(process.env);

  if (!result.success) {
    const errors = result.error.flatten().fieldErrors;
    const messages = Object.entries(errors)
      .map(([key, msgs]) => `  ${key}: ${msgs?.join(", ")}`)
      .join("\n");

    console.error("Environment validation failed:\n" + messages);
    process.exit(1);
  }

  return result.data;
}

// When run directly (fileURLToPath handles Windows paths correctly,
// unlike a naive string replace on import.meta.url)
if (process.argv[1] === fileURLToPath(import.meta.url)) {
  const env = validateEnv();
  console.log("✓ Environment valid");
  console.log(`  NODE_ENV=${env.NODE_ENV}`);
  console.log(`  PORT=${env.PORT}`);
}
```

Run as tsx scripts/validate-env.ts in your deploy workflow before any deployment step.

Advanced Patterns

Typed release notes generator — extract changelog from git commits between tags:

```typescript
// scripts/generate-release-notes.ts
import { execSync } from "child_process";
import { z } from "zod";

const CommitSchema = z.object({
  sha: z.string(),
  message: z.string(),
  author: z.string(),
});

type Commit = z.infer<typeof CommitSchema>;

const COMMIT_TYPES: Record<string, string> = {
  feat: "Features",
  fix: "Bug Fixes",
  perf: "Performance",
  refactor: "Refactoring",
  docs: "Documentation",
  test: "Tests",
  chore: "Maintenance",
};

function getCommitsBetween(from: string, to: string): Commit[] {
  const output = execSync(
    `git log --pretty=format:"%H|%s|%an" ${from}..${to}`,
    { encoding: "utf8" }
  );

  return output
    .trim()
    .split("\n")
    .filter(Boolean)
    .map((line) => {
      const [sha, message, author] = line.split("|");
      return CommitSchema.parse({ sha, message, author });
    });
}

function categorizeCommits(commits: Commit[]): Map<string, Commit[]> {
  const categories = new Map<string, Commit[]>();

  for (const commit of commits) {
    // Parse conventional commit: type(scope): message
    const match = commit.message.match(/^(\w+)(?:\([^)]+\))?:\s+(.+)/);
    const type = match?.[1] ?? "other";
    const label = COMMIT_TYPES[type] ?? "Other Changes";

    if (!categories.has(label)) {
      categories.set(label, []);
    }
    categories.get(label)!.push(commit);
  }

  return categories;
}

function generateMarkdown(
  version: string,
  categories: Map<string, Commit[]>
): string {
  const lines: string[] = [`## ${version}\n`];

  for (const [category, commits] of categories) {
    lines.push(`### ${category}\n`);
    for (const commit of commits) {
      const shortSha = commit.sha.slice(0, 8);
      const msg = commit.message.replace(/^\w+(?:\([^)]+\))?:\s+/, "");
      lines.push(`- ${msg} (\`${shortSha}\`) — ${commit.author}`);
    }
    lines.push("");
  }

  return lines.join("\n");
}

// Main
const [, , fromTag, toTag = "HEAD"] = process.argv;

if (!fromTag) {
  console.error("Usage: tsx generate-release-notes.ts <from-tag> [to-tag]");
  process.exit(1);
}

const commits = getCommitsBetween(fromTag, toTag);
const version = toTag === "HEAD" ? "Unreleased" : toTag;
const categories = categorizeCommits(commits);
const notes = generateMarkdown(version, categories);

console.log(notes);
```

Monorepo change detection using Turborepo's dry-run output:

```typescript
// scripts/affected-packages.ts
import { execSync } from "child_process";
import { appendFileSync } from "fs";
import { z } from "zod";

export function getAffectedPackages(task: string): string[] {
  const output = execSync(
    `pnpm turbo run ${task} --dry-run=json`,
    { encoding: "utf8" }
  );

  try {
    // Turborepo outputs JSON describing what would run
    const parsed = JSON.parse(output);
    const tasks = z.array(z.object({ package: z.string() })).parse(
      parsed.tasks ?? []
    );

    return [...new Set(tasks.map((t) => t.package))];
  } catch {
    console.error("Failed to parse turbo output");
    return [];
  }
}

const affected = getAffectedPackages("build");
console.log("Affected packages:", affected);

// Expose to later workflow steps via the GITHUB_OUTPUT file
// (the old ::set-output workflow command is deprecated)
const githubOutput = process.env["GITHUB_OUTPUT"];
if (githubOutput) {
  appendFileSync(githubOutput, `packages=${JSON.stringify(affected)}\n`);
}
```

Production Hardening

Health-check aware deployment with typed retry logic:

```typescript
// scripts/check-deploy.ts
import { z } from "zod";

const DeployConfigSchema = z.object({
  healthCheckUrl: z.string().url(),
  maxRetries: z.number().int().min(1).max(60).default(30),
  retryIntervalMs: z.number().int().min(1000).max(30000).default(10000),
  expectedStatus: z.number().int().default(200),
  service: z.string(),
  imageTag: z.string(),
});

type DeployConfig = z.infer<typeof DeployConfigSchema>;

async function checkHealth(
  url: string,
  expectedStatus: number
): Promise<{ ok: boolean; status: number; latencyMs: number }> {
  const start = Date.now();
  try {
    const response = await fetch(url, {
      signal: AbortSignal.timeout(5000),
    });
    return {
      ok: response.status === expectedStatus,
      status: response.status,
      latencyMs: Date.now() - start,
    };
  } catch {
    return { ok: false, status: 0, latencyMs: Date.now() - start };
  }
}

async function waitForHealthy(config: DeployConfig): Promise<void> {
  let attempts = 0;

  while (attempts < config.maxRetries) {
    attempts++;
    const result = await checkHealth(
      config.healthCheckUrl,
      config.expectedStatus
    );

    if (result.ok) {
      console.log(
        `✓ Service healthy after ${attempts} attempts (${result.latencyMs}ms)`
      );
      return;
    }

    console.log(
      `  Attempt ${attempts}/${config.maxRetries}: ` +
        `status=${result.status}, latency=${result.latencyMs}ms`
    );

    if (attempts < config.maxRetries) {
      await new Promise((resolve) =>
        setTimeout(resolve, config.retryIntervalMs)
      );
    }
  }

  throw new Error(
    `Service failed to become healthy after ${config.maxRetries} attempts`
  );
}

async function main(): Promise<void> {
  const config = DeployConfigSchema.parse({
    healthCheckUrl: process.env["HEALTH_CHECK_URL"],
    maxRetries: Number(process.env["MAX_RETRIES"] ?? 30),
    retryIntervalMs: Number(process.env["RETRY_INTERVAL_MS"] ?? 10000),
    service: process.env["SERVICE_NAME"],
    imageTag: process.env["IMAGE_TAG"],
  });

  console.log(`Checking deployment: ${config.service}:${config.imageTag}`);

  try {
    await waitForHealthy(config);
  } catch (err) {
    console.error("Deployment health check failed:", err);
    process.exit(1);
  }
}

main();
```

Performance Considerations

Latency Optimization

The biggest wins for TypeScript CI speed:

pnpm caching — The actions/setup-node action supports cache: pnpm which caches the pnpm store between runs. Combined with --frozen-lockfile, dependency installs drop from 45-90 seconds to 5-10 seconds on a warm cache.

esbuild for compilation — tsc is slow for large projects. Use esbuild or tsup for the production build step; keep tsc --noEmit only for type checking:

```json
// package.json
{
  "scripts": {
    "typecheck": "tsc --noEmit",
    "build": "tsup src/index.ts --format cjs,esm --dts",
    "build:app": "esbuild src/server.ts --bundle --platform=node --target=node20 --outfile=dist/server.js"
  }
}
```

esbuild builds in 200-500ms vs. tsc in 10-30+ seconds for large codebases.

Vitest over Jest — Vitest is 2-5x faster than Jest for TypeScript projects because it uses esbuild natively and doesn't require a separate transform configuration. Migration from Jest is usually straightforward.

Parallelism in jobs — Run type checking and linting in one job, tests in a matrix. They don't depend on each other. GitHub Actions parallelizes jobs automatically.

Memory Management

Node.js has a default heap size of ~1.5GB. Large TypeScript projects with many imports can hit this during compilation. Increase it:

```yaml
- name: Type check
  env:
    NODE_OPTIONS: "--max-old-space-size=4096"
  run: pnpm tsc --noEmit
```

On GitHub Actions, the ubuntu-latest runner has 7GB RAM — 4GB for Node is reasonable. For Docker builds, the 7GB limit applies to docker buildx too; keep build stages minimal to avoid OOM.

Watch for memory leaks in test suites. Vitest's --pool=forks runs each test file in a separate process, which isolates leaks but increases startup overhead. Use --pool=threads (the default) unless you see shared state issues.

Load Testing

After deploying to staging, run a brief load test to catch regressions:

```typescript
// scripts/load-test.ts
const BASE_URL = process.env["STAGING_URL"] ?? "http://localhost:3000";
const DURATION_SECONDS = 60;
const CONCURRENCY = 50;

interface Result {
  totalRequests: number;
  successRate: number;
  p50Ms: number;
  p99Ms: number;
}

async function runLoadTest(): Promise<Result> {
  const latencies: number[] = [];
  let success = 0;
  let total = 0;
  const end = Date.now() + DURATION_SECONDS * 1000;

  const worker = async () => {
    while (Date.now() < end) {
      const start = Date.now();
      try {
        const res = await fetch(`${BASE_URL}/api/health`);
        if (res.ok) success++;
      } catch {
        // counted as failure
      }
      latencies.push(Date.now() - start);
      total++;
    }
  };

  await Promise.all(Array.from({ length: CONCURRENCY }, worker));

  latencies.sort((a, b) => a - b);

  return {
    totalRequests: total,
    successRate: success / total,
    p50Ms: latencies[Math.floor(latencies.length * 0.5)] ?? 0,
    p99Ms: latencies[Math.floor(latencies.length * 0.99)] ?? 0,
  };
}

const result = await runLoadTest();
console.log("Load test results:", result);

if (result.successRate < 0.999 || result.p99Ms > 500) {
  console.error("Load test failed acceptance criteria");
  process.exit(1);
}
```

Testing Strategy

Unit Tests

Use Vitest for unit tests. It's fast, TypeScript-native, and compatible with Jest's API:

```typescript
// src/lib/artifact.test.ts
import { describe, it, expect } from "vitest";
import { Artifact, tagForCommit } from "./artifact";

describe("Artifact", () => {
  it("generates short SHA tag", () => {
    const artifact: Artifact = {
      name: "myapp",
      registry: "ghcr.io/myorg",
    };

    const sha = "a1b2c3d4e5f6789012345678901234567890abcd";
    const tag = tagForCommit(artifact, sha);

    expect(tag).toBe("ghcr.io/myorg/myapp:a1b2c3d4e5f6");
  });

  it("rejects short SHAs", () => {
    const artifact: Artifact = { name: "app", registry: "registry.io" };
    expect(() => tagForCommit(artifact, "abc")).toThrowError(
      "SHA must be at least 12 characters"
    );
  });
});
```

Run with vitest run --reporter=verbose in CI for clear output on failures.
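The module under test is not shown in this guide; a minimal sketch that satisfies the tests above (shapes inferred from the test cases, so treat it as illustrative) might look like:

```typescript
// src/lib/artifact.ts (sketch inferred from the tests above)
export interface Artifact {
  name: string;
  registry: string;
}

// Tag an image with the first 12 characters of the commit SHA.
export function tagForCommit(artifact: Artifact, sha: string): string {
  if (sha.length < 12) {
    throw new Error("SHA must be at least 12 characters");
  }
  return `${artifact.registry}/${artifact.name}:${sha.slice(0, 12)}`;
}
```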

Integration Tests

Integration tests verify that components work together — database queries, HTTP handlers, message queue consumers:

```typescript
// tests/integration/api.test.ts
import { describe, it, expect, beforeAll, afterAll } from "vitest";
import { createApp } from "../../src/app";
import type { FastifyInstance } from "fastify";

describe("API Integration", () => {
  let app: FastifyInstance;

  beforeAll(async () => {
    app = await createApp({
      databaseUrl: process.env["TEST_DATABASE_URL"] ?? "",
      logLevel: "silent",
    });
    await app.ready();
  });

  afterAll(async () => {
    await app.close();
  });

  it("returns 200 from health endpoint", async () => {
    const response = await app.inject({
      method: "GET",
      url: "/api/health",
    });

    expect(response.statusCode).toBe(200);
    expect(response.json()).toMatchObject({
      status: "ok",
      version: expect.any(String),
    });
  });

  it("rejects unauthenticated requests to protected routes", async () => {
    const response = await app.inject({
      method: "GET",
      url: "/api/users/me",
    });

    expect(response.statusCode).toBe(401);
  });
});
```

In CI, spin up Postgres and Redis with Docker services in the workflow:

```yaml
services:
  postgres:
    image: postgres:16-alpine
    env:
      POSTGRES_DB: testdb
      POSTGRES_USER: test
      POSTGRES_PASSWORD: test
    options: >-
      --health-cmd pg_isready
      --health-interval 5s
      --health-timeout 3s
      --health-retries 5

  redis:
    image: redis:7-alpine
    options: >-
      --health-cmd "redis-cli ping"
      --health-interval 5s
```

End-to-End Validation

E2E tests run against a fully deployed staging environment using Playwright or a typed HTTP client:

```typescript
// tests/e2e/smoke.test.ts
import { describe, it, expect } from "vitest";

const BASE_URL = process.env["E2E_BASE_URL"];
if (!BASE_URL) throw new Error("E2E_BASE_URL is required");

describe("Smoke Tests", () => {
  it("health endpoint returns ok", async () => {
    const res = await fetch(`${BASE_URL}/api/health`);
    const body = await res.json();

    expect(res.status).toBe(200);
    expect(body.status).toBe("ok");
  });

  it("static assets load", async () => {
    const res = await fetch(`${BASE_URL}/`);
    expect(res.status).toBe(200);
    expect(res.headers.get("content-type")).toContain("text/html");
  });

  it("API rejects invalid auth", async () => {
    const res = await fetch(`${BASE_URL}/api/users/me`, {
      headers: { Authorization: "Bearer invalid-token" },
    });
    expect(res.status).toBe(401);
  });
});
```

Run E2E tests as the final step in your deployment workflow, after the service is confirmed healthy. Fail the deployment if E2E fails, and trigger a rollback automatically.
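The automatic rollback can be a small typed script. This is a sketch only: the `deploy.sh` command and the injectable runner are assumptions standing in for whatever redeploy mechanism your platform provides.

```typescript
// Sketch: roll back to the last known-good image tag after a failed E2E run.
// deploy.sh is a hypothetical redeploy entry point; substitute your own.
import { execSync } from "child_process";

type Runner = (cmd: string) => void;

export function rollbackCommand(service: string, previousTag: string): string {
  return `./deploy.sh ${service} ${previousTag}`;
}

export function rollback(
  service: string,
  previousTag: string,
  // Injectable runner so the logic is testable without shelling out
  run: Runner = (cmd) => execSync(cmd, { stdio: "inherit" })
): void {
  console.error(`E2E failed, rolling ${service} back to ${previousTag}`);
  run(rollbackCommand(service, previousTag));
}
```

Because the artifact is an immutable image tagged by SHA, rollback is just redeploying the previous tag, with no rebuild involved.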


Conclusion

TypeScript's type system is its defining advantage for CI/CD pipeline design. When you validate environment variables with zod, type-check your pipeline scripts with tsc --noEmit, and enforce strict tsconfig settings like noUncheckedIndexedAccess, you catch an entire category of pipeline failures at development time rather than at 2am during a deployment. Combined with the npm ecosystem's breadth — first-party SDKs for every major cloud provider and the official @actions/toolkit — TypeScript gives you both safety and ecosystem reach.

The practical starting point is straightforward: pin your Node.js version with .nvmrc, use pnpm with --frozen-lockfile in CI, and separate your type-checking config (tsconfig.json) from your build config (tsconfig.build.json). Run the cheapest checks first — eslint in 15 seconds, tsc --noEmit in 30 seconds, then vitest for the full test suite. Build a single Docker image tagged with the git SHA, push it to your registry, and promote that exact artifact through staging and production. The same discipline you apply to application code — strict types, comprehensive tests, immutable deployments — applies directly to the pipeline code that ships it.
