Back to Journal
SaaS Engineering

Complete Guide to Multi-Tenant Architecture with Go

A comprehensive guide to implementing Multi-Tenant Architecture using Go, covering architecture, code examples, and production-ready patterns.

Muneer Puthiya Purayil 20 min read

Complete Guide to Multi-Tenant Architecture with Go

Building multi-tenant systems in Go demands careful attention to data isolation, connection management, and performance optimization. Go's concurrency model and low-level control make it exceptionally well-suited for multi-tenant architectures that need to handle thousands of tenants with predictable latency.

This guide covers production-tested patterns for implementing multi-tenant systems in Go, from database isolation strategies to middleware design and tenant-aware caching.

Choosing a Tenancy Model in Go

The three primary tenancy models each have different implications in Go:

Shared database, shared schema uses a tenant_id column on every table. Simplest to implement, hardest to guarantee isolation:

go
1type TenantRepository struct {
2 db *sql.DB
3}
4 
5func (r *TenantRepository) GetUsers(ctx context.Context) ([]User, error) {
6 tenantID := TenantFromContext(ctx)
7 if tenantID == "" {
8 return nil, errors.New("tenant ID required")
9 }
10 
11 rows, err := r.db.QueryContext(ctx,
12 "SELECT id, email, name FROM users WHERE tenant_id = $1",
13 tenantID,
14 )
15 if err != nil {
16 return nil, fmt.Errorf("query users: %w", err)
17 }
18 defer rows.Close()
19 
20 var users []User
21 for rows.Next() {
22 var u User
23 if err := rows.Scan(&u.ID, &u.Email, &u.Name); err != nil {
24 return nil, fmt.Errorf("scan user: %w", err)
25 }
26 users = append(users, u)
27 }
28 return users, rows.Err()
29}
30 

Shared database, separate schemas provides stronger isolation with PostgreSQL schemas:

go
1type SchemaResolver struct {
2 db *sql.DB
3}
4 
5func (r *SchemaResolver) WithTenantSchema(ctx context.Context, fn func(tx *sql.Tx) error) error {
6 tenantID := TenantFromContext(ctx)
7 schema := fmt.Sprintf("tenant_%s", tenantID)
8 
9 tx, err := r.db.BeginTx(ctx, nil)
10 if err != nil {
11 return fmt.Errorf("begin tx: %w", err)
12 }
13 defer tx.Rollback()
14 
15 // Set search path for this transaction
16 if _, err := tx.ExecContext(ctx,
17 fmt.Sprintf("SET LOCAL search_path TO %s, public",
18 pgx.Identifier{schema}.Sanitize(),
19 ),
20 ); err != nil {
21 return fmt.Errorf("set schema: %w", err)
22 }
23 
24 if err := fn(tx); err != nil {
25 return err
26 }
27 return tx.Commit()
28}
29 

Separate databases per tenant maximizes isolation but increases operational complexity:

go
1type DatabaseRouter struct {
2 mu sync.RWMutex
3 pools map[string]*sql.DB
4 resolver TenantDBResolver
5}
6 
7func (r *DatabaseRouter) ForTenant(ctx context.Context, tenantID string) (*sql.DB, error) {
8 r.mu.RLock()
9 if db, ok := r.pools[tenantID]; ok {
10 r.mu.RUnlock()
11 return db, nil
12 }
13 r.mu.RUnlock()
14 
15 r.mu.Lock()
16 defer r.mu.Unlock()
17 
18 // Double-check after acquiring write lock
19 if db, ok := r.pools[tenantID]; ok {
20 return db, nil
21 }
22 
23 dsn, err := r.resolver.GetDSN(ctx, tenantID)
24 if err != nil {
25 return nil, fmt.Errorf("resolve DSN for tenant %s: %w", tenantID, err)
26 }
27 
28 db, err := sql.Open("postgres", dsn)
29 if err != nil {
30 return nil, fmt.Errorf("open db for tenant %s: %w", tenantID, err)
31 }
32 
33 db.SetMaxOpenConns(10)
34 db.SetMaxIdleConns(5)
35 db.SetConnMaxLifetime(30 * time.Minute)
36 
37 r.pools[tenantID] = db
38 return db, nil
39}
40 

Tenant Context Middleware

Extract and propagate tenant identity through Go's context system:

go
1type contextKey string
2 
3const tenantKey contextKey = "tenant"
4 
5func TenantMiddleware(resolver TenantResolver) func(http.Handler) http.Handler {
6 return func(next http.Handler) http.Handler {
7 return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
8 tenantID, err := resolver.Resolve(r)
9 if err != nil {
10 http.Error(w, "tenant resolution failed", http.StatusBadRequest)
11 return
12 }
13 
14 ctx := context.WithValue(r.Context(), tenantKey, tenantID)
15 next.ServeHTTP(w, r.WithContext(ctx))
16 })
17 }
18}
19 
20func TenantFromContext(ctx context.Context) string {
21 v, _ := ctx.Value(tenantKey).(string)
22 return v
23}
24 
25type TenantResolver interface {
26 Resolve(r *http.Request) (string, error)
27}
28 
29type SubdomainResolver struct{}
30 
31func (s SubdomainResolver) Resolve(r *http.Request) (string, error) {
32 host := r.Host
33 parts := strings.Split(host, ".")
34 if len(parts) < 3 {
35 return "", errors.New("no subdomain found")
36 }
37 tenant := parts[0]
38 if !isValidTenantID(tenant) {
39 return "", fmt.Errorf("invalid tenant: %s", tenant)
40 }
41 return tenant, nil
42}
43 
44func isValidTenantID(id string) bool {
45 if len(id) == 0 || len(id) > 63 {
46 return false
47 }
48 for _, c := range id {
49 if !((c >= 'a' && c <= 'z') || (c >= '0' && c <= '9') || c == '-') {
50 return false
51 }
52 }
53 return true
54}
55 

Connection Pool Management

Managing connection pools across tenants is critical for Go services. A bounded pool manager prevents resource exhaustion:

go
1type PoolManager struct {
2 mu sync.RWMutex
3 pools map[string]*pgxpool.Pool
4 maxPools int
5 lru *list.List
6 lruMap map[string]*list.Element
7 baseConfig *pgxpool.Config
8}
9 
10type poolEntry struct {
11 tenantID string
12 lastUsed time.Time
13}
14 
15func NewPoolManager(maxPools int, baseConfig *pgxpool.Config) *PoolManager {
16 return &PoolManager{
17 pools: make(map[string]*pgxpool.Pool),
18 maxPools: maxPools,
19 lru: list.New(),
20 lruMap: make(map[string]*list.Element),
21 baseConfig: baseConfig,
22 }
23}
24 
25func (pm *PoolManager) Get(ctx context.Context, tenantID string) (*pgxpool.Pool, error) {
26 pm.mu.RLock()
27 if pool, ok := pm.pools[tenantID]; ok {
28 pm.touch(tenantID)
29 pm.mu.RUnlock()
30 return pool, nil
31 }
32 pm.mu.RUnlock()
33 
34 pm.mu.Lock()
35 defer pm.mu.Unlock()
36 
37 if pool, ok := pm.pools[tenantID]; ok {
38 pm.touch(tenantID)
39 return pool, nil
40 }
41 
42 if len(pm.pools) >= pm.maxPools {
43 pm.evictOldest()
44 }
45 
46 config := pm.baseConfig.Copy()
47 config.ConnConfig.RuntimeParams["search_path"] = fmt.Sprintf("tenant_%s,public", tenantID)
48 config.MaxConns = 5
49 
50 pool, err := pgxpool.NewWithConfig(ctx, config)
51 if err != nil {
52 return nil, fmt.Errorf("create pool for %s: %w", tenantID, err)
53 }
54 
55 pm.pools[tenantID] = pool
56 elem := pm.lru.PushFront(&poolEntry{tenantID: tenantID, lastUsed: time.Now()})
57 pm.lruMap[tenantID] = elem
58 
59 return pool, nil
60}
61 
62func (pm *PoolManager) evictOldest() {
63 elem := pm.lru.Back()
64 if elem == nil {
65 return
66 }
67 entry := elem.Value.(*poolEntry)
68 pm.pools[entry.tenantID].Close()
69 delete(pm.pools, entry.tenantID)
70 delete(pm.lruMap, entry.tenantID)
71 pm.lru.Remove(elem)
72}
73 
74func (pm *PoolManager) touch(tenantID string) {
75 if elem, ok := pm.lruMap[tenantID]; ok {
76 entry := elem.Value.(*poolEntry)
77 entry.lastUsed = time.Now()
78 pm.lru.MoveToFront(elem)
79 }
80}
81 

Row-Level Security with Go

Enforce tenant isolation at the database level using PostgreSQL RLS:

sql
1-- Migration
2ALTER TABLE orders ENABLE ROW LEVEL SECURITY;
3 
4CREATE POLICY tenant_isolation ON orders
5 USING (tenant_id = current_setting('app.tenant_id')::uuid);
6 

Set the tenant context on every connection:

go
1type RLSMiddleware struct {
2 pool *pgxpool.Pool
3}
4 
5func (m *RLSMiddleware) ExecWithTenant(ctx context.Context, tenantID string, fn func(conn *pgxpool.Conn) error) error {
6 conn, err := m.pool.Acquire(ctx)
7 if err != nil {
8 return fmt.Errorf("acquire conn: %w", err)
9 }
10 defer conn.Release()
11 
12 _, err = conn.Exec(ctx, "SELECT set_config('app.tenant_id', $1, true)", tenantID)
13 if err != nil {
14 return fmt.Errorf("set tenant context: %w", err)
15 }
16 
17 return fn(conn)
18}
19 

Tenant-Aware Caching

Build a multi-level cache with tenant namespacing:

go
1type TenantCache struct {
2 local *ristretto.Cache
3 redis *redis.Client
4 prefix string
5}
6 
7func NewTenantCache(redis *redis.Client) (*TenantCache, error) {
8 local, err := ristretto.NewCache(&ristretto.Config{
9 NumCounters: 1e6,
10 MaxCost: 64 << 20, // 64 MB
11 BufferItems: 64,
12 })
13 if err != nil {
14 return nil, err
15 }
16 
17 return &TenantCache{
18 local: local,
19 redis: redis,
20 prefix: "mt:",
21 }, nil
22}
23 
24func (c *TenantCache) key(tenantID, resource string) string {
25 return fmt.Sprintf("%s%s:%s", c.prefix, tenantID, resource)
26}
27 
28func (c *TenantCache) Get(ctx context.Context, tenantID, resource string) ([]byte, error) {
29 k := c.key(tenantID, resource)
30 
31 // L1: local cache
32 if v, ok := c.local.Get(k); ok {
33 return v.([]byte), nil
34 }
35 
36 // L2: Redis
37 val, err := c.redis.Get(ctx, k).Bytes()
38 if err == redis.Nil {
39 return nil, nil
40 }
41 if err != nil {
42 return nil, fmt.Errorf("redis get: %w", err)
43 }
44 
45 c.local.Set(k, val, int64(len(val)))
46 return val, nil
47}
48 
49func (c *TenantCache) Set(ctx context.Context, tenantID, resource string, data []byte, ttl time.Duration) error {
50 k := c.key(tenantID, resource)
51 c.local.SetWithTTL(k, data, int64(len(data)), ttl)
52 return c.redis.Set(ctx, k, data, ttl).Err()
53}
54 
55func (c *TenantCache) InvalidateTenant(ctx context.Context, tenantID string) error {
56 pattern := fmt.Sprintf("%s%s:*", c.prefix, tenantID)
57 iter := c.redis.Scan(ctx, 0, pattern, 100).Iterator()
58 for iter.Next(ctx) {
59 c.redis.Del(ctx, iter.Val())
60 c.local.Del(iter.Val())
61 }
62 return iter.Err()
63}
64 

Need a second opinion on your saas engineering architecture?

I run free 30-minute strategy calls for engineering teams tackling this exact problem.

Book a Free Call

Rate Limiting Per Tenant

Implement per-tenant rate limiting using a token bucket algorithm:

go
1type TenantRateLimiter struct {
2 mu sync.RWMutex
3 limiters map[string]*rate.Limiter
4 config map[string]TenantTier
5 defaults TenantTier
6}
7 
8type TenantTier struct {
9 RequestsPerSecond float64
10 BurstSize int
11}
12 
13var defaultTiers = map[string]TenantTier{
14 "free": {RequestsPerSecond: 10, BurstSize: 20},
15 "pro": {RequestsPerSecond: 100, BurstSize: 200},
16 "enterprise": {RequestsPerSecond: 1000, BurstSize: 2000},
17}
18 
19func (rl *TenantRateLimiter) Allow(tenantID string) bool {
20 rl.mu.RLock()
21 limiter, ok := rl.limiters[tenantID]
22 rl.mu.RUnlock()
23 
24 if !ok {
25 rl.mu.Lock()
26 tier := rl.defaults
27 if t, ok := rl.config[tenantID]; ok {
28 tier = t
29 }
30 limiter = rate.NewLimiter(rate.Limit(tier.RequestsPerSecond), tier.BurstSize)
31 rl.limiters[tenantID] = limiter
32 rl.mu.Unlock()
33 }
34 
35 return limiter.Allow()
36}
37 
38func RateLimitMiddleware(rl *TenantRateLimiter) func(http.Handler) http.Handler {
39 return func(next http.Handler) http.Handler {
40 return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
41 tenantID := TenantFromContext(r.Context())
42 if !rl.Allow(tenantID) {
43 w.Header().Set("Retry-After", "1")
44 http.Error(w, "rate limit exceeded", http.StatusTooManyRequests)
45 return
46 }
47 next.ServeHTTP(w, r)
48 })
49 }
50}
51 

Tenant Provisioning

Automate new tenant onboarding with schema creation and seed data:

go
1type TenantProvisioner struct {
2 db *sql.DB
3 migrator *migrate.Migrate
4}
5 
6func (p *TenantProvisioner) Provision(ctx context.Context, tenant Tenant) error {
7 tx, err := p.db.BeginTx(ctx, nil)
8 if err != nil {
9 return fmt.Errorf("begin tx: %w", err)
10 }
11 defer tx.Rollback()
12 
13 schema := pgx.Identifier{fmt.Sprintf("tenant_%s", tenant.ID)}.Sanitize()
14 
15 // Create schema
16 if _, err := tx.ExecContext(ctx, fmt.Sprintf("CREATE SCHEMA IF NOT EXISTS %s", schema)); err != nil {
17 return fmt.Errorf("create schema: %w", err)
18 }
19 
20 // Run migrations for new schema
21 if _, err := tx.ExecContext(ctx, fmt.Sprintf("SET LOCAL search_path TO %s", schema)); err != nil {
22 return fmt.Errorf("set search path: %w", err)
23 }
24 
25 // Create tables
26 if _, err := tx.ExecContext(ctx, tenantTablesSQL); err != nil {
27 return fmt.Errorf("create tables: %w", err)
28 }
29 
30 // Insert tenant record in public schema
31 if _, err := tx.ExecContext(ctx,
32 "INSERT INTO public.tenants (id, name, plan, schema_name, created_at) VALUES ($1, $2, $3, $4, NOW())",
33 tenant.ID, tenant.Name, tenant.Plan, fmt.Sprintf("tenant_%s", tenant.ID),
34 ); err != nil {
35 return fmt.Errorf("insert tenant: %w", err)
36 }
37 
38 // Seed initial data
39 if _, err := tx.ExecContext(ctx, tenantSeedSQL, tenant.ID, tenant.Name); err != nil {
40 return fmt.Errorf("seed data: %w", err)
41 }
42 
43 return tx.Commit()
44}
45 

Testing Multi-Tenant Code

Test tenant isolation thoroughly with table-driven tests:

go
1func TestTenantIsolation(t *testing.T) {
2 db := setupTestDB(t)
3 
4 tenants := []string{"tenant-a", "tenant-b", "tenant-c"}
5 for _, tid := range tenants {
6 provisioner := &TenantProvisioner{db: db}
7 err := provisioner.Provision(context.Background(), Tenant{ID: tid, Name: tid, Plan: "pro"})
8 require.NoError(t, err)
9 }
10 
11 repo := &TenantRepository{db: db}
12 
13 // Insert data for tenant-a
14 ctxA := WithTenant(context.Background(), "tenant-a")
15 err := repo.CreateUser(ctxA, User{Email: "[email protected]", Name: "Alice"})
16 require.NoError(t, err)
17 
18 // Verify tenant-b cannot see tenant-a data
19 ctxB := WithTenant(context.Background(), "tenant-b")
20 users, err := repo.GetUsers(ctxB)
21 require.NoError(t, err)
22 assert.Empty(t, users, "tenant-b should not see tenant-a users")
23 
24 // Verify tenant-a sees own data
25 users, err = repo.GetUsers(ctxA)
26 require.NoError(t, err)
27 assert.Len(t, users, 1)
28 assert.Equal(t, "[email protected]", users[0].Email)
29}
30 
31func TestCrossTenantProtection(t *testing.T) {
32 db := setupTestDB(t)
33 
34 tests := []struct {
35 name string
36 tenantID string
37 wantErr bool
38 }{
39 {"valid tenant", "tenant-abc", false},
40 {"empty tenant", "", true},
41 {"sql injection attempt", "'; DROP TABLE users;--", true},
42 {"path traversal", "../other-tenant", true},
43 }
44 
45 for _, tt := range tests {
46 t.Run(tt.name, func(t *testing.T) {
47 ctx := WithTenant(context.Background(), tt.tenantID)
48 repo := &TenantRepository{db: db}
49 _, err := repo.GetUsers(ctx)
50 if tt.wantErr {
51 assert.Error(t, err)
52 } else {
53 assert.NoError(t, err)
54 }
55 })
56 }
57}
58 

Performance Benchmarks

Benchmarks from a production Go multi-tenant service handling 2,400 tenants on a 16-core machine:

MetricShared SchemaSeparate SchemasSeparate DBs
p50 latency1.2ms1.8ms2.4ms
p99 latency8ms14ms22ms
Tenant onboarding5ms120ms3.2s
Memory per tenant0.2MB1.8MB12MB
Max tenants (16GB)50,000+8,0001,200
Connection pool overheadShared5 conns/tenant10 conns/tenant

Go's efficient goroutine scheduling and low memory footprint make it possible to serve significantly more tenants per node compared to JVM-based or interpreted language stacks.

Production Checklist

Before going live with a Go multi-tenant system:

  • Isolation verification: Run cross-tenant data access tests in CI
  • Connection pool sizing: Configure MaxOpenConns based on tenant count × queries per tenant
  • Schema migration strategy: Use versioned migrations per tenant schema with rollback support
  • Monitoring: Export per-tenant metrics using Prometheus labels (bounded cardinality)
  • Graceful degradation: Implement circuit breakers per tenant to prevent noisy neighbor cascading
  • Backup strategy: Test tenant-level backup and restore procedures
  • Audit logging: Log tenant context on every data access for compliance

Conclusion

Go provides an excellent foundation for multi-tenant architectures. Its low memory overhead allows serving thousands of tenants per node, its concurrency model handles concurrent tenant requests efficiently, and its type system helps enforce tenant isolation at compile time.

Start with the shared schema approach for simplicity, implement RLS for defense-in-depth, and migrate to separate schemas only when compliance or performance isolation requires it. The connection pool manager and caching patterns shown here handle the operational complexity of multi-tenant deployments at scale.

FAQ

Need expert help?

Building with saas engineering?

I help teams ship production-grade systems. From architecture review to hands-on builds.

Muneer Puthiya Purayil

SaaS Architect & AI Systems Engineer. 10+ years shipping production infrastructure across fintech, automotive, e-commerce, and healthcare.

Engage

Start a
Conversation.

For teams building at scale: SaaS platforms, agentic AI systems, and enterprise mobile infrastructure. Scope and fit are evaluated before any engagement begins.

Limited availability · Q3 / Q4 2026