
Progressive Web Apps at Scale: Lessons from Production

Real-world lessons from implementing Progressive Web Apps in production, including architecture decisions, measurable results, and honest retrospectives.

Muneer Puthiya Purayil · 10 min read

We migrated a logistics management platform serving 2,400 daily active users from a native iOS/Android app to a Progressive Web App. This case study covers the architecture decisions, implementation challenges, and measured results over 14 months of production operation.

Context and Motivation

The platform — a fleet management tool for a mid-size logistics company — had separate iOS and Android apps maintained by a team of four mobile developers. Feature parity between platforms lagged by 3-6 weeks, and each app store submission added 2-5 days to release cycles. The desktop experience was a separate React SPA with its own API integration layer.

The business case for PWA migration was straightforward:

  • Eliminate platform fragmentation: One codebase instead of three
  • Reduce release cycle time: Deploy updates instantly without app store review
  • Lower maintenance burden: Reduce the mobile team from four developers to two
  • Improve field worker experience: Drivers needed offline access to delivery manifests and route data

Architecture Decisions

Offline-First for Field Operations

Drivers operate in areas with intermittent connectivity — warehouses, rural delivery routes, underground parking structures. We designed the offline layer around two principles:

  1. Read data is always available: Delivery manifests, route data, and customer information are cached aggressively
  2. Write operations queue and retry: Delivery confirmations, signature captures, and status updates queue locally and sync when connectivity returns
```typescript
// Caching strategy by resource type
const CACHE_STRATEGIES: Record<string, CacheStrategy> = {
  '/api/manifests/*': {
    strategy: 'cache-first',
    maxAge: 4 * 60 * 60 * 1000, // 4 hours
    maxEntries: 200,
  },
  '/api/routes/*': {
    strategy: 'cache-first',
    maxAge: 2 * 60 * 60 * 1000, // 2 hours
    maxEntries: 50,
  },
  '/api/customers/*': {
    strategy: 'stale-while-revalidate',
    maxAge: 24 * 60 * 60 * 1000, // 24 hours
    maxEntries: 1000,
  },
  '/api/deliveries': {
    strategy: 'network-first',
    timeout: 3000, // Fall back to cache after 3s
    maxEntries: 500,
  },
};
```
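A minimal sketch of how these patterns might be resolved at request time. The helper name `matchStrategy` reappears in the service worker's fetch handler; the matching rule here (a trailing `*` as a path-prefix wildcard, everything else exact) is an assumption about how the table is interpreted, not the production implementation:

```typescript
interface CacheStrategy {
  strategy: 'cache-first' | 'network-first' | 'stale-while-revalidate';
  maxAge?: number;
  maxEntries?: number;
  timeout?: number;
}

// Subset of the table above, enough to exercise the matcher
const CACHE_STRATEGIES: Record<string, CacheStrategy> = {
  '/api/manifests/*': { strategy: 'cache-first', maxAge: 4 * 60 * 60 * 1000, maxEntries: 200 },
  '/api/deliveries': { strategy: 'network-first', timeout: 3000, maxEntries: 500 },
};

// Resolve a request path to its caching strategy; a trailing '*' acts as
// a prefix wildcard, all other keys must match exactly.
function matchStrategy(pathname: string): CacheStrategy | undefined {
  for (const [pattern, strategy] of Object.entries(CACHE_STRATEGIES)) {
    if (pattern.endsWith('*')) {
      if (pathname.startsWith(pattern.slice(0, -1))) return strategy;
    } else if (pathname === pattern) {
      return strategy;
    }
  }
  return undefined; // unmatched endpoints fall through to a plain network fetch
}
```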

Service Worker Architecture

We chose a layered service worker design:

```typescript
// sw.ts — Main service worker
importScripts('/sw-cache.js');
importScripts('/sw-sync.js');
importScripts('/sw-push.js');

self.addEventListener('fetch', (event: FetchEvent) => {
  const url = new URL(event.request.url);

  // Static assets: cache-first
  if (url.pathname.match(/\.(js|css|png|svg|woff2)$/)) {
    event.respondWith(cacheFirst(event.request, 'static-v12'));
    return;
  }

  // API calls: strategy based on endpoint
  if (url.pathname.startsWith('/api/')) {
    const strategy = matchStrategy(url.pathname);
    event.respondWith(executeStrategy(event.request, strategy));
    return;
  }

  // Navigation: app shell
  if (event.request.mode === 'navigate') {
    event.respondWith(
      caches.match('/index.html').then(cached => cached || fetch(event.request))
    );
  }
});
```
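The `network-first` entries in the strategy table carry a timeout (3s for `/api/deliveries`). A sketch of how that fallback could work, with the fetch and cache lookup injected so the timing logic stands alone; `executeStrategy` would call something shaped like this, but this is an illustration, not the production code:

```typescript
// Network-first with a timeout fallback. The fetcher and cache lookup are
// passed in so the logic is independent of service worker globals
// (fetch, caches) and can be tested in isolation.
async function networkFirst<T>(
  fetcher: () => Promise<T>,
  cacheLookup: () => Promise<T | undefined>,
  timeoutMs: number
): Promise<T> {
  const timedOut = Symbol('timeout');
  const winner = await Promise.race([
    fetcher().catch(() => timedOut), // a network error also falls back to cache
    new Promise<typeof timedOut>(resolve => setTimeout(() => resolve(timedOut), timeoutMs)),
  ]);
  if (winner !== timedOut) return winner as T;

  const cached = await cacheLookup();
  if (cached !== undefined) return cached;
  throw new Error(`no network response within ${timeoutMs}ms and nothing cached`);
}

// Inside the fetch handler this would be wired up roughly as:
//   event.respondWith(networkFirst(
//     () => fetch(event.request),
//     () => caches.match(event.request),
//     strategy.timeout ?? 3000,
//   ));
```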

Background Sync for Delivery Confirmations

Delivery confirmations are the most critical write operation. A dropped confirmation means a driver might re-deliver a package or a customer gets charged incorrectly.

```typescript
// sw-sync.ts — openDB comes from the `idb` helper library, assumed to be
// loaded alongside this script
self.addEventListener('sync', (event: SyncEvent) => {
  if (event.tag === 'delivery-confirmations') {
    event.waitUntil(syncDeliveryConfirmations());
  }
});

async function syncDeliveryConfirmations(): Promise<void> {
  const db = await openDB('sync-queue', 1);
  const tx = db.transaction('confirmations', 'readonly');
  const confirmations = await tx.store.getAll();

  for (const confirmation of confirmations) {
    try {
      const response = await fetch('/api/deliveries/confirm', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(confirmation.data),
      });

      if (response.ok) {
        // Synced successfully: remove from the queue
        const deleteTx = db.transaction('confirmations', 'readwrite');
        await deleteTx.store.delete(confirmation.id);
      } else if (response.status === 409) {
        // Conflict: delivery already confirmed by another driver
        const deleteTx = db.transaction('confirmations', 'readwrite');
        await deleteTx.store.delete(confirmation.id);
        await notifyConflict(confirmation);
      }
      // 5xx errors: leave in queue for next sync
    } catch {
      // Network error: stop and leave the rest in the queue
      break;
    }
  }
}
```
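The branching above amounts to a small decision table. One way to keep it honest is to factor it into a pure helper (`actionForStatus` is a hypothetical name, not from the codebase) that can be tested without a service worker:

```typescript
// Decision table for a synced confirmation, mirroring the handler above:
// 2xx  -> delete from the queue (synced)
// 409  -> delete and notify the dispatcher (confirmed by another driver)
// else -> keep queued and retry on the next sync pass (covers 5xx)
type SyncAction = 'delete' | 'delete-and-notify' | 'retry';

function actionForStatus(status: number): SyncAction {
  if (status >= 200 && status < 300) return 'delete';
  if (status === 409) return 'delete-and-notify';
  return 'retry';
}
```

`syncDeliveryConfirmations` could then switch on `actionForStatus(response.status)`, keeping the retry policy in one reviewable place.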

Performance Results

Before (Native Apps) vs. After (PWA)

| Metric | Native iOS | Native Android | PWA |
| --- | --- | --- | --- |
| First load | 3.2s | 4.1s | 1.8s (cached) |
| App size | 45MB | 38MB | 2.3MB (initial) |
| Offline capability | Full | Full | Full |
| Update delivery | 2-5 days | 1-3 days | Instant |
| Time to interactive | 2.8s | 3.5s | 1.2s (repeat visit) |

Core Web Vitals (90-day P75)

| Metric | Target | Actual |
| --- | --- | --- |
| LCP | < 2.5s | 1.9s |
| FID | < 100ms | 45ms |
| CLS | < 0.1 | 0.04 |
| INP | < 200ms | 120ms |

Business Metrics (14-Month Comparison)

| Metric | Pre-PWA | Post-PWA | Change |
| --- | --- | --- | --- |
| Daily active users | 2,400 | 2,650 | +10.4% |
| Delivery confirmation time | 45s avg | 12s avg | -73% |
| Support tickets (app issues) | 120/month | 35/month | -71% |
| Release frequency | Biweekly | 3x/week | +6x |
| Mobile team size | 4 | 2 | -50% |
| Annual mobile dev cost | $480K | $240K | -50% |


Challenges and Honest Retrospectives

iOS Safari Limitations

iOS Safari's PWA support lagged behind Chrome throughout the project. Specific issues we encountered:

  • Push notifications: Not supported until iOS 16.4 (released March 2023). We maintained a parallel push system via email for iOS users during the first 6 months.
  • Background Sync API: Not supported on iOS. We implemented a polling-based sync that runs every 30 seconds when the app is in the foreground.
  • Storage quota: iOS limits PWA storage to ~50MB. We implemented aggressive cache eviction and stored only the current day's delivery data locally.
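The polling fallback from the second bullet can be sketched as follows. `flushQueue` and `isVisible` are injected so the scheduling logic stands alone (in the app, `isVisible` would check `document.visibilityState`); the helper name and wiring are assumptions, not the production code:

```typescript
// Foreground polling fallback for browsers without the Background Sync API
// (iOS Safari). Every intervalMs, drain the pending-confirmation queue —
// but only while the app is visible. Returns a function that stops polling.
function startForegroundSync(
  flushQueue: () => Promise<void>,
  isVisible: () => boolean,
  intervalMs = 30_000
): () => void {
  const timer = setInterval(() => {
    if (isVisible()) {
      void flushQueue(); // failed requests simply stay in the queue
    }
  }, intervalMs);
  return () => clearInterval(timer);
}

// In the app shell (hypothetical wiring):
//   const stop = startForegroundSync(drainConfirmationQueue,
//     () => document.visibilityState === 'visible');
```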

Service Worker Update Complexity

The service worker update lifecycle caused several production incidents:

  1. Stale cache after deployment: Users received cached API responses with an outdated data format after a backend schema change. Fix: added API version headers and invalidated the API cache on service worker activation.
  2. Partial update state: Some users had a new HTML shell but old JavaScript bundles. Fix: switched to content-hash-based filenames and precached all assets as a single unit.
  3. Infinite refresh loop: A buggy service worker returned a cached 500 error page for the root URL. Fix: added a kill switch endpoint that bypasses the service worker cache.
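The kill switch from the third incident reduces to a tiny network-only check. A sketch under stated assumptions — the endpoint name and response format are invented here, and the decision itself is pulled out as a pure function:

```typescript
// Pure decision: the service worker stands down when the kill-switch
// endpoint responds 2xx with the body "disabled". Any other response
// (or being offline) leaves the current mode unchanged.
function shouldDisableServiceWorker(status: number, body: string): boolean {
  return status >= 200 && status < 300 && body.trim() === 'disabled';
}

// In the service worker (hypothetical endpoint /sw-kill-switch):
//   const res = await fetch('/sw-kill-switch', { cache: 'no-store' });
//   killSwitchActive = shouldDisableServiceWorker(res.status, await res.text());
//   // when killSwitchActive, the fetch handler bypasses caches entirely
```

`cache: 'no-store'` matters here: the check must never be answered by the same cache it is meant to disable.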

Offline Conflict Resolution

When two dispatchers assigned the same delivery to different drivers while both were offline, the sync created duplicate assignments. We resolved this server-side: the first confirmation to reach the server wins, and any later conflicting confirmation is rejected with a dispatcher notification:

```typescript
// Server-side conflict resolution
async function confirmDelivery(data: DeliveryConfirmation): Promise<ConfirmationResult> {
  const existing = await db.delivery.findUnique({
    where: { id: data.deliveryId },
  });

  if (!existing) {
    return { status: 'error', message: 'Unknown delivery' };
  }

  if (existing.status === 'confirmed' && existing.confirmedBy !== data.driverId) {
    // Already confirmed by another driver
    await notifyDispatcher({
      type: 'DUPLICATE_CONFIRMATION',
      deliveryId: data.deliveryId,
      originalDriver: existing.confirmedBy,
      duplicateDriver: data.driverId,
      timestamp: data.timestamp,
    });
    return { status: 'conflict', message: 'Already confirmed by another driver' };
  }

  await db.delivery.update({
    where: { id: data.deliveryId },
    data: { status: 'confirmed', confirmedBy: data.driverId, confirmedAt: new Date(data.timestamp) },
  });

  return { status: 'success' };
}
```

What We Would Do Differently

  1. Start with Workbox: We wrote a custom service worker from scratch. Workbox would have saved 3-4 weeks of development time for standard caching patterns while still allowing customization for sync logic.

  2. Implement feature flags from day one: Rolling out PWA features to user segments would have reduced the blast radius of service worker bugs. We retrofitted feature flags after the second production incident.

  3. Build the offline indicator earlier: Users were confused about whether they were online or offline and whether their actions had synced. A visible sync status indicator should be a day-one feature, not a month-three addition.

  4. Test on real devices in the field: Our test suite passed on fast Wi-Fi connections. The first field deployment revealed timeout issues with service worker registration on 3G connections that we had never encountered.

Conclusion

The PWA migration delivered on its core promise: a single codebase that replaced three platform-specific applications while maintaining full offline capability. The 50% reduction in mobile development costs and 6x improvement in release frequency justified the 4-month migration investment.

The technology works. The challenges were operational — service worker lifecycle management, iOS compatibility gaps, and offline conflict resolution. These are solvable problems, but they require explicit engineering investment. Teams considering a similar migration should budget 30-40% of development time for offline sync, service worker testing, and cross-browser compatibility rather than treating these as afterthoughts.

Muneer Puthiya Purayil

SaaS Architect & AI Systems Engineer. 10+ years shipping production infrastructure across fintech, automotive, e-commerce, and healthcare.
