We migrated a logistics management platform serving 2,400 daily active users from a native iOS/Android app to a Progressive Web App. This case study covers the architecture decisions, implementation challenges, and measured results over 14 months of production operation.
Context and Motivation
The platform — a fleet management tool for a mid-size logistics company — had separate iOS and Android apps maintained by a team of four mobile developers. Feature parity between platforms lagged by 3-6 weeks, and each app store submission added 2-5 days to release cycles. The desktop experience was a separate React SPA with its own API integration layer.
The business case for PWA migration was straightforward:
- Eliminate platform fragmentation: One codebase instead of three
- Reduce release cycle time: Deploy updates instantly without app store review
- Lower maintenance burden: Reduce the mobile team from four developers to two
- Improve field worker experience: Drivers needed offline access to delivery manifests and route data
Architecture Decisions
Offline-First for Field Operations
Drivers operate in areas with intermittent connectivity — warehouses, rural delivery routes, underground parking structures. We designed the offline layer around two principles:
- Read data is always available: Delivery manifests, route data, and customer information are cached aggressively
- Write operations queue and retry: Delivery confirmations, signature captures, and status updates queue locally and sync when connectivity returns
Service Worker Architecture
We chose a layered service worker design, with a separate caching strategy for each class of request.
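The layering can be sketched as a routing function that maps each request to a strategy. The URL patterns below are illustrative; the real worker also handled navigation requests and a kill-switch path.

```typescript
// Routing layer sketch: map each request to a caching strategy.
// Patterns are illustrative, not the production rules.

type Strategy = "cache-first" | "network-first" | "queue-write";

function routeRequest(method: string, pathname: string): Strategy {
  // Writes never touch the cache: they go to the offline write queue.
  if (method !== "GET") return "queue-write";
  // API reads prefer fresh data but fall back to cache when offline.
  if (pathname.startsWith("/api/")) return "network-first";
  // Content-hashed static assets are immutable: serve from cache.
  return "cache-first";
}
```

Isolating the routing decision as a pure function kept the policy testable outside a service worker context.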
Background Sync for Delivery Confirmations
Delivery confirmations are the most critical write operation. A dropped confirmation means a driver might re-deliver a package or a customer gets charged incorrectly.
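One way to make a retried confirmation safe is to key each one by delivery and deduplicate the queue before sync, so a retry can never double-confirm. The sketch below shows the dedup step; the field names are assumptions, not our actual schema.

```typescript
// Deduplicate queued delivery confirmations before sync: if a driver
// confirms the same delivery twice while offline, keep only the
// latest entry. Field names are illustrative.

type Confirmation = { deliveryId: string; confirmedAt: number; signature?: string };

function dedupeConfirmations(queue: Confirmation[]): Confirmation[] {
  const latest = new Map<string, Confirmation>();
  for (const c of queue) {
    const existing = latest.get(c.deliveryId);
    if (!existing || c.confirmedAt > existing.confirmedAt) {
      latest.set(c.deliveryId, c);
    }
  }
  return [...latest.values()];
}
```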
Performance Results
Before (Native Apps) vs. After (PWA)
| Metric | Native iOS | Native Android | PWA |
|---|---|---|---|
| First load | 3.2s | 4.1s | 1.8s (cached) |
| App size | 45MB | 38MB | 2.3MB (initial) |
| Offline capability | Full | Full | Full |
| Update delivery | 2-5 days | 1-3 days | Instant |
| Time to interactive | 2.8s | 3.5s | 1.2s (repeat visit) |
Core Web Vitals (90-day P75)
| Metric | Target | Actual |
|---|---|---|
| LCP | < 2.5s | 1.9s |
| FID | < 100ms | 45ms |
| CLS | < 0.1 | 0.04 |
| INP | < 200ms | 120ms |
Business Metrics (14-Month Comparison)
| Metric | Pre-PWA | Post-PWA | Change |
|---|---|---|---|
| Daily active users | 2,400 | 2,650 | +10.4% |
| Delivery confirmation time | 45s avg | 12s avg | -73% |
| Support tickets (app issues) | 120/month | 35/month | -71% |
| Release frequency | Biweekly | 3x/week | +6x |
| Mobile team size | 4 | 2 | -50% |
| Annual mobile dev cost | $480K | $240K | -50% |
Challenges and Honest Retrospectives
iOS Safari Limitations
iOS Safari's PWA support lagged behind Chrome throughout the project. Specific issues we encountered:
- Push notifications: Not supported until iOS 16.4 (released March 2023). We maintained a parallel notification channel over email for iOS users during the first 6 months.
- Background Sync API: Not supported on iOS. We implemented a polling-based sync that runs every 30 seconds when the app is in the foreground.
- Storage quota: iOS limits PWA storage to ~50MB. We implemented aggressive cache eviction and stored only the current day's delivery data locally.
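The iOS polling fallback can be reduced to a timer plus a small decision function. The 30-second interval matches what we shipped; everything else below is a sketch.

```typescript
// Foreground polling fallback for browsers without the Background
// Sync API (iOS Safari). A timer evaluates shouldSync(); keeping the
// policy a pure function makes it testable without a browser.

const SYNC_INTERVAL_MS = 30_000; // 30 seconds, as used in production

function shouldSync(opts: {
  foreground: boolean;   // document.visibilityState === "visible"
  lastSyncAt: number;    // epoch ms of last successful sync
  now: number;           // epoch ms
  pendingWrites: number; // queued offline operations
}): boolean {
  if (!opts.foreground) return false;         // no background work on iOS
  if (opts.pendingWrites === 0) return false; // nothing to push
  return opts.now - opts.lastSyncAt >= SYNC_INTERVAL_MS;
}
```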
Service Worker Update Complexity
The service worker update lifecycle caused several production incidents:
- Stale cache after deployment: Users received cached API responses with an outdated data format after a backend schema change. Fix: added API version headers and invalidated the API cache on service worker activation.
- Partial update state: Some users had a new HTML shell but old JavaScript bundles. Fix: switched to content-hash-based filenames and precached all assets as a single unit.
- Infinite refresh loop: A buggy service worker returned a cached 500 error page for the root URL. Fix: added a kill switch endpoint that bypasses the service worker cache.
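The content-hash fix for the partial-update problem can be illustrated with a small helper. A bundler normally does this for you; the sha256 choice and 8-character truncation here are assumptions for the sketch.

```typescript
import { createHash } from "node:crypto";

// Derive a content-hashed filename so the HTML shell can only ever
// reference the exact bundles it shipped with: changed contents get a
// new name, so a stale shell can never load a new bundle or vice versa.
function hashedFilename(name: string, contents: string): string {
  const hash = createHash("sha256").update(contents).digest("hex").slice(0, 8);
  const dot = name.lastIndexOf(".");
  return dot === -1
    ? `${name}.${hash}`
    : `${name.slice(0, dot)}.${hash}${name.slice(dot)}`;
}
```

Precaching the shell and all hashed bundles as one atomic unit then guarantees the service worker activates with a mutually consistent set.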
Offline Conflict Resolution
When two dispatchers assigned the same delivery to different drivers while both were offline, the sync created duplicate assignments. We resolved this by implementing server-side last-write-wins with a dispatcher notification.
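The server-side resolution can be sketched as follows, assuming each assignment carries a dispatcher id and a timestamp (names are illustrative):

```typescript
// Server-side last-write-wins for duplicate delivery assignments.
// The later write wins; the losing dispatcher is flagged so the UI
// can notify them that their assignment was superseded.

type Assignment = {
  deliveryId: string;
  driverId: string;
  dispatcherId: string;
  assignedAt: number; // epoch ms
};

function resolveAssignment(
  a: Assignment,
  b: Assignment
): { winner: Assignment; notifyDispatcher: string } {
  const [loser, winner] = a.assignedAt <= b.assignedAt ? [a, b] : [b, a];
  return { winner, notifyDispatcher: loser.dispatcherId };
}
```

Last-write-wins is a blunt instrument; it worked here because assignments are cheap to redo and the notification closes the loop with the overridden dispatcher.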
What We Would Do Differently
- Start with Workbox: We wrote a custom service worker from scratch. Workbox would have saved 3-4 weeks of development time for standard caching patterns while still allowing customization for sync logic.
- Implement feature flags from day one: Rolling out PWA features to user segments would have reduced the blast radius of service worker bugs. We retrofitted feature flags after the second production incident.
- Build the offline indicator earlier: Users were confused about whether they were online or offline and whether their actions had synced. A visible sync status indicator should be a day-one feature, not a month-three addition.
- Test on real devices in the field: Our test suite passed on fast Wi-Fi connections. The first field deployment revealed timeout issues with service worker registration on 3G connections that we had never encountered.
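A sync status indicator like the one described above reduces to a tiny state function driven by connectivity (`navigator.onLine` plus the `online`/`offline` events) and the offline queue depth. The labels here are illustrative.

```typescript
// Derive the indicator state from connectivity and queue depth.
// Keeping this a pure function makes the indicator trivial to test;
// the UI just re-renders on 'online'/'offline' and queue changes.

type SyncStatus = "offline" | "syncing" | "synced";

function syncStatus(online: boolean, pendingWrites: number): SyncStatus {
  if (!online) return "offline";
  return pendingWrites > 0 ? "syncing" : "synced";
}
```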
Conclusion
The PWA migration delivered on its core promise: a single codebase that replaced three platform-specific applications while maintaining full offline capability. The 50% reduction in mobile development costs and 6x improvement in release frequency justified the 4-month migration investment.
The technology works. The challenges were operational — service worker lifecycle management, iOS compatibility gaps, and offline conflict resolution. These are solvable problems, but they require explicit engineering investment. Teams considering a similar migration should budget 30-40% of development time for offline sync, service worker testing, and cross-browser compatibility rather than treating these as afterthoughts.