Supabase Local-First: Offline Database That Actually Syncs When Back Online
I built a field inspection app for a construction company. Workers go to sites with zero cell reception, fill out forms, take photos, and mark issues on floor plans. Everything needs to work offline. When they drive back to the office, all data syncs. I tried Firebase offline persistence (lost data on conflicts), CouchDB (setup nightmare), and finally landed on Supabase + PowerSync. Here's what actually works.
Why Local-First Matters
"Offline-first" isn't just about caching. A true local-first architecture means:
- Reads and writes go to local SQLite first. No network call needed. Instant UI response.
- Sync happens in the background. When connectivity returns, changes upload automatically.
- Conflicts are resolved deterministically. Two workers editing the same inspection doesn't corrupt data.
- The app works fully offline. Not a degraded mode — the full experience.
Supabase alone doesn't do this. Its realtime subscriptions and PostgREST API only work while you're online; there is no built-in local store, so writes made offline have nowhere to go. PowerSync adds the local SQLite layer and bidirectional sync.
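To make that concrete before the setup, here's a toy sketch of the local-first write path. This is my own illustration, not PowerSync code: writes land in local state immediately and queue for upload, and a flush drains the queue when connectivity returns.

```typescript
// Toy local-first store: writes apply locally at once and are queued;
// a background flush uploads them when the network comes back.
type PendingWrite = { table: string; id: string; row: Record<string, unknown> }

class LocalFirstStore {
  private rows = new Map<string, Record<string, unknown>>()
  private queue: PendingWrite[] = []

  // Instant local write: the UI never waits on the network
  upsert(table: string, id: string, row: Record<string, unknown>) {
    this.rows.set(`${table}:${id}`, row)
    this.queue.push({ table, id, row })
  }

  get(table: string, id: string) {
    return this.rows.get(`${table}:${id}`)
  }

  get pendingCount() { return this.queue.length }

  // Called when back online; a write leaves the queue only after
  // its upload succeeded, so a failure mid-flush just retries later.
  async flush(upload: (w: PendingWrite) => Promise<void>) {
    while (this.queue.length > 0) {
      await upload(this.queue[0])
      this.queue.shift()
    }
  }
}
```

PowerSync does essentially this with a real SQLite database and a durable upload queue, which is why the UI never blocks on connectivity.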
Setting Up Supabase + PowerSync
# Install PowerSync client
npm install @powersync/node @supabase/supabase-js
# PowerSync connects to your Supabase Postgres via logical replication
# Set up the PowerSync service (cloud or self-hosted), point it at your
# database, and write sync rules defining which rows each user receives
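For reference, sync rules are YAML that you author on the PowerSync service, grouping rows into per-user "buckets". A sketch of roughly what ours looked like; the bucket names are illustrative and the exact syntax is in PowerSync's sync-rules documentation:

```yaml
# sync_rules.yaml on the PowerSync service (illustrative sketch)
bucket_definitions:
  # every user syncs the full sites list
  global_sites:
    data:
      - SELECT * FROM sites
  # each inspector syncs only their own inspections
  my_inspections:
    parameters: SELECT request.user_id() AS inspector_id
    data:
      - SELECT * FROM inspections WHERE inspector_id = bucket.inspector_id
```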
// lib/powersync.ts
import { column, PowerSyncDatabase, Schema, Table } from "@powersync/node"
import { SupabaseBackendConnector } from "./connector"

// Define which tables sync locally. PowerSync adds the id column automatically.
const inspections = new Table({
  site_id: column.text, status: column.text, notes: column.text, photos: column.text,
  inspector_id: column.text, created_at: column.text, updated_at: column.text,
})
const issues = new Table({
  inspection_id: column.text, description: column.text, severity: column.text,
  photo_url: column.text, resolved: column.integer,
})
const sites = new Table({ name: column.text, address: column.text, lat: column.real, lng: column.real })

// Which rows each user syncs is governed by sync rules on the PowerSync
// service (alongside Supabase RLS for API access), not by this client schema.
export const powerSync = new PowerSyncDatabase({
  schema: new Schema({ inspections, issues, sites }),
  database: { dbFilename: "inspections.sqlite" },
})

// Initialize on app start, then connect with the Supabase connector
await powerSync.init()
await powerSync.connect(new SupabaseBackendConnector())
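The `SupabaseBackendConnector` imported from `./connector` is the piece PowerSync calls to authenticate (a `fetchCredentials()` returning the PowerSync endpoint plus a Supabase JWT) and to push queued local writes upstream. Here's a simplified sketch of the upload side, using plain stand-in types rather than the real `@powersync/node` interfaces:

```typescript
// Stand-in types for the shapes the SDK hands the connector (assumption:
// the real interfaces live in @powersync/node; only PUT/PATCH/DELETE matter here).
type CrudOp = { op: "PUT" | "PATCH" | "DELETE"; table: string; id: string; opData?: Record<string, unknown> }
type CrudTransaction = { crud: CrudOp[]; complete: () => Promise<void> }
// Minimal surface of a @supabase/supabase-js client that this sketch touches
type SupabaseLike = {
  from: (table: string) => {
    upsert: (row: Record<string, unknown>) => Promise<unknown>
    update: (row: Record<string, unknown> | undefined) => { eq: (col: string, v: string) => Promise<unknown> }
    delete: () => { eq: (col: string, v: string) => Promise<unknown> }
  }
}

// uploadData: drain one queued transaction of local writes into Supabase.
async function uploadData(client: SupabaseLike, nextTransaction: () => Promise<CrudTransaction | null>) {
  const tx = await nextTransaction()
  if (!tx) return // nothing pending
  for (const op of tx.crud) {
    const table = client.from(op.table)
    if (op.op === "PUT") await table.upsert({ id: op.id, ...op.opData })
    else if (op.op === "PATCH") await table.update(op.opData).eq("id", op.id)
    else await table.delete().eq("id", op.id)
  }
  await tx.complete() // mark done only after every write landed upstream
}
```

The real connector implements `PowerSyncBackendConnector`; the important invariant is the last line: a transaction is only completed after every write has landed, so a crash mid-upload simply retries.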
// Using the local database — works offline
import { randomUUID } from "node:crypto"
import { powerSync } from "./lib/powersync"

async function createInspection(data: InspectionInput) {
  // Write to local SQLite — instant, no network round trip.
  // The await resolves against the local DB; PowerSync uploads
  // the change to Supabase in the background once online.
  await powerSync.execute(
    `INSERT INTO inspections (id, site_id, status, notes, inspector_id)
     VALUES (?, ?, ?, ?, ?)`,
    [randomUUID(), data.siteId, "draft", data.notes, currentUserId]
  )
}

async function getInspections(siteId: string) {
  // Read from local SQLite — instant, no network
  return powerSync.getAll(
    `SELECT * FROM inspections WHERE site_id = ? ORDER BY created_at DESC`,
    [siteId]
  )
}
Problem
Supabase realtime subscriptions use WebSockets. In our mobile app, WebSocket connections would drop when the phone switched between WiFi and cellular. The subscription wouldn't reconnect, and the UI stopped receiving updates. No error was thrown.
What I Tried
Tried manually calling supabase.removeAllChannels() and re-subscribing on network change events. That worked but caused duplicate subscriptions and memory leaks.
Actual Fix
With PowerSync, you don't need Supabase realtime subscriptions for the local database; PowerSync maintains its own sync connection with automatic reconnect. For showing real-time updates to other users (like a dashboard), tear the channel down and resubscribe whenever client state changes:
// For online-only dashboard views
import { supabase } from "./lib/supabase"

function subscribeToChanges(table: string, callback: () => void) {
  let channel = supabase
    .channel(`${table}-changes`)
    .on("postgres_changes", { event: "*", schema: "public", table }, callback)
    .subscribe()

  // Resubscribe on auth state changes (token refresh, sign-in after
  // reconnect). Note: the listener lives on supabase.auth, not supabase.
  const { data: authListener } = supabase.auth.onAuthStateChange(() => {
    channel.unsubscribe()
    channel = supabase
      .channel(`${table}-changes-${Date.now()}`)
      .on("postgres_changes", { event: "*", schema: "public", table }, callback)
      .subscribe()
  })

  return () => {
    channel.unsubscribe()
    authListener.subscription.unsubscribe()
  }
}
Handling Conflict Resolution
Two workers inspect the same site offline. Both mark issue #42 as "resolved" with different notes. When they both sync, what happens?
PowerSync itself doesn't merge conflicting rows: each client's queued writes are applied to Postgres in upload order, so by default the last client to sync wins. When you need smarter resolution, intercept updates on the server with a trigger:
-- migrations/conflict_resolution.sql
CREATE OR REPLACE FUNCTION resolve_inspection_conflict()
RETURNS TRIGGER AS $$
BEGIN
-- If status changed from 'draft' to 'submitted', always prefer submitted
IF OLD.status = 'draft' AND NEW.status = 'submitted' THEN
RETURN NEW;
END IF;
-- Otherwise, last-write-wins
IF NEW.updated_at > OLD.updated_at THEN
RETURN NEW;
END IF;
RETURN OLD;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER inspection_conflict
BEFORE UPDATE ON inspections
FOR EACH ROW
EXECUTE FUNCTION resolve_inspection_conflict();
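The same rules are easy to unit-test if you mirror them client-side. This is a hypothetical TypeScript twin of the trigger, not something PowerSync runs; `current` plays OLD and `incoming` plays NEW:

```typescript
// Hypothetical client-side mirror of the server trigger, for unit tests.
type Inspection = {
  id: string
  status: "draft" | "submitted"
  notes: string
  updated_at: string // ISO-8601, so string comparison matches timestamp order
}

function resolveInspectionConflict(current: Inspection, incoming: Inspection): Inspection {
  // A draft -> submitted transition always wins, regardless of timestamps
  if (current.status === "draft" && incoming.status === "submitted") return incoming
  // Otherwise, last-write-wins on updated_at
  return incoming.updated_at > current.updated_at ? incoming : current
}
```

Keeping the two copies in sync is manual, but it lets you exercise conflict scenarios in tests without a database.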
Problem
Our inspections table has 500K rows. RLS policies check organization membership on every query. A simple SELECT * FROM inspections WHERE site_id = ? went from 5ms to 120ms because RLS was joining against the organizations table for every row scan.
Actual Fix
Two optimizations that brought query time back to under 10ms:
-- 1. Add org_id directly to the inspections table (denormalize),
--    then backfill it for existing rows before relying on it in RLS
ALTER TABLE inspections ADD COLUMN org_id UUID REFERENCES organizations(id);
CREATE INDEX idx_inspections_org ON inspections(org_id);
-- 2. Simplify RLS policy to check the direct column
CREATE POLICY "Users see their org's inspections"
ON inspections FOR SELECT
USING (org_id = get_current_user_org_id());
-- get_current_user_org_id() is a simple JWT claim extraction
CREATE FUNCTION get_current_user_org_id() RETURNS UUID AS $$
SELECT (auth.jwt() -> 'app_metadata' ->> 'org_id')::UUID;
$$ LANGUAGE SQL STABLE;
-- Before: RLS joined organizations → members → users for each row
-- After: RLS compares a single column value — index scan, not join
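For debugging what the policy will see, the same claim extraction can be done in TypeScript. This is a hypothetical helper, not part of supabase-js; a JWT is three base64url segments, and the middle one is the JSON claims payload:

```typescript
// Hypothetical debugging helper: decode the org_id claim from a Supabase
// access token, mirroring what get_current_user_org_id() reads in SQL.
function orgIdFromAccessToken(token: string): string | null {
  const payload = token.split(".")[1]
  if (!payload) return null
  const claims = JSON.parse(Buffer.from(payload, "base64url").toString("utf8"))
  return claims?.app_metadata?.org_id ?? null
}
```

Note this only decodes, it does not verify the signature; Postgres sees the verified claims via auth.jwt().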
Real-time Subscriptions That Don't Break
For the offline app itself, PowerSync handles sync. But the office dashboard needs real-time updates when field workers sync their data. Here's the pattern:
// Dashboard: show live inspection updates
import { useEffect, useState } from "react"
import { supabase } from "./lib/supabase"

function useLiveInspections(siteId: string) {
  const [inspections, setInspections] = useState<any[]>([])
useEffect(() => {
// Initial load
supabase.from("inspections")
.select("*").eq("site_id", siteId)
.then(({ data }) => setInspections(data || []))
// Subscribe to changes
const channel = supabase
.channel(`inspections-${siteId}`)
.on("postgres_changes", {
event: "INSERT",
schema: "public",
table: "inspections",
filter: `site_id=eq.${siteId}`,
}, (payload) => {
setInspections((prev) => [...prev, payload.new])
})
.on("postgres_changes", {
event: "UPDATE",
schema: "public",
table: "inspections",
filter: `site_id=eq.${siteId}`,
}, (payload) => {
setInspections((prev) =>
prev.map((i) => i.id === payload.new.id ? payload.new : i)
)
})
.subscribe()
return () => { channel.unsubscribe() }
}, [siteId])
return inspections
}
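The two payload handlers in the hook are just a merge over the current array. Pulled out as a pure function (a hypothetical refactor, not a supabase-js API), the merge is trivial to test:

```typescript
type Row = { id: string } & Record<string, unknown>

// Apply one postgres_changes payload to the current list:
// INSERT appends the new row, UPDATE replaces the row with a matching id.
function applyChange(rows: Row[], event: "INSERT" | "UPDATE", row: Row): Row[] {
  if (event === "INSERT") return [...rows, row]
  return rows.map((r) => (r.id === row.id ? row : r))
}
```

Each handler in the hook then becomes a one-liner like `setInspections((prev) => applyChange(prev, "UPDATE", payload.new as Row))`.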
RLS Performance Tips
- Denormalize org_id into every table. Don't rely on joins for RLS checks. The extra UUID column costs 16 bytes per row but saves orders of magnitude in query time.
- Index the RLS columns. If your policy checks org_id, make sure there's an index on it.
- Use STABLE functions for JWT extraction. Marking the helper function STABLE lets PostgreSQL evaluate it once per query instead of once per row.
- Test with EXPLAIN ANALYZE as a non-admin user. RLS doesn't apply to superusers, table owners, or roles with BYPASSRLS, so always test query plans as the actual user role.
What I Learned
- PowerSync + Supabase is the real local-first stack. Supabase alone isn't offline-first. PowerSync adds the missing local SQLite layer and bidirectional sync.
- Conflict resolution needs to be designed per table. Default last-write-wins is fine for most cases. For critical data, use SQL triggers.
- Denormalize for RLS performance. Adding org_id to every table sounds wrong, but it's the standard pattern for Supabase multi-tenant apps.
- Don't use realtime subscriptions in offline apps. Use PowerSync for sync. Use Supabase realtime only for online dashboards.
- Test sync with airplane mode. Seriously. Flip the switch, make changes, come back online. Do it 50 times. That's how you find sync bugs.
Wrapping Up
Building a truly offline-first app is harder than most people think. "Cache some API responses" isn't enough. You need a local database that accepts writes, a sync engine that handles conflicts, and a server that enforces access control. Supabase + PowerSync covers all three. After six months in production with 50 field workers, the sync has been reliable. Zero data loss, even when workers are offline for a full day.