// CASE STUDY · LIVE PRODUCT · 3+ TECHNICIANS

Field Alpha.

SaaS for field service companies — built for the van, not the conference room.

A two-surface platform: a web portal where dispatchers intake, schedule, and invoice, and a React Native app where technicians live the job — from AI-assisted intake, to navigation, to on-site lifecycle, to parts inventory, to payment.

Next.js 16 · TypeScript · Tailwind v4 · Expo SDK 55 · React Native 0.83 · Supabase (RLS) · PostgreSQL · Gemini 2.0 Flash · Twilio · SendGrid · Maestro E2E
Live · 3+ technicians
// TODAY · 6 JOBS
Morning dispatch.
08:30 · Kirkland · Water heater · diagnostic
10:00 · Bellevue · HVAC · install
13:15 · Redmond · Dryer · follow-up
// routed · 27 mi · $0 overtime

Two products, one database.

Dispatch and the technician share a single source of truth. When one changes a job, the other sees it before the tap finishes.

// Web
Dispatch portal
Intake, scheduling, assignment, invoicing, parts inventory. 335 TS/TSX files and counting.
// Mobile
Technician app
~28.4k lines of Expo + React Native. The whole on-site lifecycle: arrival, diagnosis, parts, signoff, payment.
// AI
Gemini-assisted intake
Natural-language inbound converted to structured work orders with priority signals.
// Ops
Route optimization
Daily runs organized by geography and job duration, so technicians drive less and earn more.
// Quality
Maestro E2E
End-to-end tests on both surfaces. If it breaks in staging, it never reaches the van.
// Problem

The back office and the van don't speak the same language.

Before Field Alpha, the dispatcher wrote the job on a notepad, read it over the phone, and hoped the technician wrote it down the same way. Addresses got mangled. The customer's actual complaint ("it buzzes when the heat kicks on") turned into "HVAC noise" by the time it reached the van, and by the time the tech arrived, nobody remembered whether the unit was under warranty.

I watched a dispatcher keep a paper schedule next to Google Maps next to a Twilio tab next to a QuickBooks window. Every job touched at least four surfaces, none of which knew about the others. A same-day reschedule meant five phone calls and a prayer.

Field Alpha is the thing I kept sketching on napkins. One database. Two surfaces: one for the desk, one for gloves and a moving truck. A real-time link between them so nobody retypes anything.

// Research

Five ride-alongs, then the build.

I rode in the truck for five full days across two companies before writing a line of production code. The notebook got messier than the code did. Two quotes stuck.

"I don't care how fancy it is. If it takes more than two taps to mark a job done, I'll just text the office like I always have." // Field observation · HVAC technician · Kirkland ride-along
"The worst part of my day is at 4:45pm when I realize I forgot to invoice a job from Tuesday." // User interview · Dispatcher / Owner · 8-person shop

So the technician surface had to be big buttons and one-thumb flows, with the entire on-site lifecycle squeezed onto a phone screen. And invoicing had to fire at signoff, without the dispatcher chasing it down the next morning.

// Architecture

Two surfaces, one source of truth.

Supabase sits in the middle. Postgres with row-level security does the heavy lifting, so both apps hit the same tables without a separate middleware tier owning auth. RLS is the boundary: every query runs as the authenticated user, not a service account.

// Architecture (diagram): WEB PORTAL (Next.js · dispatcher) ⇄ SHARED DB (Postgres + RLS, the RLS boundary) ⇄ TECH APP (Expo · React Native)
// GEMINI 2.0 FLASH: intake → job record · TWILIO / SendGrid: outbound comms · ROUTE ENGINE: geo · duration heuristic
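The route engine is described only as a geo-plus-duration heuristic. As an illustration, here is a minimal TypeScript sketch of a greedy nearest-neighbor pass over a morning's jobs; the coordinates, names, and the heuristic itself are assumptions for the sketch, not Field Alpha's actual engine:

```typescript
// Hypothetical sketch of a geo + duration route heuristic:
// greedily drive to the nearest unvisited job.
type Job = { id: string; lat: number; lon: number; minutes: number };

// Haversine distance in miles between two jobs.
function miles(a: Job, b: Job): number {
  const R = 3959; // Earth radius, miles
  const rad = (d: number) => (d * Math.PI) / 180;
  const dLat = rad(b.lat - a.lat);
  const dLon = rad(b.lon - a.lon);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(rad(a.lat)) * Math.cos(rad(b.lat)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

// Order a day's jobs: start at the depot, always take the nearest
// remaining job. O(n²), fine for a six-job morning.
function orderRoute(depot: Job, jobs: Job[]): Job[] {
  const route: Job[] = [];
  const left = [...jobs];
  let here = depot;
  while (left.length) {
    left.sort((x, y) => miles(here, x) - miles(here, y));
    here = left.shift()!;
    route.push(here);
  }
  return route;
}

// Illustrative coordinates for the Eastside stops.
const depot: Job = { id: "shop", lat: 47.7, lon: -122.25, minutes: 0 };
const jobs: Job[] = [
  { id: "redmond", lat: 47.674, lon: -122.121, minutes: 45 },
  { id: "kirkland", lat: 47.681, lon: -122.209, minutes: 60 },
  { id: "bellevue", lat: 47.61, lon: -122.201, minutes: 90 },
];
console.log(orderRoute(depot, jobs).map((j) => j.id).join(" → "));
```

A real engine would also weight job duration and appointment windows, which is why a pure nearest-neighbor order won't always match the dispatched schedule.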
// Decisions

Three calls I'd defend.

Supabase RLS over custom JWT middleware

Writing an auth/middleware layer for two clients would have doubled the surface area and the bugs. Postgres row-level security with role claims on the JWT means the database itself refuses to leak a job between companies. It also means a junior can add a table without thinking through ten ways to forget the auth check.
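The policies themselves aren't shown in the case study. As an illustration, here is roughly what such a Postgres policy looks like, with the same predicate modeled as a pure TypeScript function; the column names, JWT claim names, and the "techs only see their assigned jobs" rule are assumptions of this sketch:

```typescript
// The real enforcement would live in Postgres, roughly:
//   CREATE POLICY jobs_by_company ON jobs
//     USING (company_id = (auth.jwt() ->> 'company_id')::uuid);
// (column and claim names hypothetical). The same predicate,
// modeled as a pure function for illustration:
type Claims = { sub: string; company_id: string; role: "dispatcher" | "tech" };
type JobRow = { id: string; company_id: string; assigned_to: string | null };

// A row is visible iff it belongs to the caller's company;
// in this sketch, techs additionally only see jobs assigned to them.
function rowVisible(row: JobRow, jwt: Claims): boolean {
  if (row.company_id !== jwt.company_id) return false;
  return jwt.role === "dispatcher" || row.assigned_to === jwt.sub;
}

const dispatcher: Claims = { sub: "u1", company_id: "c1", role: "dispatcher" };
const tech: Claims = { sub: "u2", company_id: "c1", role: "tech" };
const job: JobRow = { id: "j1", company_id: "c1", assigned_to: "u2" };
const otherCo: JobRow = { id: "j2", company_id: "c2", assigned_to: "u2" };

console.log(rowVisible(job, dispatcher)); // true
console.log(rowVisible(otherCo, tech));   // false: never leaks across companies
```

Because the predicate runs inside the database, a forgotten `WHERE company_id = …` in either client returns an empty set rather than another shop's jobs.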

Gemini 2.0 Flash over GPT-4o for intake

Intake calls are short and the schema is fixed: name, address, symptom, urgency. Flash is fast and cheap enough to run on every inbound SMS without watching the bill, and structured output mode keeps the job record from going off the rails. If intake ever gets fancier I'll revisit.
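A sketch of the fixed-schema side of that, with hypothetical field names: validating the model's JSON before it is allowed to become a job record.

```typescript
// Hypothetical shape of the structured-output intake record
// ("name, address, symptom, urgency" per the text; field names assumed).
type Urgency = "low" | "normal" | "urgent";
interface WorkOrder {
  name: string;
  address: string;
  symptom: string;
  urgency: Urgency;
}

// Validate the model's JSON before it becomes a job record, so a
// malformed response can never write garbage into the database.
function parseWorkOrder(json: string): WorkOrder {
  const raw = JSON.parse(json) as Record<string, unknown>;
  const str = (k: string): string => {
    if (typeof raw[k] !== "string" || raw[k] === "") throw new Error(`bad ${k}`);
    return raw[k] as string;
  };
  const urgency = str("urgency");
  if (!["low", "normal", "urgent"].includes(urgency)) throw new Error("bad urgency");
  return {
    name: str("name"),
    address: str("address"),
    symptom: str("symptom"),
    urgency: urgency as Urgency,
  };
}

// Example: the kind of JSON structured-output mode would return for
// "it buzzes when the heat kicks on".
const sample = `{"name":"J. Smith","address":"123 Main St, Kirkland","symptom":"buzzing when heat kicks on","urgency":"normal"}`;
console.log(parseWorkOrder(sample).urgency); // "normal"
```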

Expo managed over bare React Native

The van doesn't need a custom native module. EAS builds and OTA updates mean I can ship a fix to a technician in the field in under an hour, without waiting on App Store review. The day I need Bluetooth printers or CoreNFC I'll eject, but that day keeps not coming.

// Outcome

Three technicians, real jobs, real money.

Field Alpha is live with three technicians at a Seattle-area shop. Dispatch is off the notepad, invoicing happens at signoff, and the tech app survived its first week of rain, gloves, and truck-cab glare.

3+
// Techs live
335
// Web TS/TSX files
28.4k
// Mobile LOC
~18 min
// Avg intake → dispatch
// Hindsight

What I'd do differently.

I'd factor the dispatch state machine out on day one. Right now the lifecycle (quoted → scheduled → en route → on site → signed off → invoiced) is spread across three files and two clients, and every new state transition is a small archaeology dig. A single typed state machine (xstate or a hand-rolled reducer) would have saved me a week of "why is this job stuck in scheduled" debugging.
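For the hand-rolled reducer option, a minimal sketch of what a single typed transition table could look like; the state names follow the lifecycle above, but the set of allowed transitions is my guess, not Field Alpha's actual rules:

```typescript
// A hand-rolled typed transition table for the job lifecycle the
// text describes; the allowed transitions are illustrative.
type JobState = "quoted" | "scheduled" | "enRoute" | "onSite" | "signedOff" | "invoiced";

const transitions: Record<JobState, JobState[]> = {
  quoted: ["scheduled"],
  scheduled: ["enRoute", "quoted"], // same-day reschedule drops back to quoted
  enRoute: ["onSite", "scheduled"], // tech can be pulled back to the board
  onSite: ["signedOff"],
  signedOff: ["invoiced"],          // invoicing fires at signoff
  invoiced: [],                     // terminal
};

// One choke point: every client calls this instead of flipping a
// status column directly.
function advance(from: JobState, to: JobState): JobState {
  if (!transitions[from].includes(to)) {
    throw new Error(`illegal transition ${from} → ${to}`);
  }
  return to;
}

let s: JobState = "quoted";
for (const next of ["scheduled", "enRoute", "onSite", "signedOff", "invoiced"] as JobState[]) {
  s = advance(s, next);
}
console.log(s); // "invoiced"
```

With both clients routed through `advance`, an illegal transition fails loudly at the write instead of leaving a job silently stuck in `scheduled`.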

I'd also wire TelemetryDeck or PostHog from commit one instead of adding it after the fact. The shop owner keeps asking great product questions ("which job types take the longest on average?") and I keep having to answer with gut feel because the events weren't in the pipe when I needed them.

Curious how it runs?

Field Alpha is a private product with paying users. Happy to walk through architecture if it's relevant to your team.

Get In Touch