# trusted.works - the technical blog of trusted, inc.

Notes from the tech team at Trusted, where we're building an AI-first operating system to (em)power the healthcare workforce.

## Posts

- [The Agent-Ready Manifesto](/posts/agent-ready-manifesto): There are two kinds of company being built under the 'AI-first' banner. One bolts model calls onto the existing product. The other treats AI as a forcing function to redesign the company so a software agent can do real work, under the same constraints, permissions, and accountability as a human teammate. This is the manifesto for the second kind.
- [Every Open Shift](/posts/every-open-shift): Healthcare staffing has spent a decade arguing whether it's a relationship business or a quantifiable one. Every services-heavy industry has had this argument, and they've all resolved it the same way: the relationship layer moves out of the broker and into the infrastructure. This is the operating system we've spent nearly a decade building, and how AI is accelerating that head start.
- [Speed matters: from job posting to submitted candidate in 10 minutes](/posts/submission-speed-story): Submission speed in travel nursing usually gets framed as one number. It isn't. Three clocks run at once, each gated by a different bottleneck. The pipeline itself now runs in under ten minutes for a pre-qualified clinician on Auto Apply, while the population median still reads in days. Here's why both are true.
- [Auto-curation: agents that fill out forms](/posts/the-form-pattern): Job curation used to be a queue of humans normalizing raw VMS snapshots into postable shifts. Now it's four specialized agents filling out the same form a human would, hitting the same validations, with a feedback loop that keeps them honest in production. Cost is around thirteen cents per curated job.
- [Scrubs: built for the frontier](/posts/built-for-the-next-model): We gave a small team thirty days to rebuild Trusted's core clinician journey from an empty repo, with one architectural rule: assume the models keep getting better, and build for that. Here are the five choices that came out of it: what we'd do again, what shifts when most of the code is written by AI, and what stays the same.
- [IronTrail: catching every write](/posts/irontrail): There's a class of Rails bug where a record changed and paper_trail saw nothing: an `update_columns` bypass, a console fix at 2am, an agent issuing raw SQL. IronTrail does change capture at the Postgres trigger layer instead of the ActiveRecord callback layer, so the audit catches every write. MIT licensed, on RubyGems, and running in production at Trusted for over a year.
- [Auto-Apply: What worked, and what didn't](/posts/auto-apply-retro): Auto-Apply inverts clinical job search: the clinician opts in once, and the system applies to matching shifts based on a preference profile. It took a manual spreadsheet pilot and four product iterations before it was safe to leave on. Every guardrail traces back to a specific failure in production. The deeper lesson: an opt-in preference is a weaker intent signal than a per-job tap.
- [Mission Control: the command center for VMS program managers](/posts/mission-control): VMS program managers have been running their work out of spreadsheets, Slack threads, and email chains. None of those screens were designed for the job. Mission Control is the surface we built around their actual unit of work (the program), backed by an integration-status dashboard and in-product messages that keep the live state of every connection visible.
- [Integrating with UKG: caches, credentials, and worker sync at hospital scale](/posts/ukg-integration): UKG is the HR and timekeeping backbone for many large hospital systems, which means integrating with it is less a vendor connection and more a sustained engineering project. This is the multi-read cache that took ~67,000 redundant API calls a day off our floor, the mTLS credential management per environment, and the worker-sync pipeline that handles the messy parts.
- [From form to conversation: rebuilding clinician validations as chat](/posts/conversational-validation): Onboarding used to be a twelve-page form that asked humans to fit their lives into our schema. We rebuilt the validation flow as a chat that drives the same structured form underneath, then extended the pattern to an agent that greets stalled signups already knowing who they are.
- [Megazord: rebuilding shift ingestion for Works](/posts/megazord): Shift ingestion is the hot path under every decision Works makes: pricing, balancing, fill-rate analytics, dashboards. Megazord is the rewrite that decoupled ingestion from query, normalized demand into 15-minute blocks, and made a year of fill-rate data queryable in sub-250ms. This post is the design and what it unlocked.
- [Automatic Qualifications: Keeping Humans in the Loop](/posts/rules-engine-ai-qualification): Qualifying a clinician against a job used to be an advocate reading prose requirements in one tab and clicking through credential tabs in another. We rebuilt the layer: every requirement is a typed JobRule, every credential is structured data, and an AI layer at temperature 0.0 handles the ambiguous cases, with a human feedback record captured on every execution.
- [Nursys in production: real-time RN license verification at scale](/posts/nursys-license-verification): Our first Nursys integration shipped in 2018 as a one-time upfront call. For seven years a back-office team still read the results, pasted screenshots, and re-checked on a slow clock. In August 2025 we made verification a real-time rule inside the submission pipeline: compact-license logic across 40 NLC states as a graph, structured disciplinary handling, and a Trusted-branded Quick Confirm report.
- [Hard constraints, soft preferences: the load balancer for clinical schedules](/posts/load-balancer): A nurse manager runs eight hundred constraint checks in her head between 6 and 7 AM every Tuesday. That's the work we're replacing: not her judgment, the mechanical part. This is how we wrapped a constraint solver in a pipeline she can trust: the constraint model, the run loop, the worker-sync prerequisite, and what gets handed back as output.
- [Premium pay tied to commitment: the FTE attainment model](/posts/fte-linked-incentives): Flat shift bonuses pay everyone the same premium for the same hard shift, whether they're a core nurse covering for the unit or a per-diem chasing the highest-paying slot. That sends the wrong signal and leaks dollars. This walks through how we gated premium pay on FTE attainment, the snapshot model underneath it, and the payroll plumbing that keeps it auditable.
- [Reverse-invoicing automation: 50 vendors, one ledger](/posts/vendor-reconciliation): Every week, fifty-plus VMSs send us invoice data in their own dialect. The vendor knows a worker by name string; we know her by UUID. One system calls a unit 'ICU-MED-3W'; we have a clinical unit ID. This walks through the four-step pipeline, the identity bridge, and the two design moves we keep reaching for: derived state, and query-enforced idempotency.
- [Interactions: every conversation in one record](/posts/interactions-layer): For years, every channel between Trusted and a clinician lived in a different vendor's console. We pulled all of it onto the clinician record as a single interaction event. The interesting problem wasn't ingestion. It was identity resolution: phone numbers and emails aren't people, and a person is a record with a history.
- [GLaDOS: the LLM layer](/posts/glados-llm-layer): Most Rails apps grow five different patterns for the same five LLM problems: retries, logging, validating shape, surfacing failures, chaining calls. GLaDOS is the single internal pack we built to handle all of it: prompts as first-class objects, structured-output validation, async execution, audit logs, and a multi-turn agentic loop with tool calling.
- [Submission packet generation](/posts/submission-packet): Every VMS wants the packet built differently: which documents, in what order, in what format, by which channel. We modeled that variation as three rule tables resolved hierarchically against job, program, and global defaults. Adding a new client is now a config change instead of a code change.
- [Automated reference capture](/posts/reference-automation): A travel nurse needs two professional references to submit to most jobs. Until recently, getting them was almost entirely manual: a person sitting between the clinician, the contact, and the submission deadline. This walks through what we automated, what we left alone, and why the design decisions that mattered most were operational rather than architectural.
- [Job Curation: what curation actually is](/posts/curation-foundation): Curation isn't a quality step on top of an existing job record. It's the step that makes the record exist. This post unpacks the four tasks curation is really made of and the domain model underneath them. Get the model right and auto-curation follows. Skip it and you automate a screen instead of a domain.
- [VMS Ingestion Automation: 30 sources, one job model](/posts/vms-ingestion): Thirty-plus VMS platforms sit in our ingestion path, each with its own dialect. The naive answer (one importer per vendor) turns into code nobody wants to touch. This walks through the pipeline we built instead and the design decision that did the most work: pushing classification into a rule table, so adding a new vendor is a small PR, not a rewrite.
- [Comp Calculator v2: PayBill Unification](/posts/comp-calculator-v2): Three places in the codebase computed nurse pay packages. They had drifted from each other, and none kept a record of what produced a given number. Comp Calculator v2 collapsed them into one engine backed by immutable snapshots, plus a migration bug that taught us what semantic drift between V1 and V8 actually costs.
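A few of the patterns named above are concrete enough to sketch. Megazord's move of normalizing demand into 15-minute blocks, for instance, comes down to expanding each shift into uniform time slices that aggregate cleanly. This is an illustrative toy, not Megazord's code: `to_blocks` is a name we made up, and a real ingestion path would have to handle time zones and DST. The property it shows is the useful one: a 12-hour shift always becomes exactly 48 blocks.

```ruby
require "time"

# Toy sketch: expand a shift into 15-minute demand blocks.
# Uniform blocks are what make fill-rate math a simple aggregation.
BLOCK_SECONDS = 15 * 60

def to_blocks(starts_at, ends_at)
  t = Time.parse(starts_at)
  finish = Time.parse(ends_at)
  blocks = []
  while t < finish
    blocks << t        # each entry is one 15-minute unit of demand
    t += BLOCK_SECONDS
  end
  blocks
end
```

A 07:00-19:00 shift expands to 48 blocks, the first starting at 07:00 and the last at 18:45, so overlapping shifts from different sources sum block by block instead of needing interval arithmetic.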
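The "query-enforced idempotency" move from the reverse-invoicing post can also be sketched in a few lines. Again this is our illustration, with hypothetical names (`Ledger`, `record_line`); in Postgres the same idea is an `INSERT ... ON CONFLICT DO NOTHING` keyed on the line's natural identity. The point is that the write itself checks for a prior entry, so no caller has to remember whether a vendor file was already processed.

```ruby
# Hypothetical sketch of query-enforced idempotency: the insert is keyed on
# the natural identity of an invoice line, so replaying a vendor file
# cannot double-book it.
class Ledger
  Entry = Struct.new(:vendor_id, :invoice_number, :line_hash, :amount_cents)

  def initialize
    @entries = {}
  end

  # Returns the entry if newly recorded, nil if it already existed.
  # The lookup inside the write enforces idempotency, not the caller.
  def record_line(vendor_id:, invoice_number:, line_hash:, amount_cents:)
    key = [vendor_id, invoice_number, line_hash]
    return nil if @entries.key?(key)
    @entries[key] = Entry.new(vendor_id, invoice_number, line_hash, amount_cents)
  end

  def total_cents
    @entries.values.sum(&:amount_cents)
  end
end
```

Replaying the same file becomes harmless: the second `record_line` for a given key returns `nil` and the ledger total doesn't move.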
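Finally, the hierarchical rule resolution behind submission packets (job overrides program, program overrides global defaults) reduces to a precedence lookup over a rule table. The rule shapes and values below are invented for illustration; the resolution order is the part the post describes.

```ruby
# Hypothetical rule table: most specific scope wins (job > program > global).
PACKET_RULES = [
  { scope: :global,  key: "document_order",   value: %w[resume license references] },
  { scope: :program, program_id: "p1", key: "document_order", value: %w[license resume references] },
  { scope: :job,     job_id: "j9",     key: "delivery_channel", value: "sftp" },
  { scope: :global,  key: "delivery_channel", value: "email" }
]

def resolve_rule(rules, key:, job_id:, program_id:)
  scoped = rules.select { |r| r[:key] == key }
  scoped.find { |r| r[:scope] == :job && r[:job_id] == job_id } ||
    scoped.find { |r| r[:scope] == :program && r[:program_id] == program_id } ||
    scoped.find { |r| r[:scope] == :global }
end
```

Under this shape, onboarding a new client really is a config change: you append rows at the program or job scope, and every key they don't override falls through to the global default.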