Nursys in production: real-time RN license verification at scale

Sep 23, 2025 · 8 min
marketplace

A nurse submits to a shift on a Friday afternoon. The shift starts Monday at 7 a.m. Somewhere in the back office, a queue ticks up by one. A coordinator opens the Nursys record we pulled at onboarding, eyeballs the result, switches to a different tab to confirm nothing’s changed since the last lookup, and pastes a screenshot into a comment. Maybe the license is clean. Maybe there’s a flag on it. Maybe the nurse holds a compact multi-state license and the coordinator now has to remember which 40 states are in the Nurse Licensure Compact this quarter.

That was the workflow into 2025. We’d had Nursys wired in since 2018, but the integration ran as a one-time call: pull the license record when the nurse signed up, store it, surface it to the back office. Verification itself still happened between systems, by hand, with latency measured in hours and an audit trail measured in screenshots. The integration was a data feed for a manual workflow, not a real-time rule.

In August 2025 we rebuilt it. License verification is now a real-time qualification rule, backed by the same Nursys integration but evaluated inside the submission pipeline on every relevant change. The upstream call is the easy part; the work was everything we had to build around it.

Why license verification sits on the hot path

Three things make license verification a different kind of compliance check than, say, an EMR proficiency form.

It is blocking. A clinician without a verifiable, current license cannot work the shift. If the verification is slow, the submission is slow, and the shift fills late or doesn’t fill at all. Fill speed is the whole game in the per-diem market.

It is dynamic. License status changes. A name change after a marriage, a disciplinary action, a lapse during renewal --- any of these can flip a previously-verified clinician from eligible to ineligible. A PDF from six months ago is not the same artifact as a real-time check today.

It is auditable. Health systems are accountable to regulators for the credentials of every clinician who touches a patient. “We checked” isn’t enough. The record needs to show the source, the timestamp, the result, and the exact license that was checked.

The pre-2025 workflow handled all three poorly even with the integration in place. The data feed was real; the workflow on top of it wasn’t. A back-office team could only process verifications at human speed, could only detect changes when they re-checked, and produced an audit artifact that was whatever screenshot got pasted into a ticket. The system worked because the people doing it were diligent. It didn’t scale.

The architectural shift

[Figure: before-and-after diagram. Left, the manual flow: an operations worker clicking through the Nursys public UI, feeding a slow-moving verification ticket. Right, the direct API integration: Nursys at the top, a hit-or-clean structured response in the middle, the generated Trusted Quick Confirm report as the output. Caption: manual verification on the left, automated real-time verification on the right.]

The real-time qualification rule shipped in August 2025. The Nursys data feed had been in place for seven years; what changed in August was where the check ran and what it produced. The shape of the change is the one we’ve been writing about across this publication: move the compliance check from a downstream back-office step into the upstream qualification surface, where the rule is evaluated automatically against authoritative data, with a structured, auditable output.

Concretely, three things changed at the same time:

The source moved from a screenshot of the public UI to a structured Nursys response, persisted with timestamps and source metadata.

The evaluation moved from human judgment into rule code, with every input the rule depends on (license number, state of issue, current practice state, disciplinary status, name match) treated as an explicit field rather than a thing a person eyeballs.

The output moved from a comment thread into a Trusted-generated artifact: the Quick Confirm Report, which we ship to customers in our own format rather than asking them to trust the upstream Nursys UI.
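To make the second change concrete, here is a minimal sketch of what “every input treated as an explicit field” looks like in rule code. The names (`LicenseRecord`, `evaluate_license_rule`) and the exact field set are illustrative, not our actual API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LicenseRecord:
    # Structured fields parsed from the upstream response,
    # persisted with timestamps and source metadata.
    license_number: str
    state_of_issue: str
    practice_state: str
    status: str             # e.g. "active", "expired", "suspended"
    name_match: bool
    has_open_discipline: bool

def evaluate_license_rule(record: LicenseRecord) -> tuple[bool, list[str]]:
    """Return (eligible, reasons). Every input is an explicit field,
    so two identical records always produce the same determination."""
    reasons = []
    if record.status != "active":
        reasons.append(f"license status is {record.status}")
    if not record.name_match:
        reasons.append("name on license does not match profile")
    if record.has_open_discipline:
        reasons.append("open disciplinary action on record")
    return (not reasons, reasons)
```

The point of the `reasons` list is the third change: a rejection carries the exact fields that drove it, not a comment-thread summary.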

Compact licenses are a graph problem

[Figure: stylized US map rendered as a grid of cells, with the 40 Nurse Licensure Compact states highlighted in mint and connector arrows routing from a primary-state cell to practice-state cells. Caption: 40 NLC states, one license, primary state to practice state.]

The Nurse Licensure Compact is an interstate agreement that lets a nurse hold a single multi-state license issued by their primary state of residence and use it to practice in any other NLC state. As of this writing, 40 states are in the compact. Connecticut joined in October 2025, which sounds like a small event until you remember that every NLC change is a change to the graph the verification rule has to traverse.

A non-compact license is a single edge: licensed in this state to practice in this state. A compact license has more structure. The primary-state license is the ground-truth document. The practice state is wherever the nurse is currently working. As long as both states are NLC members and the nurse’s primary residency aligns with the issuing state, the single license covers practice in the other.

The places where this gets hard:

A nurse holds a compact license issued by State A but is now working in State B. The Nursys response keys off State A; the qualification rule needs to know the nurse is practicing in State B. Six months later that same nurse has moved to State C and submitted to a shift in State D. The query changes; the underlying license number does not.
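The coverage check itself is small once the inputs are explicit. A sketch, with a hypothetical `compact_covers` and a deliberately tiny membership set (the real set has 40 states):

```python
# Illustrative subset; the real membership set has 40 states.
NLC_MEMBERS = {"TX", "FL", "NC", "CT"}

def compact_covers(issuing_state: str, practice_state: str,
                   residency_ok: bool) -> bool:
    """A license issued in one NLC state covers practice in another
    NLC state, provided primary residency aligns with the issuing
    state. Practice in the issuing state itself is always covered."""
    if issuing_state == practice_state:
        return True
    return (issuing_state in NLC_MEMBERS
            and practice_state in NLC_MEMBERS
            and residency_ok)
```

Note that the license number never appears: the query is over the edge (issuing state, practice state), which is exactly why the same license answers differently as the nurse moves.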

A nurse’s primary state of residence is in transition. The textbook compact rule asks that the nurse’s permanent address and driver’s license state both match the issuing state. In real life, a nurse who moved last month may still have the old driver’s license while their permanent address has updated, or vice versa.

In March 2026 we relaxed that part of the rule: only one of the two needs to match, not both. The strict version was rejecting compact licenses for nurses who were unambiguously eligible, because their paperwork lagged the move by a few weeks. The relaxed version captures the intent of the compact without insisting that every form of ID has caught up.
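The strict-versus-relaxed distinction fits in a few lines of illustrative Python (the function names are mine, not the production rule’s):

```python
def residency_aligned_strict(issuing_state: str, address_state: str,
                             dl_state: str) -> bool:
    """Strict version: both forms of ID must match the issuing state."""
    return address_state == issuing_state and dl_state == issuing_state

def residency_aligned(issuing_state: str, address_state: str,
                      dl_state: str) -> bool:
    """Relaxed version: either the permanent address or the driver's
    license matching the issuing state is enough."""
    return issuing_state in (address_state, dl_state)
```

The nurse who moved from Oklahoma to Texas last month, with a Texas address and an Oklahoma driver’s license, fails the strict check and passes the relaxed one.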

A state changes status. When Connecticut joined the NLC in October 2025, every existing Connecticut single-state RN became a compact licensee on the date of the change. Our graph needed to know that, and so did every qualification rule pointing at a Connecticut nurse. The right way to model this isn’t a hard-coded list of compact states scattered across the codebase; it’s a single source of truth the rule consults at evaluation time, refreshed when NLC membership changes.
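A minimal sketch of that single source of truth, assuming an effective-date table consulted at evaluation time. The exact dates here are placeholders (the post only pins Connecticut to October 2025):

```python
from datetime import date

# Single source of truth: state -> date its NLC membership took effect.
# Illustrative entries; the real table carries all 40 member states.
NLC_EFFECTIVE = {
    "TX": date(2018, 1, 19),   # placeholder date
    "CT": date(2025, 10, 1),   # placeholder day; Connecticut joined Oct 2025
}

def is_nlc_member(state: str, as_of: date) -> bool:
    """Consulted at evaluation time, so a membership change updates every
    qualification rule at once instead of N hard-coded lists."""
    effective = NLC_EFFECTIVE.get(state)
    return effective is not None and effective <= as_of
```

When Connecticut flips, the only change is one row in the table; every rule pointing at a Connecticut nurse picks it up on the next evaluation.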

A back-office team can carry the compact rules in their heads, mostly. The system has to carry them explicitly, because it runs in real time against every submission, and the cost of a wrong rejection isn’t one annoyed nurse---it’s a shift that doesn’t fill.

License hits as a structured rule

Most license verifications come back clean. The ones that don’t are where the system has to be most careful.

Nursys returns “hits” on a license: disciplinary actions, name changes, status changes (active, encumbered, expired, suspended, revoked). In the old workflow, a hit landed in a comment field as free text. Sometimes the note was “disciplinary action in 2018, cleared in 2019, ok to proceed.” Sometimes it was “hit on record, escalating.” Sometimes the field was empty and someone three steps downstream had to call the coordinator to ask what happened.

Free text is fine as an input to a human. It isn’t fine as an input to a rule. Code can’t decide whether a free-text comment means eligible or not without reading it, and at our volume, reading every comment is a smaller version of the original manual workflow.

In October 2025 we shipped the License Hits Rule, which replaces free-text handling with a structured determination. Every Nursys response is parsed for hits. Each hit is classified by action type, issue date, and current status. The rule encodes which combinations are disqualifying for which contexts, and which are informational. The output is a determination, not a comment.
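A hedged sketch of what a structured determination might look like. The action types and the classification table are invented for illustration; the real rule encodes many more combinations, per context:

```python
from enum import Enum

class Disposition(Enum):
    CLEAR = "clear"
    DISQUALIFYING = "disqualifying"
    NEEDS_REVIEW = "needs_review"

# Illustrative classification table keyed by (action_type, current_status).
HIT_RULES = {
    ("revocation", "active"): Disposition.DISQUALIFYING,
    ("suspension", "active"): Disposition.DISQUALIFYING,
    ("discipline", "resolved"): Disposition.CLEAR,
}

def classify_hits(hits: list[dict]) -> Disposition:
    """Fold every hit into one determination: any disqualifying hit wins,
    any unknown combination escalates to review, otherwise clear."""
    worst = Disposition.CLEAR
    for hit in hits:
        key = (hit["action_type"], hit["current_status"])
        disposition = HIT_RULES.get(key, Disposition.NEEDS_REVIEW)
        if disposition is Disposition.DISQUALIFYING:
            return disposition
        if disposition is Disposition.NEEDS_REVIEW:
            worst = disposition
    return worst
```

The default-to-review behavior is the important design choice: a combination the table doesn’t know about goes to a human, it doesn’t silently pass.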

This matters in three ways.

The rule is the same every time. Two submissions with the same hit profile get the same answer. Coordinator A and coordinator B don’t disagree about the same record three days apart.

The rule is auditable. When a hit-based rejection comes back to a clinician, we can show exactly which fields drove the decision and which version of the rule was applied. We can re-run the rule against historical data when the rule changes.

The rule is changeable. When a customer asks us to handle a specific category of hit differently --- this state’s administrative-only actions don’t disqualify, that state’s probationary status requires manager approval --- the change happens in one place. The previous version of this was a Slack channel and a wiki page.

The Quick Confirm Report

The Nursys public UI is the source of truth, and it is also a UI we don’t control. Formatting changes. Fields move. The artifact a customer would screenshot today doesn’t look quite like the one from a year ago. For a compliance team filing the same kind of document for every clinician, that’s friction.

In September 2025 we shipped the Trusted Quick Confirm Report, a Trusted-branded verification artifact generated from the Nursys response at the moment of verification. It includes the nurse’s name as Nursys returned it, the license number, the issuing state, the license type and current status, the compact-eligibility determination, the result of the hits rule, the verification timestamp, and the Nursys reference for the underlying check.

The report is generated on every verification, lives alongside the clinician’s profile, and is attached to the submission. Audit teams can pull it months later and see the same artifact they would have seen on the day of the check, in a format we own.
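Those fields fit naturally into a small immutable record. This is an illustrative shape, not the actual report schema:

```python
from dataclasses import dataclass, asdict
from datetime import datetime

@dataclass(frozen=True)
class QuickConfirmReport:
    # Field names are illustrative; the set mirrors what the post lists.
    nurse_name: str            # as returned by Nursys
    license_number: str
    issuing_state: str
    license_type: str
    status: str
    compact_eligible: bool
    hits_determination: str
    verified_at: datetime
    nursys_reference: str

    def to_record(self) -> dict:
        """Serialized form persisted with the profile and attached to the
        submission, so an audit months later sees the artifact exactly
        as it stood at the moment of the check."""
        record = asdict(self)
        record["verified_at"] = self.verified_at.isoformat()
        return record
```

Freezing the dataclass is deliberate: the report is a point-in-time artifact, and nothing downstream should mutate it.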

My first instinct on this project was to skip the report and point compliance teams at Nursys directly. That instinct was wrong. Pointing at a third-party UI for proof-of-verification means your audit artifact is whatever that UI happens to render today, and a customer reading it is missing the Trusted-side context (the compact determination, the hits rule version, the timestamp of our check) that explains why we approved the submission. The report is where the verification stops being a third-party query and starts being a Trusted determination, with the upstream source attached.

Profile validation, downstream

The verification result doesn’t just live on submissions. It feeds back into the clinician profile. The license fields on a Trusted RN profile are now validated against the Nursys-derived Trusted Status---what we know about that license at the point of last verification. A clinician updating their license in the app sees the field validate against the upstream record. A profile with stale or contradictory license data flags itself before a submission is attempted.

The qualification rule fires at submission time, but the profile is the long-lived artifact that every submission references. Keeping the profile aligned with the upstream source is what prevents the rule from doing the same verification a hundred times for the same clinician across a hundred submissions.
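That stale-or-contradictory flagging can be sketched as a field-by-field diff of the editable profile against the record from the last verification (field names are illustrative):

```python
def profile_flags(profile: dict, trusted_status: dict) -> list[str]:
    """Compare editable profile fields against the Nursys-derived record
    from the last verification; any mismatch flags the profile before a
    submission is attempted."""
    flags = []
    for field in ("license_number", "issuing_state", "status"):
        if profile.get(field) != trusted_status.get(field):
            flags.append(f"{field} disagrees with last verification")
    return flags
```

An empty list means the profile still agrees with the upstream source, and the qualification rule can reuse the last verification instead of re-querying.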

--- Vinicius, Engineering
