Speed matters: from job posting to submitted candidate in 10 minutes

Apr 22, 2026 · 10 min
marketplace

At a typical travel-nursing agency, getting a clinician in front of a hospital is still a nine-step human chain. A recruiter monitors VMS portals, calls the nurse to confirm interest, walks through the requirements, chases missing documents, hands the packet off for assembly, and uploads to the VMS herself. On a good day she finishes a submission in a few hours; on a normal day it takes two days. Three to five days, end to end, is the typical turnaround.

It isn’t that any one person is slow. The system is. Nine sequential human steps, each one waiting on the one before it.

We rebuilt that system. For a new job activating against a pre-qualified clinician on Auto Apply, the end-to-end clock from posting to submitted candidate now runs under 10 minutes. That number is the headline. There isn’t one clock running in the submission pipeline, though. There are three, gated by different bottlenecks, and they tell three different stories about what speed actually is.

[Figure: Three horizontal bars, one per clock in the submission pipeline. Clock 1 (job posted to JPRO created) is the longest, Clock 2 (JPRO created to VMS submitted) is medium, Clock 3 (ready-to-submit to VMS) is short. Each bar splits into a mint automated segment and a gold human-review segment.]

Three clocks, not one

When someone asks how fast Trusted submits to jobs, the honest answer starts with which submission they’re asking about.

Clock 1 starts when a job is posted by a hospital or a VMS and ends when a clinician’s application is created on our side. We call that application object a JPRO. Across the full population of submissions today, the median time from job creation to JPRO creation reads in days. That isn’t because applications take days to create. It’s because most submissions are happening on jobs that have been sitting in the system for a while, getting picked up by clinicians who weren’t targeted at activation. This is a matching and activation clock, not a submission clock. The fix isn’t about pipeline speed; it’s about getting the right clinician to the right new job at activation time. That’s the work covered in Every Open Shift and in Auto apply.

Clock 2 starts when a JPRO is created and ends when the packet is submitted to the VMS. This is the qualification and submission pipeline itself. The median has sat near an hour since late 2023, hovering in the 0.7 to 1.3 hour range. The rules engine and AI qualification layer (Automatic Qualifications: Keeping Humans in the Loop) compressed this one and then held it flat.

Clock 3 starts when a JPRO is marked ready to submit and ends when the packet hits the VMS. This is the last-mile clock, and the one we control most directly. Median today: about 8 minutes. In 2023 the same step took 22 to 38 minutes. Automated packet assembly is what moved it; Submission packet generation covers that piece. The clinician-facing version of the same compression shows up in a one-tap apply.

Taking the population median of Clock 1 and framing it as “submission time” is a category error. A median that mixes a brand-new job activating against a pre-qualified Auto-Apply clinician with a 14-day-old job finally getting picked up by someone browsing the listings is measuring two different physics. Treat them as one number and the story gets blurry. Treat them as three clocks and what we’ve actually built becomes visible.
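To make the distinction concrete, here's a minimal sketch of the three clocks as interval metrics, with the Clock 1 median split by cohort. Every name here is hypothetical (jobPostedAt, jproCreatedAt, readyToSubmitAt, vmsSubmittedAt are illustrative fields, not our actual schema), and the one-day cohort cutoff is arbitrary:

```ts
// Illustrative event timestamps for one submission. Field names are
// assumptions for this sketch, not Trusted's actual schema.
interface SubmissionEvents {
  jobPostedAt: number;     // epoch ms: job activated by the hospital or VMS
  jproCreatedAt: number;   // application object created on our side
  readyToSubmitAt: number; // qualification done, packet ready for review
  vmsSubmittedAt: number;  // packet hits the VMS
}

const clock1 = (e: SubmissionEvents) => e.jproCreatedAt - e.jobPostedAt;      // matching/activation
const clock2 = (e: SubmissionEvents) => e.vmsSubmittedAt - e.jproCreatedAt;    // qualification pipeline
const clock3 = (e: SubmissionEvents) => e.vmsSubmittedAt - e.readyToSubmitAt;  // last mile

function median(xs: number[]): number {
  if (xs.length === 0) return NaN;
  const s = [...xs].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

// One population median for Clock 1 mixes two cohorts. Segmenting by job
// age at application time separates the two "physics" described above.
const DAY_MS = 24 * 60 * 60 * 1000;

function clock1ByCohort(events: SubmissionEvents[]) {
  const fresh = events.filter((e) => clock1(e) < DAY_MS);  // new-job activations
  const stale = events.filter((e) => clock1(e) >= DAY_MS); // late pickups on old jobs
  return {
    freshMedianMs: median(fresh.map(clock1)),
    staleMedianMs: median(stale.map(clock1)),
  };
}
```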

[Figure: Four eras of submission speed in four columns. 2020: 3 to 5 days, nine sequential human steps to assemble a packet. 2023: roughly 24 hours, manual auto-submission pilot with a five-hour internal target. 2024: hours, with autocuration, Auto Apply, and the rules engine in production. 2025-2026: minutes, with AI qualification, SRP, and automated packets on the pre-qualified path. Caption: from days to minutes.]

The agency baseline: nine steps in a chain

The agency baseline is worth describing in full, because it’s the system most of the industry still runs on, and it’s what every later compression we built had to beat.

Every new job traverses nine steps before it becomes a submission. An account manager sets up the program and the submittal requirements. A curator pulls the job from a VMS queue, confirms the role’s requirements, and marks it active. A nurse browses her matches, spots it, requests an interview. The app flags gaps in her profile. She uploads what’s missing. A nurse advocate picks up a qualification task, opens a side-by-side comparison of profile against requirements, and marks the application ready. The submissions team generates the packet, checks the cover sheet, uploads to the VMS, and updates the record to “submitted to vendor.”

Nine handoffs. Each handoff is a queue. The nurse advocate qualification step alone averages 20 to 25 hours of wall-clock time, most of it spent waiting for a human to get to the task.
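A back-of-envelope model shows why the chain, not any individual, is slow. The step names follow the chain above; every number except the advocate step's observed 20-to-25-hour wait is invented for illustration:

```ts
// Each step has a hands-on service time and a queue wait before a human
// picks it up. All numbers are illustrative except the advocate wait.
interface Step { name: string; queueWaitHrs: number; serviceHrs: number }

const baseline: Step[] = [
  { name: "program setup",           queueWaitHrs: 2,  serviceHrs: 0.5 },
  { name: "job curation",            queueWaitHrs: 3,  serviceHrs: 0.2 },
  { name: "nurse browses, applies",  queueWaitHrs: 8,  serviceHrs: 0.1 },
  { name: "app flags profile gaps",  queueWaitHrs: 0,  serviceHrs: 0   },
  { name: "nurse uploads documents", queueWaitHrs: 6,  serviceHrs: 0.3 },
  { name: "advocate qualification",  queueWaitHrs: 22, serviceHrs: 0.5 }, // 20-25h observed
  { name: "packet assembly",         queueWaitHrs: 4,  serviceHrs: 0.5 },
  { name: "VMS upload",              queueWaitHrs: 2,  serviceHrs: 0.2 },
  { name: "record update",           queueWaitHrs: 1,  serviceHrs: 0.1 },
];

// Steps are sequential, so wall-clock time is the simple sum.
const total = baseline.reduce((t, s) => t + s.queueWaitHrs + s.serviceHrs, 0);
const handsOn = baseline.reduce((t, s) => t + s.serviceHrs, 0);
console.log(`${total.toFixed(1)}h wall clock, ${handsOn.toFixed(1)}h hands-on`);
// ~50h of wall clock against ~2.4h of actual work: queues, not people.
```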

The constraint at that point is the submissions team. The leverage from making packet assembly faster or the VMS upload less error-prone is real but bounded. Even a maximally efficient submissions team is still nine queued steps from the job posting.

Why we made speed the strategy

By Q1 2023 the market had changed. Job demand was declining. Competition was tightening. Jobs started closing faster, sometimes within a day of posting, sometimes within hours. Being first to submit went from a nice-to-have to a structural lever, because the VMS often paused or closed a posting once it had enough candidates to review.

To prove the economics, we ran a manual pilot. A person refreshed a sheet every 30 minutes, created applications for opted-in nurses when a matching job appeared, and tagged the result for follow-up. It was a person doing what software should do. The point wasn't to ship it; it was to find out whether first-mover advantage was worth the engineering cost of automating the chain. It was.
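As software, that pilot is essentially a poller. A sketch, with every function name hypothetical and stubbed for illustration:

```ts
type Job = { id: string; specialty: string; state: string };
type Nurse = { id: string; specialty: string; states: string[] };

// Stubs standing in for the real feed, opt-in list, and application call.
async function fetchNewJobs(): Promise<Job[]> { return []; }
async function fetchOptedInNurses(): Promise<Nurse[]> { return []; }
async function createApplication(nurseId: string, jobId: string): Promise<void> {}

const matches = (n: Nurse, j: Job) =>
  n.specialty === j.specialty && n.states.includes(j.state);

// The pilot refreshed a sheet every 30 minutes; a poller does the same.
setInterval(async () => {
  const [jobs, nurses] = await Promise.all([fetchNewJobs(), fetchOptedInNurses()]);
  for (const job of jobs) {
    for (const nurse of nurses.filter((n) => matches(n, job))) {
      await createApplication(nurse.id, job.id); // tagged for follow-up in the pilot
    }
  }
}, 30 * 60 * 1000);
```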

The follow-on initiative made the direction explicit: compress the time from job ingestion to VMS submission and treat it as the primary lever rather than a downstream metric. Three sub-levers fell out of that: faster ingestion, faster application creation, faster qualification and packet assembly.

What followed was something more interesting than chasing a single number: taking each lever seriously as its own bottleneck and compressing them independently.

Compressing ingestion

The first major gain came from autocuration. Jobs arrived from VMS partners with structured but incomplete data. A curator had to look at the source, fill in required fields like specialty taxonomy, shift type, unit details, and mark the job active. For well-structured feeds, that was a few minutes of work. For messy feeds, it was longer, and it queued behind every other job in the curator’s view.

An AI-powered curation step replaced most of the manual data entry for jobs whose source data was clean enough to map deterministically. The curation queue became a review queue. A human looks at the edge cases the model wasn’t confident about, and the rest of the feed flows through. Ingestion-to-active dropped from hours to minutes for most jobs.
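In sketch form, the routing is a single confidence branch. The curate call and the 0.95 cutoff are assumptions for illustration, not our actual model interface:

```ts
interface RawJob { source: string; fields: Record<string, string> }
interface CuratedJob { specialty: string; shiftType: string; unit: string }

// Hypothetical curation call standing in for the AI step: maps raw VMS
// fields to our taxonomy and reports how confident the mapping is.
declare function curate(raw: RawJob): { job: CuratedJob; confidence: number };

const CONFIDENCE_THRESHOLD = 0.95; // illustrative cutoff

function routeJob(raw: RawJob): { status: "active" | "needs_review"; job: CuratedJob } {
  const { job, confidence } = curate(raw);
  return confidence >= CONFIDENCE_THRESHOLD
    ? { status: "active", job }        // flows straight through
    : { status: "needs_review", job }; // curator handles the edge case
}
```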

The curator role didn’t disappear. It moved up the value stack, from data entry to exception review and program-level QA.

Compressing application creation

Eliminating the wait between “job is active” and “first application exists” was the second gain.

Two mechanisms attack this from different angles. Job diversions identify the best-fit clinicians in the pool the moment a job activates and surface the job at the top of their matches with a notification. A clinician actively checking her matches can apply within minutes.

Auto Apply goes further. For opted-in clinicians, the application fires automatically at job activation. The clinician has already told us what she wants. She doesn't need to be browsing at the moment the job posts; she just needs to have set her preferences. The moment a matching job goes active, an application is in progress.

Together, these two mechanisms compress the activation-to-application interval from hours of waiting for someone to browse, down to seconds for high-volume activations where matched clinicians are already opted in.
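Roughly, the activation hook branches on opt-in. All names here (findMatchedClinicians, createJPRO, divertToTopOfMatches) are hypothetical stand-ins:

```ts
interface Clinician { id: string; autoApplyOptIn: boolean }

// Hypothetical calls standing in for matching, application creation,
// and the job-diversion notification.
declare function findMatchedClinicians(jobId: string): Promise<Clinician[]>;
declare function createJPRO(clinicianId: string, jobId: string): Promise<void>;
declare function divertToTopOfMatches(clinicianId: string, jobId: string): Promise<void>;

async function onJobActivated(jobId: string): Promise<void> {
  for (const c of await findMatchedClinicians(jobId)) {
    if (c.autoApplyOptIn) {
      await createJPRO(c.id, jobId);           // Auto Apply: application exists at T+seconds
    } else {
      await divertToTopOfMatches(c.id, jobId); // job diversion + notification
    }
  }
}
```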

This is the lever that bends Clock 1 for new jobs. It doesn’t bend the population median, which still includes a long tail of late applications to old jobs. But for a brand-new job that matches a pre-qualified clinician on Auto Apply, the first application now exists before most workflows would have noticed the job was up.

Compressing qualification and the packet

The third bottleneck used to be the 20-to-25-hour manual qualification window. Even with an application created immediately, a human had to read the profile against the rules before the packet could be assembled.

For happy-path applications, the rules engine collapsed that window. About 60% of qualification items today complete automatically against structured profile data. A clinician with a current ACLS, a verified license in the right state, and the credentials the program requires gets cleared without anyone looking. The AI qualification layer handles the items that don't have a clean deterministic answer: text on a transcript, a slightly off-format certificate, an ambiguous timestamp. Advocates review the edge cases and the things the model wasn't confident about. That's the Automatic Qualifications: Keeping Humans in the Loop story in more detail.
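As a sketch, the three tiers compose like this. The names and the 0.9 confidence threshold are illustrative assumptions, not the real interface:

```ts
type Verdict = "cleared" | "failed" | "needs_advocate";
interface QualItem { id: string; structured: boolean }

// Hypothetical checks: ruleCheck returns null when no deterministic
// rule applies; aiCheck returns a verdict with a confidence score.
declare function ruleCheck(item: QualItem): "cleared" | "failed" | null;
declare function aiCheck(item: QualItem): { verdict: "cleared" | "failed"; confidence: number };

function qualify(item: QualItem): Verdict {
  if (item.structured) {
    const verdict = ruleCheck(item);      // current ACLS, verified license, etc.
    if (verdict !== null) return verdict; // ~60% of items end here
  }
  const { verdict, confidence } = aiCheck(item); // transcript text, off-format certs
  return confidence >= 0.9 ? verdict : "needs_advocate"; // threshold is illustrative
}
```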

Once qualification completes, packet assembly fires automatically. The packet logic knows which artifacts the program requires, where each one lives, and how it should be formatted. Where assembly used to take 22 to 38 minutes end to end, it now runs in seconds. Most of what’s left in Clock 3 is the human submission review for programs that require one, not the assembly itself.
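A minimal sketch of that assembly step, under assumed names (fetchArtifact and convert stand in for the real artifact store and formatter):

```ts
interface Requirement { artifactType: string; format: "pdf" | "image" }
interface Artifact { type: string; bytes: Uint8Array }

// Hypothetical calls: pull an artifact from wherever it lives, then
// render it in the format the program requires.
declare function fetchArtifact(clinicianId: string, type: string): Promise<Artifact>;
declare function convert(a: Artifact, format: "pdf" | "image"): Artifact;

// Assembly is a parallel fetch-and-format; nothing here waits on a human.
async function assemblePacket(clinicianId: string, reqs: Requirement[]): Promise<Artifact[]> {
  return Promise.all(
    reqs.map(async (r) => convert(await fetchArtifact(clinicianId, r.artifactType), r.format)),
  );
}
```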

The review step is still there, deliberately. It’s where we catch the things the system shouldn’t silently submit: a recently expired credential the rules engine hadn’t flagged yet, a program-specific format quirk, a packet that’s technically valid but obviously wrong on inspection. Quality assurance is the reason it exists, and we’re comfortable with the minutes it adds. The cost of a bad submission, to the clinician and the program, is higher than the cost of an extra minute on the clock.

Upstream of all of this, a clinician who has completed the activation checklist is already qualified for 80%-plus of jobs without needing anything additional. That's what lets the Auto Apply path clear qualification instantly: the clinician is pre-qualified before the job exists. Getting clinicians to that state was its own multi-year project, and it's worth walking back to, because none of the three clocks runs at all if the clinician isn't ready to apply.

Upstream: removing the activation gate

Before mid-2022, a clinician who finished her Trusted profile couldn’t apply to jobs yet. A CX team member reviewed the profile first. In theory a quality gate; in practice, a momentum killer. The data was unambiguous: a majority of users who reached the “approved” state dropped off before completing the steps to apply. The profile-completion-to-first-application conversion rate had a hole in it, and the hole was the wait.

The gate had a reasonable origin. In the early marketplace, placing a clinician at a hospital required high confidence that the profile was accurate. A submission with missing credentials or an incomplete work history created downstream problems: wasted recruiter time, failed submissions, customer frustration. Manual review was designed to catch those cases before they propagated. But the gate sat at the wrong point in the flow, and it applied the same friction to every clinician regardless of risk. Most clinicians completing their profiles for the first time had accurate, complete information. The profiles that failed review were a minority. The majority spent hours or days in a waiting room for nothing.

1-Step Activation, launched in June 2022, moved the review from prerequisite to concurrent. The clinician completes her checklist and immediately applies; the verification runs alongside the application. If something needs correction she gets targeted communication, not a retroactive block. A new application state, Pending Application, holds the application on her side; the JPRO is created only once the profile is confirmed.

Eight weeks later we extended the same principle further. The Apply button was unlocked for clinicians who hadn't yet completed their profile, and the application flow itself walked them through the missing items inline: add work experience here, send reference requests from this screen, upload your license here. The application became the profile-completion experience for clinicians who hadn't finished one. Applications saved as Pending while a reference was outstanding or a checklist was in progress. When the outstanding item resolved, the JPRO was created automatically.
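A sketch of the state machine captures the principle; states and names are hypothetical simplifications. The application persists while items resolve in parallel, and the JPRO fires when the last one clears:

```ts
type AppState = "pending" | "jpro_created" | "needs_correction";

interface Application {
  state: AppState;
  outstanding: string[]; // e.g. ["reference", "license_upload"]
}

// Called as each outstanding item (reference returned, license verified)
// resolves; the JPRO is created automatically when the last one clears.
function resolveItem(app: Application, item: string): Application {
  const outstanding = app.outstanding.filter((i) => i !== item);
  return {
    ...app,
    outstanding,
    state: outstanding.length === 0 ? "jpro_created" : "pending",
  };
}

// A failed verification triggers targeted communication, not a block.
const flagForCorrection = (app: Application): Application => ({
  ...app,
  state: "needs_correction",
});
```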

The principle is identical to everything that came afterward: meet the clinician where her intent is, do the verification work in parallel rather than as a gate, and reserve the synchronous wait for the items that actually need a human looking. Auto Apply, the rules engine, automated packets---none of it works against a population of clinicians stuck in a review queue. The activation gate had to go first.

[Figure: The current submission pipeline as six columns with timing markers. Job activates at T+0 with ingestion and autocuration. Auto Apply fires at T+seconds, creating the JPRO from preferences. Qualification runs at T+seconds, rules engine plus AI for edge cases. Packet built at T+minutes, artifacts pulled to spec. Review at T+minutes, the human QA step where required. VMS submit, automated, marked done. Caption: automated where possible, human where it matters.]

The current state, told in three clocks

Reading the three clocks today:

| Clock | Start event | End event | Current median | Primary bottleneck |
| --- | --- | --- | --- | --- |
| Clock 1 | Job posted | JPRO created | Days (population) | Mix of late applications to old jobs, not pipeline speed |
| Clock 2 | JPRO created | VMS submitted | ~1 hour | Off-happy-path qualification: advocate review, async credential checks, holds for missing items |
| Clock 3 | Ready-to-submit | Packet hits VMS | ~8 minutes | Deliberate human submission review where the program requires it |

The Clock 1 number isn't a measure of pipeline speed; it's the mix of submissions we see, mostly applications to jobs that have been live for a while. For the slice that matters, a new job activating against a pre-qualified clinician on Auto Apply, the three clocks compose. Job activates at T+0. Auto Apply fires within seconds. Qualification completes automatically against the rules engine within seconds. The packet assembles in seconds. The submission review happens in minutes. End-to-end wall-clock time from activation to VMS: under 10 minutes for that path.
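Summing stage latencies in that order shows how the path stays under the 10-minute mark. These figures are order-of-magnitude stand-ins matching the prose, not measurements:

```ts
// The pre-qualified Auto Apply path, stage by stage, in seconds.
const stages: [string, number][] = [
  ["ingestion + autocuration",    60],
  ["Auto Apply fires",             5],
  ["rules-engine qualification",   5],
  ["packet assembly",             10],
  ["human submission review",    300], // the deliberate minutes
];

const totalSec = stages.reduce((sum, [, sec]) => sum + sec, 0);
console.log(`${(totalSec / 60).toFixed(1)} minutes activation to VMS`); // ≈ 6.3, under 10
```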

That’s the number that matters, because that’s the number that gets a clinician in front of a hospital before the posting closes.

What this composes into

None of these compressions is interesting in isolation. Autocuration on its own would have shaved hours off a process that was still days long. Auto Apply with manual qualification would still have stalled at the 20-to-25-hour qualification window. Automated packet assembly with no rules engine would have produced packets nobody trusted.

What makes the sub-10-minute path real is that each prior bottleneck got solved independently, and the solutions compose. The system has to know the job (ingestion), the clinician’s preferences (Auto Apply), what the program requires (rules engine), and how to assemble a packet against those requirements (packet assembly). And it has to know which of those steps a human still needs to look at (submission review).

Most of the remaining latency is there by design. As qualification accuracy improves and advocate overrides become rarer, the review step compresses too: not to zero, but to a checkpoint rather than a wait. For trusted clinicians on known programs with clean credentials, what's left is the time the VMS itself takes to accept the packet.

--- Engineering
