From form to conversation: rebuilding clinician validations as chat

Oct 29, 2025 · 8 min
marketplaceai

A travel nurse with eleven years of experience opens our onboarding flow on the train home from a 13-hour shift. The form is twelve pages long. By page four she is being asked to explain a three-month gap in her work history; the field is a single-line text input with a 240-character limit. Her actual answer is that she took bereavement leave, moved her mother across state lines, and re-credentialed in a new state. She types “family stuff”, hits next, and the validation rejects it for being too vague. She tries again. She closes the tab.

Field 47 was where we kept losing her. We knew the number because we logged it. We logged dozens of those numbers, every week, on every variation of the form we built. Every time we tried to fix one we created another. The form was the problem, and rearranging the form was not going to fix it.

So we stopped rearranging the form.

The form was always a workaround

A form is a schema rendered as a UI. The fields exist because the database has columns. The validations exist because the columns have types. The pagination exists because no one can stand to look at a 47-field flat list. None of it is about the user; all of it is about the software.

For most of the last two decades, this was the only way to collect structured information from a human. Render the schema, accept what fits, reject what doesn’t, repeat until the human gives up or the data lands clean. Every healthcare onboarding flow we’ve ever seen is some version of this. The form is the lingua franca of healthcare data collection because it was the only thing that worked.

The form asks the human to do the translation. Your life is messy. The schema is not. The work of fitting one into the other is the user’s problem, and the user pays for it in time and in the answers they give up on giving. Family stuff is a perfectly accurate description of three months of someone’s life. It does not pass a validation rule written in 2019.

Once a competent language model can sit between the human and the schema, that translation work can move. The human gets to answer in the language they actually use. The model writes the answer into the schema on their behalf. The form still exists, but the human no longer has to see it.

Chat at the front, form underneath

[Figure: a before-and-after diagram. On the left, a stylized 10-row form with field labels like specialty, license, employment gap, and dates, and a stress signal at the top reading Page 4 of 12. On the right, a chat thread with several message bubbles, one of which contains an inline form rendered with two already-filled rows. An arrow runs from left to right with the caption Form to Conversation.]

In August of 2025 we shipped the first version of conversational validation for clinician work experience. A clinician who needs to clarify an employment gap, correct a misread date, or reconcile two overlapping positions no longer sees a form field. They see a chat. The agent asks one question at a time, in plain English, and writes the structured answer into the same validation surface that a human ops reviewer would have used.

The architecture borrows directly from the Form pattern we wrote about earlier. The chat is the front-end. Underneath, an agent has tools that mutate fields on a Form, the same Form a human ops reviewer fills out in our internal app. The agent reads the conversation, decides which fields the clinician’s answer maps to, calls the right form.set_* tool with a reason, and submits when the form is complete. If validation rejects the submission, the same error message that would have shown red in the human UI comes back to the agent, and the agent asks the next question.

The clinician never sees a validation error. They see a follow-up question that has the validation embedded in it.
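Stripped to its bones, the loop looks something like this. This is a minimal sketch, not our real code: Form, set_field, validate, and next_question are hypothetical names standing in for the internal equivalents, and the vagueness rule is a toy.

```python
class Form:
    """Stand-in for the validation surface a human ops reviewer fills out."""

    REQUIRED = ("gap_start", "gap_end", "gap_explanation")

    def __init__(self):
        self.fields = {}

    def set_field(self, name, value, reason):
        # Every mutation carries a reason; the audit trail depends on it.
        self.fields[name] = value

    def validate(self):
        """One rulebook, shared by typing humans and models alike."""
        errors = [f"{f} is required" for f in self.REQUIRED if f not in self.fields]
        explanation = self.fields.get("gap_explanation", "")
        if "gap_explanation" in self.fields and len(explanation) < 20:
            errors.append("gap_explanation is too vague")
        return errors


def next_question(form):
    """The clinician never sees a red field, only a follow-up question."""
    errors = form.validate()
    if not errors:
        return None  # form is complete: submit it
    # The validation message is embedded in the next thing the agent asks.
    return f"One more thing: {errors[0]}. Can you tell me a bit more?"
```

Under this toy rule, “family stuff” draws a gentler follow-up question instead of a rejection, while a fuller answer completes the form.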

Three things did the work.

The agent has a small, well-scoped tool set: search for clinical units, look up state licensure, pull existing work-experience records, mutate specific fields on the Form, submit. It can’t do anything else. The tool registry is the boundary between what the model is allowed to do and what it can talk about, a design choice we wrote about in Built for the frontier that has held up across every agent surface we’ve shipped since.

The validations are the same validations the human reviewer hits. If a date range overlaps with an existing position, the validator rejects the Form regardless of whether the field was set by a typing human or a model. One rulebook, one source of truth.

Every field set comes with a reason. When the model writes set_employment_gap_explanation("bereavement and out-of-state move", reason="clinician described family loss and relocation; spans 12 weeks matching the gap in employment records"), the reason is logged with the mutation. An ops reviewer can scan the audit trail and see, in plain English, why the agent landed on the value it did.
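In sketch form, a reason-carrying mutation might look like this. AuditedForm and its method are hypothetical names; the point is that the signature itself forces the model to say why.

```python
import datetime


class AuditedForm:
    """Toy form whose every mutation lands in an audit trail with a reason."""

    def __init__(self):
        self.fields = {}
        self.audit = []

    def set_employment_gap_explanation(self, value, reason):
        # reason is a required positional-style argument, not an afterthought.
        entry = {
            "field": "employment_gap_explanation",
            "value": value,
            "reason": reason,
            "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        }
        self.fields["employment_gap_explanation"] = value
        self.audit.append(entry)
        return entry


form = AuditedForm()
form.set_employment_gap_explanation(
    "bereavement and out-of-state move",
    reason="clinician described family loss and relocation; "
    "spans 12 weeks matching the gap in employment records",
)
# An ops reviewer scans form.audit and reads the reason in plain English.
```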

In September we extended the pattern to broader work-experience validations: employment gaps, date corrections, position-overlap reconciliations, and a handful of smaller cases the form variant had been particularly bad at. Same agent stack, same Form underneath, more cases routed through chat.

Context first, inventory second

[Figure: a diagram of an agentic activation flow. On the left, a profile card showing role, specialty, and state for an arriving clinician. In the middle, an agent node with two tool icons for searching jobs and sending a message. On the right, a chat thread where the agent surfaces real shifts, including an ICU position in Phoenix with shift hours, length, and a starting rate. Caption: Context first, inventory second.]

The work-experience case is one example of the underlying pattern. The bigger one we’re building right now is activation.

Most clinicians who sign up for Trusted complete part of their profile and stall. They get pulled into a shift, they get distracted by life, the activation email never converts. The marketplace doesn’t see them again until they come back on their own. Our stalled-signup conversion rate has historically sat around fifteen percent. We’d like it materially higher.

In February we wrote the spec for agentic PQL activation: an LLM agent that reaches out to a stalled signup the way a good recruiter would, with two specific advantages over the email blast it replaces. First, it already knows the clinician’s role, specialty, and state from the partial profile they completed. Second, it has tools to search live job inventory the moment the conversation starts.

The pattern is the same as the work-experience flow, applied at a different surface. The agent has a small tool set: read profile, search jobs, send a message. It cannot mutate the marketplace, it cannot offer pay rates outside the live posting, it cannot promise a placement. What it can do is open a conversation that starts from real context (Hi, I see you’re an ICU nurse looking at Arizona. There’s a Phoenix shift this week that pays in the range you mentioned, want to see it?) and uses the same chat-to-Form architecture to capture whatever additional profile fields the conversation surfaces.
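A sketch of that tool boundary, with hypothetical data and function names; the real tool set is read-only plus messaging, and the constraint is enforced by what the registry simply does not contain.

```python
# Toy fixtures standing in for the partial profile and live job inventory.
PROFILE = {"role": "RN", "specialty": "ICU", "state": "AZ"}
JOBS = [
    {"title": "ICU RN", "city": "Phoenix", "state": "AZ", "weekly_rate": 2850},
    {"title": "ER RN", "city": "Tucson", "state": "AZ", "weekly_rate": 2600},
]


def read_profile(clinician_id):
    """Role, specialty, and state from the stalled signup's partial profile."""
    return PROFILE


def search_jobs(specialty, state):
    """Live inventory only: the agent can never quote an off-posting rate."""
    return [j for j in JOBS if j["title"].startswith(specialty) and j["state"] == state]


def send_message(clinician_id, text):
    return {"to": clinician_id, "text": text}


# The agent can read and message. There is no tool to mutate the marketplace
# or promise a placement, so those failure modes are unreachable by design.
def open_conversation(clinician_id):
    profile = read_profile(clinician_id)
    jobs = search_jobs(profile["specialty"], profile["state"])
    job = jobs[0]  # real sketch would handle empty inventory gracefully
    return send_message(
        clinician_id,
        f"Hi, I see you're an {profile['specialty']} nurse looking at "
        f"{profile['state']}. There's a {job['city']} shift this week, "
        "want to see it?",
    )
```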

What we’re aiming for is to lift stalled-signup conversion from around fifteen percent to north of twenty. The system is in flight at the time of writing; we’ll write about results when we have them.

The reason we believe the lift is achievable is that the failure mode of the email blast is exactly the failure mode of the form: it asks the user to do the translation. Email says click here to finish your profile. The agent says here is a job you could take next week, want to talk about it? One starts from the schema. The other starts from the human.

What we got wrong in v1

v1 shipped, and a meaningful slice of clinicians did what we should have anticipated: they typed long, unstructured paragraphs into the chat. Two hundred, three hundred, sometimes five hundred words at a time. Their entire work history, their reasoning about a gap, their feelings about a previous employer, all in one message.

The agent treated each turn as a unit. It would extract the first thing that mapped to a field, set it, ask the next question, and ignore the rest. Information the clinician had volunteered in turn two would not surface again until turn nine, when the agent finally asked a question it pertained to. By then the clinician was annoyed at being asked to repeat herself.

The fix was a change in how the agent treats the conversation as a whole. Every clinician message is now parsed for all extractable fields, those fields are written to a working buffer attached to the Form, and the agent’s next question is chosen based on what is still missing rather than what is logically next. If the clinician dumps her entire history in turn two, the agent compresses the rest of the conversation into a confirmation pass.
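The shape of the fix can be sketched like this. The field names are hypothetical and extract_all is a toy regex stand-in for the model; the structural point is that every turn is mined for every field, and the next question is driven by what the buffer still lacks.

```python
import re

FIELDS = ("gap_reason", "gap_length_weeks", "new_state")


def extract_all(message):
    """Toy stand-in for model extraction: pull every field a turn contains."""
    found = {}
    if "bereavement" in message or "family" in message:
        found["gap_reason"] = "bereavement"
    m = re.search(r"(\d+)\s*weeks", message)
    if m:
        found["gap_length_weeks"] = int(m.group(1))
    m = re.search(r"moved to (\w+)", message)
    if m:
        found["new_state"] = m.group(1)
    return found


def next_question(buffer):
    """Ask about what is missing, not what is logically next."""
    missing = [f for f in FIELDS if f not in buffer]
    if not missing:
        return None  # nothing left to ask: compress into a confirmation pass
    return f"Can you tell me about your {missing[0].replace('_', ' ')}?"


buffer = {}
# Turn two: the clinician dumps everything at once; none of it is dropped.
buffer.update(
    extract_all("I took bereavement leave, about 12 weeks, and moved to Nevada")
)
```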

We had built v1 around an assumption real users immediately violated: that the conversation would proceed one field at a time. Tired humans at the end of a shift don’t converse one field at a time. They give you everything at once and expect you to keep up. The model is good at extraction, but only if you give it the full conversation to extract from on every turn.

When to render the form, when to ask the question

Forms aren’t always the wrong answer. A clinician reviewing her own completed profile wants a structured view, not a chat history; the structure is the point. Dense data entry where the user already knows every field she wants to fill (license number, NPI, date of birth) is faster as a form than as a conversation. Anything the clinician has filled out before and is updating in place is a worse experience as chat than as a form.

The right framing isn’t forms versus chat. It’s which is the right surface for this moment. The decision becomes a runtime one, not a build-time one. The same agent that handles a conversational validation can render an inline form when the structure helps. The same form that handles dense data entry can hand off to a chat surface when a clinician’s answer overflows the schema. We’re not done with forms; we’re done with forms being the only way to collect structured information from a human.
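Reduced to a sketch, the runtime decision might look like this; the signal names are hypothetical, and the real routing weighs more context than a few booleans.

```python
def choose_surface(moment):
    """Pick the right surface for this moment, not for the whole product."""
    # Known-schema, dense entry (license number, NPI, DOB) is faster as a form.
    if moment.get("fields_known_in_advance"):
        return "form"
    # Reviewing or updating existing data wants structure, not a chat history.
    if moment.get("updating_existing"):
        return "form"
    # Answers likely to overflow the schema route to chat.
    return "chat"
```

And the same agent can flip surfaces mid-flow: render an inline form inside the chat when structure helps, or hand a form off to chat when an answer overflows it.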

Where else this applies

Work-experience validation is one surface. PQL activation is another. There are more inside the platform where the same pattern applies: licensing, credential renewal, shift preference capture, post-shift feedback.

--- Engineering
