Shadow AI Is Already in Your Lab. Now What?
Picture this: it is 11 PM. Your lab is short-staffed. You have a critical result to call, the provider is not calling back, and you need to document the attempt in your LIS in a way that will hold up if anyone reviews it later. You pull out your phone. You open ChatGPT. You type out the situation and ask it to help you draft the note.
Nobody told you to do this. Nobody told you not to. It takes 45 seconds. It works. You do it again the next night.
That is shadow AI. And it is happening in clinical labs right now.
What We Are Actually Talking About
Shadow AI is the use of generative AI tools outside of institutional oversight. No IT approval. No Business Associate Agreement. No governance policy. Just a burned-out, under-resourced staff member who found something that helps.
Wolters Kluwer’s 2026 Healthcare AI Trends report puts it plainly: the surge in shadow AI accelerated in 2025, driven by “persistent burnout and staffing shortages.” Healthcare workers are not reaching for these tools because they are reckless. They are reaching for them because they are exhausted and nobody has handed them an approved alternative.
The same report notes that 2026 is shaping up to be the year of AI governance, with health system leadership racing to catch up to how fast clinicians have already adopted these tools. That is a polished way of saying: staff figured it out on their own, and administration is scrambling.
Why Labs Are Especially Vulnerable
I want to point something out about that report: it does not mention laboratorians once.
This is not a knock on the report. It is a pattern I run into constantly. When healthcare AI conversations happen, they center on physicians, nurses, and clinical decision support. The lab is invisible. And that invisibility has real consequences.
Clinical laboratory professionals carry some of the highest cognitive demands in healthcare. Complex decision trees. High-volume, time-sensitive work. Specimen integrity issues. Critical value management. Staffing shortages that have not recovered since the pandemic. These are exactly the conditions that push people toward any tool that makes the work feel more manageable.
My own research looks at how lab staff interact with AI-assisted patient safety systems, and I can tell you: when institutional tools do not fit the workflow, people improvise. Workarounds are not new. What is new is that the workarounds now involve AI.
The Real Risks
The problem is not that lab staff are using AI. The problem is the gap between that use and the oversight around it.
When staff use unapproved tools outside institutional oversight, a few things can go wrong.
The PHI problem. If a laboratorian types patient details into a free ChatGPT account to help draft a callback note or explain an unusual result, they may be transmitting protected health information to a third-party tool with no BAA in place. That is a HIPAA exposure risk. Most staff do not know they have crossed that line.
The data training problem. This one is subtle and worth understanding. Major AI tools handle your data differently depending on what plan you are on.
If you are on a paid plan, you typically have the option to opt out of having your conversations used to train the model. But you have to actively find that setting. It is usually buried under something like “data controls” or “model training” in your account preferences, and training is not turned off by default.
If you are on a free account, you often do not have that option at all. Your inputs may be retained and used to improve the model. Free-tier users tend to be the last to know this.
The deskilling problem. This one plays out over time. When clinicians rely on AI outputs without understanding why those outputs are right or wrong, they can lose the critical evaluation skills they will need on the day the AI gets it wrong. In the lab, that matters a lot.
What You Can Actually Do
If you are a lab professional using AI tools informally right now:
You are not alone, and you are not a bad person. You are trying to survive a system that has not caught up.
But here are a few things worth doing today:
No PHI in AI tools. Not in free tools, not in paid tools without a BAA. If you need to describe a patient scenario, strip out all identifying information first. You can explain the clinical situation without including the patient. (There is a small sketch of what that stripping can look like after this list.)
Check your data settings. If you are on a paid plan, go into your account settings and look for a section on data use or model training. If you have the option to turn off training on your conversations, turn it off.
Know what tier you are on. Free accounts typically do not give you the opt-out option. If you are not sure what plan you are using, that is worth finding out.
Know your institution’s policy. Even if the answer is “we do not have one yet,” that is useful information.
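If it helps to see what “strip out all identifying information” can mean in practice, here is a minimal Python sketch. It is an illustration under assumptions, not a de-identification tool: the patterns for dates, phone numbers, and MRN-like numbers are guesses about what a callback note might contain, names and plenty of other identifiers will slip right through, and nothing here replaces your institution’s policy or an approved, BAA-covered tool.

```python
import re

# A minimal sketch, not a de-identification tool. It catches a few obvious
# patterns (dates, phone numbers, MRN-style numbers) so you can sanity-check
# a note before pasting it anywhere. Names and other free-text identifiers
# will slip through; the reliable rule is still "no PHI in AI tools."

PATTERNS = {
    "date": r"\b\d{1,2}[/-]\d{1,2}[/-]\d{2,4}\b",
    "phone": r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b",
    "mrn_like": r"\b\d{6,10}\b",  # assumes your MRNs look like 6-10 digit numbers
}

def redact(text: str) -> str:
    """Replace each matched pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = re.sub(pattern, f"[{label.upper()}]", text)
    return text

note = "Called Dr. Smith at 555-867-5309 on 3/14/2025 re: MRN 00482913, K+ 6.8."
print(redact(note))
# -> Called Dr. Smith at [PHONE] on [DATE] re: MRN [MRN_LIKE], K+ 6.8.
# Note that "Dr. Smith" sails right through -- which is exactly why a regex
# pass is a sanity check, not a safeguard.
```

The real value of a pass like this is the habit it builds: read the note one more time, with identifiers in mind, before it leaves your hands.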
If you are in quality or leadership:
Shadow AI in your lab is not primarily a compliance problem. It is a signal that your staff need support.
Some organizations are beginning to stand up institution-wide AI tools that are PHI-compliant and available to all staff. That is the direction to push. Wolters Kluwer’s experts call this an “AI safe zone,” a controlled environment where staff can actually use these tools without putting the organization or patients at risk.
If you want to start that conversation with your IT or compliance team, here is one way to frame it: “We know staff are using AI tools informally to manage workload. Can we talk about what a compliant, approved option might look like for our lab?” That opens a door without putting anyone on the defensive.
I Want to Hear From You
This is the conversation the lab community is not quite having yet, at least not out loud.
Is shadow AI showing up in your workplace? Are staff using tools informally and nobody is talking about it? Has your organization stood anything up that actually helps? Have you tried to have this conversation with leadership and hit a wall?
Hit reply and tell me what you are seeing. I read every response, and this is exactly the kind of thing I want to keep writing about.
See you next week.
Meredith
Meredith Hurston is a healthcare quality professional, doctoral student in AI and machine learning, and the author of Algorithmic Oversight. Lab Notes is her newsletter documenting the practical, honest side of working with AI in healthcare.

