January 13, 2026
AI tools will evolve. Our standards should not.
In science and healthcare communications, our role as accurate, accountable storytellers is now more important than ever.
Juan Vasquez
Amanda Johnson

Recent missteps stemming from the use of AI-based tools are powerful reminders that, when it comes to healthcare and science, even small communication breakdowns can carry outsized consequences. In one case reported in the media, a UK patient received a diabetes screening invitation despite never having been diagnosed with diabetes. The root cause? An AI-generated clinical summary that fabricated a medical history. It wasn't a clinical error in the traditional sense; it was a breakdown in translation: an AI-generated narrative passed along unvetted, introducing fiction into fact.
In another high-profile example, a pharma researcher reportedly relied on ChatGPT to format references for a preprint, only to discover that several citations appeared to be hallucinations, ranging from real publications cited with the wrong year to papers that didn't exist at all. What may have seemed like a harmless shortcut proved otherwise.
The myth of the menial: Small tasks, big consequences
These aren't just tech glitches. They are failures in medical and scientific communication, failures that show how easily speed can eclipse scrutiny. AI-generated content can look polished and convincing, but AI's hallucinations and inaccuracies are so well documented that, increasingly, we ask: "Was this written by AI? Is this credible? Was this vetted by a human who understands the science, context, and consequences?"
In this AI-enabled era, it's our responsibility as communicators, strategists, and scientists to apply a human lens to every output, because a small error in a draft or reference list, left unchecked, can have real downstream consequences for patient lives, public trust, and the credibility of science itself.
Evolving technology, evolving playbook
There are clear reasons to lean into AI, not as a replacement but as a collaborator. Regardless of how new technologies are deployed in a healthcare setting, the imperative remains the same: adopt with intention and apply with care. We’re continually evolving our own AI playbook to support the diverse ways companies across the healthcare ecosystem are adopting these tools. Whether it’s a biotech team using AI to accelerate drug discovery or a commercial team integrating AI into customer engagement strategies, the applications—and the risks—are different.
This necessitates an agile approach. As the technology evolves, so must our methods for prompt design, quality control, and tailoring outputs to meet scientific, regulatory, and human standards. Whether you're practicing healthcare or communicating about it, responsible AI use starts with understanding the context and building in the right safeguards from the outset.
1. The ask shapes the output
The prompt is the protocol. Asking the right question, with the right context, the right format, and the right guardrails, dramatically changes the output, especially in scientific and medical domains, where vague prompts can lead to vague, and potentially questionable, results.
2. QC isn’t an option. It’s everything
Our default assumption is that every AI-generated output is a work in progress, requiring rigorous fact-checking, cross-referencing, and human evaluation to ensure what's delivered is not just fast but sound.
3. Humanize with human eyes
A great AI output is a springboard from which to ideate and evolve. Beyond ensuring accuracy, improving it means tailoring it to capture nuance and context: the intent of the use case, the method of delivery, and the values of the audience. What's clinically accurate may not be clearly understood. What's factually correct may not be emotionally resonant. That's where we humans come in.
Human and artificial intelligence clearly share a future, and their coexistence is already showing us the stakes. Before we get swept up in visions of what AI might become, we must contend with what it is right now: an incredibly powerful but imperfect tool, capable of introducing fiction into areas where fact is critical. In the here and now, humans must remain good stewards of accurate communication in health and science, even as nascent technologies are adopted for the simplest tasks. We can't afford to be lax or complacent, or to let overreliance on imperfect tools erode our own skills, with patient care and public trust on the line. The work of safeguarding facts, context, and clarity is urgent: not tomorrow, but today.
