AI in Continuing Medical Education: A Practical Framework for CME Writers
If you’re anything like me, you tested ChatGPT when it first came out… and maybe you weren't too impressed. But AI hasn't disappeared — it's embedded itself into tools we already use. For anyone working in CME writing today, the question isn't whether to engage with AI. It's how to use it responsibly: where it adds value, where it falls short, and how to explain your approach to clients who are still figuring out their comfort level.
Three years in, I believe AI can meaningfully support parts of CME development, but only when it's paired with clear parameters and hands-on human oversight. These parameters are part of what separates a strategic CME medical writer from someone who's simply executing tasks.
Let's look at where AI helps, where it hinders, and how to build a defensible AI workflow — one of the emerging core competencies for CME writers working in a rapidly changing landscape.
Author's Note: When I use the term "AI," I'm referring to AI tools broadly — not just chatbots, but also proprietary tools built for literature review, research synthesis, or image generation, such as AINGENS MACg, our partner for WriteCME Pro's upcoming AI Practice Lab.
Practical Use Cases: Where AI Supports CME Writing
The most valuable AI use cases are the tasks that consume the bulk of a project's time but require the least strategic insight. Here's where AI earns its place in a medical writer's workflow.
Literature Review & Early Research Support
AI excels at summarizing large volumes of information quickly. In the early stages of research, I use AI to digest source material, identify high-level themes, generate research questions that uncover deeper patterns, and flag potential knowledge gaps.
The caveat: Always verify what an AI tool tells you, and make sure you're using the right tool for the job. ChatGPT, for example, isn't designed to conduct research — it's a language model, not a search engine. We'll review the best tools for different medical writing tasks, including research, in our upcoming AI Practice Lab with Núria Negrão PhD (starts March 9, 2026).
Structuring & Drafting Support
AI is effective at outlining and rewriting — useful capabilities for anyone developing CME content across multiple formats and audiences. Try using AI to create a first-draft outline, rewrite content for different learner types, or generate question stems for pre- and post-activity assessments.
Even if the output isn't your final version, it gives you a starting point to refine. This alone can save meaningful time on a project.
These tasks are about organization, structure, and clarity; they don't require educational strategy. Strategy is where your expertise as a CME medical writer shines.
Efficiency Tasks
Efficiency tasks are the small, time-consuming steps that quietly slow down a project: editing for tone or reading level, converting content formats as T. DeLene Beeland described recently on LinkedIn, building project timelines from milestones, summarizing reviewer feedback, or reformatting multiple-choice questions into case-based assessments.
As with any AI-assisted task, your instincts and training should have the final say. But used thoughtfully, AI can keep your workflow moving on low-risk tasks while you focus your energy on the strategic work that actually requires a human mind.
An important distinction: Many efficiency tasks run adjacent to high-stakes work. Reformatting assessment questions, for example, sits right next to designing those assessments, which requires learner analysis, alignment with objectives, and deliberate cognitive load management. So define your own limits and use AI in ways that don't compromise the integrity of the education you're creating.
Where AI Falls Short in CME
AI can support your workflow, but it has hard limits, and recognizing those limits is one of the core competencies for CME writers who want to work strategically rather than reactively.
AI Cannot Do What Compounded Expertise Does
For instance, a common criticism of AI is that it "lacks empathy." That's true, but this framing undersells the magnitude of the problem. What AI actually lacks is compounded expertise, or what sociologists call tacit knowledge — our ability to draw on our clinical knowledge, institutional awareness, behavioral insight, and communication skill simultaneously and in real time.
Empathy in CME writing isn't an emotional quality. It's a professional competency that builds across every project, every SME conversation, and every clinical context you work in. It shows up in analysis — how you probe beneath surface-level practice gaps — and in design — how you build education that acknowledges the full human reality of clinical practice.
Here's what compounded expertise looks like in practice. You're interviewing a subject matter expert (SME) who says, "We know the guideline, but it doesn't work in our setting." A CME writer doesn't just log that statement — they probe beneath it. Is the barrier to guideline implementation workflow-related? Resource-driven? A team dynamics issue? A conflict between the guideline and payer requirements?
You're drawing on multiple domains of knowledge at once to ask the most effective next question. Of course, you can use AI to generate follow-up questions. But only you can hear the hesitation in someone's voice, recognize when an SME is being diplomatically vague about an institutional problem, or sense that the stated barrier isn't the real one.
AI Outputs Carry the Biases of Their Training Data
AI's limitations aren't just about what it can't do; they also include what it actively introduces if you're not watching for it. AI models are trained on existing content, which is disproportionately Western, English-dominant, and skewed toward certain demographic assumptions.
In CME writing, that bias doesn't just produce awkward phrasing. It can produce content that reinforces inequitable clinical decision-making. As research on glomerular filtration rate has shown, biased input data results in biased AI outputs, and in healthcare, those outputs have real consequences for patients.
So the risk of inaccuracy from AI in CME is direct. If AI over-associates certain conditions with specific demographics, or reproduces treatment assumptions rooted in incomplete data, it can skew both educational content and the clinical decisions learners make.
As Dr. Imani Shephard has noted, human-AI feedback loops can magnify these errors. Because we often perceive AI as objective, we are more likely to trust biased outputs without questioning them. CME medical writers serve as a critical check here. Screening AI outputs for bias in clinical and demographic representations isn't optional — it's part of our work.
There's another related risk worth naming: homogenization. If we all use the same AI tools with similar prompts, we risk producing education that sounds the same, is structured the same, and ultimately teaches in the same way, regardless of the learner population or clinical context.
Your voice, your analytical perspective, and your ability to create education content that reflects a specific learner's reality are what keep CME content from becoming generic and formulaic. AI can accelerate your process, but it shouldn't flatten your thinking.
AI Cannot Design for Practice-Faithful Performance
Compounded expertise doesn't just show up in analysis. It shows up in how education is designed — and this is where AI's empathy gap becomes a design problem, not just a conceptual one. (For a deeper dive into empathy as a design tool, see Why Empathy is Key in CME Writing.)
Research consistently shows that people learn best from stories. In CME, that means education designed around the human experiences of clinical practice — not just the clinical facts — produces better learner outcomes. When a patient case acknowledges that the patient is afraid, or that the provider is frustrated by a system that's working against them, learners connect with the material differently. They see their own reality reflected.
As instructional design expert Mark Hagerty noted on the Write Medicine podcast, the more we focus on the humanity of the people in a case study, the easier it is for learners to absorb the content.
For instance, AI can generate a clinically plausible patient case — correct diagnosis, appropriate treatment algorithm, reasonable lab values. But effective patient cases aren't just accurate; they're also emotionally authentic.
They must provide sufficient personal detail and human elements to capture the learner’s interest and engage them in the case. Authentic cases use storytelling tools to turn the learner into the protagonist of a situation they recognize, and that recognition is what drives retention and, ultimately, behavior change.
Denying the emotional dimension of clinical practice doesn't make education more rigorous — it makes it less effective. Conflict, tension, and resolution aren't narrative decoration in CME. They're instructional tools.
A CME medical writer who understands this can create education content that engages the whole learner — their clinical knowledge and the human context in which they apply it. AI generates scenarios that look feasible on paper.
It doesn't generate the kind of scenario that makes a learner pause and think, "This is exactly what I deal with." That distinction is what separates CME writing that checks a box from education that actually shifts how someone practices.
Ready to build AI workflows you can actually trust to help you with your medical writing? Join us starting March 9, 2026 for the AI Practice Lab.
Guardrails: Responsible AI Use in CME Writing
In 2026, the Accreditation Council for Continuing Medical Education (ACCME) released guidance on responsible AI use, the clearest framework we've had to date for how AI should and shouldn't be used in CME development. The guidance covers seven areas, from bias mitigation to data governance, and it applies whether you're staff at an accredited provider or a freelance CME medical writer working on contract.
We'll be digging into these guardrails and how to apply them in WriteCME Pro's upcoming AI Practice Lab with Núria Negrão PhD, but here's a TL;DR on how to translate this framework into a workflow.
Screen everything for bias and commercial influence. AI outputs get the same independence review as human-authored content. If it touches accredited education, the Standards for Integrity and Independence apply — no exceptions, even for tasks that feel low-stakes.
Disclose AI use. The ACCME expects transparency when AI is used to generate, modify, or analyze educational materials. That includes the tool you used, how you used it, and who reviewed the output. Build this into your project documentation as standard practice.
Keep humans accountable. AI is not a substitute for our professional judgment. We need to review every AI-assisted deliverable for factual accuracy, hallucinations, and demographic bias, and document that review trail.
Protect learner data. No personally identifying or proprietary client content goes into public AI tools. Know the difference between open-source and closed-source platforms, and use only tools that meet your client's data security requirements. We'll be clarifying how to do this in our AI Practice Lab.
Know where your line is. The ACCME encourages AI for brainstorming, drafting, feedback analysis, assessment generation, and case development. It flags high-risk uses — auto-producing assessment answers, automating analysis that bypasses accuracy checks — as requiring especially rigorous oversight. The practical boundary: AI can support the structural and organizational layers of your work. It does not support the synthesis, analysis, and contextual interpretation that drive your most important deliverables.
Ready to build AI workflows you can actually trust to help you with your medical writing? Join us starting March 9, 2026 for the AI Practice Lab.
How to Talk About AI With Your CME Clients
One of the less discussed aspects of being a CME writer who uses AI well is knowing how to communicate about it with your clients. Here's how to navigate three common client scenarios.
The AI-Avoidant Client
This client perceives AI as a liability — the risk of proprietary information leaking feels too significant. You have two options: follow their policy entirely, or present a documented, risk-mitigated workflow for review.
If you go the second route, walk them through your disclosure protocols, how you protect sensitive data, and where human review sits in your process. Give them something concrete to evaluate rather than a vague assurance.
If the client maintains their position, respect it fully. The ACCME expects disclosure of all AI use in accredited education. No AI means no AI — that's a compliance requirement, not a judgment call.
The Client Who Wants AI to Do Everything
On the other end, some clients want AI to compress timelines or reduce costs dramatically. Their assumption is that AI can replace significant portions of the development process. As writers, we must reframe these expectations.
Walk your clients through what AI handles well (drafting, outlining, feedback analysis) versus what it can't touch (synthesis, educational strategy, bias screening). Establish realistic timelines and set clear boundaries around what requires your direct expertise.
The point here isn't to shut down their enthusiasm — it's to channel it toward use cases that improve the work without compromising educational quality or accreditation standards.
When You're Both Figuring It Out
Most commonly, neither you nor the client has a firm AI policy. You might tentatively agree to use it for editing or formatting — but nothing's documented, and the boundaries stay vague. This is exactly the scenario where having a written AI policy pays off. Define which tools you use, how you use them, and the specific tasks you're using them for.
Share this proactively. You're not just protecting yourself — you're giving the client a compliance-aligned framework they didn't have to build on their own. That's a valuable service, and it's the kind of thing that distinguishes a strategic CME medical writer from a task-level executor.
The Bigger Picture: AI as Infrastructure for CME Medical Writers
AI in medical writing should function as infrastructure. Think of it as an integrated part of your workflow, not a substitute for your expertise. For the last couple of years, the differentiator has been whether you use AI, but now we've shifted to a different question: How do you use AI in your education content creation?
Answers to this question demand compounded expertise — the same layered skill set that has always defined the best CME writing: deep audience analysis, evidence-based instructional design, root cause thinking, and the ability to connect clinical data to learners' real practice contexts. AI doesn't replace any of that. It gives you more bandwidth to do it well.
If you're ready to build a strategic, defensible AI workflow, join us on March 9, 2026 for a 4-week AI Practice Lab with expert Núria Negrão PhD. You'll work in real time to design and test an AI-assisted process you can actually trust for research, drafting, analysis, and quality control across a range of medical and science writing deliverables, and you'll leave with the tools to create an AI strategy that adds tangible value for your clients. See you there.