Signposts Aren't Enough: How to Design CME Content That Drives Action

As CME writers, we're trained to think about outcomes frameworks—Moore's levels, Kirkpatrick's evaluation model, self-efficacy measures. We craft learning objectives carefully, aligning them with these big-picture destinations. We know Bloom's taxonomy inside and out.

And yet, when it comes time to develop the actual content, there's sometimes a disconnect. The learning objectives are solid. The clinical data is comprehensive. But somehow, the content doesn't quite land, or we struggle to translate learning objectives into real-world steps that learners can implement in practice.

The problem isn't the objectives themselves. It's that we're missing a critical layer between learning objectives and the content we create to support them.

The Missing Layer: From Signposts to Concrete Steps

I think of outcomes as the final destination—the places we're trying to help learners reach. Learning objectives are the signposts along the road toward that destination. They tell us we're headed in the right direction.

But as writers, we need another level of specificity. We need to identify the concrete actions, tasks, and steps that learners must actually take to meet those learning objectives. This is where CME content development can fall short.

Consider a typical learning objective for an atopic dermatitis activity aimed at dermatologists:

Assess patient candidacy for advanced biologic therapies targeting IL-13 in moderate to severe atopic dermatitis.

It's a well-written objective. It's measurable, specific to the clinical context, appropriately leveled. But what does a dermatologist actually need to do to meet this objective? What are the discrete actions involved in that assessment?

This is where Bloom's taxonomy—as useful as it is for cognitive learning levels—doesn't quite get us there. Bloom's helps us think about knowledge, comprehension, analysis. But it doesn't necessarily give us the framework for identifying the concrete steps learners need to take in clinical practice.

Why This Matters

Three common challenges get in our way when we're developing CME content.

1. Insufficient audience analysis. We know who the learners are demographically, but we don't dig deep enough into their clinical workflows, their decision-making contexts, or the specific competencies they need for different practice scenarios.

2. Learning science feels inaccessible. Writers want to apply evidence-based principles, but the theoretical frameworks can feel overwhelming. So they default to presenting clinical data in a linear fashion without considering how learners actually process and retain information.

3. Clinical data without clinical application. Content gets bogged down in study details, mechanisms of action, and comprehensive reviews—without clearly showing learners what's relevant and applicable to their specific practice context.

The result? Activities that inform but don't transform. Content that educates but doesn't activate.

A Framework for Designing Actionable Content

Recently I sat down with the folks at The Good CME Practice group (gCMEp) to share the process I use to bridge that gap between learning objectives and actionable content:

Step 1: Identify 2-3 Concrete Tasks Per Learning Objective

For each learning objective, I want to know: What are the specific tasks, actions, or steps the learner needs to take to meet this objective?

This is where Robert Mager's approach to objectives is invaluable. Mager's framework breaks down learning goals into three components:

  • Performance (what the learner will do)

  • Conditions (under which they'll do it)

  • Criteria (how well they must perform)

Let's return to that dermatitis example. The learning objective involves assessment—a judgment-based behavior where the clinician must recognize appropriate candidates for therapy. But what are the component actions?

To figure this out, I need to know:

  • Who are the learners? (Dermatologists? Primary care? Nurse practitioners?)

  • What are their roles and responsibilities in clinical care?

  • Where do they practice? (Academic centers? Community practices? Hospital systems?)

  • How does biologic therapy fit into their clinical workflows?

Let's say the primary audience is dermatologists in community practice. For them to assess patient candidacy, at the very least they need to:

  1. Review disease severity markers (EASI score, BSA, patient-reported outcomes)

  2. Evaluate prior treatment response (conventional systemic therapies, topical management)

  3. Screen for contraindications or comorbidities affecting biologic selection

These three tasks become the framework for content development. Each section of content, each case scenario, each teaching point needs to support one or more of these specific actions.

How do you identify these tasks? You could use the clinical roles and responsibilities cheat sheet we created in WriteCME Pro, or you could ask AI.

Ready to Build Your Own AI Workflow?

If you're ready to build a practical, safe, repeatable AI-assisted workflow, join our live cohort. In this four-week Practice Lab, you'll work in real time to design and test an AI-assisted process you can actually trust for research, drafting, analysis, and quality control across a range of medical and science writing deliverables. Led by AI in CME Whisperer Núria Negrão, PhD, you'll move beyond playing with tools like Gemini, Claude, ChatGPT, and NotebookLM to building a structured workflow that fits how you already work—and that you can confidently explain to clients, teams, and compliance reviewers.

Step 2: Understand the Action Type

Not all actions are created equal, and different action types require different instructional approaches. When you understand the action type embedded in a learning objective, you can design content and assessment strategies that actually support that action.

Judgment-based actions involve recognizing patterns and identifying appropriate candidates. They work well with progressive case studies, "what would you do next?" decision points, and pattern recognition exercises.

Decision-heavy actions involve choosing when to initiate therapy, selecting between treatment options. They benefit from branching case scenarios, multiple-choice questions with rich feedback, and decision aids or algorithms.

Communication actions include discussing options with patients, coordinating with care teams. These call for patient-centered cases, reflective questions about shared decision-making, and sample scripts or conversation frameworks.

Procedural/technical actions entail ordering specific tests or implementing protocols. These need step-by-step guidance, checklists or workflow diagrams, or common pitfall callouts.

For our dermatitis example, the three tasks involve different action types:

  • Reviewing severity markers = judgment + procedural (knowing which tools to use and being able to implement them)

  • Evaluating treatment response = judgment (pattern recognition across treatment history)

  • Screening for contraindications = procedural (systematic review) + judgment (clinical significance)

Accordingly, the content needs to blend pattern recognition cases with systematic screening guidance, not just present the data about IL-13 biologics.

Step 3: Map Content to Actions

Once you've identified the concrete tasks, content development becomes much more focused. For each task, we can ask:

  • What knowledge does the learner need to complete this task?

  • What clinical reasoning process supports this task?

  • What are the common errors or gaps in current practice?

  • What would success look like?

This approach also makes it easier to collaborate with faculty and subject matter experts. Instead of asking broad questions like "What should learners know about biologics?", you can ask more effective, targeted questions, like: "When you're screening for contraindications, what's the one thing you see community dermatologists most commonly miss?" or "What's the concrete win or the action that would make the biggest difference for patient outcomes?"

Step 4: Apply Learning Science (Without Intimidation)

Learning science can feel overwhelming because there are so many frameworks and theoretical approaches. I detail these in my book, WriteCME Roadmap. But I think of learning science as a toolkit—different tools for different purposes. You don't need to master every theory to apply evidence-based principles effectively.

Cognitive Load Theory is probably the principle we use most, often without even knowing it. Every time you:

  • Edit ruthlessly to remove extraneous information

  • Chunk content into digestible sections

  • Sequence material from simple to complex

  • Pair relevant images with text

  • Use descriptive headings and white space

...you're managing cognitive load. You're making it easier for learners to process information without overwhelming their working memory.

For our dermatitis example, this might mean presenting the three screening tasks sequentially rather than all at once. It might mean using a table to compare severity markers instead of dense paragraph text. It might mean providing a brief orienting statement before diving into a complex case.

The question to ask yourself is this: "Am I reducing extraneous cognitive load, or am I cutting content that actually supports the action step?"

Retrieval Practice is another accessible principle. Build in opportunities for learners to recall and apply information rather than just consuming it:

  • Case progression with decision checkpoints

  • Reflective questions that require active processing

  • Brief knowledge checks before moving to application

Spacing and Interleaving might sound technical, but the application is straightforward: Don't present all information about one topic in a single block. Mix concepts. Come back to important points multiple times in different contexts.

For a multi-activity program, this might mean introducing severity assessment in Module 1, revisiting it in the context of treatment selection in Module 2, and then applying it again in Module 3 when discussing monitoring. Each exposure reinforces the learning and shows the concept in different action contexts.

Pulling It Through to Assessment

When you've designed content around concrete actions, assessment becomes more straightforward. You're not just testing whether learners remember facts; you're assessing whether they can perform the tasks that lead to learning and practice change.

For our dermatitis example:

  • Task 1 (severity markers) might be assessed with a case requiring learners to interpret EASI scores and patient-reported outcomes

  • Task 2 (treatment response) could use a patient history requiring analysis of what has/hasn't worked

  • Task 3 (screening) might involve a checklist exercise or a case with comorbidities

This alignment also helps outcomes teams who often work from needs assessments that don't include all the information needed for robust evaluation frameworks. When content development explicitly identifies the actions tied to each learning objective, outcomes measurement can target those specific behaviors and competencies.

Where Do You Start?

If this framework resonates with you but you're not sure how to apply it to your own CME development process, you're not alone. Most medical writers come to CME from clinical writing, clinical care, or academic research—backgrounds that don't necessarily include instructional design or learning science.

The good news is that these skills are learnable. The challenge is knowing where you are in developing them and what to focus on next.

That's why I created the WriteCME Readiness Index—a brief assessment that helps you identify your strengths and gaps across the key competencies for effective CME writing. It covers audience analysis, learning objective development, instructional design, learning science application, and assessment strategy.

Whether you're new to CME or looking to level up your approach, the assessment gives you a clear picture of where to focus your professional development. Because signposts aren't enough—and neither is winging it when it comes to designing content that truly drives learner action.

Take the WriteCME Readiness Index

What's your biggest challenge in moving learners from objectives to action? I'd love to hear about it—connect with me on LinkedIn.
