Monday, February 26, 2007

CME meets ADDIE, Part 1

My colleague Robert Edgar worked on military training projects for Deterline and Associates in the 1970s. Deterline wasn't paid until they demonstrated that their training materials were effective. So Robert and other military and corporate instructional designers used instructional design systems that were systematic and reliable and that produced measurable results. Typically, these systems were variants of ADDIE, an acronym that stands for:

  • Analyze: Define the learning need, the audience, and the instructional objectives, media, and format.
  • Design: Develop a specification for how the content will be presented and assessed, often including an outline of the content and the types of user interactions.
  • Develop: The instructional designer and any applicable subject matter experts, writers, editors, A/V producers, graphic designers, animators, and programmers create, integrate, prototype, and test the program's content.
  • Implement: The program is delivered to learners.
  • Evaluate: The effectiveness of the program is assessed.

What does this have to do with CME? Consider how ACCME's updated criteria correlate with the ADDIE model.

  • Analyze: The overall goals of the program are specified (Criterion 1), educational needs are based on professional performance gaps (Criterion 2), and CME activities are designed with consideration to learning and setting attributes (Criteria 4 and 5)
  • Design: CME interventions are designed to achieve program mission (Criterion 3), and CME content is developed in the context of desirable physician attributes (Criterion 6)
  • Develop/Implement: Content is developed and delivered independently of the influence of personal financial and commercial interests (Criteria 7, 8, 9, and 10)
  • Evaluate: Changes in learners are measured (Criterion 11), the effectiveness of the overall program is measured (Criterion 12), and the overall program is improved (Criteria 13, 14, and 15)
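
Purely as an illustration (and not anything ACCME publishes), the grouping above can be captured in a small data structure, for example as part of an activity-planning checklist. This is a minimal Python sketch; the phase names and criterion numbers are simply those listed above, and the function name is hypothetical.

    # Hypothetical sketch: ADDIE phases mapped to the ACCME criteria grouped above.
    ADDIE_TO_ACCME_CRITERIA = {
        "Analyze": [1, 2, 4, 5],
        "Design": [3, 6],
        "Develop/Implement": [7, 8, 9, 10],
        "Evaluate": [11, 12, 13, 14, 15],
    }

    def criteria_for_phase(phase: str) -> list[int]:
        """Return the ACCME criterion numbers associated with an ADDIE phase."""
        return ADDIE_TO_ACCME_CRITERIA.get(phase, [])

    if __name__ == "__main__":
        for phase, criteria in ADDIE_TO_ACCME_CRITERIA.items():
            print(f"{phase}: Criteria {', '.join(str(c) for c in criteria)}")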

In sum, I think that the updated criteria can be considered as promoting a variant of the ADDIE model, with some guidelines focused on individual educational activities, some addressing the overall program, and some relating the individual activities to the overall program.

At the recent ACME annual conference in Phoenix, discussions of the updated criteria often mentioned instructional design techniques common in ADDIE implementations. For example, one speaker discussed "learning hierarchies," an analysis process in which educational goals are sequentially broken down into a series of smaller, better-defined instructional objectives. Other speakers applied Kirkpatrick's four levels to the evaluation of CME activities.

While ADDIE-based instructional design might seem new to CME managers, this approach has a long history in military and corporate training. Furthermore, the benefits and drawbacks of ADDIE-based design have long been debated within the instructional design community (e.g., ISPI, ASTD). In Part 2, I discuss the implications of an increased emphasis on ADDIE-based instructional design for CME.

Wednesday, February 14, 2007

Updated Criteria: The Challenge Unchanged

ACCME's updated criteria change the types of data that need to be collected by CME providers. Previously we could design educational activities based on physician interests. Now our educational activities need to address professional performance gaps. Previously it was enough to increase knowledge. Now, ACCME requires improving competence, performance, or patient outcomes.

To me, the central challenge remains unchanged. Can we meet our accreditation requirements by providing our faculty and staff with guidelines and services that are useful in improving the educational outcomes of our CME activities? Besides maintaining the independence of CME from commercial interests, can CME program managers add value to medical education?

When it comes to the value of our CME requirements and guidelines, perception is reality. If the standards CME program managers develop are perceived as unnecessary bureaucratic overhead, they won't improve outcomes, even if they otherwise could. If our educational standards and services are perceived as useful and valuable, they will motivate the reflection, planning, and analysis that produce positive results.

This sounds simple, but promoting CME requirements as something faculty should "want to" do as opposed to something that they "have to" do is not easy. CME program managers need to be coalition builders and salespeople. We need to collect feedback, optimize our educational planning and evaluation processes, and evangelize their features and benefits.

The updated criteria increase the work and cost of running a CME program. So while the challenge remains the same, the stakes are higher. And more work won't sell itself.

Thursday, February 8, 2007

CME Disambiguation Glossary: Disclosure

In CME, some words have more than one meaning. This is the first entry in my periodic CME Disambiguation Glossary. My thanks to Wikipedia for promoting the word "disambiguation."

1. Disclosure: A person who writes or delivers CME content informing the sponsoring CME provider of relevant personal financial interests.

2. Disclosure: A person who selects CME topics or speakers informing the sponsoring CME provider of relevant personal financial interests.

3. Disclosure: The CME provider informing CME learners of the relevant personal financial interests of persons who deliver or select CME content.

4. Disclosure: The CME provider informing CME learners of the sources of commercial support for the corresponding CME activity.

Tuesday, February 6, 2007

From Teaching to Learning: CME's Great Leap Forward

Continuing medical education (CME) is changing, or is trying to change, in a big way. The organization that sets standards for CME, the Accreditation Council for Continuing Medical Education (ACCME), requires educational programs that produce "real world" results. They want to see measurable changes in health care resulting from CME activities. So people like me who manage CME programs need to shift our attention from what is being taught to what is being learned.

This brings up an interesting issue: How do physicians learn? I suspect that much of what doctors learn comes through conversations with other doctors and other informal interactions. In some ways this isn't a problem for CME providers. We can give doctors credit for "performance improvement" resulting from their informal learning as long as we can provide ACCME with adequate documentation. However, it is hard to see how this approach can be financially viable. We are likely to spend far more on creating documentation and policies, and on reviewing, filing, and analyzing data, than we will be able to charge in fees for this service. The same issue comes up with giving physicians CME credit for learning done in preparation to deliver CME content. Sure, we can provide this service, but it will cost more to manage than it is likely to bring in.

Perhaps a solution is to produce traditional CME lectures, which make money from commercial support and registration fees, in conjunction with opportunities for informal learning. You can think of this in sales and marketing terms. Formal, traditional lectures are marketing: they get learners familiar and comfortable with new ideas. Informal conversations with colleagues are sales: this is where doctors really become convinced of and committed to changes that will improve health care. CME providers need to collect data to give learners CME credit for both, and to find out which of these activities led to changes in health care practices.