To demonstrate their effect, CME programs need a good measurement system—a fast, inexpensive, and reliable means of measuring behavioral change. Unfortunately, that won't happen any time soon. Measuring changes in physician practice, patient care, and patient health is tedious, difficult, and expensive. In the meantime, program developers, grant providers, and educational institutions will continue to rely on the old faithful measures: participation counts and CME certificates awarded.
And therein lies the problem.

A for Attendance

Although CME grant providers and program reviewers typically look at participation counts and CME credits—the only readily available metrics—these measures are less than satisfactory. Participation counts quantify the minimal condition of participation—merely showing up—which cannot be equated with success, and credits pertain only to the small minority who apply for CME credit.
Either way, counting filled seats is like grading students not on class work but on attendance. The question of interest isn't who's there. It should be: "Who's learning?"
Studies that measure the effectiveness of lectures show that attendees spend 31 percent of their time thinking irrelevant thoughts and just 1 percent relating information or attempting to solve problems—and those figures come from university settings, where knowledge of the lecture material is required to pass the course.
The results for retention are equally dismal. Participants can be expected to retain about 10-20 percent of the material after a standard one-hour lecture—but only if it is accompanied by good audio/visual support.
CME Credit

The truth is, it's difficult to ascertain what learning actually takes place. Post-tests, the standard measure of CME effectiveness, are not reliable unless they are well written and paired with a pretest. And since only about 10 percent of the audience will take a post-test or apply for CME credit, you will never know what the other 90 percent got out of the program. What's more, many live events require only attendance for CME credit, so traditional measures of value do not apply at all.
Internet Inconsistency

Online CME providers like to think they have solved many of the problems of measuring CME because they capture how many people interact with a program instead of counting filled seats. But that is not so—the traditional online participant report provides even less value than an attendee count at a live event. That may be because the definition of "participation" is so fuzzy.
One well-known professional medical portal defines a participant as any registered medical professional who accesses any page of a CME program, through any route of entry, for any purpose—a definition that has since become the de facto standard of measure in the online CME industry. But measuring participants that way is akin to lumping together people who merely peek in the door at a live event with those who actually come in and sit through the presentation.
Clearly, the industry needs measures that show how much of an online program "participants" are really participating in. Current reporting systems cannot tell whether a user is absorbing information, clicking through while skimming the material, or letting the program advance unattended while getting a snack from the fridge.