August 2015 SP&A Client Newsletter
Coming to Terms with ACCME Criteria 3 and 11
by
Judy M. Sweetnam, M.Ed., CHCP
The ACCME asks us not only to determine the educational outcome for a planned activity (C3), but also to measure its effectiveness (C11). We are given three choices for the educational goals we are seeking to achieve:
- A competence-based outcome, and/or
- A performance-based outcome, and/or
- A patient-based outcome.
Accordingly, the ACCME expects us to measure the effectiveness of the activity against the stated goals. If our activity is planned to change or improve competence, we evaluate for changes in competence. If our goal is to change or improve competence and performance, we design tools to evaluate our effectiveness in changing both competence and performance, and so on. Meanwhile, measuring changes in performance based on anecdotal changes in patient outcomes can still be fairly straightforward if sufficient time and resources are devoted to designing questions and analyzing learner responses. Of course, there are problems with reliability and validity when responses are limited; but that is a discussion for another day.
Over the last four years, many of us have struggled with the definition of competence and, more importantly, with how to measure the effectiveness of a competence-based outcome (ACCME FAQ, November 11, 2011). (Click on the highlighted title or click on QR Code #1, below.) The ACCME’s definition is based on Miller’s (1990) work, where he describes competence as “knowing how” to do something. Knowledge, in the presence of experience and judgment, is translated into ability (competence), which has not yet been put into practice. It is what a professional would do in practice, if given the opportunity. The skills, abilities, and strategies one actually implements in practice are performance.
The most straightforward and reliable way to measure this type of outcome is simply to ask learners how they will apply what they have learned from an activity. This type of qualitative, free-text response gives very rich data when completed appropriately. However, it also opens the door to poorly written one-word answers, or to nonsense responses when it is required as part of an online attestation for claiming credit. To help address poor data, different formats (e.g., paired questions, pre/post questions) have been created to measure competence. Sadly, despite innovative design, some of these evaluation tools still only measure knowledge. Objective, measurable, statistically significant changes in knowledge gain are important, but they are not what the ACCME asks for when it asks us to measure competence. How can this be so?
In searching for an explanation, it is useful to read one of the articles recommended in the AMA MedEd Update’s “10 Must-Read Articles for Medical Educators”: “Applying the Science of Learning to Medical Education,” by Richard E. Mayer (Medical Education 2010; 44: 543-549). While many have struggled with whether their evaluation tools measure knowledge or competence, Mayer’s article helps differentiate between the two:
Learning outcomes can be measured with retention tests and transfer tests. Retention tests measure how well the learner remembers the presented material, such as whether he or she is able to recall what was presented (e.g., ‘Define retention test’) or recognize what was presented (e.g., ‘Remembering what was presented is an example of: [a] a retention test; [b] a transfer test’). Transfer tests measure how well the learner can apply what was learned to new situations (e.g., ‘Generate a transfer test item from this section’). (p. 546)
In summary, the question to ask ourselves in determining whether we have created a tool that measures competence or knowledge is, “Does this tool address retention, or does it address transfer?” A retention item asks learners to recall or recognize what was presented, while a transfer item asks them to apply what was presented to a new situation, such as a patient case they have not seen before. If I have addressed transfer as Mayer describes above, then, and only then, have I measured competence. Measuring retention misses the mark.
P.S. This article (click highlighted text or click on QR Code #2, below) is rich with instructional design pearls and their scientific rationale.