This article was just published in Evidence Based Medicine [subscription required]:
Green ML. Evaluating evidence-based practice performance. Evid Based Med 2006; 11(4):99-101.
Abstract: Evidence-based practice (EBP) educators need valid and feasible approaches to evaluate the impact of new curricula and to document the competence of individual trainees. As of 1999, however, the published evaluation instruments often lacked established validity and reliability, focused on critical appraisal to the exclusion of other EBP steps, and measured knowledge and skills but not behaviours in actual practice. Editorialists at the time lamented, “ironically, if one were to develop guidelines for how to teach EBM based on these results, they would be based on the lowest level of evidence”. In the ensuing years, educators have responded to the challenge. We now have several instruments, supported by multiple types of evidence for validity, to evaluate EBP knowledge and skills. In the parlance of Miller’s pyramid, these instruments can document that a trainee “knows” and “knows how” to practise evidence-based medicine. And promising EBP objective structured clinical examinations, while supported by more limited psychometric testing, allow trainees to “show how” in realistic clinical settings.