BEME Guides in Medical Education: the series

Currently there are eight guides in this series. Below are the abstracts, DOI [Digital Object Identifier] Links and PubMed Records (where available) for the guides. See also AMEE Education Guides: Extended Summaries and AMEE Medical Education Guides

BEME Guide No 1: Best Evidence Medical Education
BEME Guide No 2: Teaching and learning communication skills in medicine: A review with quality grading of articles
BEME Guide No 3: Systematic searching for evidence in medical education. Part 1: Sources of information
BEME Guide No 3: Systematic searching for evidence in medical education. Part 2: Constructing searches
BEME Guide No 4: Features and uses of high-fidelity medical simulations that lead to effective learning
BEME Guide No 5: BEME systematic review: Predictive values of measurements obtained in medical schools and future performance in medical practice
BEME Guide No 6: How can experience in clinical and community settings contribute to early medical education?
BEME Guide No 7: Systematic review of the literature on assessment, feedback and physicians’ clinical performance
BEME Guide No 8: A systematic review of faculty development initiatives designed to improve teaching effectiveness in medical education

Harden RM et al. BEME Guide No 1: Best Evidence Medical Education. Med Teach 1999; 21(6): 553-562.
There is a need to move from opinion-based education to evidence-based education. Best evidence medical education (BEME) is the implementation, by teachers in their practice, of methods and approaches to education based on the best evidence available. It involves a professional judgement by the teacher about his/her teaching, taking into account a number of factors: the QUESTS dimensions. These are the Quality of the research evidence available (how reliable is the evidence?), the Utility of the evidence (can the methods be transferred and adopted without modification?), the Extent of the evidence, the Strength of the evidence, the Target or outcomes measured (how valid is the evidence?) and the Setting or context (how relevant is the evidence?). The evidence available can be graded on each of the six dimensions. In the ideal situation the evidence is high on all six dimensions, but this is rarely found. Usually the evidence may be good in some respects, but poor in others. The teacher has to balance the different dimensions and come to a decision on a course of action based on his or her professional judgement. The QUESTS dimensions highlight a number of tensions with regard to the evidence in medical education: quality vs. relevance; quality vs. validity; and utility vs. the setting or context. The different dimensions reflect the nature of research and innovation. Best Evidence Medical Education encourages a culture or ethos in which decision making takes place in this context.  DOI Link      Full Text     TOP OF PAGE
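For readers who like a concrete picture of how a QUESTS appraisal might be recorded and compared, here is a minimal illustrative sketch. The six dimension names follow the abstract above, but the 0-5 scoring scale and the class and helper names are assumptions added for illustration; the guide itself does not prescribe a numeric scheme.

```python
# Illustrative only: a simple record of QUESTS scores for a piece of
# educational evidence. The 0-5 scale and the field names are assumptions;
# BEME Guide No 1 does not prescribe a numeric scoring scheme.
from dataclasses import dataclass, asdict

@dataclass
class QUESTSProfile:
    quality: int   # How reliable is the evidence?
    utility: int   # Can the methods be transferred and adopted without modification?
    extent: int    # How much evidence is there?
    strength: int  # How strong is the evidence?
    target: int    # How valid are the outcomes measured?
    setting: int   # How relevant is the context?

    def weakest_dimensions(self, threshold: int = 2):
        """Return the dimensions scoring at or below the threshold,
        i.e. where the teacher's professional judgement must do the most work."""
        return [name for name, score in asdict(self).items() if score <= threshold]

# Example: evidence that is methodologically strong but comes from a distant setting.
profile = QUESTSProfile(quality=5, utility=2, extent=3, strength=4, target=4, setting=1)
print(profile.weakest_dimensions())  # ['utility', 'setting']
```

The point of the sketch is simply that evidence is rarely high on all six dimensions, so the weakest dimensions are where the teacher's professional judgement has to do the work.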

Aspegren K. BEME Guide No 2: Teaching and learning communication skills in medicine: A review with quality grading of articles. Med Teach 1999; 21(6):563-570.
A literature search for articles concerning communication skills teaching and learning in medicine was done. The search yielded 180 pertinent articles, which were quality graded into the three categories of high, medium and low quality, using established criteria. Only those of high and medium quality were used for the review, which thus is based on 31 randomized studies, 38 open effect studies and 14 descriptive studies. Communication skills can be taught in courses and are learnt, but are easily forgotten if not maintained by practice. The most effective point in time to learn these at medical school is probably during the clinical clerkships, but there is no study that has specifically addressed this question. After a short period of training, doctors can be effective as teachers. The teaching method should be experiential, as it has been shown conclusively that instructional methods do not give the desired results. The contents of communication skills courses should primarily be problem defining. All students should have communication skills training, since those with the lowest pre-course scores gain the most from such courses. Men are slower learners of communication skills than women, which should be taken into account by course organizers. As there is only one really long-term follow-up into the residency phase of communication skills training at medical school, those who have done randomized studies in the field should if possible carry out further follow-up studies.   DOI Link   TOP OF PAGE

Haig A and Dozier M. BEME Guide No 3: Systematic searching for evidence in medical education. Part 1: Sources of information. Medical Teacher 2003; 25(4):352-363.
Searching for evidence to inform best practice in medical education is a complex undertaking. With very few information sources dedicated to medical education itself, one is forced to consult a wide range of often enormous sources–and these are dedicated to either medicine or education, making a medical education search all the more challenging. This guide provides a comprehensive overview of relevant information sources and methods (including bibliographic databases, grey literature, hand searching and the Internet) and describes when they should be consulted. The process of constructing a search is explained: identifying and combining core concepts, using Boolean algebra and search syntax, limiting results sets, and making best use of databases’ controlled vocabularies. This process is illustrated with images from search screens and is followed by numerous examples designed to reinforce skills and concepts covered. The guide has been developed from the ongoing experience gained from the systematic searches conducted for the Best Evidence Medical Education Collaboration, and concludes by looking ahead to initiatives that will shape future searching for medical education evidence.  DOI Link     PubMed Link    TOP OF PAGE

Haig A and Dozier M. BEME Guide No 3: Systematic searching for evidence in medical education. Part 2: Constructing searches. Medical Teacher 2003; 25(5): 463-484.
Searching for evidence to inform best practice in medical education is a complex undertaking. With very few information sources dedicated to medical education itself, one is forced to consult a wide range of often enormous sources-and these are dedicated to either medicine or education, making a medical education search all the more challenging. This guide provides a comprehensive overview of relevant information sources and methods (including bibliographic databases, grey literature, hand searching and the Internet) and describes when they should be consulted. The process of constructing a search is explained: identifying and combining core concepts, using Boolean algebra and search syntax, limiting results sets, and making best use of databases’ controlled vocabularies. This process is illustrated with images from search screens and is followed by numerous examples designed to reinforce skills and concepts covered. The guide has been developed from the ongoing experience gained from the systematic searches conducted for the Best Evidence Medical Education Collaboration, and concludes by looking ahead to initiatives that will shape future searching for medical education evidence. 
PubMed Record   DOI Link     TOP OF PAGE
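Because both parts of Guide No 3 turn on the same search-construction steps (identify the core concepts, OR together the synonyms for each concept, then AND the concepts together), a small sketch may help make the Boolean logic concrete. The synonym lists, the quoting and truncation syntax, and the function names below are illustrative assumptions, not worked examples from the guide.

```python
# Illustrative sketch of combining core concepts into a Boolean search string.
# The synonym lists and the syntax below are examples only, not taken from the guide.

def or_block(terms):
    """OR together the synonyms for one core concept."""
    return "(" + " OR ".join(terms) + ")"

def build_search(concepts):
    """AND together the OR-blocks, one block per core concept."""
    return " AND ".join(or_block(terms) for terms in concepts)

# Core concept 1: the educational intervention; core concept 2: the learner group.
concepts = [
    ['"communication skills"', '"interviewing skills"'],
    ['"medical student*"', '"undergraduate medical education"'],
]
print(build_search(concepts))
# ("communication skills" OR "interviewing skills") AND ("medical student*" OR "undergraduate medical education")
```

In practice each OR-block would also draw on the database's controlled vocabulary (for example, MeSH terms in MEDLINE) alongside free-text synonyms, which is part of what the guide discusses.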

Issenberg SB, McGaghie WC, Petrusa ER, Gordon DL, Scalese RJ. BEME Guide No 4: Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review. Medical Teacher 2005; 27 (1):10-28.
REVIEW DATE: 1969 to 2003, 34 years.
BACKGROUND AND CONTEXT: Simulations are now in widespread use in medical education and medical personnel evaluation. Outcomes research on the use and effectiveness of simulation technology in medical education is scattered, inconsistent and varies widely in methodological rigor and substantive focus.
OBJECTIVES: Review and synthesize existing evidence in educational science that addresses the question, ‘What are the features and uses of high-fidelity medical simulations that lead to most effective learning?’.
SEARCH STRATEGY: The search covered five literature databases (ERIC, MEDLINE, PsycINFO, Web of Science and Timelit) and employed 91 single search terms and concepts and their Boolean combinations. Hand searching, Internet searches and attention to the ‘grey literature’ were also used. The aim was to perform the most thorough literature search possible of peer-reviewed publications and reports in the unpublished literature that have been judged for academic quality.
INCLUSION AND EXCLUSION CRITERIA: Four screening criteria were used to reduce the initial pool of 670 journal articles to a focused set of 109 studies: (a) elimination of review articles in favor of empirical studies; (b) use of a simulator as an educational assessment or intervention with learner outcomes measured quantitatively; (c) comparative research, either experimental or quasi-experimental; and (d) research that involves simulation as an educational intervention.
DATA EXTRACTION: Data were extracted systematically from the 109 eligible journal articles by independent coders. Each coder used a standardized data extraction protocol.
DATA SYNTHESIS: Qualitative data synthesis and tabular presentation of research methods and outcomes were used. Heterogeneity of research designs, educational interventions, outcome measures and timeframe precluded data synthesis using meta-analysis.
HEADLINE RESULTS: Coding accuracy for features of the journal articles is high. The extant quality of the published research is generally weak. The weight of the best available evidence suggests that high-fidelity medical simulations facilitate learning under the right conditions. These include the following:
providing feedback: 51 (47%) journal articles reported that educational feedback is the most important feature of simulation-based medical education;
repetitive practice: 43 (39%) journal articles identified repetitive practice as a key feature involving the use of high-fidelity simulations in medical education;
curriculum integration: 27 (25%) journal articles cited integration of simulation-based exercises into the standard medical school or postgraduate educational curriculum as an essential feature of their effective use;
range of difficulty level: 15 (14%) journal articles addressed the importance of the range of task difficulty level as an important variable in simulation-based medical education;
multiple learning strategies: 11 (10%) journal articles identified the adaptability of high-fidelity simulations to multiple learning strategies as an important factor in their educational effectiveness;
capture of clinical variation: 11 (10%) journal articles cited simulators that capture a wide variety of clinical conditions as more useful than those with a narrow range;
controlled environment: 10 (9%) journal articles emphasized the importance of using high-fidelity simulations in a controlled environment where learners can make, detect and correct errors without adverse consequences;
individualized learning: 10 (9%) journal articles highlighted the importance of having reproducible, standardized educational experiences where learners are active participants, not passive bystanders;
defined outcomes: seven (6%) journal articles cited the importance of having clearly stated goals with tangible outcome measures that will more likely lead to learners mastering skills;
simulator validity: four (3%) journal articles provided evidence for the direct correlation of simulation validity with effective learning.
CONCLUSIONS: While research in this field needs improvement in terms of rigor and quality, high-fidelity medical simulations are educationally effective and simulation-based education complements medical education in patient care settings.  DOI Link     
PubMed Record     TOP OF PAGE

Hamdy H, Prasad K, Anderson M B, Scherpbier A, Williams R, Zwierstra R and Cuddihy H. BEME Guide No 5: BEME systematic review: Predictive values of measurements obtained in medical schools and future performance in medical practice. Medical Teacher 2006; 28 (2):103-116.
BACKGROUND: The effectiveness of medical education programs is most meaningfully measured as the performance of their graduates.
OBJECTIVES: To assess the value of measurements obtained in medical schools in predicting future performance in medical practice.
METHODS: SEARCH STRATEGY: The English literature from 1955 to 2004 was searched using MEDLINE, Embase, Cochrane’s EPOC (Effective Practice and Organization of Care Group), Controlled Trial databases, ERIC, British Education Index, Psych Info, Timelit, Web of Science and hand searching of medical education journals.
INCLUSION & EXCLUSIONS: Selected studies included students assessed or followed up to internship, residency and/or practice after postgraduate training. Assessment systems and instruments studied (Predictors) were the National Board Medical Examinations (NBME) I and II, preclinical and clerkship grade-point average, Observed Standardized Clinical Examination scores and Undergraduate Dean’s rankings and honors society. Outcome measures were residency supervisor ratings, NBME III, residency in-training examinations, American Specialty Board examination scores, and on-the-job practice performance.
DATA EXTRACTION: Data were extracted using a modification of the BEME data extraction form: study objectives, design, sample variables, statistical analysis and results. All included studies are summarized in tabular form.
DATA ANALYSIS AND SYNTHESIS: Quantitative meta-analysis and qualitative approaches were used for data analysis and synthesis, taking into account the methodological quality of the included studies.
RESULTS: Of 569 studies retrieved with our search strategy, 175 full text studies were reviewed. A total of 38 studies met our inclusion criteria and 19 had sufficient data to be included in a meta-analysis of correlation coefficients. The highest correlation between predictor and outcome was NBME Part II and NBME Part III, r = 0.72, 95% CI 0.30-0.49 and the lowest between NBME I and supervisor rating during residency, r = 0.22, 95% CI 0.13-0.30. The approach to studying the predictive value of assessment tools varied widely between studies and no consistent approach could be identified. Overall, undergraduate grades and rankings were moderately correlated with internship and residency performance. Performance on similar instruments was more closely correlated. Studies assessing practice performance beyond postgraduate training programs were few.
CONCLUSIONS: There is a need for a more consistent and systematic approach to studies of the effectiveness of undergraduate assessment systems and tools and their predictive value. Although existing tools do appear to have low to moderate correlation with postgraduate training performance, little is known about their relationship to longer-term practice patterns and outcomes.  PubMed Record  
DOI Link    TOP OF PAGE
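The abstract above reports pooled correlation coefficients with 95% confidence intervals but does not describe the pooling method. For orientation only, a standard way to meta-analyse correlations is to convert each study's r to Fisher's z, pool the z values with inverse-variance (sample-size) weights, and back-transform; whether Guide No 5 used exactly this approach is not stated in the abstract.

```latex
% Standard Fisher z approach for pooling correlation coefficients
% (illustrative background; the review's own method is not given in the abstract).
z_i = \tfrac{1}{2}\ln\!\frac{1+r_i}{1-r_i}, \qquad
\operatorname{Var}(z_i) = \frac{1}{n_i - 3}, \qquad
\bar{z} = \frac{\sum_i (n_i-3)\, z_i}{\sum_i (n_i-3)}, \qquad
\bar{r} = \frac{e^{2\bar{z}} - 1}{e^{2\bar{z}} + 1}.
```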

Dornan T, Littlewood S, Margolis S A, Scherpbier A, Spencer J and Ypinazar V. BEME Guide No 6: How can experience in clinical and community settings contribute to early medical education? A BEME systematic review. Medical Teacher 2006; 28 (1):3-18.
REVIEW DATE: Review period January 1992-December 2001. Final analysis July 2004-January 2005.
BACKGROUND AND REVIEW CONTEXT: There has been no rigorous systematic review of the outcomes of early exposure to clinical and community settings in medical education.
OBJECTIVES OF REVIEW: Identify published empirical evidence of the effects of early experience in medical education, analyse it, and synthesize conclusions from it. Identify the strengths and limitations of the research effort to date, and identify objectives for future research.
SEARCH STRATEGY: Ovid search of: BEI, ERIC, Medline, CINAHL and EMBASE. Additional electronic searches of: Psychinfo, Timelit, EBM reviews, SIGLE, and the Cochrane databases. Hand-searches of: Medical Education, Medical Teacher, Academic Medicine, Teaching and Learning in Medicine, Advances in Health Sciences Education, Journal of Educational Psychology.
CRITERIA: Definitions: Experience: Authentic (real as opposed to simulated) human contact in a social or clinical context that enhances learning of health, illness and/or disease, and the role of the health professional. Early: What would traditionally have been regarded as the preclinical phase, usually the first 2 years. Inclusions: All empirical studies (verifiable, observational data) of early experience in the basic education of health professionals, whatever their design or methodology, including papers not in English. Evidence from other health care professions that could be applied to medicine was included. Exclusions: Not empirical; not early; post-basic; simulated rather than ‘authentic’ experience.
DATA COLLECTION: Careful validation of selection processes. Coding by two reviewers onto an extensively modified version of the standard BEME coding sheet. Accumulation into an Access database. Secondary coding and synthesis of an interpretation.
HEADLINE RESULTS: A total of 73 studies met the selection criteria and yielded 277 educational outcomes; 116 of those outcomes (from 38 studies) were rated strong and important enough to include in a narrative synthesis of results; 76% of those outcomes were from descriptive studies and 24% from comparative studies. Early experience motivated and satisfied students of the health professions and helped them acclimatize to clinical environments, develop professionally, interact with patients with more confidence and less stress, develop self-reflection and appraisal skill, and develop a professional identity. It strengthened their learning and made it more real and relevant to clinical practice. It helped students learn about the structure and function of the healthcare system, and about preventive care and the role of health professionals. It supported the learning of both biomedical and behavioural/social sciences and helped students acquire communication and basic clinical skills. There were outcomes for beneficiaries other than students, including teachers, patients, populations, organizations and specialties. Early experience increased recruitment to primary care/rural medical practice, though mainly in US studies which introduced it for that specific purpose as part of a complex intervention.
CONCLUSIONS: Early experience helps medical students socialize to their chosen profession. It helps them acquire a range of subject matter and makes their learning more real and relevant. It has potential benefits for other stakeholders, notably teachers and patients. It can influence career choices.  PubMed Record   
DOI Link    TOP OF PAGE

Veloski J, Boex J R, Grasberger M J, Evans A and Wolfson D B. BEME Guide No 7: Systematic review of the literature on assessment, feedback and physicians’ clinical performance. Medical Teacher 2006; 28(2): 117-128.
BACKGROUND AND CONTEXT: There is a basis for the assumption that feedback can be used to enhance physicians’ performance. Nevertheless, the findings of empirical studies of the impact of feedback on clinical performance have been equivocal.
OBJECTIVES: To summarize evidence related to the impact of assessment and feedback on physicians’ clinical performance.
SEARCH STRATEGY: The authors searched the literature from 1966 to 2003 using MEDLINE, HealthSTAR, the Science Citation Index and eight other electronic databases. A total of 3702 citations were identified.
INCLUSION AND EXCLUSION CRITERIA: Empirical studies were selected involving the baseline measurement of physicians’ performance and follow-up measurement after they received summaries of their performance.
DATA EXTRACTION: Data were extracted on research design, sample, dependent and independent variables using a written protocol.
DATA SYNTHESIS: A group of 220 studies involving primary data collection was identified. However, only 41 met all selection criteria and evaluated the independent effect of feedback on physician performance. Of these, 32 (74%) demonstrated a positive impact. Feedback was more likely to be effective when provided by an authoritative source over an extended period of time. Another subset of 132 studies examined the effect of feedback combined with other interventions such as educational programmes, practice guidelines and reminders. Of these, 106 studies (77%) demonstrated a positive impact. Two additional subsets of 29 feedback studies involving resident physicians in training and 18 studies examining proxy measures of physician performance across clinical sites or groups of patients were reviewed. The majority of these two subsets also reported that feedback had positive effects on performance.
HEADLINE RESULTS: Feedback can change physicians’ clinical performance when provided systematically over multiple years by an authoritative, credible source.
CONCLUSIONS: The effects of formal assessment and feedback on physician performance are influenced by the source and duration of feedback. Other factors, such as physicians’ active involvement in the process, the amount of information reported, the timing and amount of feedback, and other concurrent interventions, such as education, guidelines, reminder systems and incentives, also appear to be important. However, the independent contributions of these interventions have not been well documented in controlled studies. It is recommended that the designers of future theoretical as well as practical studies of feedback separate the effects of feedback from other concurrent interventions.  PubMed Record    
DOI Link     TOP OF PAGE

Steinert Y, Mann K, Centeno A, Dolmans D, Spencer J, Gelula M, Prideaux D. BEME Guide No. 8: A systematic review of faculty development initiatives designed to improve teaching effectiveness in medical education. Med Teach 2006; 28(6):497-526.
BACKGROUND: Preparing healthcare professionals for teaching is regarded as essential to enhancing teaching effectiveness. Although many reports describe various faculty development interventions, there is a paucity of research demonstrating their effectiveness.
OBJECTIVE: To synthesize the existing evidence that addresses the question: “What are the effects of faculty development interventions on the knowledge, attitudes and skills of teachers in medical education, and on the institutions in which they work?”
METHODS: The search, covering the period 1980-2002, included three databases (Medline, ERIC and EMBASE) and used the keywords: staff development; in-service training; medical faculty; faculty training/development; continuing medical education. Manual searches were also conducted. Articles with a focus on faculty development to improve teaching effectiveness, targeting basic and clinical scientists, were reviewed. All study designs that included outcome data beyond participant satisfaction were accepted. From an initial 2777 abstracts, 53 papers met the review criteria. Data were extracted by six coders, using the standardized BEME coding sheet, adapted for our use. Two reviewers coded each study and coding differences were resolved through discussion. Data were synthesized using Kirkpatrick’s four levels of educational outcomes. Findings were grouped by type of intervention and described according to levels of outcome. In addition, 8 high-quality studies were analysed in a ‘focused picture’.
RESULTS: The majority of the interventions targeted practicing clinicians. All of the reports focused on teaching improvement and the interventions included workshops, seminar series, short courses, longitudinal programs and ‘other interventions’. The study designs included 6 randomized controlled trials and 47 quasi-experimental studies, of which 31 used a pre-test-post-test design.  DOI Link      See also this link    TOP OF PAGE

Use of flawed multiple-choice items by the New England Journal of Medicine for continuing medical education

It will be interesting to see what the editors of the New England Journal of Medicine think of this study:

Alex S. Stagnaro-Green  and Steven M. Downing. Use of flawed multiple-choice items by the New England Journal of Medicine for continuing medical education.  Med Teach 2006; 28 (6):566-568. 

Abstract: Physicians in the United States are required to complete a minimum number of continuing medical education (CME) credits annually. The goal of CME is to ensure that physicians maintain their knowledge and skills throughout their medical career. The New England Journal of Medicine (NEJM) provides its readers with the opportunity to obtain weekly CME credits. Deviation from established item-writing principles may result in a decrease in validity evidence for tests. This study evaluated the quality of 40 NEJM MCQs using the standard evidence-based principles of effective item writing. Each multiple-choice item reviewed had at least three item flaws, with a mean of 5.1 and a range of 3 to 7. The results of this study demonstrate that the NEJM uses flawed MCQs in its weekly CME program.    DOI Link 

Table 1. Item Flaws in 40 NEJM CME test items.

Type of flaw                       Number of flaws   Percentage of 40 items flawed
1. Verbatim text                         40                    100
2. Unfocused stem                        40                    100
3. Window dressing                       29                     73
4. Unequal option length                 20                     50
5. Negative options                      14                     35
6. Clues to correct answer                9                     23
7. Negative stem                          5                     13
8. Heterogeneous options                  4                     10
Total                                   203

Notes:

1.  Verbatim text flaw= much of the MCQ text is identical to the article. To answer correctly, the participant is not required to understand the information, but need only locate the identical text in the article.
2.  Unfocused stem flaw= no question is posed by the lead-in or stem of the item. The participant must read all of the options in order to understand what question is being asked.
3.  Window dressing flaw= the MCQ has excessive verbiage that is irrelevant to the question asked or the construct being assessed.
4.  Unequal option length flaw= the correct option is appreciably longer than the alternative incorrect options. This provides a testwise clue to the correct answer, since true statements are usually longer than false statements.
5.  Negative options flaw= the options contain negative words. For example, “The article did not conclude…”.
6.  Clues-to-correct answer flaw= the MCQ provides a direct testwise clue to the correct answer. For example, the use of ‘always’, ‘never’ or ‘absolutely’ in an option identifies that option as incorrect.
7.  Negative-stem flaw= the stem includes negatives such as not or except.
8.  Heterogeneous-options flaw= the options are not homogeneous in content and/or grammatical structure.
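To make the operational definitions above concrete, here is a minimal illustrative sketch of how two of these flaws (unequal option length and a negative stem) could be flagged automatically. The thresholds and function names are assumptions added for illustration; the study itself relied on expert review against established item-writing principles, not automated checks.

```python
# Illustrative only: crude automatic checks for two of the item flaws defined above.
# The study's reviewers applied these criteria by expert judgement; the 1.5x length
# threshold below is an arbitrary assumption.

def has_negative_stem(stem: str) -> bool:
    """Flaw 7: the stem contains a negative such as 'not' or 'except'."""
    words = stem.lower().replace("?", " ").split()
    return any(w in ("not", "except") for w in words)

def has_unequal_option_length(options: list[str], correct_index: int, ratio: float = 1.5) -> bool:
    """Flaw 4: the correct option is appreciably longer than the incorrect ones."""
    incorrect = [o for i, o in enumerate(options) if i != correct_index]
    mean_incorrect = sum(len(o) for o in incorrect) / len(incorrect)
    return len(options[correct_index]) > ratio * mean_incorrect

stem = "Which of the following is NOT a recommended treatment?"
options = ["Drug A", "Drug B", "Drug C given twice daily with careful monitoring of renal function", "Drug D"]
print(has_negative_stem(stem))                # True
print(has_unequal_option_length(options, 2))  # True
```

Automated checks like these can at best flag candidate items for a human reviewer; the judgements reported in Table 1 were made by the authors themselves.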

A systematic review of faculty development initiatives designed to improve teaching effectiveness in medical education: BEME Guide No. 8

Just published in Medical Teacher:

Steinert Y, Mann K, Centeno A, Dolmans D, Spencer J, Gelula M, Prideaux D. A systematic review of faculty development initiatives designed to improve teaching effectiveness in medical education: BEME Guide No. 8. Med Teach 2006; 28(6):497-526.

Background: Preparing healthcare professionals for teaching is regarded as essential to enhancing teaching effectiveness. Although many reports describe various faculty development interventions, there is a paucity of research demonstrating their effectiveness.

Objective: To synthesize the existing evidence that addresses the question: “What are the effects of faculty development interventions on the knowledge, attitudes and skills of teachers in medical education, and on the institutions in which they work?”

Methods: The search, covering the period 1980-2002, included three databases (Medline, ERIC and EMBASE) and used the keywords: staff development; in-service training; medical faculty; faculty training/development; continuing medical education. Manual searches were also conducted. Articles with a focus on faculty development to improve teaching effectiveness, targeting basic and clinical scientists, were reviewed. All study designs that included outcome data beyond participant satisfaction were accepted. From an initial 2777 abstracts, 53 papers met the review criteria. Data were extracted by six coders, using the standardized BEME coding sheet, adapted for our use. Two reviewers coded each study and coding differences were resolved through discussion. Data were synthesized using Kirkpatrick’s four levels of educational outcomes. Findings were grouped by type of intervention and described according to levels of outcome. In addition, 8 high-quality studies were analysed in a ‘focused picture’.

Results: The majority of the interventions targeted practicing clinicians. All of the reports focused on teaching improvement and the interventions included workshops, seminar series, short courses, longitudinal programs and ‘other interventions’. The study designs included 6 randomized controlled trials and 47 quasi-experimental studies, of which 31 used a pre-test-post-test design.

Key points: Despite methodological limitations, the faculty development literature tends to support the following outcomes:
Overall satisfaction with faculty development programs was high. Participants consistently found programs acceptable, useful and relevant to their objectives.
Participants reported positive changes in attitudes toward faculty development and teaching.
Participants reported increased knowledge of educational principles and gains in teaching skills. Where formal tests of knowledge were used, significant gains were shown.
Changes in teaching behavior were consistently reported by participants and were also detected by students.
Changes in organizational practice and student learning were not frequently investigated. However, reported changes included greater educational involvement and establishment of collegiate networks.
Key features of faculty development contributing to effectiveness included the use of experiential learning, provision of feedback, effective peer and colleague relationships, well-designed interventions following principles of teaching and learning, and the use of a diversity of educational methods within single interventions.
Methodological issues: More rigorous designs and a greater use of qualitative and mixed methods are needed to capture the complexity of the interventions. Newer methods of performance-based assessment, utilizing diverse data sources, should be explored, and reliable and valid outcome measures should be developed. The maintenance of change over time should also be considered, as should process-oriented studies comparing different faculty development strategies.

Conclusions: Faculty development activities appear highly valued by participants, who also report changes in learning and behavior. Notwithstanding the methodological limitations in the literature, certain program characteristics appear to be consistently associated with effectiveness. Further research to explore these associations and document outcomes, at the individual and organizational level, is required.  DOI Link 
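Since the Guide No 8 abstract synthesizes findings with Kirkpatrick's four levels of educational outcomes (reaction, learning, behaviour, results), here is a purely illustrative sketch of tagging reported outcomes against those levels. It is not the BEME coding sheet the reviewers used; the enum, the comments and the example outcomes are assumptions added for illustration.

```python
# Illustrative only: tagging reported study outcomes with Kirkpatrick's four levels.
# This is not the review's own coding sheet; the names below are assumptions.
from enum import Enum

class Kirkpatrick(Enum):
    REACTION = 1   # participant satisfaction with the program
    LEARNING = 2   # changes in attitudes, knowledge or skills
    BEHAVIOUR = 3  # changes in teaching behaviour
    RESULTS = 4    # changes in organizational practice or student learning

study_outcomes = {
    "participants rated the workshop as useful": Kirkpatrick.REACTION,
    "post-test knowledge of educational principles improved": Kirkpatrick.LEARNING,
    "students detected changes in teaching behaviour": Kirkpatrick.BEHAVIOUR,
    "new collegiate teaching networks were established": Kirkpatrick.RESULTS,
}

# Count how many reported outcomes fall at each level.
for level in Kirkpatrick:
    n = sum(1 for v in study_outcomes.values() if v is level)
    print(level.name, n)
```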

The developing physician – becoming a professional

The following article and accompanying editorial were published in the October 26 issue of the New England Journal of Medicine:

Stern DT, Papadakis M. The developing physician – becoming a professional. N Engl J Med 2006; 355(17):1794-1799. [by subscription only]

Excerpt: We all reflect on our formal training in medicine and know that somehow we made the transition from being a student in a classroom to being a seasoned clinician caring for patients. We spent years acquiring the knowledge and skills necessary to function as a physician, and part of that learning was accomplished by following examples and by trial and error. Most of us are still learning how to be better “professionals,” but we are building on a foundation that was developed in medical school and early postgraduate training. These educational and training environments have changed substantially in recent years … Section headings: Setting Expectations; Providing Experiences; Evaluating Outcomes; Teaching Professionalism

Hafferty FW. Professionalism: The next wave [editorial]. N Engl J Med October 25 2006 [free full text]
Excerpt: In the October 26 issue of the Journal, Stern and Papadakis make a number of observations about professionalism and the learning environments in which medical training occurs. Like a growing number of medical educators, they recognize that considerable learning (some think most) takes place outside the domain of the formal curriculum and that such learning involves indoctrination in the unwritten rules of studenthood and medical practice. Some medical schools and residency programs have acknowledged the existence of alternative, or shadow, domains of learning, whose lessons are sometimes collectively called the “hidden curriculum,” and have accepted responsibility for both understanding and modulating the effects of these domains on students’ knowledge, skills, and values. Included in this broadened curriculum are the lessons students learn as they witness conflicts between the expectations and ideals articulated in professional codes and the behavior of individual physicians (particularly faculty members) and organizations as both go about the daily and concurrent work of medicine and education. 

Canadian Neck Pain Practice Guideline: feedback on the Addendum requested

Last September, the following guideline was published in the Journal of the Canadian Chiropractic Association:


The Canadian Chiropractic Association and the Canadian Federation of Chiropractic Regulatory Boards Clinical Practice Guidelines Development Initiative (The CCA/CFCRB-CPG). Chiropractic clinical practice guideline: evidence-based treatment of adult neck pain not due to whiplash. J Can Chiro Assoc 2005; 49(3):160-209.
This guideline is available in two formats: PDF format and now online on the CCA Web site.

From the CCA Web site:
Addendum to the Neck Pain Guideline, available for public comment, October 2006 
Since [this guideline was published], an Evidence Monitoring Committee has been tracking emerging evidence and assessing its implications for the guideline. The Guideline Development Committee, authors of the guideline, are putting forward an addendum to the guideline at this time. Attached is the change that they are recommending, specifically to Table 3b, and the rationale for making such a change. The Guideline Development Committee would welcome feedback from the profession on this change. Send your feedback directly to wglover@associationsfirst.com by November 3, 2006. PDF version of the Addendum

See also Practice Guidelines [blog category]; Practice Guidelines [CMCC Web page]

Advances in researching adult e-learning (November 2006)

Studies in Continuing Education is a scholarly journal concerned with all aspects of continuing, professional and lifelong learning. A special issue entitled Advances in researching adult e-learning (v. 28, no. 3, November 2006) has just been published. The July issue also focussed on e-learning. (FYI, here are the 2006 ERIC records for this journal.)

 

Contents of the current issue:

Interaction and e-learning: the student experience;  Dialogue, language and identity: critical issues for networked management learning;  Knowledge-building quality in online communities of practice: focusing on learning dialogue;  ‘Learning a different form of communication’: experiences of networked learning and reflections on practice;  Epistemological agency: a necessary action-in-context perspective on new employee workplace learning;  Making language work in hybrid workspaces: three tensions

A Canadian Thanksgiving

Here are some photographs taken earlier this month during one day on Thanksgiving weekend, from about 6:00 am to 7:00 pm. This beautiful river is located in the Canadian Shield.  I wish you could smell the misty morning air and hear the birds that I heard, particularly the squawk of the Great Blue Heron who flew right over me. It was especially fine to see the full moon so early in the morning.  

We have so much to be thankful for, haven’t we? [Click on the little photos below for larger views.]
[Photographs hosted on Webshots]
Same location, same occasion, some time in the 70s …
[Photograph hosted on Webshots]
 Thanksgiving morning, 2008.


Cochrane reviews compared with industry supported meta-analyses and other meta-analyses of the same drugs: systematic review

Here is an interesting review published recently in the BMJ:

Jorgensen AW, Hilden J, Gotzsche PC. Cochrane reviews compared with industry supported meta-analyses and other meta-analyses of the same drugs: systematic review. BMJ 2006; 333(7572):782-786.

OBJECTIVE: To compare the methodological quality and conclusions in Cochrane reviews with those in industry supported meta-analyses and other meta-analyses of the same drugs.
DESIGN: Systematic review comparing pairs of meta-analyses that studied the same two drugs in the same disease and were published within two years of each other.
DATA SOURCES: Cochrane Database of Systematic Reviews (2003, issue 1), PubMed, and Embase.
DATA EXTRACTION: Two observers independently extracted data and used a validated scale to judge the methodological quality of the reviews.
RESULTS: 175 of 1596 Cochrane reviews had a meta-analysis that compared two drugs. Twenty-four meta-analyses that matched the Cochrane reviews were found: eight were industry supported, nine had undeclared support, and seven had no support or were supported by non-industry sources. On a 0-7 scale, the median quality score was 7 for Cochrane reviews and 3 for other reviews (P < 0.01). Compared with industry supported reviews and reviews with undeclared support, Cochrane reviews had more often considered the potential for bias in the review–for example, by describing the method of concealment of allocation and describing excluded patients or studies. The seven industry supported reviews that had conclusions recommended the experimental drug without reservations, compared with none of the Cochrane reviews (P = 0.02), although the estimated treatment effect was similar on average (z = 0.46, P = 0.64). Reviews with undeclared support and reviews with not for profit support or no support had conclusions that were similar in cautiousness to the Cochrane reviews.
CONCLUSIONS: Industry supported reviews of drugs should be read with caution as they were less transparent, had few reservations about methodological limitations of the included trials, and had more favourable conclusions than the corresponding Cochrane reviews.
PubMed Record      Free full text 

Riding the Waves of “Web 2.0”

This new report about what’s evolving on the Internet was released earlier this month by the Pew Internet & American Life Project. From the site:

“Web 2.0” has become a catch-all buzzword that people use to describe a wide range of online activities and applications, some of which the Pew Internet & American Life Project has been tracking for years. As researchers, we instinctively reach for our spreadsheets to see if there is evidence to inform the hype about any online trend. This article provides a short history of the phrase, along with new traffic data from Hitwise to help frame the discussion.

Link to these other Internet Evolution resources:
The Future of the Internet II;  Internet Penetration and Impact;  Tech Term Awareness;  How the Internet has woven itself into American life;  The Future of the Internet

The craft of writing: a physician-writer’s workshop for resident physicians

Here is an article that may interest those who believe there is a role for creative writing in health care. The article first appeared online in July, and was just published in the October 2006 issue of the Journal of General Internal Medicine: [subscription required]

Reisman AB, Hansen H, Rastegar A. The craft of writing: a physician-writer’s workshop for resident physicians. J Gen Intern Med  2006; 21(10):1109-1111.

INTRODUCTION: How can residency programs help trainees address conflicting emotions about their professional roles and cultivate a curiosity about their patients’ lives beyond their diseases? We drew on the medical humanities to address these challenges by creating an intensive writing workshop for internal medicine residents.
AIM: To help participants become better physicians by reflecting on their experiences and on what gives meaning to work and life. This paper describes the workshop and how residents were affected by the focus on the craft of writing.
SETTING: A group of 15 residents from 3 training programs affiliated with 1 institution.
PROGRAM DESCRIPTION: We engaged the expertise of physician-writer Abraham Verghese in planning and facilitating the two-and-a-half-day workshop. Residents’ submissions were discussed with a focus on the effectiveness of the writing. We also conducted a focus group with participants to evaluate the workshop.
PROGRAM EVALUATION: Themes in the writing included dysphoria, impotence of the physician, and the healing power of compassion. Our focus group data suggested that this workshop served as a creative outlet from the rigors of medicine, created a sense of community among participants, enhanced both self-awareness and awareness of their patients’ lives, and increased intra-institutional and extra-institutional interest in writing and the residency program.
DISCUSSION: Teaching creative writing to residents in an intensive workshop may deepen interactions with peers and patients, improve writing skills, and increase interest in writing and the residency program.
PubMed Record