Research article
Open Access

Respective value of the traditional clinical rotation and high fidelity simulation on the acquisition of clinical reasoning skills in medical students – A Randomized Controlled Trial.

Sarahn Lovett[1], Jan Roche[2], Sharyn Hunter[3], Ian Symonds[4], Naomi Tomlinson[5], Robert Gagnon[6], Bernard Charlin[7], Joerg Mattes[8]

Institutions: 1. John Hunter Children's Hospital, 2. University of Newcastle, Australia, 3. University of Newcastle, 4. University of Newcastle, 5. University of Newcastle, 6. University of Montreal, 7. University of Montreal, 8. University of Newcastle
Corresponding Author: Dr Sarahn Lovett lovett.sarahn@gmail.com
Categories: Learning Outcomes/Competency, Medical Education (General), Teaching and Learning

Abstract

OBJECTIVE

To assess the respective value of the traditional clinical rotation and high fidelity simulation (HFS) on the acquisition of clinical reasoning (CR) skills in medical students.

METHODS

A randomized controlled trial was conducted. Forty medical students were exposed to a full day of HFS, either during their Paediatric Term (PT) (Experimental group = PT+, HFS+) or, after completion of their PT (Control group = PT+, HFS-).  CR skills were assessed by a Script Concordance Test (SCT) prior to group allocation and at the completion of PT.

RESULTS

Thirty-nine of 40 students completed both SCTs. Scores before (mean/SD 57.4/6.7) and after (mean/SD 61.1/7.0) the PT improved significantly (mean increase [95% confidence interval (CI)]: 3.6 [2.0-5.2]; p<0.0001; n=39). A full day of HFS conferred no added improvement in SCT scores in the experimental group compared with the control group (mean increase [95% CI]: 3.2 [0.5-5.9], n=18 versus 4.0 [1.9-6.1], n=21, respectively; p=0.61).

CONCLUSION

Exposure to the PT significantly improves medical students’ CR acquisition, with no further enhancement from exposure to HFS. Potential improvements in students’ knowledge and skills due to HFS may therefore not affect CR skills, and thus may not be assessable by SCT.

Keywords: Script Concordance Test, Clinical Reasoning, Simulation

Introduction

In clinical settings, the challenge for educators today is to develop educational interventions that not only improve the knowledge, professional development and technical skill of our medical graduates, but also improve the core clinical competency of clinical reasoning (CR) skill acquisition. The added challenge for universities and teaching hospitals is to provide these CR-improving educational interventions to a growing number of medical trainees and graduates in a cost-efficient and evidence-based manner.

CR is a complex process in which clinicians analyze large amounts of information to arrive at the correct diagnosis or management plan for each patient. The organization of knowledge used in that endeavor is termed a “script” (Charlin, Roy, Brailowsky, Goulet, & Vleuten, 2000). The CR process is extremely intricate, yet fast (Charlin, Boshuizen, Custers, & Feltovich, 2007), and different clinicians often come to the same diagnostic decision via varying paths (Grant & Marsden, 1988). Activation of illness scripts when dealing with a patient helps the clinician efficiently rule hypotheses in or out, and helps decide on management strategies (Charlin, Roy, Brailowsky, Goulet, & Vleuten, 2000; Charlin, Boshuizen, Custers, & Feltovich, 2007; Feltovich & Barrows, 1984).

Script theory aims to describe how humans understand real-world events and why this understanding occurs almost effortlessly (Charlin, Boshuizen, Custers, & Feltovich, 2007). It tries to explain how our non-analytical (or pattern recognition) thought process can produce accurate decisions so efficiently, and suggests that in non-routine situations deliberate script activation occurs (Coderre, Mandin, Harasym, & Fick, 2003) to help solve new problems. Using script theory (Charlin, Roy, Brailowsky, Goulet, & Vleuten, 2000; Charlin, Boshuizen, Custers, & Feltovich, 2007; Charlin, Tardif, & Boshuizen, 2000), Charlin and colleagues developed the Script Concordance Test (SCT), an assessment that gives quantitative and reproducible results when assessing an individual’s CR capabilities.

The complex CR process is at the core of medical practice, and it is the clinician’s most critical competence (Pelaccia, Tardiff, Triby, & Charlin, 2011). Doctors who can clinically reason efficiently and accurately will benefit society (Askew, Manthey, & Mahler, 2012; Forsberg, Aronsson, Keller, & Linbald, 2011). If CR is a skill of utmost importance for developing the competent clinicians of the future, we must ask ourselves:

  1. Can we design learning activities to promote the CR acquisition of novices?
  2. Can we assess the effectiveness of these learning activities in terms of CR acquisition?

The challenge for educators is to support clinical students in the acquisition of CR ability by providing educational interventions that teach these skills (Newble, Norman, & van der Vleuten, 2000). The assumption that CR ability is a product of experience has meant that the default method of teaching CR to medical students revolves around exposure to the traditional content curriculum provided at university and to patients and cases on the wards or in tutorial groups. More recently, HFS has been used as a surrogate for providing clinical experience to students and graduates. However, HFS is an expensive and labor-intensive teaching method. Although improvement in technical clinical skills has been demonstrated (McGaghie, Issenberg, Cohen, Barsuk, & Wayne, 2011; Cook, et al., 2011; Cheng, Lang, Starr, Pusic, & Cook, 2014), further research is required to validate its use for improving CR acquisition.

In this randomized controlled trial we assessed the effect of HFS on the acquisition of CR skills, as measured by SCT, in medical students during the traditional PT.

Methods

Assessment Tool

The SCT is widely used as a measure of clinical reasoning. It measures one of the most crucial aspects of CR, the ability to weight information in ill-defined scenarios to make decisions (Charlin, Roy, Brailowsky, Goulet, & Vleuten, 2000; Charlin & van der Vleuten, 2004), making it the best available assessment tool for the very complex process of CR (Charlin, et al., 2012). It can therefore measure the effect an intervention has on an individual’s CR abilities, and can thus guide the development of ways to teach CR. Our study used the SCT to determine how effective the traditional PT alone (control group, PT(+) HFS(-)) versus the traditional PT with one day of HFS exposure (experimental group, PT(+) HFS(+)) was at improving clinical reasoning acquisition in medical students.

Clinical Rotation

Students from the Joint Medical Program (JMP) of the University of Newcastle (UoN) and University of New England (UNE), commencing the fourth year of the five-year medical program, were recruited to participate in the project. Involvement in the project occurred during the six-week PT, which comprised the traditional paediatric curriculum delivered through problem-based learning tutorials, case-based discussions, a lecture series, paediatric clinical placements and skills sessions. This traditional format aims to provide knowledge and skills so that students become competent and safe interns versed in the common acute and chronic medical presentations of children. The PT assessment includes summative (multiple choice examination, Objective Structured Clinical Exam (OSCE) and student reflective portfolio) and formative (professional and clinical skills) assessment tools. All participating students entered the study with similar baseline technical and non-technical skills attained throughout the previous years of the JMP. No student had prior exposure to paediatric medicine before involvement in the study, and students did not miss any of the traditional PT to participate.

High Fidelity Simulation

Eight simulation scenarios covering common acute paediatric presentations were designed. The scenarios were designed to develop clinical reasoning and decision-making skills rather than technical or procedural skills, and were trialed on paediatric registrars and consultants for validity before being used with medical students. Each student individually performed one simulation, in random order, followed by peer feedback and instructor debriefing. Each student also observed an additional seven live simulations, with instruction to provide peer feedback and attend the debriefing immediately after each simulation. Sending students into the scenarios as individuals forced them to face a situation of uncertainty, critically analyze the information provided, and come to a clinical decision for the patient in the scenario. The aim was to mobilize individual students’ illness scripts and thus promote CR (Charlin, Tardif, & Boshuizen, 2000). Scenarios included acute presentations of croup, asthma, pneumonia with sepsis, head injury, abdominal trauma, anaphylaxis, sepsis and seizures. Each scenario involved a senior paediatric nurse, a high fidelity child or infant mannequin, consultant phone advice, access to NSW Health practice guidelines, a paediatric pharmacopoeia, and a resuscitation trolley with standard airway and circulation support equipment. Students were aware that during the scenario they were only expected to function at the intern (junior medical officer) level, and under no circumstances were they expected to perform procedures beyond their skill level. The focus of the simulation experience was on clinical decision making, not on acquiring new technical skills.
Students were expected to be able to perform basic life support and all students had completed the Resus4Kids (Resus4Kids, 2013) program prior to participation. 

On the day of exposure to HFS, each group was oriented to the space, equipment and expectations of the day. All students were debriefed following their scenario.  A confidentiality contract was signed by each member of the staff and student cohort to ensure confidentiality of students’ performances.

Script Concordance Test Development

Utilizing published SCT construction guidelines (Dory, Gagnon, Vanpee, & Charlin, 2012; Fournier, Demeester, & Charlin, 2008), a script concordance test was developed comprising 51 cases, each with three questions relating to the case, totaling 153 questions. The clinical scenarios comprised common paediatric cases reflecting the goals of Australian medical educational objectives.

We approached nineteen experts to complete the SCT (Dory, Gagnon, Vanpee, & Charlin, 2012); all were certified by the Royal Australasian College of Physicians as paediatricians or Fellows. Fifteen accepted and returned completed tests, complying with the recommendation of Gagnon et al. (Gagnon, Charlin, Coletti, Sauve, & Van der Vleuten, 2005) that ten or more panel members are needed to achieve adequate reliability estimates; their responses created the scoring keys.

The SCT of 51 cases and 153 questions (see Figure 1 for an example SCT question), which the experts completed, was then assessed to identify the questions that best detected clinical experience. Questions whose responses lacked variance among experts are thought to measure factual knowledge rather than reasoning in a context of uncertainty; these questions were discarded as poor discriminators of clinical expertise and reasoning. Utilizing the guidelines published by Charlin et al. (Charlin, et al., 2006), we computed the variance among the expert panel members, and questions with a variance above 0.4 on this indicator were chosen for the final SCT delivered to the students. The final SCT comprised 41 cases and a total of 90 questions. Students had no previous experience with the SCT format, and instructions on how to answer the paper were given before sitting the SCT. Students completed the SCT on paper, and the results were scored and computed with the Excel program available on the SCT website of the University of Montreal, Canada (University of Montreal - Faculty of Medicine, 2014).
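For illustration, SCT scoring typically follows the aggregate method, in which each answer earns credit proportional to the number of panel members who chose it, the modal answer earning full credit. The sketch below is a minimal Python illustration of this scoring and of a simple variance filter of the kind described above; it is not the University of Montreal Excel program, and the panel data are hypothetical.

```python
from collections import Counter
from statistics import pvariance

def sct_scoring_key(expert_answers):
    """Aggregate scoring key for one SCT question: each answer on the
    Likert scale earns credit proportional to the number of panel
    members who chose it; the modal answer earns full credit (1.0)."""
    counts = Counter(expert_answers)
    modal = max(counts.values())
    return {answer: n / modal for answer, n in counts.items()}

def score_response(key, student_answer):
    # Answers chosen by no panel member earn zero credit.
    return key.get(student_answer, 0.0)

# Hypothetical panel of ten experts answering one question (-2..+2 scale):
panel = [1, 1, 1, 1, 1, 2, 2, 0, 0, -1]
key = sct_scoring_key(panel)       # e.g. +1 -> 1.0, +2 -> 0.4
retained = pvariance(panel) > 0.4  # variance filter keeps this question
```

On this hypothetical panel a student answering +1 would score 1.0, +2 would score 0.4, and -2 would score 0; the exact variance indicator used in the study follows Charlin et al. (2006).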

Figure 1. Example of the components of a script concordance question. Image modified from Charlin and van der Vleuten (Charlin & van der Vleuten, 2004) with permission.

Study Populations

Information regarding the project was provided during the introductory week of the JMP Women and Children's Health (WACH) rotation. All students were invited to participate; 64% (n=40) accepted. The Human Ethics Committee of the UoN and UNE approved the study, and written informed consent was obtained from students and experts before inclusion in the study.

Study Design

Forty students were randomly assigned to either the experimental (PT(+) HFS(+)) or the control (PT(+) HFS(-)) arm. After randomization, 22 students were in the control arm and 18 in the experimental arm. Simple randomization was performed independently of the researchers.
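Simple randomization amounts to an independent coin flip per participant, which is why unequal arm sizes such as 18 versus 22 are expected. The Python fragment below is an illustrative sketch only; participant IDs and seed are hypothetical.

```python
import random

def simple_randomization(participant_ids, seed=None):
    """Assign each participant independently to one of two arms.
    Because every assignment is an independent coin flip, the arms
    may end up unequal in size."""
    rng = random.Random(seed)
    return {pid: rng.choice(["experimental", "control"])
            for pid in participant_ids}

# Hypothetical allocation of 40 participants:
arms = simple_randomization(range(40), seed=1)
```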

Figure 2. Study design.

Forty consenting students sat SCT-1 after allocation to either the experimental (n=18) or control (n=22) arm. Thirty-nine students sat SCT-2 after completion of the PT.

Twenty-one of the 22 students in the control arm participated in a catch-up HFS session, for equity reasons, after the second SCT.

Statistical Analysis

This study was powered to detect a difference in mean SCT score of one standard deviation between groups on the second SCT, with a power of 80% and an alpha value of 0.05. We confirmed that the SCT data followed a Gaussian distribution using the Kolmogorov-Smirnov test. We used the t-test for paired or unpaired observations (as appropriate) to test for statistically significant differences in SCT scores where data were normally distributed, and the Wilcoxon matched-pairs test (paired) or Mann-Whitney test (unpaired) where they were not. We used the Spearman correlation coefficient (rs) to quantify the association between two variables. Analyses were performed with Prism 5 (GraphPad Software Inc.).
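The analysis plan above reduces to a decision rule: test normality first, then choose the parametric or non-parametric comparison. The Python/SciPy fragment below is an illustrative re-expression of that plan, not the study's analysis (the study used Prism 5; the data here are simulated with hypothetical parameters taken from the abstract).

```python
import numpy as np
from scipy import stats

def compare_groups(a, b, paired=False, alpha=0.05):
    """Kolmogorov-Smirnov normality check on each group, then a paired
    or unpaired t-test if both look Gaussian, otherwise the Wilcoxon
    matched-pairs or Mann-Whitney test. Returns the p-value."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    normal = (stats.kstest(stats.zscore(a), "norm").pvalue > alpha and
              stats.kstest(stats.zscore(b), "norm").pvalue > alpha)
    if normal:
        result = stats.ttest_rel(a, b) if paired else stats.ttest_ind(a, b)
    else:
        result = stats.wilcoxon(a, b) if paired else stats.mannwhitneyu(a, b)
    return result.pvalue

# Simulated paired SCT-1/SCT-2 scores for n=39 students:
rng = np.random.default_rng(0)
sct1 = rng.normal(57.4, 6.7, 39)
sct2 = sct1 + rng.normal(3.6, 4.0, 39)
p = compare_groups(sct1, sct2, paired=True)
```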

Results

There was no significant difference between the experimental and control groups’ SCT-1 results, indicating that the participating students all entered the PT with similar baseline CR capabilities (Fig 3). Both the experimental and control groups showed improvement in their SCT scores after exposure to the traditional PT (Fig 3).

There was no significant difference in SCT-2 scores between the experimental and control arms, indicating that exposure to one session of simulation did not affect SCT scores (primary outcome). The improvement in mean SCT score [95% CI] was 3.2 [0.5-5.9] in the experimental group and 4.0 [1.9-6.1] in the control group (p=0.61).

Figure 3. SCT scores for experts and students.

Discussion

HFS is an expensive and time-consuming teaching technique that leads to improvements in examination performance, technical skills, communication and teamwork (McGaghie, Issenberg, Cohen, Barsuk, & Wayne, 2011; Merchant, 2012; Issenberg & Scalese, 2007). However, it is unknown whether HFS directly impacts CR capabilities. Our study was designed to assess the impact of HFS on the CR skills acquisition of students during their PT as part of the JMP medical curriculum. We have shown that HFS, implemented into the PT in a way thought to be deliverable and sustainable within the future JMP medical curriculum, had no significant effect on CR skills assessed by SCT. Importantly, both the experimental and control groups showed an overall improvement in their CR skill acquisition, demonstrating the educational worth of the traditional PT. The SCT is thus sensitive to an educational intervention, in this case the PT, a finding not previously presented in the English literature.

It is unlikely that this result is a consequence of the study size, because our study was powered to detect a difference of one standard deviation in SCT score between groups. It is possible that exposure to HFS needs to be increased significantly to affect CR, at least in novices. However, the additional face-to-face teaching time for HFS in this study was greater than 50% of the time committed to structured professional skills or problem-based learning tutorials during the PT. Thus, we and others would find it challenging to offer more HFS teaching in a six-week rotation, owing to competing teaching activities thought to be an integral component of a problem-based medical curriculum. It is possible that the HFS modules used in this study require modification if the goal is acquisition of CR skills; this would require future prospective studies.

HFS was well received by our students (data not shown), in line with previous reports (Ten Eyck, Tews, & Ballester, 2009). The student satisfaction surveys showed that students found the experience enjoyable and approachable, and that they felt more confident about dealing with acute paediatric situations after the simulation. We acknowledge that there is a significant role for HFS in training medical students and graduates, especially for developing technical and non-technical skills such as communication and teamwork.

Potential confounders that occurred during our study must be recognized. The difficulty of creating a bank of SCT questions large enough to validate and produce two different tests of reliable value meant that the first SCT sat by the students was the same as the second. There was a concern that any improvement in the second SCT scores over the first might be due to recall bias. The research team felt that this effect would be small, given the six-week gap between sittings and the complexity of the questions. Furthermore, the design of the SCT relies not on a single correct answer but on a spectrum of “correctness” for each hypothesis. Most importantly, our study was controlled, and any potential recall effect would have resulted in an improvement in both groups without compromising the primary outcome.

How the HFS sessions affected the students’ CR skills was clearly a key concern. To maximize the teaching power of the HFS scenarios, we relied on students respecting the need for confidentiality within each intervention group and not discussing the scenarios with students yet to experience the HFS. The importance of not revealing information about the intervention day was highlighted by the confidentiality clause all students had to sign to be eligible to participate.

Conclusion

We were able to show that during exposure to the traditional paediatric curriculum of the JMP there was a significant improvement in CR skills acquisition in our population, and that this was detected by the SCT.

HFS is a well-received teaching method that provides experiential learning opportunities for students; however, it is very expensive and labor intensive.

HFS does provide opportunities to attain technical and non-technical skills, but not to develop clinical reasoning when delivered as it was in our study.

The effect of HFS on CR has previously been difficult to analyze, but the SCT offered us a unique and reliable method to do so. Exposure to one HFS session did not improve CR acquisition beyond what was achieved through exposure to the traditional medical curriculum of the JMP.

These findings provide the first level II evidence that the SCT is sensitive to an educational intervention, and suggest that HFS may not promote CR skills in medical students when assessed by SCT. They also highlight the potential of the SCT to identify and design learning activities in our JMP curriculum that can improve the CR acquisition of our future students in an efficient, enjoyable and economical way.

Take Home Messages

1. Exposure to traditional medical school curriculum increases students' clinical reasoning capabilities.

2. The Script Concordance Test is sensitive to an educational intervention's effect on subjects’ clinical reasoning capabilities.

Notes On Contributors

Dr Sarahn Lovett, MBBS, BMedSci (Hons.), FRACP - Paediatrician, John Hunter Children’s Hospital, Lookout Rd, New Lambton Heights, NSW 2305, Australia. Sarahn.Lovett@gmail.com. Contribution: senior study designer; SCT construction; HFS scenario designer and programmer; HFS instructor/debriefer; statistical analysis of data; literature review; writing of original paper.

Janiece Roche, BSc(Nursing), ICU, CCU, NHET-SIM faculty - Director of Simulation (Chameleon), School of Medicine and Public Health, Faculty of Health and Medicine, University of Newcastle, University Drive, Callaghan, NSW 2308, Australia. Jan.Roche@newcastle.edu.au. Contribution: study design; HFS co-scenario designer and programmer; simulation-based education train-the-trainer/instructor/debriefer; HFS coordination.

Dr Sharyn Hunter, PhD, BSc, BSc - School of Nursing and Midwifery, Faculty of Health and Medicine, University of Newcastle, University Drive, Callaghan, NSW 2308, Australia. Sharyn.Hunter@newcastle.edu.au. Contribution: HFS instructor/debriefer; HFS coordination.

Conjoint Professor Ian Symonds, MBBS, FRANZCOG - Head of School, School of Medicine and Public Health, Faculty of Health and Medicine, University of Newcastle, University Drive, Callaghan, NSW 2308, Australia. Ian.Symonds@newcastle.edu.au. Contribution: supervision of project; contribution towards HFS program design.

Dr Naomi Tomlinson, BMedSc, MBBS, FRACP, PostGradDip (Child Health), GradCert (Clinical Education) - Paediatrician, Department of Paediatrics, Royal Hobart Hospital, 48 Liverpool St, Hobart 7000, Australia. Ph: 0362228308. Naomi.Tomlinson@dhhs.tas.gov.au. Contribution: SCT compilation; HFS instructor/debriefer.

Professor Bernard Charlin, MD - Department of Surgery, Faculty of Medicine, University of Montreal, PO Box 6128, Station Centre-ville, Montréal QC H3C 3J7, Canada. bernard.charlin@umontreal.ca. Contribution: guidance and assistance in study design and SCT construction; advised on analysis and interpreted data.

Robert Gagnon, Psychometrician - Faculty of Medicine, University of Montreal, PO Box 6128, Station Centre-ville, Montréal QC H3C 3J7, Canada. robert.gagnon@umontreal.ca. Contribution: guidance and assistance in study design and SCT construction.

Professor Joerg Mattes, FRACP - Senior Staff Specialist, Paediatric Respiratory Medicine, John Hunter Children’s Hospital, Lookout Rd, New Lambton Heights, NSW 2305, Australia. Joerg.Mattes@newcastle.edu.au. Contribution: conceived and supervised project; editor of paper; statistical analysis of data and interpretation of results.

Acknowledgements

The authors would like to thank Professor Brian Jolly for the comments on this manuscript, Ashleigh May for administrative support and randomization of participants, and students for their participation. 

Bibliography/References

Askew, K., Manthey, D., & Mahler, S. (2012). Clinical reasoning; are we testing what we are teaching? Medical Education, 46, 534-544.

http://dx.doi.org/10.1111/j.1365-2923.2012.04288.x  

Charlin, B., & van der Vleuten, C. (2004). Standardised assessment of reasoning in the context of uncertainty: The Script Concordance approach. Evaluation and the Health Professions, 27, 304.

http://dx.doi.org/10.1177/0163278704267043

Charlin, B., Boshuizen, H. P., Custers, E. J., & Feltovich, P. J. (2007). Scripts and clinical reasoning. Medical Education, 41, 1178-1184.

http://dx.doi.org/10.1111/j.1365-2923.2007.02924.x

Charlin, B., Gagnon, R., Pelletier, J., Coletti, M., Abi-Rizk, G., Nasr, C., et al. (2006). Assessment of clinical reasoning in the context of uncertainty: the effect of variability within the reference panel. Medical Education, 40, 848-854.

http://dx.doi.org/10.1111/j.1365-2929.2006.02541.x  

Charlin, B., Lubarsky, S., Millette, B., Crevier, F., Audetat, M.-C., Charbonneaur, A., et al. (2012). Clinical reasoning processes: unravelling complexity through graphical representation. Medical Education, 46, 454-463.

http://dx.doi.org/10.1111/j.1365-2923.2012.04242.x   

Charlin, B., Roy, L., Brailowsky, C., Goulet, F., & Vleuten, C. (2000). The Script Concordance test: a tool to assess the reflective clinician. Teaching and Learning in Medicine, 12, 189-195.

http://dx.doi.org/10.1207/S15328015TLM1204_5  

Charlin, B., Tardif, J., & Boshuizen, H. P. (2000). Scripts and medical diagnostic knowledge: Theory and application for clinical reasoning instruction and research. Academic Medicine, 75, 182-190.

http://dx.doi.org/10.1097/00001888-200002000-00020   

Cheng, A., Lang, T. R., Starr, S. R., Pusic, M., & Cook, D. A. (2014). Technology-Enhanced Simulation and Paediatric Education: A Meta-analysis. Pediatrics, 133 (5), e1313-e1323.

http://dx.doi.org/10.1542/peds.2013-2139   

Coderre, S., Mandin, H., Harasym, P. H., & Fick, G. H. (2003). Diagnostic reasoning strategies and diagnostic success. Medical Education, 37, 695-703.

http://dx.doi.org/10.1046/j.1365-2923.2003.01577.x   

Cook, D. A., Hatala, R., Brydges, R., Zendejas, B., Szostek, J. H., Wang, A. T., et al. (2011). Technology-Enhanced Simulation for Health Professions Education. A Systematic Review and Meta-Analysis. The Journal of the American Medical Association, 306 (9), 978-988.

http://dx.doi.org/10.1001/jama.2011.1234   

Dory, V., Gagnon, R., Vanpee, D., & Charlin, B. (2012). How to construct and implement script concordance tests: insights from a systematic review. Medical Education, 46, 552-563.

http://dx.doi.org/10.1111/j.1365-2923.2011.04211.x   

Feltovich, P. J., & Barrows, H. S. (1984). Issues of generality in medical problems solving. In H. G. Schmidt, & M. L. De Volder, Tutorials in Problem-based Learning: a New Direction in Teaching the Health Professions (pp. 128-142). Assen: Van Gorcum.   

Forsberg, H. H., Aronsson, H., Keller, C., & Linbald, S. (2011). Managing health care decisions and improvement through simulation modeling. Quality Management in Health Care, 20 (1), 15-29.

http://dx.doi.org/10.1097/QMH.0b013e3182033bdc

Fournier, J. P., Demeester, A., & Charlin, B. (2008). Script concordance tests: Guidelines for construction. BMC Medical Informatics and Decision Making, 8, 18.

http://dx.doi.org/10.1186/1472-6947-8-18

Gagnon, R., Charlin, B., Coletti, M., Sauve, E., & Van der Vleuten, C. (2005). Assessment in the context of uncertainty: how many members are needed on the panel of reference of a script concordance test? Medical Education, 39, 284-291.

http://dx.doi.org/10.1111/j.1365-2929.2005.02092.x

Grant, J., & Marsden, P. (1988). Primary knowledge, medical education and consultant expertise. Medical Education, 22 (3), 173-179.

http://dx.doi.org/10.1111/j.1365-2923.1988.tb00002.x   

Issenberg, S. B., & Scalese, R. J. (2007). Best evidence on high-fidelity simulation: what clinical teachers need to know. The Clinical Teacher, 4, 73-77.

http://dx.doi.org/10.1111/j.1743-498X.2007.00161.x   

McGaghie, W. C., Issenberg, B., Cohen, E. R., Barsuk, J. H., & Wayne, D. B. (2011). Does Simulation-based medical education with deliberate practice yield better results than traditional clinical education? A meta-analytic comparative review of the evidence. Academic Medicine, 86 (6), 706-711.

http://dx.doi.org/10.1097/ACM.0b013e318217e119   

Merchant, D. C. (2012). Does High-Fidelity Simulation Improve Clinical Outcome? Journal for Nurses in Staff Development, 28 (1), E1-E8.

http://dx.doi.org/10.1097/NND.0b013e318240a728  

Newble, D., Norman, G., & van der Vleuten, C. (2000). Assessing clinical reasoning. In J. Higgs, & M. Jones, Clinical Reasoning in the Health Professions (2nd Edition ed., pp. 156-65). Oxford, UK: Butterworth-Heinemann.

Pelaccia, T., Tardiff, J., Triby, E., & Charlin, B. (2011). An analysis of clinical reasoning through a recent and comprehensive approach: the dual process theory. Medical Education Online, 16, 5890.

http://dx.doi.org/10.3402/meo.v16i0.5890

Resus4Kids. (2013). RESUS4KIDS. Retrieved August 19, 2014, from RESUS4KIDS: http://www.resus4kids.com.au/

Ten Eyck, R. P., Tews, M., & Ballester, J. M. (2009). Improved medical student satisfaction and test performance with a simulation-based emergency medicine curriculum: A Randomised Controlled Trial. Annals of Emergency Medicine, 54 (5), 684-691.

http://dx.doi.org/10.1016/j.annemergmed.2009.03.025

University of Montreal - Faculty of Medicine. (2014). cpass. Retrieved August 26, 2014, from University of Montreal: https://www.cpass.umontreal.ca/tcs.html

Appendices

There are no conflicts of interest.


Reviews

Julie Williamson - (08/07/2016), Panel Member
This article describes a well-designed educational intervention study comparing a traditional rotation training (PT) group with a group that completed PT and one day of high-fidelity simulation (PT + HFS). The improvement in clinical reasoning (CR) after 6 weeks of PT suggested that the script concordance test (SCT) was sensitive enough to pick up the expected improvement in CR. Alternatively, as the authors acknowledge in their discussion, student scores may have improved because they had already taken the same test once before and perhaps learned from the test rather than the PT. In any event, it is refreshing to read what some may consider a negative outcome; that is, PT + HFS did not improve CR more than PT alone did. In designing educational interventions, it is equally important to know what has worked and what has not. Readers should note that during HFS training, each student completed one case and observed seven. This seems to be a relatively small intervention. Is it reasonable to expect CR to improve after one day of training of any modality? It raises the question of how much HFS training would be needed to improve CR. This knowledge could impact how educators incorporate HFS into student training.
Trevor Gibbs - (07/07/2016), Panel Member
Many of the papers written for medical or healthcare education are trials / experiments etc. looking at what works and proving that interventions work, so it was refreshing to read a paper that looked at something not working, or in this case not adding value.

I did find that this paper needed several reads through to grasp the authors' meaning, but my interpretation was that the research performed and the paper written were to assess whether the addition of high fidelity type training gave added value to a programme aimed at developing students' clinical reasoning skills; in this situation a paediatric course. The conclusion was that it did not improve the students' scores in an assessment of clinical reasoning through Script Concordance Testing, and given the expense of the simulation exercise, its addition is not cost-effective - an important statement to make to all those involved in curriculum planning.

On reading through the paper, there were a few observations I made, although overall I really enjoyed reading the paper and respected its value.

I am unsure if it really was an RCT as the authors state, given that the control group (nearly all) took the HFS exercise at a very different time - however, I am unsure if this really would make any difference to the results.

I am a great believer in the individualisation of curricular activities - "just for you learning" - and in creating an effective bank of learning activities so that you can cater for students' individual preferences. I am also a great believer in active learning, creating several active learning methods that enable students to see and learn from different perspectives. The authors describe a very good example of this in the PT course - it is in my opinion exemplary and not always found in others' curricula. This led me to wonder whether the HFS was not just another "good teaching method" that provides extra opportunity rather than additional education.

I wondered if the HFS element on its own would have any beneficial effect upon the SCT and, if so, to what degree?

All in all, and despite these slight reservations, I felt that this paper should be read by those concerned with curriculum development generally and in CR specifically.