Research article
Open Access

Is there a difference between self-perceived performance and observed performance in an Objective Structured Clinical Examination (OSCE)? An exploratory study among medical students in the United Arab Emirates

Erik Koornneef[1], Tom Loney[2], Ahmed R. Alsuwaidi[3], Marilia Silva Paulo[4]

Institution: 1. Erasmus School of Health Policy and Management, Erasmus University, Rotterdam, Netherlands, 2. College of Medicine, Mohammed Bin Rashid University of Medicine and Health Sciences, Dubai, United Arab Emirates, 3. Department of Pediatrics, College of Medicine and Health Sciences, United Arab Emirates University, Al Ain, United Arab Emirates, 4. Institute of Public Health, College of Medicine and Health Sciences, United Arab Emirates University
Corresponding Author: Ms Marilia Silva Paulo ([email protected])
Categories: Assessment, Clinical Skills, Research in Health Professions Education, Undergraduate/Graduate
Published Date: 22/08/2018

Abstract

Competency-based education and training has become a key component of healthcare systems across the globe. Ensuring that healthcare professionals are able to assess their own competencies is critical for continued professional development and the delivery of high-quality care. The aim of this study was to assess how medical students perceive their performance on an objective structured clinical examination. Using a cross-sectional study design, a sample of Emirati third- and fourth-year (preclinical) medical students (N=106; 56.4% response rate) was recruited from the United Arab Emirates University in Al Ain, United Arab Emirates. Medical students completed a short non-invasive clinical task (i.e. measuring and recording blood pressure and performing hand hygiene) followed by a structured survey to self-assess their performance and skills. Trained assessors used a clinical skills observation checklist tool to score each student’s performance. According to the observed performance, 27.36% of medical students performed the objective structured clinical task adequately. In contrast, 69.52% rated their own performance as adequate, and only 8.49% of medical students rated their own clinical skills as below average. This study did not find evidence that medical students can accurately assess their own clinical skills and performance. To support the delivery of high-quality healthcare, it is important that medical students develop the ability to accurately assess their own clinical skills and performance early in their medical careers. Teaching and appraising self-reflection is an important component of any undergraduate or postgraduate medical degree program.

Keywords: medical education; objective structured clinical examination; self-assessment; United Arab Emirates

Introduction

Medical education plays an important role in maintaining and improving the quality of a country’s healthcare system (Khalid, 2008). Medical students must acquire many competencies before graduation, such as clinical knowledge and expertise, professional integrity, empathy, communication skills, and conceptual thinking (Patterson et al., 2000; Huenges et al., 2017). To achieve these competencies, future doctors need to be able to accurately self-assess and appraise their multiple skills, in addition to recognizing their limitations (Huenges et al., 2017). In this paper, we assumed that competency involves multiple skills. Healthcare providers and educators are moving towards competency-based education and assessment, and a lack of self-assessment skills among healthcare professionals can act as a barrier to self-paced learning (Graves, Lalla and Young, 2017). Self-assessment has multiple definitions in the literature, and the term has also been used to describe self-reflection or self-evaluation. Andrade and Du (2007, p. 160) define each of these concepts independently; in this paper we used their definition of self-assessment as the “process of formative assessment during which students reflect on and evaluate the quality of their work and their learning, judge the degree to which they reflect explicitly stated goals or criteria, identify strengths and weaknesses in their work and revise accordingly” (Andrade and Du, 2007). Studies have found that physicians often assess themselves as being more competent than they actually are (Davis et al., 2006). Therefore, introducing self-assessment for medical students may help them to accurately assess their own skills and competencies in the future. Accurate self-assessment of personal and professional capabilities is now seen as essential for success as healthcare professionals and for the delivery of high-quality care (Alwi and Sidhu, 2013).

The Objective Structured Clinical Examination (OSCE) is a comprehensive evaluation tool that has been used to assess the competencies of medical students in the majority of medical schools worldwide (Zayyan, 2011). The OSCE assesses clinical skills, counselling, and communication-based competencies through direct observation (Zayyan, 2011). The OSCE has been widely used over the past two decades and can be defined as a “timed examination in which medical students interact with a series of simulated patients in stations” (Zayyan, 2011). The OSCE comprises several clinical stations, usually 10-12, where the student performs tasks including history-taking, physical examinations, counselling or patient management, and clinical procedures. The student is required to complete the task within a set time limit and according to well-defined criteria for each specific clinical skill. These clinical tasks are normally assessed by trained assessors from the medical faculty (Zayyan, 2011; Kim, 2016).

This study took place in the United Arab Emirates (UAE), an independent federation consisting of seven Emirates with a total population of approximately 9.1 million people in 2016 (Federal Competitiveness and Statistics Authority, 2017). It is a relatively young, high-income country, established in 1971 (Abdel-Razig and Alameri, 2013), with a strong government-led desire to build a world-class healthcare system to improve the health of its population (Koornneef, Robben and Blair, 2017). The World Health Organization has described the Eastern Mediterranean Region, where the UAE is located, as a region facing major healthcare workforce challenges. Specifically, the UAE faces a shortage of UAE national healthcare workers, a high reliance on expatriate staff, limited capacity to produce health professionals, and a high turnover of expatriate healthcare workers (World Health Organization, 2017). In this context, the present study focuses on one of these challenges: the limited capacity to educate and train an adequate number of UAE national healthcare professionals.

The main objective of the study was to explore the differences between students’ self-assessments and trained assessors’ OSCE scores. Our hypothesis was that medical students who rated their clinical skills and competencies as adequate would also achieve a higher observed overall OSCE score.

Methods

Study design

A cross-sectional study was used to investigate the relationship between self-perceived performance from medical students and trained-assessor rated OSCE performance. The STrengthening the Reporting of OBservational studies in Epidemiology (STROBE) Statement was used to structure this paper (Elm et al., 2007).

Setting

The study was conducted at the clinical skills simulation centre of the College of Medicine and Health Sciences of the United Arab Emirates University, the largest public university in the UAE. Data collection occurred over two consecutive days in April 2016. This study was approved by the institution’s Social Sciences Research Ethics Committee (ERS_2015_3212).

Participants

Medical students from the six-year Doctor of Medicine (M.D.) program at the College of Medicine and Health Sciences were the study population. Preclinical students (third and fourth year) were invited to participate in the survey and to perform a specific non-invasive clinical task (measuring blood pressure).

Variables

The study variables were overall OSCE score, student self-assessed performance, and self-reported clinical skills. The last two variables were measured by survey statements rated on a five-point Likert scale. The variable self-assessed performance was defined by the survey statement “Overall, I think that I performed the OSCE to the best of my abilities”, rated as strongly disagree (1), disagree (2), neither (3), agree (4) or strongly agree (5). The variable self-reported clinical skills was defined by the statement “I would rate my own clinical skills and competence as”, categorized into (1) poor, (2) fair, (3) average, (4) good and (5) very good.

The dependent variable, overall OSCE score, was created by combining the scores of the clinical skills observation tool completed by the observers. The trained observers were faculty and staff from the College of Medicine. Their professional qualifications made them eligible to assess the blood pressure measurement task, and they were specifically trained to evaluate the quality of hand hygiene practice, having successfully completed a two-hour online hand hygiene course from Hand Hygiene Australia and a bespoke two-hour face-to-face practical course prepared by the authors.

Data sources/measurements

To accomplish our research objective, we used a cross-sectional survey and a clinical observation tool to collect the data. The survey was designed specifically for this study, and the design process took into consideration a review of other papers and surveys (Makkai and Braithwaite, 1996; Sunshine and Tyler, 2003; Murphy, Tyler and Curtis, 2009; Tyler, Mentovich and Satyavada, 2014). The survey formed part of a larger study exploring medical students’ perceptions of healthcare regulation (Koornneef et al., no date) and included questions regarding the two variables mentioned above (self-perceived performance and self-reported clinical skills).

The clinical skills observation tool was designed in consideration of other observation tools used to assess OSCEs, for example, OSCEstop (OSCEstop, 2013). The tool collected data on four major parts: preparation (including introducing oneself to the patient), hand hygiene before the clinical task, blood pressure measurement (the clinical task performed at the OSCE), and hand hygiene after the clinical task, with hand hygiene assessed against the WHO standards. Observers rated each part on a Likert scale ranging from one to three (one – performed adequately; two – attempted, but performed inadequately; three – not attempted).
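For illustration only (this is not the authors’ analysis code), the checklist rating just described can be sketched in a few lines of Python. The item names and ratings below are hypothetical; only the 1–3 scale matches the one the observers used, and averaging is one plausible way of combining item ratings into an overall score in the reported 1.0–2.6 range:

```python
# Hypothetical assessor ratings for one student on the 1-3 observation scale:
# 1 = performed adequately, 2 = attempted but performed inadequately,
# 3 = not attempted
checklist = {
    "preparation": 1,                 # e.g. introducing oneself to the patient
    "hand_hygiene_before": 2,
    "blood_pressure_measurement": 1,  # the clinical task itself
    "hand_hygiene_after": 2,
}

# Combine the item ratings into an overall score on the same 1.0-3.0 scale
overall_score = sum(checklist.values()) / len(checklist)
print(overall_score)  # 1.5 for this hypothetical student
```

Lower overall scores are better on this scale, since 1 denotes adequate performance on every item.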

Eligible medical students received an email invitation to participate in the research study one week before the study took place. Students who were willing to participate booked a slot or ‘walked in’ at the clinical skills simulation centre during the two days of data collection. Upon arrival, the students received a brief description of the study and the consent process, and were asked to read and sign a consent form. A research assistant explained the study as follows: the participant would perform a short non-invasive clinical task – measuring and recording a person’s blood pressure – and complete the survey afterwards. Students were randomly assigned to one of the four available clinical skills simulation rooms. One of the observers played the role of the “standardized patient”, while the other pretended to be completing a Sudoku book but covertly observed the student performing the OSCE and completed the clinical skills observation tool. Usually an OSCE is a circuit of stations, but as this OSCE was designed specifically for this study, it comprised only one station with one clinical task. At the end of the task, the participant was asked to complete the survey and received a Certificate of Attendance. All students had received the same training on performing the clinical task and were aware of the key steps involved in completing the task correctly and in accordance with UAE health regulations.

Bias

To minimize potential bias, the observers were not known to the students, were always of the same gender as the participants, and were trained and experienced in observing students’ OSCE performance. In addition, each participant was randomly assigned to the clinical room where the OSCE was carried out, and the layout of the clinical observation rooms was identical. Students were unaware of (blinded to) the covert assessor role of the research assistant who pretended to complete the Sudoku book while they performed their clinical task. This blinding was used to minimize any possible Hawthorne effect (i.e. the observer effect in which an individual modifies their behaviour in response to awareness of being observed).

Study sample size

All undergraduate medical students from the third and fourth year (N=188) were invited to participate in the study. Of the 188 students, 106 participated in our study (56.38% response rate).

Quantitative variables/Statistical methods

Descriptive statistical techniques were used to describe the dependent variable (trained-assessor rated overall OSCE score) and the two independent variables under analysis: self-perceived performance and self-reported clinical skills. A t-test was used to test the difference in the dependent variable between genders. An ANOVA was used to test the difference in overall OSCE score across the categories of the independent variables. All tests were performed at a significance level of α=5%.
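As a sketch of how such an ANOVA comparison works (the scores below are invented values on the 1–3 observation scale, not the study’s data), the one-way F-statistic across Likert response groups can be computed by hand:

```python
from statistics import mean

# Invented overall OSCE scores, grouped by self-perceived performance category
groups = [
    [1.6, 1.7, 1.8],        # e.g. "disagree"
    [1.5, 1.7, 1.9, 1.6],   # e.g. "neither"
    [1.7, 1.8, 1.6, 1.7],   # e.g. "agree"
]

grand_mean = mean(x for g in groups for x in g)
k = len(groups)                    # number of groups
n = sum(len(g) for g in groups)    # total number of observations

# Between-group and within-group sums of squares
ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)

# F = mean square between groups / mean square within groups
f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
```

A small F-statistic (relative to the F-distribution with k−1 and n−k degrees of freedom) corresponds to a large p-value, i.e. no evidence that the group means differ, which is the pattern the study reports.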

Results/Analysis

Participants

A total of 106 medical students participated in our study, representing 31.80% of all undergraduate medical students at the university. All students in the College of Medicine and Health Sciences at the United Arab Emirates University are UAE nationals, and 77.40% of participants were female. The male-to-female ratio in the study sample is similar to the gender distribution of the medical student population in the college.

Main results

When asked if they performed the OSCE to the best of their abilities, the majority (69.52%) of students answered agree or strongly agree, while nearly a third (30.48%) of students self-assessed their performance as neutral (neither) or negative (disagree or strongly disagree) (Figure 1).

Figure 1: Medical students’ self-perceived performance after the OSCE.

Over half of the students (55.66%) self-reported their clinical skills as ‘good’ and only 8.49% considered their clinical skills below average (Figure 2). None of the students rated their clinical skills and competencies as ‘poor’.

Figure 2: Medical students’ self-reported clinical skills.

According to the observed scores, 27.36% of students performed the OSCE task ‘adequately’, while 72.64% were rated as ‘attempted, but performed inadequately’. No student was rated as ‘not attempted’.

The mean (±SD) of the trained-assessor observed OSCE overall score was 1.7±0.0, minimum of 1.0 and maximum of 2.6. The mean (±SD) of the trained-assessor observed OSCE score for females was 1.7±0.0 and for males was 1.6±0.0 (Figure 3). This difference was not statistically significant (p=0.794).

Figure 3: Overall OSCE score per gender.

The students who ‘strongly disagreed’ and those who ‘neither agreed nor disagreed’ that they performed the OSCE at their best had mean overall OSCE scores of 1.6±0.1 and 1.6±0.0, respectively (Figure 4). The students who ‘strongly agreed’, ‘agreed’ and ‘disagreed’ had similar mean overall OSCE scores, differing only at the first decimal place. An ANOVA was calculated to assess the difference across the students’ perceived performance categories, and there were no statistically significant differences (p=0.763).

Figure 4: Overall OSCE score and medical students’ self-perceived performance.

The students who reported their clinical skills as ‘fair’ showed the highest mean (±SD) overall OSCE score (1.8±0.1), while the students who reported their clinical skills as ‘good’ or ‘very good’ had mean (±SD) overall OSCE scores of 1.7±0.0 and 1.7±0.1, respectively. There was no statistically significant association between self-reported clinical skills and overall OSCE score (p=0.6).

Figure 5: Overall OSCE score and medical students’ self-reported clinical skills.

The variance across gender, self-perceived performance, and self-reported clinical skills groups was not statistically significant (p=0.492).

Discussion

Key results

The key result is that this study did not find evidence to support the hypothesis that medical students in the preclinical phase can accurately self-assess their own skills, competencies and performance. In other words, the lack of a statistically significant association between mean overall OSCE scores and the two self-rated variables may indicate that medical students in the preclinical phase have not yet developed the self-reflection skills necessary to accurately appraise their own performance. There was no gender difference among the medical students in either self-assessment or trained-assessor observed overall OSCE score. These findings are similar to those of Andrade and Du, who explored attitudes toward and beliefs about self-assessment in undergraduate teacher education students in the United States and did not find differences in the responses of male and female students (Andrade and Du, 2007).

Limitations

The undergraduate preclinical medical students who participated in the present study represented nearly a third (31.80%) of the total medical students at the United Arab Emirates University. One limitation of this study is that it relied on a convenience sample from one of six medical universities in the UAE and included only third- and fourth-year preclinical medical students.

Interpretation

Only one-quarter of preclinical medical students performed the OSCE adequately. However, the majority of the students reported a positive self-assessment when asked if they performed the OSCE to the best of their ability. A similar study in Oman compared students’ self-assessments with trained-assessor OSCE scores in 60 medical students; the results showed that the students consistently overestimated their performance in four of the 12 items while underestimating their performance in the remaining eight items (Jahan et al., 2014).

Almost 70% of participants self-reported their clinical skills as good or very good and stated that they had completed the OSCE to the best of their ability. This is in stark contrast with the trained-assessor OSCE appraisal, which found that only 27% of students performed the OSCE task adequately. Other studies have found similar discrepancies. A systematic review of 20 studies on the accuracy of physician self-assessment compared with observed assessments showed that physicians did not accurately self-assess in the majority of the studies, and reported only weak or no associations between self-rated and externally observed assessments (Davis et al., 2006). Inaccurate self-assessment is also frequently reported in medical students, across several specialties and training levels (Eftekhar et al., 2012; Graves, Lalla and Young, 2017; Huenges et al., 2017). The timing of assessment has been shown to play a role in student self-reflection. A study examining the self-rated competencies of 168 medical students pre- and post-OSCE showed that students lowered their self-ratings after a family medicine objective examination, although not significantly for family medicine-specific skills (Graves, Lalla and Young, 2017). A study of 244 trainees specializing in general practice revealed that the method of self-assessment was perceived as useful, but only 57% of the sample opted for self-assessment combined with individual feedback on their strengths and weaknesses (Huenges et al., 2017). Self-assessment is a complex process of internalization and self-regulation (Andrade and Du, 2007), and many medical students may not have developed the necessary cognitive skills and reflective practices during their undergraduate medical degrees to provide a realistic self-appraisal.
Therefore, providing sufficient time for students to develop their self-reflection skills is an important component of any undergraduate or postgraduate medical degree programme.

Some authors have questioned the reliability of self-assessment (Davis et al., 2006; Chen et al., 2008; Graves, Lalla and Young, 2017). Medical students have reported that if subjective self-rating is to be used as a formal aspect of a medical education program, it should be complemented with formative feedback from supervisors (Huenges et al., 2017). Accordingly, several researchers advise developing comprehensive continuing professional education programs that include portfolios, document practice-based learning and improvement activities, and set less general, more detailed learning objectives (Davis et al., 2006; Huenges et al., 2017). It is also important to include direct observation in clinical training, which has long been a standard in medical education and is linked to students’ self-confidence in their final year (Chen et al., 2008). For future studies involving medical students, we suggest including a third way of measuring clinical competencies, peer review; this would ensure a triangulated measurement: self, peer and external assessments (Colthart et al., 2008).

Conclusion

The self-assessment of medical students was not related to trained-assessor OSCE scores in this study. To foster good practice in future healthcare professionals, specifically physicians, it is important to understand the discrepancies between medical students’ self-perception and their actual observed performance. Further research is required to provide a deeper understanding of the factors underlying the discrepancy between student self-assessment and trained-assessed performance. Such detailed information would allow educators to create better learning environments with more effective self-assessment strategies. This paper contributes to understanding the current training of Emirati medical students in the UAE, in support of UAE Vision 2021, the 2030 Agenda for Sustainable Development, and Universal Health Coverage.

Take Home Messages

Only a quarter of all students performed the OSCE task adequately, but almost 70% of participants self-reported their clinical skills as good. Study findings did not provide evidence to support the hypothesis that medical students can accurately self-assess their own competencies; however, study findings indicate that teaching self-reflection is an important component of any undergraduate or postgraduate medical degree programme.

Notes On Contributors

Erik Koornneef is a PhD candidate with the Erasmus School of Health Policy and Management, Erasmus University, Rotterdam, the Netherlands and the Head of Healthcare for Cognit, a Mubadala and IBM Watson joint venture. His research interest is focused on the effectiveness of regulation in healthcare and innovative ways to improve healthcare quality.

Dr Tom Loney is an Associate Professor in Public Health and Epidemiology in the College of Medicine at the Mohammed Bin Rashid University of Medicine and Health Sciences in Dubai, United Arab Emirates. Part of his research portfolio focusses on improving undergraduate and postgraduate medical, dental, and public health education.

Dr Ahmed R. Alsuwaidi is an Associate Professor of Pediatrics and Infectious Diseases in the College of Medicine and Health Sciences at the United Arab Emirates University in Al Ain, United Arab Emirates. His current research interests include vaccine-preventable diseases, tuberculosis, respiratory viral infections, and medical education.

Marilia Silva Paulo is completing her PhD in Health Policies and Development at the Institute of Hygiene and Tropical Medicine, NOVA University, Lisbon, Portugal. Her current research focusses on the development of healthcare services to improve chronic care services for patients with diabetes, cardiovascular diseases, and cancer.

Acknowledgements

None.

Bibliography/References

Abdel-Razig, S. and Alameri, H. (2013) ‘Restructuring Graduate Medical Education to Meet the Health Care Needs of Emirati Citizens’, Journal of Graduate Medical Education, 5(2), pp. 195–200. https://doi.org/10.4300/JGME-05-03-41

Alwi, N. F. B. and Sidhu, G. K. (2013) ‘Oral Presentation: Self-perceived Competence and Actual Performance among UiTM Business Faculty Students’, Procedia - Social and Behavioral Sciences, 90 (October), pp. 98–106. https://doi.org/10.1016/j.sbspro.2013.07.070

Andrade, H. and Du, Y. (2007) ‘Student responses to criteria-referenced self-assessment’, Assessment & Evaluation in Higher Education. Routledge, 32(2), pp. 159–181. https://doi.org/10.1080/02602930600801928

Chen, W., Liao, S. C., Tsai, C. H., Huang, C. C., et al. (2008) ‘Clinical skills in final-year medical students: The relationship between self-reported confidence and direct observation by faculty or residents’, Annals of the Academy of Medicine Singapore, 37(1), pp. 3–8.

Colthart, I., Bagnall, G., Evans, A., Allbutt, H., et al. (2008) ‘The effectiveness of self-assessment on the identification of learner needs, learner activity, and impact on clinical practice: BEME Guide no. 10.’, Medical teacher, 30(2), pp. 124–45. https://doi.org/10.1080/01421590701881699

Davis, D. A., Mazmanian, P. E., Fordis, M., Van Harrison, R., et al. (2006) ‘Accuracy of Physician Self-assessment Compared With Observed Measures of Competence’, JAMA, 296(9), p. 1094. https://doi.org/10.1001/jama.296.9.1094

Eftekhar, H., Labad, A., Anvari, P., Jamali, A., et al. (2012) ‘Association of the pre-internship objective structured clinical examination in final year medical students with comprehensive written examinations’, Medical Education Online, 1, pp. 1–7. https://doi.org/10.3402/meo.v17i0.15958

Elm, E. von, Altman, D. G., Egger, M., Pocock, S. J., et al. (2007) ‘Strengthening the reporting of observational studies in epidemiology (STROBE) statement: guidelines for reporting observational studies’, BMJ, 335(7624), pp. 806–808. https://doi.org/10.1136/bmj.39335.541782.AD

Federal Competitiveness and Statistics Authority (2017) Population of the United Arab Emirates - Population in United Arab Emirates - UAE Open Data Portal. Available at: http://fcsa.gov.ae/en-us (Accessed: 26 February 2018).

Graves, L., Lalla, L. and Young, M. (2017) ‘Evaluation of perceived and actual competency in a family medicine objective structured clinical examination.’, Canadian family physician Medecin de famille canadien, 63(4), pp. e238–e243.

Huenges, B., Woestmann, B., Ruff-Dietrich, S. and Rusche, H. (2017) ‘Self-Assessment of competence during post-graduate training in general medicine: A preliminary study to develop a portfolio for further education.’, GMS journal for medical education, 34(5), p. Doc68. https://doi.org/10.3205/zma001145

Jahan, F., Moazzam, M., Norrish, M. and Naeem, S. M. (2014) ‘Comparison of the medical students’ self-assessment and simulated patients’ evaluation of students’ communication skills in Family Medicine Objective Structured Clinical’, Middle east journal of family medicine, 12(9), pp. 27–35.

Khalid, B. A. A. (2008) ‘The current status of medical education in the Gulf Cooperation Council countries.’, Annals of Saudi medicine, 28(2), pp. 83–8. https://doi.org/10.5144/0256-4947.2008.83

Kim, K.-J. (2016) ‘Factors associated with medical student test anxiety in objective structured clinical examinations: a preliminary study’, International Journal of Medical Education, 7, pp. 424–427. https://doi.org/10.5116/ijme.5845.caec

Koornneef, E. J., Dariel, A., Elbarazi, I., Robben, P. B. M., et al. (no date) ‘Surveillance cues do not enhance altruistic behavior among strangers in the field’.

Koornneef, E., Robben, P. and Blair, I. (2017) ‘Progress and outcomes of health systems reform in the United Arab Emirates: a systematic review’, BMC Health Services Research. BMC Health Services Research, 17(1), p. 672. https://doi.org/10.1186/s12913-017-2597-1

Makkai, T. and Braithwaite, J. (1996) ‘Procedural justice and regulatory compliance.’, Law and Human Behavior, 20(1), pp. 83–98. https://doi.org/10.1007/BF01499133

Murphy, K., Tyler, T. R. and Curtis, A. (2009) ‘Nurturing regulatory compliance: Is procedural justice effective when people question the legitimacy of the law?’, Regulation & Governance. Blackwell Publishing Asia, 3(1), pp. 1–26. https://doi.org/10.1111/j.1748-5991.2009.01043.x

OSCEstop (2013) Blood Pressure Measurement. Available at: http://oscestop.com/Blood_pressure.pdf (Accessed: 17 December 2017).

Patterson, F., Ferguson, E., Lane, P., Farrell, K., et al. (2000) ‘A competency model for general practice: implications for selection, training, and development.’, The British journal of general practice : the journal of the Royal College of General Practitioners. Royal College of General Practitioners, 50(452), pp. 188–93. Available at: https://www.ncbi.nlm.nih.gov/pubmed/10750226 (Accessed: 19 December 2017).

Sunshine, J. and Tyler, T. R. (2003) ‘The Role of Procedural Justice and Legitimacy in Shaping Public Support for Policing’, Law Society Review. Blackwell Publishing, 37(3), pp. 513–548. https://doi.org/10.1111/1540-5893.3703002

Tyler, T., Mentovich, A. and Satyavada, S. (2014) ‘What motivates adherence to medical recommendations? The procedural justice approach to gaining deference in the medical arena’, Regulation & Governance, 8(3), pp. 350–370. https://doi.org/10.1111/rego.12043

World Health Organization (2017) ‘Framework for Action for Health Workforce Development’. Available at: http://www.emro.who.int/images/stories/hrh/Strategic_framework_for_health_workforce_development_MAY_2017_3.pdf (Accessed: 20 November 2017).

Zayyan, M. (2011) ‘Objective structured clinical examination: The assessment of choice’, Oman Medical Journal, 26(4), pp. 219–222. https://doi.org/10.5001/omj.2011.55

Appendices

None.

Declarations

There are no conflicts of interest.
This has been published under Creative Commons "CC BY-SA 4.0" (https://creativecommons.org/licenses/by-sa/4.0/)

Ethics Statement

This study was approved by the institution’s Social Sciences Research Ethics Committee (ERS_2015_3212).

External Funding

This paper has not received any external funding.

Reviews


Ken Masters - (02/06/2019)
An interesting paper discussing the differences between self-perceived performance and observed performance in an OSCE among medical students in the UAE. The authors begin by establishing the medical necessity of having medical students and professionals who are able to assess their own level of competency. The study design has been well constructed and is well-described.

The authors should, however, have reflected more deeply on the implications of the differences and the possible causes. While the difference in self-reporting and assessing is certainly problematic, the validity of the assumption that the students have made the error is based upon the premises that (a) the assessors’ assessment is accurate, and (b) the assessors’ assessment matches what the students have been taught. (This is because the students’ self-assessment will naturally be based upon a comparison of what they have been taught and what they did.)

The troubling aspect to this, and one which calls for further exploration, is not the large difference: the main problem is that the assessors found only 27% of the students to be performing the task adequately. This would indicate that there is something seriously wrong. The error may be (a) teaching, or (b) a disjuncture between what the assessors are expecting and what the students have been taught, or (c) Others, which may include ill-suited students or incompetent assessors. Until this has been investigated, one could just as easily argue that the students are correct in their self-assessment, but the assessors are wrong. The only thing we know is a difference, but we have no way of knowing the cause of that difference. For one to conclude that the students are at fault, a more detailed study of the other alternatives is required. (Although the authors touch on this in their conclusion, the tone of the paper implies that, whatever the causes, they are on the students’ side).

Some smaller issues with the paper:
• It would have been preferable if the abstract had been laid out with the structured headings.
• There is a line “None of the students did not attempted.” which needs addressing.


So, a useful study, but the authors should have gone quite a bit deeper into investigating the possible causes of the discrepancy; at the very least, they should acknowledge that their study shows only the discrepancy, and that one cannot conclude the reasons in any direction, and further investigation is necessary.

Possible Conflict of Interest:

For Transparency: I am an Associate Editor of MedEdPublish.

Richard Hays - (05/12/2018)
Thank you for this interesting paper which adds to the evidence base indicating that self-assessment in the higher stakes context of a formal assessment probably cannot be relied on. The finding is not new, but is documented in another context, suggesting that this phenomenon crosses cultures, education and health systems, and nationalities. Just as most people self-rate their driving skills as superior, candidates in examinations often have a higher innate sense of ability than expert judges. Self-assessment may be more reliable in formative assessment, but in this case my interpretation is that the assessment was voluntary and not part of formal progress assessment - is that correct? Perhaps the authors could clarify if the observation took place after standardised training in the skill that was observed.
This is not clear, yet the low rate of success on rater assessment (30%) raises questions about the teaching. One interesting issue that is not explored here, but could be, is the relationship between a gap between self and rater assessments and acceptance of feedback. That would earn more stars.
Possible Conflict of Interest:

I am the Editor of MedEdPublish