Personal view or opinion piece
Open Access

Regarding the Focus on Metrics in the Residency Application Process

Richard Prayson[1]

Institution: 1. Cleveland Clinic
Corresponding Author: Dr Richard Prayson ([email protected])
Categories: Assessment, Students/Trainees, Postgraduate (including Speciality Training)
Published Date: 11/07/2018


No Abstract was necessary for this article.

Keywords: grades, tests, medical school metrics, applying for postgraduate training


More recently, some medical schools have experimented with moving away from grades, utilizing a pass/fail approach to segments of the curriculum, particularly in the first few years of school. The benefits of this approach are myriad and are documented in the literature. Rohe et al. demonstrated that pass-fail grading in first-year courses reduced student stress, improved student mood and facilitated group cohesion and collegiality (Rohe et al. 2006). Grades tend to engender extrinsic motivation in learners rather than the more desirable intrinsic motivation (Forsyth, 2002). Grades also tend to decrease student interest in what they are learning, create preferences for easier tasks and reduce the quality of student thinking (Kohn, 2011). One study suggested that using a pass-fail approach to grading in year 2 of medical school, although associated with a slight decrease in performance on preclinical exams administered by the school, did not impact performance on the USMLE Step 1 examination (McDuff et al. 2014). Spring and colleagues concluded in their literature review that pass-fail grading did not adversely impact objective academic performance and that it enhanced student well-being (Spring et al. 2011). They also noted, however, that residency program directors believe that pass-fail grading is disadvantageous to students in the residency application process (Spring et al. 2011).

During this last residency application cycle, a number of medical students approached me with concerns and stories about how they believed the lack of grades and a rank was adversely impacting how their applications were being considered by residency program directors. These students are part of a small five-year program focused on training students who have an interest in becoming physician scientists (Dannefer and Henson, 2007). During their medical school training, no tests are administered, no grades are given, no ranking is generated and no honors society designation is awarded in any of the five years of the program. The program utilizes a competency-based portfolio system as its principal means of assessment. Students regularly receive narrative feedback on their performance throughout and are asked to review how they are doing with respect to milestones that have been developed for each competency, based on their level of schooling. Students are asked to reflect on their feedback, to identify targeted areas for improvement, to construct learning plans to address those areas and to monitor their progress with subsequent feedback. The goal is to foster reflective practice and to help students internalize this approach to self-improvement, which they will hopefully employ for the rest of their careers.

Since the beginning of the program, there have been ongoing anecdotal concerns voiced by students regarding the impact of this assessment approach on the residency application process. These seem to have increased in more recent years, as programs rely more heavily on metrics to wade through growing numbers of applications. Students have indicated that some program directors have told them that there is no way of knowing how they did in school without grades, rank or an honors designation; as a result, their applications were simply set aside. After all, there are plenty of applicants who have many or all of the other metrics, so why struggle with an application that does not? Some residency programs use a scoring rubric that takes into account board scores, rank, grades and honor society status; without many of those parameters, an application receives a low score on these rubrics. Students have described situations in which they called programs to follow up on the status of their applications, only to be told that they were not being invited because of the lack of grades or rank and that someone from the school needed to, or should have, called to explain the situation, even though it is delineated in the Dean's letter (MSPE). Even when interviewed, most students indicate that they are commonly asked many questions about the program's assessment system, with interviewers challenging them to explain, "How do I know if you are clinically competent to take care of patients?" The lack of other school-generated metrics often results in undue emphasis being placed on the only metrics that do exist, the board exam scores. This further increases the stress of preparing for what is now viewed as a very high-stakes test, one which is, at best, an imperfect measure of a limited aspect of medical knowledge and reasoning and is not a reliable predictor of one's ability to be a good physician.

So, how does one address these issues, given that the situation is not likely to change in the near future? First, it is important to keep perspective. The majority of residency programs are open to this assessment approach, particularly given the more recent move of residency assessment toward competencies and milestones; the approaches are not too dissimilar. Most program directors, I believe, take a somewhat holistic look at the entire application. Second, students need to be prepared to discuss the portfolio assessment approach, how it works and what they perceive as its benefits. Each year, students report being asked a disproportionate number of questions about the program and its assessment process; these questions often take up a good portion of some interviews, and most arise out of a true interest in understanding and learning about portfolios. This is a good thing! Third, students need to be proactive in following up when rejected for an interview or when they have not heard anything from a program in a reasonable period of time; gentle inquiries have sometimes revealed that they were screened out because of missing metrics, and this provides an opportunity to explain and educate. Sometimes this results in an invitation. At the end of the day, our students, by doing well in residency and practicing the skills of self-reflection and self-improvement as residents, demonstrate that there are other ways to approach assessment. In the words of Albert Schweitzer, "Example is not the main thing in influencing others. It is the only thing."

Take Home Messages

There are many advantages to being in a medical school environment in which traditional grading metrics are not utilized.  There are, however, implications for the student in applying to postgraduate or residency training where these metrics seem to be heavily relied upon to screen applications.

Notes On Contributors

The contributor is Director of Student Affairs and the program's career advisor. The program generates no tests, grades or class rank; in lieu of these metrics, a portfolio assessment system is used.


References

Dannefer, E.F., Henson, L.C. (2007) ‘The portfolio approach to competency-based assessment at the Cleveland Clinic Lerner College of Medicine’, Academic Medicine, 82, pp. 493-502.

Forsyth, D.R. (2002) The Professor's Guide to Teaching: Psychological Principles and Practice. Washington, DC: American Psychological Association.

Kohn, A. (2011) ‘The case against grades’, Educational Leadership.

McDuff, S.G.R., McDuff, S., Farace, J.A., Kelly, C.J., et al. (2014) ‘Evaluating a grading change at UCSD School of Medicine: pass/fail grading is associated with decreased performance on preclinical exams but unchanged performance on USMLE Step 1 scores’, BMC Medical Education, 14, p. 127.

Rohe, D.E., Barrier, P.A., Clark, M.M., Cook, D.A., et al. (2006) ‘The benefits of pass-fail grading on stress, mood and group cohesion in medical students’, Mayo Clinic Proceedings, 81(11), pp.1443-1448.

Spring, L., Robillard, D., Gehlbach, L., Simas, T.A.M. (2011) ‘Impact of pass/fail grading on medical students’ well-being and academic outcomes’, Medical Education, 45, pp. 867-877.




There are no conflicts of interest.
This has been published under Creative Commons "CC BY-SA 4.0".

Ethics Statement

No IRB approval required.

External Funding

This article has not received any external funding.


Reviews

P Ravi Shankar - (12/07/2018)
This article raises a number of interesting questions. Traditionally we have relied on metrics and numbers for grading students, and the assumption was that the higher the scores, the better the student. Over the years there has been a trend toward objectivity in assessment and away from judgment. We have tried to standardize the assessment process and break it down into manageable chunks. Assessment has moved away from the messy, often chaotic world of real-life practice toward a standard simulated environment.
This increases objectivity and reduces inter-rater or examiner variation, but the downside may be that the assessment becomes more detached from real-life practice. Competency-based assessment and portfolios are becoming more common. This article highlights the challenges faced by students from a medical school which does not use numbers in the grading process. Faced with increasing numbers of applicants, it is common for residency directors to choose an arbitrary cut-off mark in a high-stakes examination below which they will not consider applicants. This article will be of interest to a wide body of medical educators.
Ronald M Harden - (11/07/2018)
As demonstrated in the updated Ottawa consensus report on selection, to be published shortly in Medical Teacher, huge progress has been made on the selection of students for medical studies. Continuing effort is needed on selection for training posts after students complete their undergraduate studies. Some of the problems are highlighted in this personal view, which merits reading. Reference is made to portfolio assessment. When we introduced the portfolio as a key assessment instrument in the final qualifying examination in Dundee, we had no difficulty finding clinicians to serve as examiners, as they found that reading a student's portfolio and talking with the student about it was valuable in identifying the students they wished to appoint as junior doctors on their firm.
Subha Ramani - (11/07/2018)
The author makes very important points regarding grades, or lack thereof, and their potential effects on residency selection. If the pass/fail system promotes internal motivation and collaboration rather than competition among students, then should medical education not be hailing this approach? However, the confusion it may cause among program directors when selecting applicants for interviews is real. The top programs are typically intent on recruiting the top candidates, and grades of "Honors" and "High Honors" make the selection process simpler when reviewing large volumes of applications. Having said that, I agree that it is important to be able to distinguish the more outstanding students from their peers in the pass/fail system, and to identify the domains in which they are outstanding. Orientation of program directors and core educators on selection committees is critical so that they are able to effectively and efficiently review applications through a holistic lens. Scores and grades may not make the most motivated and reflective practitioners, but the system needs to be trained in this new approach.
All medical educators involved in selection of trainees will find that this perspective stimulates reflection on the process.