Research article
Open Access

The impact of student absences on grade outcomes in a neurology clerkship setting

Tiana E. Cruz[1], Rebecca Riddell[2], Charlene E. Gamaldo[2], Roy E. Strowd[3], Christopher Oakley[2], Rachel Marie E. Salas[2][a]

Institution: 1. University of Maryland, 2. Johns Hopkins University School of Medicine, 3. Wake Forest School of Medicine
Corresponding Author: Dr Rachel Marie E. Salas ([email protected])
Categories: Scholarship/Publishing
Published Date: 04/07/2019

Abstract

Purpose: To evaluate the potential association between student absenteeism and performance on the National Board of Medical Examiners Clinical Neurology Subject Exam (NBME), a Standardized Patient Exam (SPEX), and the overall Clinical Performance Assessment (CPA) in the Neurology Core Clerkship (NCC).

 

Methods: All medical students rotating through the required 4-week NCC at Johns Hopkins University School of Medicine (JHUSOM) were enrolled in this IRB-approved, retrospective cohort study. Student absences, NBME, SPEX, CPA, and overall final clerkship grades were recorded.

 

Results: Between 2014 and 2017, 252 students were enrolled. Of the total sample, 68 students (27.0%) were absent from the clerkship. The median absence length was one day, and the range was 0.25 to 7.00 days. The coefficient for "other" absences (defined in the text) was negative and statistically significant in the NBME model (p = 0.01). This model suggests that, for an average student, one day's absence for reasons categorized as "other" is associated with a decrease of slightly more than two points (2.36) in the NBME percent correct score.

 

Conclusion: The data presented here indicate that performance on the NBME graded component decreases with increased "other" absenteeism, suggesting that increased absences may have a negative effect on student performance in the NCC.

 

Keywords: Neurology clerkship; student absences; grade impact; medical education; grade outcomes

Introduction

Prior studies have explored the impact of absenteeism on objective measures of learning in medical education. One such report noted that first-year medical students who passed the examinations required for promotion to the second year had significantly lower reported absence rates than students who failed these mandatory requirements (Yusoff, 2014). Even studies monitoring rates of absenteeism in more contemporary teaching models have found an association between performance and presence. For instance, in a study of a contemporary "flipped classroom" approach in which attendance at large group sessions was voluntary, students who missed three or more large group sessions performed significantly worse than their peers (Laird-Fick et al., 2018). Other studies have shown that absence from classes or lectures even into the third year (BinSaeed et al., 2009) and fourth year (Hamdi, 2006) of medical school can significantly affect academic performance.

Though absenteeism in medical school is expected to affect clinical performance measures, findings on the potential relationship, and on its directionality, remain mixed. A growing body of literature suggests that attendance is not necessary for good academic performance, and the tools used to measure acumen and clinical proficiency vary widely. In fact, some studies have reported no association between absentee patterns and grades across year one (Luder, 2016) or year two (Eisen et al., 2015; Kauffman et al., 2018) of medical school and that, for highly confident students at least, skipping learning sessions does not appear to impact final exam grades (Kauffman et al., 2018). Other studies suggest that attendance at in-class sessions is no longer a good marker of performance; rather, a student's level of self-efficacy and ability to self-regulate are better predictors of how well they can perform while missing in-class sessions (Kauffman et al., 2018).

Many studies that attempt to evaluate the impact of absenteeism on student grade outcomes in medical education examine only first-year medical students, whose curriculum consists more heavily of in-class sessions or lectures. Studies that investigate students in the clerkship years tend to focus on attendance at the in-class didactic portions of the curriculum rather than attendance in the clinical setting. The graded components of the later years of medical student education are complicated by clinical rotations, where significant portions of a student's grade are comprised of clinical performance assessments or standardized patient exams; the results of these assessments are often not captured in traditional exam grades, and it is unclear whether performance on these measures is affected by absences from in-class portions of the curriculum. Studies that have examined advanced medical students (i.e., fourth-year medical students) have found that absences from clinical and tutorial-based activities were related to lower overall clinical subject examination scores (Deane and Murphy, 2013), but further study in this area is needed, and other graded components, such as clinical performance assessments and standardized patient exams, remain largely unexplored. In addition, clinical performance assessments or evaluations often include some measure of professional conduct as a metric. Although professional conduct can be difficult to measure, absences can be included as a consideration (Burns et al., 2017).

Finally, students have various reasons to be absent from portions of their medical education. Personal situations such as family emergencies or illnesses can arise for students at any stage, causing them to be absent for various lengths of time. Additionally, medical school brings with it many educational or professional development opportunities, such as residency interviews or presentations at conferences, that also require the student to miss time from the educational curriculum. It is not clear whether different types of absences, or different reasons for absences, have differing effects on educational outcomes.

This study aims to evaluate the relationship between absenteeism and performance on a variety of assessment metrics typically used in the clinical years of the medical school curriculum and collected in a core clerkship rotation, including standardized written exams, clinical performance assessments, and standardized patient exams. The study also aims to explore whether different types of absences, or different reasons for absences, have differing effects on educational outcomes. It is hypothesized that absences will have a negative relationship with student performance, such that a large number of absences of any type may be associated with poorer performance relative to a student with fewer absences.

Methods

Participants

Participants for this study were medical students enrolled in the Neurology Core Clerkship (NCC) at Johns Hopkins University School of Medicine (JHUSOM). The NCC at JHUSOM consists of a four-week rotation, organized into two groups of students for each eight-week block (i.e., the first four-week rotation consists of approximately 15 students, labeled group A, and the second four-week rotation consists of another 15 students, labeled group B). There are five blocks in the medical education calendar at JHUSOM; each block includes an A group and a B group of students. All medical students from academic year (AY) 2014-15 Block 2A through AY 2016-17 Block 3A were enrolled in this IRB-approved study. In total, 252 medical students (50.8% male, 49.2% female), representing medical student years 2 through 4, were enrolled during this period (Table 1). Other demographic data (e.g., age, race/ethnicity) were not available and are therefore not reported.

 

Table 1. Students by Academic Year, Medical Student Year, and Sex*

| Academic Year | Male | Female | MS2 | MS3 | MS4 |
|---|---|---|---|---|---|
| AY 2014 – 15 | 42 (53.8) | 36 (46.2) | 23 (29.5) | 53 (67.9) | 2 (2.6) |
| AY 2015 – 16 | 59 (50.9) | 57 (49.1) | 14 (12.1) | 75 (64.7) | 27 (23.3) |
| AY 2016 – 17 | 27 (46.6) | 31 (53.4) | 0 (0.0) | 40 (69.0) | 18 (31.0) |
| AY Total | 128 (50.8) | 124 (49.2) | 37 (14.7) | 168 (66.7) | 47 (18.7) |

*Values are reported as n (% of Academic Year).

   

Procedures

Student absence information

The student absence information includes the number of absences, rounded to quarter days (i.e., a quarter-day absence was defined as missing two hours of a typical eight-hour work day), and the reason(s) for absence(s) for the 252 students. Absence reasons were categorized as either "educational" absences (e.g., Admissions Committee meeting, presentation at a professional conference, Dean's Letter meeting) or "other" absences (i.e., illness, religious holiday, or other personal absence). Table 2 describes the reasons for absences and Table 3 describes absences by absence type.
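To make the quarter-day convention concrete, the short SAS sketch below converts a number of hours missed into quarter-day units; the dataset (absences) and variable (hours_missed) are hypothetical names used for illustration only and are not part of the study's actual data structure.

```sas
/* Minimal sketch: convert hours missed into quarter-day absence units.        */
/* Assumes a hypothetical dataset ABSENCES with a numeric HOURS_MISSED column; */
/* an eight-hour work day is used, per the clerkship definition.               */
data absences_qtr;
    set absences;
    /* round(x, 0.25) rounds to the nearest quarter day, so 2 of 8 hours -> 0.25 */
    quarter_days = round(hours_missed / 8, 0.25);
run;
```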

 

For the purposes of this study, absences were not categorized as "excused" versus "unexcused" based on the attendance policy of the Johns Hopkins Neurology Clerkship. According to the attendance policy, students are permitted two absences for any reason during the clerkship without having to make up missed time; these absences would traditionally be categorized as excused. Beyond these two "excused" absences, all other absences for any reason would traditionally be categorized as unexcused, and the missed time would need to be made up during non-work hours (e.g., evenings or weekends). Because both excused and unexcused absences could be granted for academic or other/personal reasons, the excused/unexcused taxonomy was not useful for the purposes of this study.

 

Table 2. Description of Reasons for Absence*

| Reason | n (%) of Students | Median Absence Length in Days | Range of Absence Length | Absence Type** |
|---|---|---|---|---|
| Illness / Doctor's Appointment | 27 (10.7) | 1.00 | 0.25 – 2.00 | Other |
| Personal | 13 (5.2) | 1.00 | 0.50 – 2.00 | Other |
| Residency Interview | 12 (4.8) | 2.00 | 1.00 – 7.00 | Educational |
| Comprehensive Clinical Skills Exam | 8 (3.2) | 1.00 | 1.00 | Educational |
| Conference Presentation | 6 (2.4) | 2.00 | 0.50 – 2.00 | Educational |
| Religious Holiday | 4 (1.6) | 1.00 | 1.00 | Other |
| Admissions Committee | 3 (1.2) | 2.00 | 1.00 – 2.00 | Educational |
| Student Assessment and Program Evaluation Committee | 3 (1.2) | 0.50 | 0.25 – 0.50 | Educational |
| Dean's Letter Meeting | 2 (0.8) | 0.25 | 0.25 | Educational |
| NBME Step 2 Exam | 1 (0.4) | 1.00 | 1.00 | Educational |

*Table only includes students with absences; n = 68 students.

**Absence Type was defined by the study project team to assist with analyses.

 

Table 3. Description of Absences by Absence Type and MS Year*

| | Absence Type**: Educational | Absence Type**: Other | MS2 | MS3 | MS4 |
|---|---|---|---|---|---|
| n (%) of Students | 32 (12.7) | 38 (15.1) | 8 (3.2) | 36 (14.3) | 24 (9.5) |
| Median Absence Length in Days | 1.00 | 1.00 | 1.50 | 1.00 | 2.00 |
| Range of Absence Length | 0.25 – 7.00 | 0.25 – 2.50 | 0.88 – 2.00 | 0.69 – 1.00 | 1.00 – 2.63 |

*Table only includes students with absences; n = 68 students.

**Absence Type was defined by the study project team to assist with analyses.

 

Student academic performance

The student academic performance information includes each student’s clinical performance assessment (CPA) mean score (i.e., the faculty and resident assessments of the students’ performance), standardized patient exam percent score (SPEX), National Board of Medical Examiners (NBME) Clinical Neurology Subject Exam percent score, and overall final grade (i.e., Fail, Pass, High Pass, Honors).  For the NBME Clinical Neurology Subject Exam, note that any scores originally reported as scaled scores were converted to percent scores using the relevant NBME conversion table. 

 

The three assessments are weighted in the overall final grade as follows: Clinical Performance Assessment is 35%, Standardized Patient Exam is 25%, and NBME Clinical Neurology Subject Exam is 25%.  Note that the other assessment(s) that contributed to the remaining 15% of the overall final grade were unavailable as these graded components changed across the period used for analysis.
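For illustration only, the sketch below combines the three known components with these weights; it assumes the five-point CPA mean is rescaled to a 0-100 scale before weighting and omits the remaining 15% of the grade, whose components were unavailable. The clerkship's actual procedure for converting these scores into a categorical final grade is not described here, so this should not be read as the official grading algorithm.

```sas
/* Illustrative sketch only: partial weighted composite from the three known components. */
/* Assumes hypothetical variables CPA_MEAN (0-5 scale), SPEX_PCT and NBME_PCT (0-100)    */
/* in a hypothetical analysis dataset PERFORMANCE.                                       */
data grade_sketch;
    set performance;
    cpa_pct   = (cpa_mean / 5) * 100;                          /* assumed rescaling of the 5-point CPA */
    composite = 0.35*cpa_pct + 0.25*spex_pct + 0.25*nbme_pct;  /* remaining 15% of the grade not included */
run;
```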

 

Student learning abilities

In addition to the student absence and academic performance information, each student's Medical College Admission Test (MCAT) score, taken from the student's American Medical College Application Service application, was extracted from the JHUSOM Student Outcomes Research Database (SORD). This information was matched by students' first and last names. All 252 students were found in SORD, but two students did not have a corresponding MCAT score.
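The record-linkage step described above can be sketched as a simple sort-and-merge in SAS; the dataset and variable names below are assumptions for illustration and do not reflect the actual SORD structure.

```sas
/* Minimal sketch of matching clerkship records to SORD by first and last name. */
/* Dataset and variable names are hypothetical.                                 */
proc sort data=clerkship; by last_name first_name; run;
proc sort data=sord;      by last_name first_name; run;

data merged;
    merge clerkship (in=in_clerk)
          sord      (keep=last_name first_name mcat);
    by last_name first_name;
    if in_clerk;   /* keep all clerkship students; MCAT stays missing where no score was found */
run;
```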

 

The MCAT score was used as a proxy for baseline learning ability to help the regression models control for differences in learning abilities across students; the MCAT variable was included in all tested models, even when it was not statistically significant, because including it helped to explain more of the variability in the assessment scores.

 

Analyses

Descriptive statistics were calculated for the sample, including the number of students with and without absences during the study period and average scores for the MCAT and for the NBME, CPA, and SPEX outcome variables.

 

Next, a Kruskal-Wallis test and post-hoc Mann-Whitney U tests were used to explore differences in the number of absences by MS year and sex. Finally, linear regression modeling was used to better understand the potential relationship between student performance and absences. First, univariate analyses were used to describe the available independent variables and their relationships with student performance in the NCC. Following the univariate analyses, two regression models were estimated for each of the three assessments: one using the students' total absences and another separating educational absences from "other" absences (as defined above and described in Table 2).

 

All analyses were completed using SAS 9.4 (SAS Institute; Cary, NC).
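As a sketch of the nonparametric comparisons described above, PROC NPAR1WAY with the WILCOXON option yields the Kruskal-Wallis test across medical student years, and restricting the data to two groups at a time gives the post-hoc Mann-Whitney U (Wilcoxon rank-sum) comparisons. Dataset and variable names are illustrative assumptions, not the study's actual code.

```sas
/* Kruskal-Wallis test of total absences across MS years (hypothetical dataset/variables). */
proc npar1way data=clerkship wilcoxon;
    class ms_year;          /* MS2, MS3, MS4 */
    var total_absences;
run;

/* Post-hoc Mann-Whitney U (Wilcoxon rank-sum) test for one pairwise comparison, e.g. MS3 vs MS4. */
proc npar1way data=clerkship wilcoxon;
    where ms_year in ('MS3', 'MS4');
    class ms_year;
    var total_absences;
run;
```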

Results/Analysis

Descriptive

Of the 252 students, 184 students (73%) had no absences.  Absence length ranged from 0.25 – 7.00 days absent depending on the absence type; Table 3 shows a description of absences by absence type and MS year, and Table 4 shows the distribution of students’ total absences. 

 

Table 4. Number of Students by Academic Year and Number of Total Absences*

| Number of Absences** | AY 2014-15 | AY 2015-16 | AY 2016-17 | Total |
|---|---|---|---|---|
| No absences | 63 (80.8) | 80 (69.0) | 41 (70.7) | 184 (73.0) |
| > 0 to < 1 | 3 (3.8) | 7 (6.0) | 3 (5.2) | 13 (5.2) |
| 1 to < 2 | 7 (9.0) | 17 (14.7) | 8 (13.8) | 32 (12.7) |
| 2 to < 3 | 5 (6.4) | 8 (6.9) | 3 (5.2) | 16 (6.3) |
| 3 to < 4 | 0 (0.0) | 2 (1.7) | 1 (1.7) | 3 (1.2) |
| 4 to < 5 | 0 (0.0) | 2 (1.7) | 0 (0.0) | 2 (0.8) |
| > 5 | 0 (0.0) | 0 (0.0) | 2 (3.4) | 2 (0.8) |
| AY Total | 78 (100.0) | 116 (100.0) | 58 (100.0) | 252 (100.0) |

*Values are reported as n (% of Academic Year).

**Number of Total Absences in Days.

For this sample, the average MCAT score was 35.5 (SD = 3.0) and the average NBME score was 79.9 (SD = 6.9; 95% CI = 73.1-86.8). The SPEX median was 93.0 (interquartile range = 89.0-96.0) and the CPA median was 4.61 (interquartile range = 4.36-4.78). NBME exam scores were more normally distributed than the other two performance measures (CPA and SPEX scores).

 

Differences across MS year and sex

A Kruskal-Wallis test indicated that total absences differed significantly by medical student year (p < 0.0001); post-hoc Mann-Whitney U tests showed that MS4 students had significantly more total absences than MS2 students (p = 0.003) and MS3 students (p < 0.0001; see Figure 1). The differences appear to be related to the large number of absences for the residency application process in the fourth year; when absences for residency interviews and residency-related meetings were excluded, the Kruskal-Wallis test no longer indicated significantly different total absences by medical student year (p = 0.38). For the current sample, more than half of the MS4 students missed at least some time from clerkship activities (see Table 3). There were no significant differences in the number of absences between sexes.

 

Impact of absences on student academic performance

There were no statistically significant differences in any of the assessment scores by sex. Sex was therefore not included in any of the regression models; including it did not help to explain more of the variability in the assessment scores.

 

Impact of absences on Clinical Performance Assessments

The correlation between absences and CPA scores was not statistically significant (see Table 5).  Neither the first regression model with total absences nor the second regression model with educational and “other” absences supported the hypothesis that absences are negatively associated with CPA scores.  

 

Table 5. Univariate Analyses of Assessments and Predictor Variables

| Predictor Variable | CPA Relationship | CPA p-value | SPEX Relationship | SPEX p-value | NBME Relationship | NBME p-value |
|---|---|---|---|---|---|---|
| Total Absences* | –0.07 | 0.28 | –0.14 | 0.03 | –0.07 | 0.07 |
| Educational Absences*, ** | –0.04 | 0.48 | –0.11 | 0.08 | 0.00 | 0.94 |
| Other Absences*, ** | –0.06 | 0.38 | –0.07 | 0.26 | –0.15 | 0.02 |
| Medical Student Year***, ****: MS2 | 4.57 (4.35 – 4.83) | 0.39 | 96.0 (91.0 – 98.0) | 0.008+ | 78.5 ± 6.1 (72.4 – 84.6) | 0.30 |
| Medical Student Year: MS3 | 4.62 (4.38 – 4.80) | | 93.0 (88.8 – 96.0) | | 80.4 ± 7.2 (73.1 – 87.6) | |
| Medical Student Year: MS4 | 4.55 (4.32 – 4.76) | | 91.0 (88.5 – 94.0) | | 79.5 ± 6.0 (73.4 – 85.5) | |
| Gender^, #: F | 4.59 (4.36 – 4.77) | 0.43 | 93.0 (89.8 – 96.0) | 0.77 | 79.8 ± 6.7 (73.0 – 86.5) | 0.69 |
| Gender: M | 4.62 (4.36 – 4.80) | | 92.5 (88.0 – 96.0) | | 80.1 ± 7.1 (73.1 – 87.2) | |
| Medical College Admission Test*, ++ | 0.14 | 0.02 | 0.11 | 0.09 | 0.24 | <0.0001 |

CPA = Clinical Performance Assessment; SPEX = Standardized Patient Exam; NBME = National Board of Medical Examiners Clinical Neurology Subject Exam; MS = Medical Student; F = Female; M = Male

*Values are presented as the Pearson product-moment correlation coefficient and associated p-value.

**Absence Type was defined by the study project team to assist with analyses.

***Relationship values are presented as median (interquartile range) or mean ± standard deviation (95% confidence interval). Analyses conducted using Kruskal-Wallis or ANOVA, as appropriate.

****MS2: n = 37/252 (14.7%); MS3: n = 168/252 (66.7%); MS4: n = 47/252 (18.7%)

+Follow-up Mann-Whitney U tests showed statistically significant differences between MS2 – MS3 (p = 0.01) and MS2 – MS4 (p = 0.004).

^Relationship values are presented as median (interquartile range) and associated p-value from Mann-Whitney U, or mean ± standard deviation (95% confidence interval) and associated p-value from Student's t-test.

#Female: n = 124/252 (49.2%); Male: n = 128/252 (50.8%)

++Medical College Admission Test scores provided by SORD. Information was unavailable for two students; n = 250/252 (99.2%).

 

 

Impact of absences on Standardized Patient Exam Scores

The correlation between total absences and SPEX scores was negative and statistically significant, such that a greater number of total absences was weakly associated with lower SPEX scores (see Table 5). However, neither the first regression model with total absences nor the second regression model with educational and "other" absences supported the hypothesis that absences are negatively associated with SPEX scores; when controlling for MCAT and medical student year, the association between more total absences and lower SPEX scores was no longer significant. In this model, for each additional day absent, SPEX scores were 0.69 points lower (p = 0.07). Nonetheless, medical student year was statistically significant, and follow-up analyses revealed that MS2 students, on average and regardless of absences, achieved a SPEX percent correct score approximately 2 points higher than MS3 students and 2.5 points higher than MS4 students (see Table 6).

 

Table 6. Standardized Patient Exam Regression Models*, **

| Model | Dependent Variable | Predictor Variable+ | Estimate | p-value | Adjusted R Squared |
|---|---|---|---|---|---|
| SPEX1 | Standardized Patient Exam | Intercept | 88.731 | <0.0001 | 0.04 |
| | | Total Absences | –0.685 | 0.07 | |
| | | MCAT | 0.161 | 0.12 | |
| | | dMS3 | –2.260 | 0.01 | |
| | | dMS4 | –2.407 | 0.03 | |
| SPEX2 | Standardized Patient Exam | Intercept | 88.723 | <0.0001 | 0.04 |
| | | Educational Absences | –0.601 | 0.16 | |
| | | Other Absences | –0.916 | 0.18 | |
| | | MCAT | 0.162 | 0.12 | |
| | | dMS3 | –2.252 | 0.01 | |
| | | dMS4 | –2.485 | 0.03 | |

SPEX = Standardized Patient Exam; MCAT = Medical College Admission Test; MS = Medical Student Year

*Parameter estimates, associated p-values, and Adjusted R Squared values were obtained from Proc Reg linear regression modelling in SAS.

**n = 250/252 (99.2%); MCAT information was unavailable for two students.

+The predictor variables that begin with a 'd' (i.e., dMS3 and dMS4) are categorical variables used to model differences in the intercept for subgroups of students.

 

Impact of absences on NBME Clinical Neurology Subject Exam

While total absences were not significantly associated with lower NBME scores, more "other" absences were weakly associated with lower scores (r = -0.15, p = 0.02; see Table 5). The first regression model, with total absences, did not support the hypothesis that absences are negatively associated with NBME scores. However, the second regression model, with educational and "other" absences entered separately, did: when controlling for MCAT and MS year, a statistically significant association was observed, with NBME exam scores 2.4 points lower for every day absent for "other" reasons (p = 0.01; see Table 7). There was no significant association between NBME scores and educational absences.

 

Table 7. NBME Clinical Neurology Subject Exam Regression Models*, **

| Model | Dependent Variable | Predictor Variable | Estimate | p-value | Adjusted R Squared |
|---|---|---|---|---|---|
| NBME1 | NBME Clinical Neurology Subject Exam | Intercept | 58.646 | <0.0001 | 0.07 |
| | | Total Absences | –3.063 | 0.06 | |
| | | MCAT | 0.579 | <0.0001 | |
| | | dMS3 | 1.497 | 0.26 | |
| | | dMS4 | 0.249 | 0.88 | |
| | | dMS3*Total Absences | 1.309 | 0.50 | |
| | | dMS4*Total Absences | 3.276 | 0.06 | |
| NBME2 | NBME Clinical Neurology Subject Exam | Intercept | 57.887 | <0.0001 | 0.08 |
| | | Educational Absences | 0.094 | 0.87 | |
| | | Other Absences | –2.356 | 0.01 | |
| | | MCAT | 0.586 | <0.0001 | |
| | | dMS3 | 2.060 | 0.09 | |
| | | dMS4 | 1.286 | 0.41 | |

NBME = National Board of Medical Examiners Clinical Neurology Subject Exam; MCAT = Medical College Admission Test

*Parameter estimates, associated p-values, and Adjusted R Squared values were obtained from Proc Reg and Proc Glm linear regression modelling in SAS.

**n = 250/252 (99.2%); MCAT information was unavailable for two students.
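A minimal sketch of the second NBME model (NBME2 in Table 7) is shown below; the dataset and variable names are illustrative assumptions, but the predictor set (educational absences, "other" absences, MCAT, and MS-year dummy variables with MS2 as the reference group) mirrors the reported model.

```sas
/* Sketch of the NBME2 model: NBME percent score regressed on educational and  */
/* "other" absence days, MCAT, and MS-year dummy variables (MS2 as reference). */
data model_data;
    set clerkship;                     /* hypothetical analysis dataset */
    dMS3 = (ms_year = 'MS3');          /* boolean expressions yield 1/0 dummies */
    dMS4 = (ms_year = 'MS4');
run;

proc reg data=model_data;
    model nbme_pct = educ_absences other_absences mcat dMS3 dMS4;
run;
quit;
```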

 

Figure 1. Total Absences by Medical Student Year

 

Impact of absences on overall final clerkship grade

There was no significant difference in final clerkship grade associated with increased absences.

Discussion

This study aimed to evaluate the relationship between absenteeism and a variety of assessment metrics in the neurology clerkship setting. Results showed that increased total absences were not associated with overall performance on the various graded components; rather, only absences defined as "other" in nature (e.g., illness, personal absence, religious holiday) were associated with certain graded components of the clerkship.

 

Overall, the analysis revealed an association between MS year and performance on the SPEX. For physical examinations such as the SPEX, MS year can be treated as a proxy for students' previous clerkship experience. It is reasonable to expect that students with less clerkship experience (e.g., MS2) would be more negatively affected by accruing absences than students with more clerkship experience (e.g., MS4), but the opposite was observed in our analysis. Although these findings seem unexpected at first, perhaps MS2 students are more nervous going into physical examinations and thus prepare more than senior students who have completed physical examinations in other clerkships. Additionally, this finding could reflect clerkship engagement rather than knowledge and skills, suggesting that MS2 students who are new to their clinical clerkships are more engaged than senior students who may have already selected a specialty and are more focused on applying for internship.

 

In terms of significant findings related to absenteeism, only the NBME scores were significantly associated with absences characterized as "other" in nature. As previously discussed, the NBME scores were more normally distributed than the CPA or SPEX measures; this greater variability in students' scores potentially explains why significant differences in this measure could be detected. Significant differences in the SPEX and CPA scores, on the other hand, would be harder to detect given the sample size and limited range of variability; most students are high performing on these measures. The non-significant CPA findings were not altogether unexpected given the known restricted use of the five-point scale on this assessment, whereby most students routinely receive scores between 4 and 5.

 

Additionally, our study showed that there was no significant impact on the overall clerkship grade. This is likely because the negative relationship between NBME scores and "other" absences may not be educationally significant given its relatively small effect on student performance. As an example, using results from our regression models, an MS3 student with average performance and one total "other" absence would be expected to achieve the same CPA mean score, the same SPEX percent correct score, and an NBME percent score approximately 2.36 points lower than a similar MS3 student without an absence. Since there were no other significant findings for the other graded outcomes, some graded outcomes were not available for analysis, and the final clerkship grade is categorized into Fail, Pass, High Pass, and Honors (with no students in our sample receiving a grade of Fail), it is not surprising that there was no significant impact on the overall clerkship grade. Despite the potential for some students to receive slightly lower NBME or knowledge scores because of increased absences, it is possible that these students buffer this impact with higher scores in other areas.

 

Furthermore, absenteeism can be used as a marker of professionalism (Burns et al., 2017), which is often a consideration in overall clerkship final grades. Although no differences were found on the CPA outcome in this study, due at least in part to the known restricted use of the assessment tool's scale, it is possible that groups of students, even among those who have absences from clerkships, differ from one another in ways that we were not able to identify in this study. Some students may repeatedly request large amounts of missed time across all clerkships (which may show patterns of absenteeism that are reflected in CPAs across clerkships), while others may have had individual personal or educational events that occurred only during the NCC. We are not able to comment on any potential differences between these two groups of students, as data from other clerkships were not available. Future study and collaboration with other departments would allow absences to be tracked longitudinally and this question to be clarified. More broadly, future studies in this area could examine the relationship between absenteeism and professionalism issues.

 

It is also important to consider the possibility that the models support only a weak relationship between absences and assessments because the current policy of the NCC at JHUSOM states that students are only permitted two excused absences during their four-week rotation.  Therefore, if students miss additional days after this two-day limit, they are required to make up the missed curricular activities.  For students with multiple absences, it is possible that make-up activities are effective replacements for missed activities and thus the impact of missed educational opportunities and exposure are, to some degree, recovered. 

 

Further study is recommended, since this analysis is limited by the small sample size (only approximately two academic years of data and small numbers of MS2 and MS4 students) and by a lack of predictor variables that might help explain the factors that influence student performance in clinical clerkships. To best understand and represent these relationships, regression models should include all relevant independent variables that assist with explaining the variability in the dependent variable. In thinking about student performance, relevant independent variables might include student-related information (e.g., number of previous clerkships, learning ability), clerkship-related information (e.g., preceptor engagement, clinical learning environment), as well as other less direct information such as age or even the socioeconomic status (SES) of a student's family. For example, a student with a lower SES might be struggling with issues such as food access that could affect their clerkship engagement or academic performance. For the current study, the availability of such information was limited. It is possible that absences affect certain demographic groups of students (e.g., by age, gender, or race/ethnicity) more than others, but our study was not able to evaluate this. Finally, as this study is entirely correlational, one limitation of this research is the inability to determine causality in regard to the impact of absences on clerkship performance.

 

Despite the limitations of this study, there remains significant opportunity for further exploration on the impact of absences on student performance in a clinical rotation setting.  In the current study, the timing of the students’ absences (e.g., beginning, middle, or end of the clerkship) was not tracked.  Perhaps students who miss time from the beginning of the clerkship will perform worse than their peers who do not have absences, as they could more easily fall behind their peers on learning the new material.  Additionally, whether the student misses time away from an outpatient versus an inpatient clinical rotation setting could impact their performance on later assessments.  This aspect of absence timing was also not explored in the current study.             

 

Finally, as the literature has already suggested, students who are more self-regulated or self-directed may conceivably perform better despite missing time from curricular activities (Kauffman et al., 2018). In the current study, we did not measure students' learning styles or preferences, but adding this measure and exploring its relationship with multiple assessment measures, or attempting to replicate previous research in a different setting, would be informative. Nonetheless, this study shows that, in a sample of primarily third-year medical students, absences in a clinical clerkship setting have a statistically significant negative association with performance on the NBME.

Conclusion

In summary, this study shows that student absences in a clinical clerkship setting, in a sample of primarily third-year medical students, have a statistically significant and negative association with students' performance on the NBME. It adds to the literature suggesting that absences have a negative impact for medical students outside of the classroom or lecture hall and provides support to clerkship directors who wish to limit students' missed time from clerkship activities. Ultimately, additional research, especially with a larger sample of students, could replicate and even expand on these findings and potentially show associations with other performance measures as well.

Take Home Messages

  • Absenteeism in medical school is expected to affect clinical performance measures 
  • This study found NBME scores to be significantly related to absenteeism
  • The literature suggests that students who are more self-regulated or self-directed may perform better despite absences

Notes On Contributors

Tiana E. Cruz, MA is a Research Analyst at the University of Maryland, College Park Counseling Center and former Coordinator for the Neurology Clerkship at Johns Hopkins School of Medicine. 

 

Rebecca Riddell, MS is a Data Governance, Analysis, and Reporting Specialist in the Office of Assessment and Evaluation at the Johns Hopkins School of Medicine.

 

Charlene E. Gamaldo, MD is an Associate Professor of Neurology, Psychiatry, Anesthesia/Critical Care, Nursing, and Public Health; Vice Chair for Faculty Development in the Department of Neurology; and Medical Director of the Johns Hopkins Center for Sleep.

 

Roy E. Strowd, MD, MEdHP is an Assistant Professor, Neurology, Internal Medicine, Section on Hematology and Oncology at Wake Forest School of Medicine in Winston-Salem, North Carolina. He is Director of the Health Professions Education Institute at Wake Forest and focuses on leadership development and mentoring in pre-medical, medical, and post-graduate health professions education.

 

Christopher Oakley, MD is an Assistant Professor of Neurology and Co-Director of the Neurology Clerkship at the Johns Hopkins University School of Medicine.

 

Rachel Marie E. Salas, MD, MEdHP is an Associate Professor in Neurology. She is the Director of Interprofessional Education and Interprofessional Collaborative Practice for the School of Medicine, Director of the Neurology Clerkship for the School of Medicine, and Director of the PreDoc Program at the Johns Hopkins University School of Medicine.

Acknowledgements

None.

Bibliography/References

BinSaeed, A.A., al-Otaibi, M.S., al-Ziyadi, H.G., Babsail, A.A., et al. (2009) 'Association between student absenteeism at a medical college and their academic grades', Journal of the International Association of Medical Science Educators, 19(4), pp. 155-159. http://www.iamse.org/mse-article/association-between-student-absenteeism-at-a-medical-college-and-their-academic-grades/ (Accessed: July 3, 2019)

 

Burns, C.A., Lambros, M.A., Atkinson, H.H., Russell, G., et al. (2017) 'Preclinical medical student observations associated with later professionalism concerns', Medical Teacher, 39(1), pp. 38-43. https://doi.org/10.1080/0142159X.2016.1230185

 

Deane, R.P., Murphy, D.J. (2013) 'Student attendance and academic performance in undergraduate obstetrics/gynecology clinical rotations', JAMA, 310(21), pp. 2282-2288. https://doi.org/10.1001/jama.2013.282228

 

Eisen, D.B., Schupp, C.W., Isseroff, R.R., Ibrahimi, O.A., et al. (2015) 'Does class attendance matter? Results from a second-year medical school dermatology cohort study', International Journal of Dermatology, 54(7), pp. 807-816. https://doi.org/10.1111/ijd.12816

 

Hamdi, A. (2006) 'Effects of lecture absenteeism on pharmacology course performance in medical students', Journal of the International Association of Medical Science Educators, 16(1). http://www.iamse.org/mse-article/effects-of-lecture-absenteeism-on-pharmacology-course-performance-in-medical-students/ (Accessed: July 3, 2019)

 

Kauffman, C.A., Derazin, M., Asmar, A., Kibble, J.D. (2018) 'Relationship between classroom attendance and examination performance in a second-year medical pathophysiology class', Advances in Physiology Education, 42(4), pp. 593-598. https://doi.org/10.1152/advan.00123.2018

 

Laird-Fick, H.S., Solomon, D.J., Parker, C.J., Wang, L. (2018) 'Attendance, engagement and performance in a medical school curriculum: Early findings from competency-based progress testing in a new medical school curriculum', PeerJ, 6, e5283. https://doi.org/10.7717/peerj.5283

 

Luder, A. (2016) 'Lecture attendance by medical students - is it a compelling issue?' Harefuah, 155(4), pp. 223-255. https://www.ncbi.nlm.nih.gov/pubmed/27323538

Appendices

None.

Declarations

There are some conflicts of interest:
Dr. Strowd contributed to the conception or design of the work, interpretation of the data and contributed to the drafting of the manuscript. Dr. Strowd serves as Deputy Section Editor of the Resident and Fellow Section of Neurology. Dr. Strowd has received salary support from the American Academy of Neurology for research unrelated to this project. Dr. Strowd receives consulting support from Innocrin Pharmaceuticals, Peloton Therapeutics, and Monterris Medical that are unrelated to this project.
This has been published under Creative Commons "CC BY-SA 4.0" (https://creativecommons.org/licenses/by-sa/4.0/)

Ethics Statement

This study was approved by the Johns Hopkins IRB: Reference #00156893.

External Funding

This article has not had any External Funding

Reviews


Ken Masters - (10/09/2019) Panel Member
An interesting paper on the relationship between student absences and grade outcomes in a neurology clerkship setting. I particularly like the fact that the authors examine a wide range of assessments, rather than only overall performance, so that one has a more granular view of the issue than I have seen in other papers. For the most part, the paper is well written, although there are areas that need to be addressed:

1. The most important is the assumption of a one-way causal impact. Although the authors speak of examining the association between absenteeism and performance, they too easily conflate association with causation. This is immediately obvious in the paper’s title which speaks of the “impact” of absenteeism, and they frequently refer to “impact” throughout the paper, including the Conclusions. While there is an association between the absenteeism and grade, the authors have not established a causality. (Yes, I have seen that the authors do write “Finally, as this study is entirely correlational, one limitation of this research is the inability to determine causality in regard to the impact of absences on clerkship performance.” But this is a single line comment buried in the Discussion, and, by the time the reader reaches that point, the argument of causality is firmly entrenched, and this line also appears to have been ignored in the repeated use of “impact”).

So, the single line in the Discussion is not enough. I would strongly recommend that, beginning with the paper's title, the authors carefully inspect their paper and, whenever they find any suggestion of causation or impact, modify it to association or correlation.

This includes the citing of papers. For example, the authors write: "Other studies have shown that absence from classes or lectures even into the third year (BinSaeed et al., 2009) and fourth year (Hamdi, 2006) of medical school can significantly affect academic performance." Yet
• BinSaeed et al state: “This study highlights that while student absenteeism may contribute to low achievement, the reverse is also possibly true, where low achievers are more likely to absent themselves than higher achievers.” This acknowledges that saying “affect” may not be accurate.
• Similarly, although Hamdi also conflates association with causality, nothing in that study supports this conflation, as Hamdi appears to examine statistical associations only.

This is especially significant in light of the other research cited in this paper that appears to show no association at all, and rather an association between other factors and performance.


2. Tables 2 and 3 are part of the Results, and would be better placed into that section of the paper.

3. The information in the paragraph in the Discussion that starts “It is also important to consider the possibility that…” should really be in the Introduction or the Methods, as this is crucial contextual information that has a direct bearing on the Results. (It can be referred to again in the Discussion, but should have been first mentioned earlier.)


Minor issues:
• There is an error on the Hamdi reference hypertext link. Although the text is written correctly, the hypertext link points to the BinSaeed reference.
• In Table 1, the total n for each academic year should be stated.
• “data…was” should be “data…were”
• Figure 1 is placed very far away from where it is first mentioned, and could probably be moved up to ease the reading experience.

So, although this is a good study, I am awarding only two stars because of the causation issue. If that were to be addressed in a revised version of the paper, then I could foresee a four-star rating.


Possible Conflict of Interest:

For transparency, I am an Associate Editor of MedEdPublish. However I have posted this review as a member of the review panel with relevant expertise and so this review represents a personal, not institutional, opinion.

Trevor Gibbs - (07/08/2019) Panel Member
An interesting paper that has an almost inevitable conclusion, but I do applaud the authors for tackling this difficult issue, which can add to the importance of dealing with the struggling student.
It's good to see the authors looking at the limitations of their research, which I do feel affect the final results and perhaps do not call for such a forthright conclusion. However, the authors quite rightly suggest further research in this area; I would suggest a more expansive field of study that covers cultural and geographical diversity.
I found the list of references particularly helpful, many of the papers I had not seen before.
Despite the limitations I would suggest this paper to any faculty responsible for assessment and the care of the struggling student.
Possible Conflict of Interest:

For transparency, I would inform that I am an Associate Editor of MedEdPublish