Disruptive Innovation: successful implementation of electronic marking in Objective Structured Clinical Examinations – a single institution case study

Problem: Objective Structured Clinical Examinations are commonly included in the suite of assessment tasks in health professional programs. Conducting these assessments using paper-based methods has presented challenges that have been identified across a range of disciplines, including laborious administrative processes, missing assessment data, and delays in data analysis and result release, which may limit the capacity to provide feedback to students.
Intervention: The implementation of an electronic marking system was piloted in an attempt to address these issues. The pilot included the identification and testing of the software, and evaluation of its impact on the conduct and outcomes of the assessment and on the identified challenges. The outcomes were considered through the lens of disruptive innovation.
Context: This medical program has an annual intake of more than 500 students, with learning, teaching and assessment occurring at multiple distributed campuses in metropolitan, regional and rural areas of Queensland, Australia, and in the United States of America. This results in a complex and resource-intensive assessment and data management process. The identified challenges were compounded when the examination was delivered concurrently at several examination centres to a large cohort.
Outcomes: The introduction of an electronic marking system was feasible across multiple examination sites and with a large cohort, and was found to be more reliable and efficient than traditional paper-based assessment systems. Examiner feedback regarding the accessibility and acceptability of electronic marking on tablet devices for clinical assessments was favourable, with enhancements to qualitative feedback and no adverse impact on student results identified.
Lessons Learned: The utilization of an electronic marking system in Objective Structured Clinical Examinations was found to be consistent with both disruptive and sustaining innovation. The opportunity to provide immediate feedback was consistent with the definition of a disruptive innovation, while the improved data management and processing outcomes were consistent with a sustaining innovation. Furthermore, electronic marking technology is likely to be transferable to other competency- and workplace-based assessments.


Introduction
In 2014, the use of an electronic marking system for Objective Structured Clinical Examinations (OSCEs) was piloted by a medical school in Queensland, Australia, to address a number of challenges that had been identified in previous years. These included the complexity of the preparatory administrative processes, the need to ensure a complete data set after the OSCEs, the requirement for efficient access to assessment data for result analysis and processing by both academic and administrative staff, and the provision of meaningful and timely feedback to students. These challenges are not unique to this program and have been described in other medical and health professional programs. [1][2][3] In this institution, however, other factors were also relevant, including a large cohort size and the geographically distributed nature of the medical program. The medical school enrols more than 500 students per academic year into a four-year postgraduate medical program, and has clinical teaching sites in urban, regional, and rural centres, as well as international locations. Summative OSCEs are included in the assessment program at the end of Year 2 and Year 4 and are conducted concurrently at a number of these sites, although not at the international clinical sites. Prior to 2014, OSCEs were administered by traditional paper-based assessment methods. Because of the cohort size, OSCEs were run in four half-day examination sessions across both days of a weekend to allow access to appropriately equipped venues. Four equivalent papers were developed for each OSCE, and the administrative preparation for such a complex undertaking was both labour-intensive and time-consuming. In a resource-constrained environment this was not sustainable, and the key innovation to ensure that these high-stakes examinations remained feasible was the introduction of an electronic marking system.
The challenges with data management when utilizing paper-based assessments have been described in the literature and include identifying and remedying missing results and incomplete or inaccurate data entry. 4 Addressing these tasks after the completion of an OSCE consumes an appreciable amount of academic and administrative staff time, especially when coordination must be spread across distributed examination centres. Schmitz and colleagues concluded that up to sixty percent of examination papers have missing or incomplete results, though in our experience this figure was much lower. 4 However minimal, any missing data requires further clarification. This can be difficult to achieve and is a threat to the validity of the assessment item for an individual student, or for a whole cohort if the missing data rate is high. Manual data entry of results from paper-based assessment sheets is time-consuming and potentially susceptible to error, especially for a large data set. Consequently, results release to students may be delayed.
The provision of feedback on learning to students was identified by Harden as one of the important considerations in this assessment design. 5 This aim had not been optimally realised at this medical school, due in part to the use of paper-based marking sheets and the associated administrative issues noted previously. Access to examiners' narrative feedback in real time using the electronic marking system makes this goal achievable. In addition, the use of electronic marking may change the nature of the feedback to students, with Denison and colleagues reporting that both the quantity and quality of narrative feedback were enhanced by the use of tablet devices in OSCE marking, in particular for poorer performing students. 6
Rogers discussed the factors that may facilitate or hinder the acceptance and implementation of new technology. These included the characteristics of the technology and of the users, and how users are persuaded to adopt the technology. He noted that the uptake of an innovation was aided when the innovation provided attributes desired by the users, when the proposers and the users shared similar personal characteristics, and when there was a staged implementation process for the innovation. 7
Christensen defined a disruptive technology as one that triggers a new practice or process, beginning with inferior performance metrics but supported by a small number of early adopters and expanding to ultimately dominate the pre-existing process. The technology is not designed to be a disruptor; rather, disruption occurs because of the application of the technology. 8 Christensen and Raynor further emphasised this by renaming the term 'disruptive innovation'. 9 This is in contrast to sustaining innovation, which produces a quality improvement but does not, in essence, change the existing process. 10
Christensen applied this definition to studies of higher education in the USA and concluded that, whilst there has been broad adoption of technology in educational activities, the technology has not resulted in disruptive innovation in this field. He further contended that an important marker of technology producing a disruptive innovation in higher education will be the development of personalised learning opportunities. 8,9 Viewing the implementation of the electronic marking system for OSCEs through this lens allows further understanding of the scope of educational technology and its potential contribution to the goal of enhanced personalised, student-centred learning.

Project Implementation
The primary aim of this project was to evaluate whether the introduction of an electronic marking system addressed the identified challenges. A secondary aim was to assess the effect of the introduction of electronic marking in OSCEs through the lens of disruptive innovation. The OSCE examiners' views on the use of tablet devices compared with paper marking sheets were particularly relevant to both aims.
The planning, development, and implementation of the transition from paper to digital formats in high-stakes assessments such as OSCEs require a team-based approach combining administrative, clinical, and educational expertise. The successful introduction of electronic marking requires careful attention to each aspect of Zeleny's technology support network. 11 In relation to the OSCE, this is the core network of physical, organisational, administrative and cultural structures that forms the basis of the social relationships among the human participants in the preparation, delivery and finalisation of the OSCE.
The electronic marking system pilot was conducted in October 2014 during the Year 2 summative OSCE at a regional site with eighty-five students and twenty-five examiners. Ethics approval for this project was obtained from the university human ethics review committee (University of Queensland, approval number 2014001396). The existing paper-based assessment templates were easily adapted to the software format, and uploading the examination to the server and downloading it to the tablet devices was uncomplicated and time-efficient. A demonstration of the software during the examiner briefing and a short hard-copy user guide were adequate to upskill the examiners. Paper-based examination scripts and assessment sheets were available on site in case of software or tablet device malfunction; however, no significant problems were encountered with the electronic marking system during the pilot. The use of the software had an immediate and positive effect on each of the challenges previously identified. Based on this outcome, the pilot project was extended to include the 2014 Year 4 OSCE at two examination centres (a major metropolitan site and a single rural site), and the Year 2 and Year 4 extended OSCEs.
Following the successful completion of these assessments, the use of the electronic marking system was reviewed with each local team of administrative and academic staff. The 2014 OSCE results were congruent with previous years' results, and academic staff were satisfied that the introduction of the electronic marking system had no discernible impact on cohort results.
The examiners who took part in the pilot (n=165) were emailed a link to an online survey requesting their feedback regarding the acceptability and utility of the electronic marking system. The survey was anonymous and voluntary. The response rate for completed surveys was 58% (n=96). The respondents represented a diversity of age, gender and clinical backgrounds. Of this cohort of examiners, seventy-five percent (75%) were experienced examiners, defined as having examined at two or more previous School of Medicine OSCEs. When questioned about previous experience with tablet devices, twelve percent (12%) of the respondents stated they had never used a tablet device before the OSCE, and forty-six percent (46%) self-identified as basic users. Ninety-two percent (92%) of the respondents reported that marking using a tablet device was easier than using paper-based checklists. These findings are consistent with those of Schmitz and colleagues, who found the software to have a usability rating of 6.5 on a 7-point Likert scale, a strong examiner preference over paper-based checklists, and lower levels of mental effort required of examiners using tablet devices. 4

Outcomes
For this large distributed medical school, the success of the electronic marking system pilot was significant in many areas. The administrative preparation time for OSCEs was reduced, with a flow-on effect of more manageable timelines for the secure distribution of the final examination material to the various examination centres. Data management was streamlined by having complete data sets with no requirement for manual data entry or location of missing data. The ease of results upload and download allowed early availability of results for analysis by academic staff, which in turn allowed prompt release of final results to students.
Although not formally evaluated, it was noticeable that the use of tablet devices for marking also enhanced the quantity, quality and legibility of the written comments from examiners, consistent with Denison's findings. 6 This allowed immediate verbal feedback to be given to the student cohort on exiting their examination session, based on real-time analysis of the examiners' comments. These comments were also emailed to students individually after the OSCE, providing a more meaningful and specific foundation for personalized student learning.
At this medical school, there were no additional resource implications for the pilot project in terms of information technology support, licensing fees or tablet devices. Importantly, no data security issues were identified with the implementation of the electronic marking process.

Limitations of this project
A formal evaluation of the change in administrative preparation time and of the impacts on the other identified challenges may have been useful; however, anecdotal evidence of significant improvement in all areas was sufficient that the School has not returned to the paper-based assessment format since this successful pilot. The electronic marking system has subsequently been utilized in all OSCEs in the program, as well as in other clinical skills assessments. Because tablet devices were already available within the School at the time of the pilot, the resource implications may have been smaller at this medical school than in other medical programs.

Future research directions
The successful implementation of electronic marking systems in clinical assessments leads to a number of related research questions, including the impact of electronic marking on examiner cognitive load, on case writing, and on inter-rater reliability.