New education method or tool
Open Access

Using a modified Delphi process to develop a structured case-based handover assessment tool

Nicholas Holt[1][a], Zoe Hutcheson[1], Kirsty Crowe[1], Daniel Lynagh[1]

Institution: 1. Medical Education, NHS Lanarkshire
Corresponding Author: Dr Nicholas Holt ([email protected])
Categories: Assessment, Curriculum Planning, Continuing Professional Development, Research in Health Professions Education
Published Date: 19/01/2021



Objective assessment of medical handover skills is challenging. This short communication illustrates the use of a modified Delphi technique, drawing on both clinicians and educationalists, to produce a fair and robust tool for assessing medical student competence in handover.



Using three rounds in a reactive Delphi process, two case-specific assessment tools were formulated. In round two, clinical experts responded to pre-defined criteria set in round one, rather than being invited to develop their own benchmarks. In round three, medical educationalists identified the components that a student should be expected to hand over at their stage of training.


For Case 1, twenty-eight initial key handover components were refined down to eighteen which were then encompassed in the final Case 1 assessment tool.

For Case 2, thirty-one initial components were refined down to seven key handover components. These were then used to populate the final Case 2 assessment tool.


Through the survey of clinical and medical education experts, the modified Delphi process allows the formation of a standardised assessment for individual handover scenarios, whilst ensuring a reflection of current clinical best practice. It is, however, time consuming, and the final tool is not necessarily faultless, as it is invariably affected by the experiences and opinions of the group members. Continual validity judgements in assessment standardisation are essential.


The modified Delphi process allows the formulation of a robust and fair assessment tool, but challenges were encountered.

Keywords: Medical Education; Handover; Delphi process; Communication; SBAR


The Delphi survey is a recognised, reliable and established method for reaching a consensus of opinion from experts (Boath, Mucklow and Black, 1997; Hsu and Sandford, 2007). There is no standardised method of conducting a Delphi survey and several modifications exist, with variable numbers of feedback rounds and respondents to fit the needs of the research (Gene and George, 1999; Michels, Evans and Blok, 2012).

The teaching of handover in undergraduate education is essential (General Medical Council, 2018); however, its inclusion in curricula is variable (Liston et al., 2014; Thaeter et al., 2018).

A local medical school was identified as not explicitly teaching handover and so an educational workshop was developed.

The modified Delphi approach was selected to produce a handover performance assessment tool to assess medical students’ competence in delivering a medical handover before and after the handover teaching intervention.

This approach enabled sampling across a range of experts within a large health board. When used to produce a tool assessing skills competence, it offers high face and concurrent validity (Gibson, 1998; Williams and Webb, 1994), thus strengthening the project's validity. It has been widely accepted and utilised as a research method within healthcare settings (Gibson, 1998; Hasson, Keeney, and McKenna, 2000; Williams and Webb, 1994).


The project’s assessment tool was formulated from a modified or “reactive” Delphi model, where panels of multidisciplinary experts respond to pre-defined criteria rather than being invited to develop their own benchmarks (Hasson, Keeney, and McKenna, 2000; Davies, Martin and Foxcroft, 2016; Tonni and Oliver, 2013). Three rounds were utilised to finalise the tool.

Modified Delphi Round 1

The researchers, a group of mixed specialty doctors, formed a focus group to decide upon the basic structure of the assessment tool. A Situation-Background-Assessment-Recommendation (SBAR) format was chosen due to its widespread use within our clinical context and endorsement from various professional and regulatory organisations (World Health Organisation, 2007). A pre-existing, validated tool that could be modified to fit our purpose was sought. The "Clinical Handover Assessment Tool" (Moore et al., 2017) was chosen because of its previous use in undergraduate medical education research. The focus group adapted the tool to the SBAR format, and derived assessment competency domains within it.

Two mock medical cases were produced in a similar format to local health board medical case notes to provide authenticity. The students utilised these to formulate a handover for the pre- and post-intervention assessments. The medical cases included explicit instructions and a detailed management plan to minimise any confounding effects of the students' clinical knowledge on their handover performance.

Modified Delphi Round 2

A panel of fifteen multidisciplinary consultant doctors was invited to review the cases and the assessment tool structure. The materials and responses were sent electronically. Panellists were asked to identify components from the case notes that were essential to hand over. When at least 50% of respondents agreed on a component, a consensus was deemed to have been reached. A list of key components was formed for each case.
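The 50% consensus rule used in rounds 2 and 3 can be sketched programmatically. The sketch below is illustrative only: the component names and panel responses are hypothetical, not the study's data.

```python
# Minimal sketch of the study's 50% consensus rule: a component is
# retained when at least half of the respondents flag it as essential.
from collections import Counter

def reach_consensus(responses, threshold=0.5):
    """Return the components flagged by at least `threshold` of respondents."""
    n = len(responses)
    tally = Counter(c for resp in responses for c in resp)
    return sorted(c for c, votes in tally.items() if votes / n >= threshold)

# Hypothetical example: four panellists respond, each listing the
# components they consider essential to hand over.
panel = [
    {"diagnosis", "obs trend", "escalation status"},
    {"diagnosis", "management plan", "escalation status"},
    {"diagnosis", "obs trend"},
    {"allergies", "diagnosis"},
]

print(reach_consensus(panel))
# ['diagnosis', 'escalation status', 'obs trend']
```

Note that the threshold is applied against the number of respondents, not the number invited, mirroring the study, where only seven of the fifteen consultants replied.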

Modified Delphi Round 3

The cases, assessment sheets and key component lists were circulated to ten medical education experts from across NHS Lanarkshire Medical Education faculty. From the key components list, they were asked to select those that they expected the students to be able to identify as essential to the handover. Components on which a 50% consensus had been reached populated the final case-specific tools.

During the assessment, in order to achieve 'observed' for each individual assessment criterion, the student had to hand over all the key information that had been identified by this process under each specific domain.
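This all-or-nothing marking rule can be expressed as a simple set inclusion check. The domain and component names below are hypothetical, used only to illustrate the rule.

```python
# Sketch of the all-or-nothing scoring rule: an assessment criterion is
# marked 'observed' only when every Delphi-derived key component for
# that domain is included in the student's handover.
def score_domain(key_components, handed_over):
    """Return 'observed' only if all key components were handed over."""
    return "observed" if set(key_components) <= set(handed_over) else "not observed"

# Hypothetical 'Situation' domain with three key components.
situation_keys = ["patient name", "location", "diagnosis"]

print(score_domain(situation_keys, ["patient name", "location", "diagnosis", "obs trend"]))
# observed
print(score_domain(situation_keys, ["patient name", "diagnosis"]))
# not observed
```

Extra information beyond the key components does not affect the mark; only a missing key component does.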


Case 1

Seven of the fifteen consultants responded. Individually they identified twenty-eight separate elements as essential for students to hand over, with consensus being reached on nineteen.

In the final round, four of the ten medical education experts responded. Consensus was reached for eighteen of the components carried forward from round 2, and therefore these eighteen elements populated the final version of the assessment tool for Case 1 (Supplementary file 1).

Case 2

Thirty-one elements were identified by the seven consultants as being essential to the handover for Case 2. Of these, consensus was reached on eight components. The final round saw a consensus on seven of these eight components, and the final assessment sheet for Case 2 was formed (Supplementary file 1).


The modified Delphi process allowed us to survey clinical and medical education experts to form a standardised assessment for individual handover scenarios. Using the clinical experts ensured the assessment process reflected current clinical best practice. The medical education faculty have knowledge of the expectations of students at this stage of training which helped ensure the assessment was contextualised appropriately.

However, the Delphi process also poses challenges. The process is time consuming and dependent on the engagement of multiple experts outwith the immediate research group (Williams and Webb, 1994; Gibson, 1998). Indeed, the number of respondents in our project was small, and inclusion of larger panels of experts may have increased the project's validity.

Furthermore, the process does not necessarily produce a faultless assessment tool, as it relies on group consensus. It allows anonymity and an expression of opinion without influence, whilst ensuring a controlled feedback mechanism (Michels, Evans and Blok, 2012; Murphy et al., 1998); however, the final tool is invariably affected by the experiences and opinions of the group members. In this project, expert consensus was that escalation of care status should be explicitly handed over. In our clinical experience, unwell patients are deemed for discussion with, and escalation to, the intensive care unit in the event of deterioration, unless explicitly stated otherwise beforehand. Both fictional patients in the cases were previously well individuals and were explicitly recorded in the mock case notes as "for full escalation." In the assessment, the majority of the students omitted this information and did not achieve this assessment component. The researchers feel this omission reflects the authentic clinical practice to which the students are exposed. Potentially, strict adherence to the modified Delphi process compromised the integrity of the research tool by not truly reflecting current clinical practice. This highlights the importance of continual validity judgements in assessment standardisation. A fourth Delphi round informed by student performance data may help with further refinement.


The use of the modified Delphi is accepted and widely used in healthcare research for forming valid and objective assessments (Williams and Webb, 1994). The Delphi process provides validity to an assessment tool, especially one assessing competency based skills that require a large group consensus on expected standards and where consistent best practice evidence is lacking (Tomasik, 2010). However, the Delphi process is not infallible. Our study highlighted the potential for the process to compromise the integrity of the product if it diverges from true clinical practice and experience (Holt et al., 2020).

Take Home Messages

  • The Delphi process provides validity to an assessment tool assessing handover competency-based skills, which require a large group consensus on expected standards and where consistent best-practice evidence is lacking
  • The Delphi process allows experts to form a standardised assessment for individual handover scenarios which reflects current clinical best practice
  • The process can be time consuming and dependent on the engagement of multiple experts outwith the immediate research group

Notes On Contributors

Dr Nicholas Holt* – MBChB, MRCP, MRCPS(Glas), PGCert Med Ed, Honorary Clinical Lecturer (University of Glasgow), Clinical Teaching fellow in NHS Lanarkshire. ORCID ID:

Dr Zoe Hutcheson* - BSc (Hons) Medicine, MBChB, MRCEM, PG Diploma Health Professions Education, Honorary Clinical Lecturer (University of Glasgow), Clinical Simulation Fellow in NHS Lanarkshire.

*Denotes co-lead authors

Dr Kirsty Crowe - BMedSci (Hons), MBChB (Hons) MRCP, Honorary Clinical Lecturer (University of Glasgow), PGCert Med Ed, Clinical Teaching Fellow in NHS Lanarkshire. 

Dr Daniel Lynagh - MBChB (Hons), MRCP, MRCPS(Glas), Honorary Clinical Lecturer (University of Glasgow), PGCert Med Ed, Clinical Teaching Fellow in NHS Lanarkshire.




Boath, E., Mucklow, J., and Black, P. (1997). ‘Consulting the oracle: a Delphi Study to determine the content of a postgraduate distance learning course in therapeutics’, British Journal of Clinical Pharmacology, 43, pp. 643–647.

Davies, E., Martin, J. and Foxcroft, D. (2016). ‘Development of an adolescent alcohol misuse intervention based on the Prototype Willingness Model’, Health Education, 116(3), pp. 275-291.

Gene, R. and George, W. (1999). ‘The Delphi Technique as a forecasting tool: issues and analysis’, International Journal of Forecasting, 15, pp. 353–375.

General Medical Council (2018). ‘Outcomes for graduates’. London: General Medical Council. Available from: (Accessed: 21.08.2019).

World Health Organization (2007). ‘Communication during Patient Hand-Overs’. WHO Collaborating Centre for Patient Safety Solutions. 1 (3). Available from: (Accessed: 21.08.2019).

Gibson, J. (1998). ‘Using the Delphi to identify the content and context of nurses continuing professional development needs’, Journal of Clinical Nursing. 7, pp. 451–459.

Hasson, F., Keeney, S. and McKenna, H. (2000). ‘Research guidelines for the Delphi survey technique’, Journal of Advanced Nursing, 32, pp. 1008-1015.

Holt, N., Crowe, K., Lynagh, D. and Hutcheson, Z. (2020). ‘Is there a need for formal undergraduate patient handover training and could an education workshop effectively provide this? A proof-of-concept study in a Scottish Medical School’, BMJ Open, 10:e034468.

Hsu, C. and Sandford, B. (2007). ‘The Delphi Technique: Making sense of consensus’, Practical Assessment, Research and Evaluation, 12(10).

Liston, B., Tartaglia, K., Evans, D., Walker, C. et al. (2014). ‘Handoff practices in undergraduate medical education’, Journal of General Internal Medicine, 29(5), pp. 765–769.

Michels, M., Evans, D., and Blok, G. (2012). ‘What is a clinical skill? Searching for order in chaos through a modified Delphi process’, Medical Teacher, 34(8), pp. e573-e581.

Moore, M., Roberts, C., Newbury, J. and Crossley, J. (2017). ‘Am I getting an accurate picture: a tool to assess clinical handover in remote settings?’, BMC Medical Education, 17(1).

Thaeter, L., Schröder, H., Henze, L., Butte, J., et al. (2018). ‘Handover training for medical students: a controlled educational trial of a pilot curriculum in Germany’, BMJ Open, 8(9), p.e021202.

Tomasik, T. (2010). ‘Reliability and validity of the Delphi method in guideline development for family physicians’, Quality in Primary Care, 18(5), pp. 317-326.

Tonni, I. and Oliver, R. (2013). ‘A Delphi approach to define learning outcomes and assessment’, European Journal of Dental Education. 17(1), pp. e173-e180.

Williams, P. and Webb, C. (1994). ‘The Delphi technique: a methodological discussion’, Journal of Advanced Nursing, 19(1), pp.180-186.




There are no conflicts of interest.
This has been published under Creative Commons "CC BY-SA 4.0".

Ethics Statement

This project was granted ethical approval from the University of Glasgow College of Medical, Veterinary and Life Sciences Ethics Committee in December 2018, ID number 200180052.

External Funding

This article has not had any External Funding

