Doctors As Teachers Assessment (DATA) – a multisource feedback tool for faculty development

Background: Positive role models have a significant impact on the delivery of education and training by creating a favourable culture for learning. Although assessment and evaluation are key components of medical education, assessment of teacher performance has received scant attention to date. Aim: The development and pilot testing of a multisource feedback tool (Doctors As Teachers Assessment – DATA) for assessing teacher performance is described. Method: The Delphi technique was used to identify the key attributes of an ideal teacher. Four domains that encompassed a range of descriptive terms were then developed. The tool was pilot tested and validated for consistency and feasibility of use. Results: The four domains of the DATA tool consisted of: Teaching at the Correct Level; Clinical Knowledge and Skills; Teaching Techniques; and Inspirational Role Model, with a four-point rating scale. Fifty-seven doctors in training rated their teachers (educational and clinical supervisors) on two separate occasions, 2 – 4 weeks apart, to test consistency. The tool was then used to rate 311 teachers. Conclusion: Doctors As Teachers Assessment (DATA) is a valid, reliable multisource feedback tool for assessing teacher performance and promoting faculty development.


Introduction:
The case for professionalising medical education is compelling (Benor 2000, Steinert 2000, 2006). Medical teachers are required to have the necessary knowledge and skills whilst also displaying the values, cultures and behaviours that promote lifelong learning (Tekian & Harris 2012). The General Medical Council (http://www.gmc-uk.org/Approving_trainers_implementation_plan_Aug_12.pdf_56452109.pdf) and the CanMEDS (www.royalcollege.ca/canmeds) Framework set out rigorous, internationally recognised criteria for the accreditation of doctors in educational roles in the UK and Canada, respectively. Teacher accreditation and support feature prominently in the GMC's Promoting Excellence: standards for medical education and training (GMC 2015).
Multisource feedback (MSF), originally developed in industry, is now well accepted in management for employee assessment and performance improvement (London & Smither 1995, Brutus et al 1998, McCarthy & Garavan 2001). It has been used for peer assessment of physician performance and by regulatory authorities in North America (Lockyer & Violato 2004, Lockyer et al 2006), and for assessing the professional skills of clinical directors in the United Kingdom (Palmer et al 2007). MSF has been shown to be a valid assessment of the professional behaviours of trainees that forms part of their appraisal and development (Whitehouse et al 2005, 2009). However, the concept of multisource feedback in faculty development has yet to become embedded in the educational appraisal of teachers (www.faculty.londeanery.acu.uk/supervisor-MSF). Assessing teacher performance, as distinct from clinical skills and performance or academic output, is essential for promoting good educational practices and for identifying areas of strength as well as those which require further attention and support (Egbe & Baker 2012). The role of teacher extends beyond the ability to teach a specialist subject and includes: enthusiasm; understanding of the curriculum; the ability to employ various techniques to reach students at different stages of learning; assisting in personal development; providing constructive feedback; and being available for assessment and appraisal, as per curriculum (Wright et al 1998, Academy of Medical Educators Professional Standards, London 2009). Many good teachers also act as inspirational role models (Wright 1998, Harden & Crosby 2000). Students and trainees themselves are best placed to evaluate teacher performance and professionalism (Berk 2009).
The availability of well-developed, reliable and valid multisource feedback tools for assessing teacher performance is limited. This paper describes the development of a novel multisource feedback tool for evaluating teacher performance.

Delphi Technique:
The Delphi technique (Hsu & Sandford 2007) was used to develop a consensus amongst thirty hospital-based experienced medical leaders (Directors of Medical Education and Postgraduate Clinical Tutors), with oversight for medical education, who were set the task of identifying attributes of a good medical teacher. They worked in four groups for one hour. Each group presented its findings to the wider audience, allowing debate and discussion. Various attributes identified and defined by consensus of the expert group were then thematically analysed into four domains.

DATA (Doctors As Teachers Assessment) Tool:
The four domains (with descriptors) identified by thematic analysis included: Teaching at the Correct Level (teaches at the level based on the curriculum and on individual learning needs, recognises struggling students / trainees, and provides effective, targeted support); Clinical Knowledge and Skills (does clinical work expertly, inspires confidence clinically, puts patients first, is an impressive evidence-based practitioner and demonstrates continuous learning); Teaching Techniques (is an effective teacher who maximises every teaching opportunity, uses questions to assess understanding, gives frank and constructive feedback, makes it easy to ask questions or challenge and never makes students / trainees feel foolish or anxious, is available for support, assessment, e-portfolio marking etc., and is approachable); and Inspirational Role Model (is an enthusiastic, inspiring exponent of both their subject and teaching role; "this is the kind of doctor and teacher I would like to be").
Trainees were requested to rate their teachers against the four domains on a four-point scale (poor, adequate, good, excellent), with an additional option of unable to comment. Space for freehand comments on all domains was provided. The proforma identified both the teacher and the trainee providing feedback.

Confidentiality:
Teachers and their raters were assured of confidentiality, and the information was collected by independent medical education managers. Assurance was provided that the information gathered, in this instance, was for validating the tool only. As this was a tool in development, information on individual teachers, based on two returns, albeit from one trainee, was not released; this would be contrary to the concept of multisource feedback. All raters and consultant teachers participated entirely voluntarily. No written consent was sought.

Phase I:
Using the DATA tool, 57 Foundation doctors (in their first two years post-graduation) rated their consultant teachers (physicians, surgeons, paediatricians, obstetricians, gynaecologists, emergency medicine consultants and others) who, in the preceding four months, had performed the role of educational and / or clinical supervisors. The raters were then asked to rate their consultant teachers again after a two- to four-week period in order to test the consistency of the results.

Phase II:
Once satisfied that the DATA tool was clearly understood by the raters and was providing consistent results, 311 teachers who volunteered for their trainees to rate their performance were scored.

Statistics:
Inter-rater agreement between the first and second ratings by trainees in Phase I was analysed by Cohen's kappa (k) coefficient (Cohen 1960).
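As an illustration of the statistic used, Cohen's kappa can be computed directly from the observed agreement and the agreement expected by chance from the marginal rating frequencies. The sketch below uses invented ratings for demonstration only; it does not reproduce the study's data.

```python
from collections import Counter

def cohens_kappa(first, second):
    """Cohen's kappa for two rounds of categorical ratings by the same raters.

    first, second: equal-length lists of ratings
    (e.g. "poor", "adequate", "good", "excellent").
    """
    assert len(first) == len(second) and len(first) > 0
    n = len(first)
    # Observed proportion of exact agreement between the two rounds
    observed = sum(a == b for a, b in zip(first, second)) / n
    # Chance-expected agreement from each round's marginal frequencies
    c1, c2 = Counter(first), Counter(second)
    expected = sum((c1[cat] / n) * (c2[cat] / n) for cat in set(c1) | set(c2))
    return (observed - expected) / (1 - expected)

# Invented example: 10 trainees rating one domain on two occasions
first  = ["good", "excellent", "good", "adequate", "good",
          "excellent", "good", "good", "adequate", "excellent"]
second = ["good", "excellent", "good", "good", "good",
          "excellent", "adequate", "good", "adequate", "excellent"]
print(round(cohens_kappa(first, second), 2))  # → 0.68
```

With 8/10 exact agreements (observed = 0.8) and chance agreement of 0.38 from the marginals, kappa = (0.8 − 0.38) / (1 − 0.38) ≈ 0.68, i.e. agreement well beyond chance, in the same range as the domain kappas reported in Phase I.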

Phase I:
The 57 doctors in training rated their teachers in the four domains, across the full range of the rating scale. There was good agreement between the first and second ratings for all four domains, as indicated by the kappa values: Teaching at the Correct Level, kappa = 0.61 (CI 0.42, 0.81); Clinical Knowledge and Skills, kappa = 0.55 (CI 0.30, 0.79); Teaching Techniques, kappa = 0.53 (CI 0.35, 0.71); Inspirational Role Model, kappa = 0.60 (CI 0.41, 0.78). See Table 1.
Where a difference between the first and second responses was observed, it did not represent any substantial change of opinion; for example, an initial response of excellent was subsequently recorded as good. There were no instances of ratings changing from poor to excellent or vice versa.
Phase II: The ratings (poor, adequate, good and excellent, plus unable to comment) for the four domains, based on 311 raters, are presented in Fig 1.
A representative sample of freehand comments on all four domains from Phase I and Phase II is included in Table 2.

Discussion
The Delphi technique (Hsu & Sandford 2007), consulting experts and followed by thematic analysis, enabled the identification of the four themes that characterise the many attributes of a good teacher (Harden & Crosby 2000, Berk 2009). Providing descriptors (or anchor statements) for each of the four domains made the questions unambiguous, as the results show a high level of reproducibility. The agreement between the first and second ratings was analysed using the kappa statistic, demonstrating a measure of agreement beyond what would be expected by chance. The interval between trainees using the tool on the first and second occasions for validation was 2 – 4 weeks, which is a realistic timeframe. The rating scale is easily understandable, with the facility for freehand comments where necessary, minimising survey fatigue. The DATA tool described is comparable to the widely used TAB (Team Assessment of Behaviour), the significant differences being that DATA relates to trainer or teacher performance and uses a four-point rating scale. Previously published (Egbe & Baker 2012) multisource feedback tools for teacher performance have been specialty-specific (Violato & Lockyer 2003, Violato et al 2006, Violato et al 2008, Lockyer et al 2009) and have not undergone the same rigorous development and quality testing. This study has validated the DATA tool and shown it to be reliable in assessing generic teacher performance across a range of specialties, facilitating comparison.
The DATA tool, by definition, focuses primarily on the role of the teacher in creating a favourable educational culture, conducive to teaching and learning. This includes the teacher's ability to convey topics new to the learner, assess the trainee's understanding and promote their learning, whilst demonstrating enthusiasm for both the subject and its teaching. Teaching informed by formative assessment, not necessarily formal or summative, is an important component of the teaching role and includes constructive feedback, appraisal and other developmental techniques to facilitate learning. It can be argued that Clinical Knowledge and Skills and Inspirational Role Model are subjective perceptions by trainees. The knowledge, skills and professional attributes of clinicians are best displayed in the clinical setting, where their trainees are also present. Although the trainees are not peers in the strictest sense, they are best placed to observe and comment not only on the knowledge and skills of their teacher, but also on the care and compassion demonstrated, identifying good role models. Whilst positive as well as negative role models can have influence, it is the former who have the greatest influence on the development of their trainees (Harden & Crosby 2000).
The four domains described in the DATA tool can be directly linked to the seven standards for trainer accreditation set out by the GMC (Table 3). It is recognised that the DATA tool does not specifically address the sixth GMC Standard, namely supporting the professional and personal development of trainees, in particular careers support. This is because its focus is on the performance of the teacher and their professional development, which links to the seventh GMC Standard. The distinction between the teacher role and the supervisor (clinical and / or educational) role requires clarification. The latter includes activities such as developmental support; careers guidance; appraisal; e-portfolio guidance and completion; and overall competence in supporting the learning and professional development of the trainee. The GMC Standards for trainers apply to clinical and educational supervisors in postgraduate medicine and to the programme directors at Medical School and Trust levels, who have responsibility for delivering the undergraduate curriculum. Importantly, the GMC Standards do not apply to consultants teaching medical students in Trusts. However, in practice, many consultants who are enthusiastic teachers will also be educational or clinical supervisors; this is good practice and to be promoted.
The DATA tool has been developed and validated with postgraduate medical trainees as raters for their consultant teachers. Clearly, it can be applied in a wider context to obtain multisource feedback on general practitioners and allied healthcare professionals who perform teaching roles. The design of the tool, with four domains and unambiguous ratings, also allowing freehand comments, minimises survey fatigue. Further applications include medical students and allied healthcare profession students, who should be in a position to rate their teachers' performance.
It should be noted that this paper describes the development and evaluation of the tool. For optimum validity, it is recommended that the DATA tool be administered to gather multisource feedback from more than ten raters for consultant and other teachers (Berk 2009). The tool should be forwarded to raters with the agreement of their teachers (but not by the teachers themselves, who should complete a separate self-assessment). Instructions should clearly specify that the raters must return their completed forms to the relevant administrator (i.e. postgraduate centre manager) for tabulation of the responses and generation of a report. The anonymised report generated, including freehand comments, should form the basis of respectful discussion at the annual educational appraisal of the teacher, contributing towards their Trainer Accreditation and personal development as a teacher.
Important criteria to be fulfilled by 360 degree multisource feedback have been published (Berk 2009).These include: the teacher being involved in the selection of raters who should be credible and have first hand experience of the teacher's behaviours or attributes, which reflect actual performance of the teacher; ideally, up to twelve raters should be involved to preserve anonymity and increase reliability.Likert type scales have been recommended, with the process taking place online.The more specific rating scale chosen for the DATA tool differentiates opinion from perception.The freehand comments often provide valuable insights, identifying both good practice and areas for improvement.Furthermore, the unambiguous rating scale allows opinions and perceptions to be recorded.Feedback to the teacher should be sensitive, timely and face-to-face, with improvements in performance documented over time.It has also been recommended that raters should include fellow teachers, fellow faculty and administrators.We have not taken this approach in this developmental stage.Arguably, fellow teachers and administrators who are colleagues may not be inclined to provide critical feedback.For MSF to be valid, there should be more than one group of raters, enabling triangulation.The importance of the entire process remaining confidential cannot be over-emphasised; this needs to be clearly understood by trainees, teachers and their appraisers.The DATA tool does identify the name of the teacher clinician and specifically request the trainee completing the tool to identify themselves by name, whilst providing an assurance of confidentiality.The justification for this is to ensure that the information is provided by a genuine trainee who is acting as the rater and, most importantly, to capture serious concerns, such as patient safety, in particular, which can emerge and must not be overlooked.It is recognised that the current paper-based version of DATA does need to be converted for 
online use, thereby enhancing robust security of confidential and sensitive information.An e-DATA tool would also generate a report once the responses of ten or more raters are received.
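The report-generation step an e-DATA tool would perform can be sketched as below. This is a hypothetical illustration, not the study's implementation: the domain names and four-point scale come from the paper, the ten-rater threshold follows the recommendation above, and the response format (one dict of rating–comment pairs per rater) is an assumption for demonstration.

```python
from collections import Counter

SCALE = ["poor", "adequate", "good", "excellent"]
DOMAINS = ["Teaching at the Correct Level", "Clinical Knowledge and Skills",
           "Teaching Techniques", "Inspirational Role Model"]
MIN_RATERS = 10  # release no report below this threshold (preserves anonymity)

def build_report(responses):
    """Aggregate rater responses into an anonymised per-domain report.

    responses: list of dicts, one per rater, mapping each domain to a
    (rating, freehand_comment) tuple, e.g.
        {"Teaching Techniques": ("good", "clear, constructive feedback"), ...}
    Returns per-domain rating tallies and pooled comments with rater
    identities stripped, or None if fewer than MIN_RATERS responded.
    """
    if len(responses) < MIN_RATERS:
        return None
    report = {}
    for domain in DOMAINS:
        tally = Counter()
        comments = []
        for resp in responses:
            rating, comment = resp.get(domain, ("unable to comment", ""))
            tally[rating] += 1
            if comment:  # pool non-empty freehand comments anonymously
                comments.append(comment)
        report[domain] = {"ratings": dict(tally), "comments": comments}
    return report
```

Withholding the report entirely below the threshold, rather than releasing partial tallies, mirrors the paper's position that ratings from fewer than ten trainees should not be presented.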
Caveats which make the use of multisource feedback challenging include teachers with limited numbers of trainees, and teachers supervising trainees in difficulty for a variety of different reasons, including trainee performance. Discretion should therefore be applied in the interpretation of trainee ratings in such exceptional circumstances. Presenting information based on the ratings of one trainee (or any number fewer than ten) is inappropriate and limits the true value of multisource feedback (Berk 2009).
Multisource feedback is developmental to the recipient and must, at all times, be supportive. This is essential to ensure teacher engagement with multisource feedback as a developmental tool. If used well, multisource feedback should identify areas of strength as well as the developmental needs of a teacher, which can and should be addressed through reflection, change of approach where necessary, and monitoring of future performance. Persistent poor performance as a teacher should prompt the teacher and their appraiser to direct attention to other areas of a doctor's performance, such as patient care, management and research.
It is concluded that DATA is a well validated, reliable multisource feedback tool for assessing teacher performance. It is ready and intended for use in educational appraisal, and its electronic application for wider use is envisaged.

Take Home Messages

1. The assessment of teaching skills is a crucial part of the professionalisation of medical education.
2. The key teaching attributes of the ideal teacher were developed using the Delphi Technique as a consensus method.
3. The domains were teaching at the correct level, clinical knowledge and skills, teaching techniques and inspirational role model.
4. A multisource feedback tool using these themes was developed.
5. The tool was acceptable, valid and reliable in use.

Table 2 (excerpt) – Inspirational Role Model:
"Very enthusiastic; keen to teach; goes out of their way to find teaching opportunities"
"Enthusiastic about their specialty and wants people to learn"
"Kind, caring, supportive and understanding"

Table 1 (fragment) – Clinical Knowledge and Skills: kappa 0.55 (CI 0.30, 0.79), second rating.
K.A. Nathavitharana MB BS PhD FRCP FRCPCH FAcadMEd DCH is a Consultant Paediatrician and Associate Dean for Quality, Health Education West Midlands.
A. Griffiths MA MSc MPhil is a statistician currently working for NHS Blood and Transplant.
A.B. Whitehouse MA FRCP MB ChB is a retired Consultant Physician, Associate Postgraduate Dean, Head of School of Medicine, Head of Foundation Programmes, Health Education West Midlands, and current Vice-Chair of the National Association for Clinical Tutors UK.
D.W. Wall MB MMed PhD FRCP is a Tutor in medical education at the University of Dundee and was previously Professor of Medical Education and Deputy Postgraduate Dean in the West Midlands.
E.A. Hughes BSc MB ChB FRCP DSc is currently Director and Dean for Education and Quality, Health Education England, London and South East (formerly Regional Postgraduate Dean and Director of Education and Quality, Health Education West Midlands).