New education method or tool
Open Access

Better training, better births project: a preliminary evaluation

Alison Gale[1], Jacky Hanson[1], Catharina Schram[2], Mike Davis[3][a]

Institution: 1. Lancashire Teaching Hospitals NHS Trust, Preston, 2. East Lancashire Hospitals NHS Trust, 3. Independent researcher
Corresponding Author: Dr Mike Davis ([email protected])
Categories: Teaching and Learning, Postgraduate (including Speciality Training), Continuing Professional Development, Curriculum Evaluation/Quality Assurance/Accreditation
Published Date: 10/12/2019


This paper describes the planning and presentation of a short human factors training intervention aimed at staff working in maternity services, in light of some challenging conclusions about negative outcomes of existing maternity provision. A local initiative, initiated and supported by Health Education England (HEE) and involving three trusts in the north west of England, was aimed at providing appropriate training for senior practitioners. While many participants claimed knowledge of and confidence in Human Factors (HF) issues, the course was well received and ongoing training needs were identified, if only for their junior colleagues. The paper explores plans for a Phase 2 programme, which is designed to roll out the training to those junior staff and to begin the process of gathering data from observation of practice on a day-to-day basis. Data from this activity will inform a plan for Phase 3, when issues of shop-floor culture will be subject to productive intervention.


Keywords: Human Factors training; evaluation; maternity safety


In November 2015, the Secretary of State for Health set out the UK Government’s national maternity ambition to reduce by 50% the number of stillbirths, neonatal deaths, maternal deaths and brain injuries that occur during or soon after birth. To ensure progress was made quickly, an initial target of 20% was expected by 2020. 


To facilitate this aim, Health Education England (HEE) made financial support available to the 136 maternity units in England, to fund multi-disciplinary training to upskill the workforce, focussing on areas such as leadership, multi-professional team working and clinical skills. A training catalogue of suggested training packages provided by external organisations was made available to provide ideas of how/where the funding could be used. 


Within the Lancashire and South Cumbria Local Maternity System (LMS), the loose organisation tasked with implementing the National Maternity Transformation Strategy, three of the Trusts opted to work on a joint bid to maximise the potential to improve maternity training for the benefit of the women, babies and families in Lancashire and South Cumbria. The fourth provider within the LMS opted (at that time) to focus their bid on their own organisation’s needs.


To achieve this wider aim, the Better Training, Better Births (BTBB) Consortium was established, with representation from the Heads of Midwifery and senior Obstetricians from each provider. 


The consortium’s strategy was to develop a greater understanding of non-clinical skills, thereby improving effective care across the LMS, and to enable delegates to:                                                        

  • appreciate the common challenges in maternity care and ensure that collaborative learning and practice will improve the experience for women, families and staff
  • identify the key areas for improving safer practice within the units 
  • develop a greater understanding of communication styles and improve effective communication


The project’s objectives were to provide both practical and human factors training, specifically:


Practical emergency clinical skills training

Although this training was already offered by the three provider sites, the funding opportunity facilitated its improvement and standardisation across the wider area. This included the development of a Practice Educator Network, and wider access to training and shared knowledge and experience. A local adaptation of the PRactical Obstetric Multi-Professional Training (PROMPT) course (Abdeirahman and Murnaghan, 2013) was used to provide this training and was offered on each of the three sites.


Human Factors training

A review of the training catalogue distributed by HEE found no single training course specific to maternity healthcare providers that was easily accessible and cost effective for the local workforce. The consortium therefore made the decision to commission the development and presentation of a bespoke course from one of the member Trusts with significant experience and expertise in human factors programme development and presentation (Gale and Hanson, supported by local expertise). All presenters were experienced medical educators with an interest and engagement in simulation and/or human factors issues.


A 2-day training course was designed to include non-technical skills training (e.g. communication, decision making, teamwork) and wider human factors (e.g. medical error, system analysis, stress effects). In this pilot year, training was made available to senior maternity staff, including Consultant Obstetricians, Consultant Obstetric Anaesthetists and senior (band 7 and above) midwives. To facilitate breaking down silos between professions and organisations, the training was offered across the three sites to both multi-professional and cross-site teams. Each day would be a stand-alone event and participants could attend at any of the three centres. The expectation was that they would attend at least one day outside of their own centre to minimise the time gap between each element of the course, and to facilitate the sharing of experiences. 


The importance of whole team inclusion is considered paramount in the management of a safe maternity service and as such all staff will be involved in the training (Better Births Consortium Strategy, 2018).


The focus is on the following components:

  • systems analysis
  • communication
  • decision making
  • situation awareness
  • leadership
  • stress and resilience

thereby contributing towards an improvement in maternity care.


Evaluation strategy

As required in the BTBB Strategy Document (HEE, 2017), the programme would be subject to a structured evaluation, and the strategy that follows was agreed by the Consortium Board.


This research was designed to blend qualitative and quantitative approaches in a framework of evaluative enquiry. A widely used model for evaluating training interventions is the Kirkpatrick hierarchy (Kirkpatrick and Kirkpatrick, 2009), and this project had that model in mind, as suggested by the project brief, but expanded it to accommodate the higher levels of competence and expertise.


It is widely acknowledged that it is very easy to gain access to the lowest level of this hierarchy. The project had a 100% response rate to these immediate evaluation questionnaires, which almost invariably recognised the course experiences as effective and engaging, despite some initial reservation and resistance from colleagues who saw themselves as being more experienced in human factors and non-technical skills (NTS) issues. See the Discussion for more on this.


There was, however, recognition that the higher levels would need a more complex approach to data collection and accordingly, the research design drew on insights explored by Yardley & Dornan, (2012) using a model that fosters cooperative enquiry and acknowledges the complexity of evaluation and the value of mixed methods. In this context, the term mixed methods specifically relates to exploration using both quantitative and qualitative approaches in order to derive a richness and breadth of data.


This framework guides an investigation into a complex intervention, such as this one, in order to clarify purpose while allowing a degree of freedom to explore emergent themes through constructive and cooperative inquiry. Accordingly, this research develops strategies for process evaluation described as combining quantitative and qualitative data collection methods in:


a range of study methods and cross-sectional or longitudinal designs, including surveys … and the measurement of variables, through interviews, [and] direct observation (Portela et al, 2015)


Observation in professional settings (i.e. consulting rooms, delivery suites) is yet to be conducted and will form the basis of later interventions – see the conclusions for a brief exploration of this. All other methods, however, have been implemented or, in the case of longitudinal designs, initiated.


Yardley and Dornan quote Weiss who writes:


Programs are nowhere near as neat and accommodating as the evaluator expects. Nor are outside circumstances as passive and unimportant as he might like. Whole platoons of unexpected problems spring up. (Weiss, 1972)


The strategies developed in this project address these issues by acknowledging the limitations of the study and its intent.


In practical terms, the chosen methods involved three approaches to data collection:

1. Post course satisfaction surveys designed to elicit:

  1. demographics of the community
  2. background and attitudes towards the intervention
  3. responses to the intervention
  4. articulation of challenges associated with the intervention
  5. next steps

This phase was primarily designed to gather perceptions of participation levels and general satisfaction with the course content and presentation. These instruments, however, included some deliberately naïve questions designed to elicit perceptions of a) existing knowledge of human factors issues among teams and b) the nature and scale of human factors issues as they presented in participants’ professional experience.


2. Invitations to engage in writing reflective accounts, using a) responses to short on-line surveys and/or b) CPDme (a web-based reflective platform designed to collect Continuing Professional Development (CPD) data). In either case, the expected outcome would be an exploration of a recent experience using a model of reflective practice.


3. Participation in small group semi-structured interviews (possibly including follow up one-to-one interviews or invitations for further written accounts).


Written responses were analysed to identify dominant themes and recorded focus groups and semi-structured interviews were partially transcribed and similarly analysed for key issues.
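The first step of this analysis — tallying how many written responses mention each candidate theme — can be sketched as a simple keyword count. This is a minimal illustration only: the theme keywords and sample responses below are hypothetical, not the study's coding frame or data, and real thematic analysis relies on human coding before any automated counting.

```python
from collections import Counter

# Hypothetical coding frame -- illustrative only, not the frame used in the study.
THEMES = {
    "human factors analysis": ["analysis"],
    "resilience": ["resilience", "resilient"],
    "stress": ["stress"],
    "teamwork": ["team", "teamwork"],
}

def tally_themes(responses):
    """Count how many responses mention each theme at least once."""
    counts = Counter()
    for text in responses:
        lowered = text.lower()
        for theme, keywords in THEMES.items():
            if any(keyword in lowered for keyword in keywords):
                counts[theme] += 1
    return counts

# Illustrative usage with made-up free-text responses.
sample = [
    "Human factors analysis will change how I review incidents",
    "Managing stress and building resilience in the team",
    "Better teamwork under pressure",
]
print(tally_themes(sample).most_common())
```

A response counts towards a theme only once, however many of that theme's keywords it contains, which matches the "mentions" style of reporting used for the open questions below.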


In addition to these data, interviews were also conducted with some of the Obstetric leads of the programme in order to help identify some of the issues that the intervention was designed to address.


The Day 1 evaluation had two purposes:


to establish a baseline attitude towards human factors issues and related training;

to identify the extent to which the course met needs


The first of these generated largely expected outcomes, as follows:

Table 1: Baseline evaluation of need

  • Familiarity with HF and NTS issues
  • HF training is a positive development
  • My team can benefit from HF training input
  • HF training should be mandatory


There is a claimed high level of familiarity with key issues and strong support for training for colleagues and this may indicate a combination of over-confidence in self and under-confidence in others, which may be an example of the Dunning-Kruger effect (Kruger and Dunning, 1999), described as follows:


People tend to hold overly favourable views of their abilities in many social and intellectual domains. The authors suggest that this overestimation occurs, in part, because people who are unskilled in these domains suffer a dual burden: not only do these people reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the metacognitive ability to realize it.


The final question in this opening section asked delegates to consider the extent to which their team would benefit from training, by responding to:


“I have concerns about how colleagues work effectively in teams”


and the score for this item indicated a general level of disagreement, with an average score of 2.6/4. This may represent a degree of defensiveness and a failure to recognise a possible deficit in their colleagues’ performance. Whatever the interpretation, there seems to be a degree of inconsistency in expressed attitudes and some of this will be explored when looking at open comments that emerged from both Day 1 and Day 2 evaluations and comments made in focus group interviews and the very few online responses.
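The item averages reported here (e.g. 2.6/4) are simple arithmetic means of the Likert responses. A minimal sketch follows, assuming responses coded from 1 (strongly disagree) to 4 (strongly agree); the paper does not state the category labels, so that coding is an assumption, and the response values below are invented for illustration.

```python
def likert_mean(scores, low=1, high=4):
    """Arithmetic mean of Likert responses, rounded to one decimal place.

    Validates the assumed 1-4 coding before averaging.
    """
    if not scores:
        raise ValueError("no responses to average")
    if any(not (low <= s <= high) for s in scores):
        raise ValueError("response outside assumed Likert range")
    return round(sum(scores) / len(scores), 1)

# Hypothetical responses to "I have concerns about how colleagues
# work effectively in teams" -- illustrative data only.
responses = [2, 3, 2, 3, 3]
print(likert_mean(responses))
```

Note that on a 1–4 scale the midpoint is 2.5, so interpreting a mean such as 2.6 as agreement or disagreement depends on the (unstated) category labels.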


The remainder of the questionnaire focussed on the extent to which participants considered the programme to be effective and scores for these were generally very favourable:


Table 2: Sessions evaluation Day 1 (effectiveness rating: Strongly Agree = 4)

  • Introduction to human factors in Obstetrics & Gynaecology
  • Situation Awareness
  • Cognition and decision making
  • Workshop – small group
  • Workshop – plenary



Similar levels of enthusiasm were evident in the evaluations of Day 2 programme elements, as follows:


Table 3: Sessions evaluation Day 2 (effectiveness rating: Strongly Agree = 4)

  • Stress effects on human performance
  • Human factors and ergonomics
  • Human factors analysis
  • Issues in evaluation
  • Closing discussion



Delegates were also asked a number of open questions, and these produced some expected outcomes, but also some more interesting insights into the challenges of implementing changes in practice, as follows:


Responses to open questions

Q1. Name up to three issues from the course that may have the most impact on your thinking about Human Factors so far.


While open responses are sometimes difficult to interpret, there was ample evidence in these data to indicate the issues that delegates (n=125) considered important, specifically:


Human factors analysis (45 mentions)

Resilience (28)

Stress (23)

Understanding of the issues/challenges (15)


Teamwork, reflection on own performance and leadership all had significantly fewer mentions (6, 4 and 3 respectively).


Q2. What would you consider to be your most pressing next step in terms of addressing the concerns of the BTBB programme?


The dominant issue to emerge from this stimulus was the view that the course needed to be rolled out to junior members of staff (28 responses).


Other recurring themes were:

Analysis of incidents (and sharing with colleagues) (18)

Teamwork and motivation (11)

Handover (5)

Reflection and personal learning (4)

Resilience management (3)


Q3. What do you consider to be the nature of the challenges you would face in implementing change in your practice?


By far the most significant response to this question related to cultural obstacles and there were 49 mentions of this phenomenon, one respondent writing:


“Culture change is very difficult in the NHS especially within maternity settings.”


Another wrote:


“Implementing change and changing people’s thought processes … encouraging personal reflection and self-awareness will be difficult."


Other issues identified were:


Time (23)

Staffing (21)

Other resource issues (7)


Follow up responses

The third stage of evaluation was planned to be conducted in greater depth, through three possible options:


Participation in focus group discussion

Online questionnaires

Entries on CPDme


and delegates were asked for their preference. Among those who responded to this request, the preferred mechanism for further engagement was the online questionnaire, followed by focus group, and finally CPDme. As it turned out, there was a significant lack of engagement in any of these, with the exception of some focus group interviews and some individual discussions, all of which were recorded and are summarised below. Participation in these processes was as follows:


Table 4: Data collection – indication of preference

  • Focus group: 12 (+ 2 paper-based)
  • Online questionnaires



Despite the preferences indicated in Table 4 above, there was little enthusiasm for engaging in the evaluation process beyond the group sessions (which had timetabled evaluation slots). There may be many explanations for this and they will be explored in the Discussion section below, as will alternative models for identifying the extent to which the training experience has impacted on practice.


Online questionnaires

There were two responses to follow-up emails. The first was a depiction of a clinical emergency which, while having a good patient outcome, was unsatisfactory from the point of view of team management. It would seem that the course experience provided a structure and vocabulary for describing the team's failings – for example, the failure to establish leadership (“Could you leave if not required?”) or situation awareness. These issues were identified in the subsequent debrief.


The e-mail responses were limited to expressions of enthusiasm and commitment to the idea of human factors and NTS training and there were no reflections on the impact of either individual or shared experience on practice.


Focus groups

The focus group interviews, however, elicited richer material which, while endorsing the generally high levels of satisfaction, commented on the impact on behaviour in managing complex situations. Particular mention was made of:


Improved patterns of communication (e.g. changes from “Can someone get me …” to “[Name], could you get me …”; “Can I have some quiet?”) and more effective communication across disciplines. Specific mention was made of who spoke to whom (initiation and response) and of paralinguistic features – indicating attitude as well as meaning.


Enhanced teamwork, moving from “noisy and messy” to a more systematic and psychologically safe environment (particularly for the patient, who is aware of staff behaviour). This feature, however, was considered to be more evident among the more senior staff who had experienced the course, while junior members are “… a bit clueless”.


There was suggestion of a more flexible and fluid perception of leadership – being able to hand over responsibility for both specific actions and overview.


The view that changing culture (evident in comments on evaluation forms) was a challenge within the NHS.


There was widespread agreement on the value of rolling the programme out to junior staff, and a distinct preference for simulation-based work, with the opportunity to explore complex issues, but in role. This latter point was considered an important component of effective simulation experience, as was the importance of “expert” feedback and debrief.


The almost universal response to initial questions about the utility and value of the course was that it was an effective contribution to an understanding of complex issues. Responses to the questionnaire items, however, revealed some interesting contradictions:

  • Claimed high levels of knowledge vs having gained new knowledge and understanding
  • Teams don’t have human factors problems vs they could improve behaviour by being exposed to insights from the course
  • On the whole, maternity services are acceptable vs there is a problem with maternity services that the project is designed to address.

All of these contribute towards a degree of cognitive dissonance, a psychological condition described as the state within which there can be a mismatch between actions and beliefs (Festinger, 1957). The theory argues that we are driven to reduce dissonance, either by changing our actions, changing our perceptions of our actions, or changing our beliefs.

The dominant view of our participants is that a) they are knowledgeable and b) there is not a problem arising from inattention to NTS. This implies that their chosen strategy is to change their perception of the situation and thus justify avoiding the challenge of change. Where the need for change is recognised, it is identified as being too problematic and challenging as a consequence of its external origins (i.e. it is not within the capacity of individuals to make a difference at the organisational level – viz. “the culture of …”).

The evaluation has faced the very real challenge of addressing resistance to engagement beyond the course itself and this needs to be explored.


There are, we believe, three possible reasons for this:


that the evaluation instruments and agencies were not sufficiently engaging or interesting;


that the instruments did not claim sufficiently high priority in otherwise very busy professional and personal lives;


that engagement in the evaluation process beyond the course itself would make more explicit the personal responsibilities for the mismatch between beliefs and actions, particularly in the context of inter-relationships between different levels in the hierarchy.


Barriers to participation: Engagement

In some respects, the first of these is a somewhat mundane barrier to participation, but it has been of concern to the management group and efforts have gone into providing a variety of options for data collection. Nevertheless, there have been high levels of dropout, which we think is disappointing but not uncommon. Inevitably, this represents a challenge to the conclusions that the evaluation can draw, as it introduces the notion of “measurement bias”: conclusions are skewed towards those participants who, by engaging in the ongoing evaluation, are demonstrating higher levels of commitment. Ideally, we would want to conduct observation studies, but these were ruled out because of resource and possible ethical concerns. They will, however, be incorporated through a novel approach to participant observation as a consequence of the roll-out of Phase 2 training. This will be explored in more detail below.


Some thoughts about culture change

Type “culture change and the NHS” into a search engine and you are given many links to various versions of the Francis Report and its interpretation and implications. The document exploring culture change arising from this report (Department of Health, 2015) has 30 specific statements in its executive summary, and eight of these relate specifically to the development of an appropriate culture, as follows:


The need for a culture of learning (4)

Openness – leading to a culture of care (10)

Avoiding invisibility – a norm of poor practice that was not subject to comment (13)

Normalisation of compassionate care (19)

Failure to prioritise compassionate care → normalisation of toxic care (22)

A listening culture (26)

Selecting and training the emotionally competent health worker (27)

Speaking up (29)


The other issues (Inspection, sustainability, transparency, patient engagement, accountability, leadership) addressed in this report all contribute towards culture in different ways but are beyond the scope of this intervention.


More general searches related to changing organisational culture point to the complexity of the challenge and a neat summary of the nature of this can be seen here:


… because an organization’s culture comprises an interlocking set of goals, roles, processes, values, communications practices, attitudes and assumptions (Rick, 2015)


When considering the nature of the BTBB intervention to date, it would be reasonable to argue that the course design and presentation addressed aspects of all of these elements, some to a lesser degree than others. What it did not do, almost certainly because it was beyond the scope of the initial phase of the project, was engage in any systematic way with the “back home” cultural environment.


Beyond the course experience

As discussed in the Evaluation Strategy section above, there is recognition that the higher-level outcomes are difficult to gain access to, and this proved to be the case with this intervention, at least as far as we have progressed so far. While in-course or immediate post-course impressions are straightforward and familiar, longer-term data are more elusive:

  • there was no take-up of the CPDme platform, despite almost 25% of respondents indicating their willingness to engage with this approach
  • attendance at focus group meetings/interviews was limited to 15 participants
  • there was a low (1%) response to requests for further online survey questionnaires

There are a number of reasons to account for this, ranging from the mundane (pressures of work, number of email communications per day) to more challenging but speculative notions as to the perceived value and contribution of the training and its minimal impact on performance.


What was clear, however, was that alternative strategies for collecting post-course experience data were going to be essential as the programme moved into its second and an anticipated third phase.


Next steps

Among the very strong recommendations from participants in the first round of this provision is that it be rolled out to other colleagues, particularly the more junior members of the maternity teams (including midwives and junior doctors). Despite the expressed view that participants in Phase 1 were familiar with NTS issues, there is clear evidence that they gained access to new insights and a strong sense of the main barrier to successful remedial action: the culture that exists within maternity services, which militates against effective implementation of good HF/ergonomics and NTS practices.


The suggested design, therefore, will address these needs through the provision of an effective programme of dissemination of key issues in HF and NTS, through a short experiential course aimed at the wider target community. In practical terms, this involves about 500 people to be trained over a 12 month period, starting in September 2019.


Because of the numbers involved, this course will be presented by recently trained colleagues who will gain the support of more experienced faculty through a short educational provider course and ongoing mentorship. This training provision is due to start in Summer 2019 and the course will include a number of elements:

  • pre-course reading drawn from the course handbook for life support instructors (Bullock et al, 2015)
  • pre-course web-based reading specific to the planned teaching modalities (Davis, 2019)
  • access to an online forum comprising all members of the BTBB faculty community
  • one day course addressing specific teaching modalities related to the modified human factors course
  • instructor “apprenticeships” comprising a staged engagement with the human factors course and supported by ongoing mentoring from the development team

This phase of the programme will run into 2020, by which time all Band 6 midwives and associated medical trainees will have attended the revised one-day Human Factors course. It is expected that by the end of this phase, and with the support of the originating group of instructors, the instructor candidates will offer the courses at the local level.


As part of this training provision, all instructor apprentices will also be exposed to the concept of “participant observer” and will be encouraged to look out for, and briefly report on incidents of relevance in their day to day practice. As participant observers, they will record perceptions of events (suitably anonymised) that have resonances for the aims of the programme and will contribute towards the design of a possible work-based and in situ phase 3 of the programme which the authors will bid for in 2020 and beyond.


It is clear from the evidence we have gained access to that this has been a successful intervention, subject to a number of partially expected limitations.


Participants valued the programme highly and showed considerable enthusiasm for rolling the programme out to junior colleagues. A number of them expressed their willingness to engage in that process, and they will be given appropriate support to enable them to do this.


The challenge of culture change is recognised and all informal evidence points to enthusiasm for developing strategies to enable good practice to be more firmly embedded in day to day management of maternity care. Making steps in this direction would be a productive outcome of any second phase, allowing for fuller implementation of culture change initiatives into the medium-term future.

Take Home Messages

  • It is both desirable and possible to provide basic instruction in human factors issues that is well received and understood.
  • Specific effort is required to roll out provision so that the culture can change to accommodate new ways of thinking in the wider community.
  • While immediate feedback on the impact of a course is very easy to obtain, there are real challenges in gaining access to insights into the impact on organisations.

Notes On Contributors

Alison Gale is a founding member of the BTBB Consortium, conceived the HF training course, and co-designed, produced and delivered it.


Jacky Hanson is also a founding member of the BTBB Consortium, conceived the HF training course, and co-designed, produced and delivered it.


Catharina Schram is chair and founding member of the BTBB consortium, leading the procurement of funding, strategy and development of the programme.


Mike Davis has extensive experience of medical educator training course design, presentation and evaluation and has supported the development of the BTBB programme through evaluation of Phase 1 and planning and presentation of Phase 2.


The authors would like to acknowledge the work of:

Dr Jeremy Ward, consultant Surgeon at LTHTR.

Dr Louise Graham, Head of Organisational Development LTHTR.

Dr Amanda Bellis, consultant Obstetrician and Gynaecologist at LTHTR.


all of whom contributed to the success of the BTBB Phase 1 training courses.


Abdeirahman, A. and Murnaghan, M. (2013) ‘Practical Obstetric Multi-Professional Training course’, BMJ.

Better Births Consortium Strategy (2018) (Internal document).

Bullock, I., Davis, M., Lockey, A. and Mackway-Jones, K. (2016) Pocket guide to teaching for clinical instructors (3rd edition). Oxford: Wiley-Blackwell.


Davis M. (2019) BTBB Instructor Course: pre-course reading for instructor candidates (internal document).

Department of Health. (2015) 'Culture change in the NHS: applying the lessons of the Francis inquiries' Available at: (Accessed: 9th November 2018).

Festinger, L. (1957) A theory of cognitive dissonance. California: Stanford University Press.


Health Education England (2017) BTBB Strategy Document.


Kirkpatrick, J. and Kirkpatrick, W. (2009) 'The Kirkpatrick four levels 1959-2009'. Available at: (Accessed: 13th November 2018).


Kruger, J. and Dunning, D. (1999) ‘Unskilled and unaware of it: How difficulties in recognizing one's own incompetence lead to inflated self-assessments’, Journal of Personality and Social Psychology, 77(6), 1121-1134.


Portela, M.C., Pronovost, P.J., Woodcock, T., Carter, P., et al. (2015) ‘How to study improvement interventions: a brief overview of possible study types’, Postgraduate Medical Journal 91, 343-354.


Rick, T. (2015) 'Changing organizational culture is daunting'. Available at: (Accessed: 9th November 2018).


Stephenson, R. and Moore, D. (2018) 'Ascent to the summit of the CME pyramid', JAMA, 319(6), 543-544.


Weiss, C.H. (1972) Evaluation Research: Methods of Assessing Program Effectiveness. Englewood Cliffs, N.J.: Prentice-Hall.


Yardley, S. and Dornan, T. (2012) ‘Kirkpatrick’s levels and education evidence’, Medical Education, 46(1), 97-106.




There are no conflicts of interest.
This has been published under the Creative Commons licence "CC BY-SA 4.0".

Ethics Statement

Details of the project were submitted to the Centre for Health Research and Evaluation at LTHTR, which advised that the project would be considered a "Service Evaluation" and would not require ethics approval (SE215, dated 9th February 2018).

External Funding

The Consortium received a £200,000 bursary from HEE Maternity Safety Training Fund from which LTHTR received £50,000 to cover the human factors training element.



Ken Masters - (22/03/2020) Panel Member
The paper deals with a training intervention designed to reduce birthing complications. While the intervention appears to have been well done, the structure and presentation of the material, and the large amount of missing crucial information, make a real assessment difficult.

Some of the most important issues that need to be addressed are:

• The Introduction to the paper speaks of the goal to reduce stillbirths, neonatal deaths, maternal deaths and brain injuries by 50%. For people outside the UK, it would be useful if raw figures could be given, so that we could have a clearer picture of the scale of the problem being tackled.

• The Methods section suffers from two major challenges that need to be addressed:
o Much of the Methods is interspersed with information that forms the Background, and should go into the Introduction.
o There is no indication of the scale of the intervention: how many sessions, how many participants, demographics, etc. Because of this, the information given in the Results cannot really be evaluated.

• Similarly, in the Results, the n is not given, and there is no data analysis based on demographics/professions/sites, so the reader cannot grasp the significance of the results. Only in response to one of the open questions are we given an n of 125, but we do not know how that figure relates to the overall number.

• From the scores given in Table 1, a reader can probably infer that it was a Likert scale of 1-4, but there is no indication of category labels, and it really is dangerous to have the readers infer this information. (It also appears that the question “I have concerns about how colleagues work effectively in teams" should be part of Table 1, rather than sitting by itself).

• While the number of responses to the open questions is interesting, simply giving the number of mentions is not enough. The paper needs to present coherent themes (the snippets are sometimes presented as themes, but even then, “Stress” or “Resilience” without an explanation is not a theme), and then give example quotations supporting those themes. (There is the start of this in response to Q3, but not much elsewhere.)

• Again, with the focus group/s, the paper tells only the final number of participants. There is no information about how many groups, their demographic/professional composition, how they were run, for how long, the questions used, the structure, etc. It appears that there may have been only one focus group, but the phrasing “…questionnaire, followed by focus group, and….” and the term “focus group interviews” leaves the reader with little indication of what happened. Similarly, the resultant themes need to be far more formally presented.

It is really only after these issues have been addressed that the validity of the Discussion and the Conclusions can be assessed. For example, the issue of “Claimed high levels of knowledge vs having gained new knowledge and understanding” is interesting, but can only really be evaluated as an issue when we see data that compares responses and then looks for correlations with p-values and other measures of significance, or, if these are to be drawn from the qualitative data, supporting quotations in the qualitative data.

So, while the intervention appears to have been done well, I'm afraid that the researchers have done themselves a grave disservice in the presentation of their work. I look forward to a Version 2 in which the issues are addressed.

Possible Conflict of Interest:

For transparency, I am an Associate Editor of MedEdPublish.

Trevor Gibbs - (15/01/2020) Panel Member
A very interesting paper to read that cuts across many aspects of research, organisational management and clinical practice. I got the feeling that this is still work in progress, but can equally see that change has occurred or will occur as a result of this project. A major opinion from this paper that rings true for much of today's research is the need for higher-level Kirkpatrick evaluation, and how difficult that higher level is to reach.
I would hope that the researchers carry on with this project and I look forward to reading about its further development. In the meantime I would recommend the paper to all those involved in hospital and clinical management systems.
Possible Conflict of Interest:

For transparency, I am one of the Associate Editors of MedEdPublish