Research article
Open Access

Medical students’ understanding of cost effectiveness in feedback delivery

Fahd Mahmood[1], David Hope[2], Helen Cameron[3]

Institution: 1. Golden Jubilee National Hospital, Clydebank, 2. Centre for Medical Education, University of Edinburgh, 3. Aston Medical School, Aston University, Birmingham
Corresponding Author: Prof Helen Cameron ([email protected])
Categories: Assessment, Educational Strategies, Students/Trainees, Curriculum Evaluation/Quality Assurance/Accreditation, Undergraduate/Graduate
Published Date: 08/02/2019

Abstract

Introduction

Feedback is an important influence on student achievement, yet students report that it is lacking in both quantity and quality, and there is an unexplained mismatch between students’ and staff’s perceptions of the adequacy of the feedback offered. Despite the financial constraints on Higher Education providers, there is little evidence about students’ understanding of the costs involved in delivering educational resources. We therefore investigated students’ views on feedback, focusing our analysis on feasibility and cost effectiveness.

Methods

An online questionnaire was delivered to students in the first, third and fifth (final) years of a UK undergraduate medical programme over two academic years. Students were asked to identify the ‘main problem’ with feedback and any positive aspects of feedback. A thematic analysis of the responses was undertaken.

Results

A total of 690 responses were received, representing a 38.3% response rate. A number of themes were identified: students highlighted areas for improvement across many facets of feedback delivery, often focusing on the number of opportunities for feedback, and made numerous suggestions for change. There was little acknowledgement of the resources required to improve feedback delivery and no mention of the cost implications of making such improvements.

Conclusions

Students appear unaware of the practical aspects of assessment and feedback delivery and, as a result, make suggestions that are very demanding within the resource constraints of contemporary higher education. Students believe these requests are reasonable and are therefore likely to become frustrated when they are not fulfilled. Engaging students in the discourse of cost effectiveness may enable students and staff to develop a shared vision for feasible improvements to feedback.

Keywords: feedback; medical education; cost effectiveness; medical students; resource limitations

Introduction

Feedback is an important influence on student educational achievement, similar in effect size to direct instruction, prior cognitive ability and reciprocal teaching (Hattie 1999). Kluger and DeNisi (1996) note that while feedback typically has a moderately positive effect on performance, poorly delivered interventions can reduce student achievement: in their meta-analysis, 38% of all attempted interventions negatively impacted student performance. Ten Cate (2013) argues that such negative effects occur because feedback interferes with feelings of competence, which, along with autonomy and a sense of belonging or relatedness, are necessary for the maintenance of intrinsic motivation according to self-determination theory (Deci 1971). The ideal feedback cycle therefore requires more than an adept teacher who provides targeted feedback in a considered manner. Student commentary on feedback provision is critical in allowing the development of an effective feedback system (Seldin 1989), and there is increasing emphasis on students participating in curriculum development as part of a wider drive towards accountability and quality improvement in higher education (Hendry & Dean 2002). However, there is conflicting evidence about the effectiveness of student reports in improving teaching (Cohen 1980; Kember et al. 2002). We therefore need to evaluate periodically how effectively we are using those student contributions, and consider how to build a stronger partnership and shared vision so that these contributions lead to better outcomes.

 

Students view feedback delivery as poor and lacking in quantity (Liberman et al. 2005; Duffield & Spencer 2002). Feedback is often regarded as inconsistent (Bevan & Badge 2008), late and irrelevant (Gil et al. 1984). Doan (2013, p. 6) found that 64% of 206 students surveyed agreed that ‘Students are more interested in their grade and pay little attention to feedback,’ suggesting that students’ views on feedback are unlikely to be grounded in the pedagogical aspects of education. Delivering feedback that is consistent, timely and relevant to every student is beyond most medical schools in all but the most exceptional cases (Gibbs & Simpson 2004).

Students are rarely made aware of the pedagogical and practical aspects of feedback, including the often contrasting definitions and purposes of feedback found in the literature (Ende 1983; Hattie & Timperley 2007). Students must therefore use ad hoc definitions (Scott 2014) that differ from those of experts. Despite this, they are asked to make sweeping judgments on feedback in medical education.

Reducing costs is a priority in medical education (Altbach et al. 2010, p. xii), despite recognition that there is already insufficient resource available for its provision (Lowry 1992). Prystowsky and Bordage (2001) highlight the importance of considering cost in the current era of ‘cost containment’, with students perceiving themselves as customers (Finney & Finney 2010) of the university ‘business’ (Knapp & Siegel 2009). Levin (2001) notes that cost effectiveness in education is understudied and suggests this is because of a lack of both supply – clinical educators are not skilled at performing cost effectiveness analyses – and demand – policymakers are uninterested in the outcomes of cost effectiveness analyses, relying instead on their own discretion. Although factors such as student performance and satisfaction are commonly investigated in the literature, cost is considered as an outcome measure in only approximately 2% of studies (Prystowsky & Bordage 2001; Zendejas et al. 2013). There is limited evidence that students understand or appreciate the costs involved in developing and delivering educational resource; indeed, Taplin et al. (2013) found that fewer than half of students were willing to pay $5 to download digital versions of lectures for an entire course unit.

Students’ views on feedback development are strongly at odds with those of their educators (Gil et al. 1984). They request interventions that educators view as unfeasible (Carless 2006), are unaware of the extensive pre-existing literature surrounding feedback (Scott 2014), and have no evident mechanism to meaningfully appraise cost when making their judgments. Added to the subjective interpretation of many factors around assessment and feedback (O’Donovan et al. 2004), it is not surprising that student proposals are so at odds with those suggested by educators.

Students’ limited view of relevant factors interferes with their ability to engage constructively with the feedback delivery process. Notably, student feedback is biased by numerous factors, including the grade received (Zabaleta 2007), the ease of the topic, and the teacher’s attractiveness or entertainment value (Davison & Price 2009). ‘Survey fatigue’ can follow repeated requests for feedback; students are more likely to engage with such surveys if they feel their contributions are making a significant difference (Porter et al. 2004). Furthermore, there is little consensus on what questions to ask of students (Aleamoni & Spencer 1973; Coffey & Gibbs 2010; Spencer & Aleamoni 1970), as well as concern regarding the validity of existing questionnaires (Kember & Leung 2008).

So although student contributions to improving feedback are important and should be prioritised (Seldin 1989), there are significant challenges. Students need to better understand feedback – both its principles and its practicalities – to make those contributions as constructive and useful as possible. In order to better understand students’ perspectives, we investigated their views on feedback, focusing our analysis on feasibility and cost effectiveness, with the intention of identifying potential training opportunities for students and ultimately promoting better engagement with feedback improvement.

Methods

Methodology

We conducted a qualitative study using a phenomenology-based approach, seeking to understand the lived experiences of the students and to identify positive and negative views. We adopted the transcendental phenomenology approach described by Moustakas (1994), which focuses on the experiences of the research participants in an attempt not to colour them with the investigator’s views. Moustakas describes key steps including:

  • Identifying a phenomenon
  • Collecting data through broad, open-ended questions
  • Analysing the data and reducing it into significant statements or themes
  • Further analysis to convey an essence of the subjects’ experiences.

Ethical approval for the study was obtained from the relevant University Student Ethics Committee.

 

Study design

An online questionnaire was delivered through the students’ virtual learning environment. The questionnaire comprised 80 questions in a mixture of formats, including a personality inventory (Goldberg 1992) and a questionnaire investigating student locus of control. The final component asked students:

  • “Could you please summarise in your own words what you see as the main problem with feedback to students?”
  • “Please summarise in your own words what you think is good about the feedback you receive.”

This study focuses on these two items only; the remainder of the questionnaire was reviewed as part of a larger study. The questionnaire was offered to medical students in the first, third and final years of the 2010/11 and 2011/12 academic sessions of a UK MBChB programme. Answers to the second question, regarding positive aspects, were only available for the second academic year, as that question was added to the questionnaire in 2011/12. We thus gathered information from approximately six cohorts (three year groups across two academic sessions), with a small number of possible duplicate responses due to students repeating a year.

 

Analysis

Responses were imported into QSR NVivo 10 (QSR International 2012), which was used to code the data. Prior to coding, the dataset was read through in its entirety to allow familiarisation with, and an understanding of, the context of the responses.

Coding followed the approach described by Boyatzis (1998), in which an important comment is identified and coded as such prior to interpretation. A properly defined code captures the “qualitative richness of the phenomenon” (Boyatzis 1998, p. 31). Coding was undertaken line by line, allowing broad themes to emerge from comments that featured repeatedly. A further round of coding was then undertaken by reviewing the full dataset.

Themes were split into two broad categories, as shown in Table 1. Having identified positive and negative themes, a number of core concepts were developed.

Results/Analysis

A response rate of 38.3% was recorded across both questions, with 453 responses received from a potential 1184 respondents. Responses to the first question averaged 62 words, whilst responses to the second averaged 27 words.
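For transparency, the quoted rate follows directly from these figures: 453 / 1184 ≈ 0.383, i.e. a response rate of 38.3%.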

A total of 11 themes were identified. The themes were classified into practical and theoretical types, as illustrated in Table 1. In responding to the question regarding the ‘main problem’ with feedback, students made numerous suggestions on how to improve feedback, and these were coded as a separate theme to allow for review.

We chose to focus on the practical themes since they provided insight into students’ appreciation of cost effectiveness and practical means of improving feedback delivery. 

Table 1 - Theme categories

Practical:

  • Quantity
  • Interactivity
  • Timeliness
  • Reasons for limitations
  • Suggestions for improvement

Theoretical:

  • Detail
  • Consistency
  • Direction
  • Individualisation
  • Exams
  • Need for support

 

Quantity

Some students suggested they had received no feedback at medical school. A number felt that quantity was dependent on the module they were studying. This response was typical:

“The lack of information regarding our performance throughout the year can be frustrating, as it feels as though we're going into an exam having no idea how we're doing.”

Students admitted to feeling frustrated, suggesting that ‘adequate’ feedback could significantly improve their learning experiences. They indicated that they felt more resources were required to allow feedback delivery; their comments did not acknowledge or address the costs involved.

 

Timeliness

Some students identified timeliness as the primary problem, suggesting that by the time feedback was received, they had either forgotten the assignment in question or already completed the assessment for the block, rendering the feedback useless:

“Feedback is not timely enough; by the time feedback arrives the exam/essay is no longer fresh so often there is a lack of motivation to go back and improve”

There was no evidence that students understood the reasons for the timing of feedback, or the staff resource required to deliver immediate feedback.

 

Interactivity

Although this theme did not feature as heavily as others, it was raised by a number of students in all years. Students described being unable to ‘go through problems’ or ‘understand why the marker has found the work poor or excellent.’ One described the occasional opportunity to discuss feedback with a teacher as ‘invaluable.’ Others described being unsure as to whom to consult when seeking feedback, as in this comment:

“Feedback is woefully underprovided and there is rarely indication of where to go for additional feedback, I feel there are barriers to asking directly for feedback as most of the time I never even know who has marked my work!”

The students’ comments did not refer to the resources and logistics required to put such a scheme into operation, with hundreds of tutors spread geographically and approximately 250 students in each cohort.

 

Reasons for limitations

This theme was explored to ascertain what students perceived to be the limitations on delivering high quality feedback, allowing a contrast with educationalists’ perceptions. One theory was that the ratio of students to staff was too high to allow tutors to familiarise themselves with students. Others suggested that the large number of assignments left tutors insufficient time to provide quality feedback. One particularly interesting comment highlighted that tutors were not incentivised to provide good feedback:

“It’s not "invested" so there is no joint incentive to be successful, the tutors have no interest in improving you and so the feedback is inaccurate and often not well thought through. The times when work is done together e.g. work for publication with tutors, you receive real feedback and develop quickly.”

In a similar vein, students called for accountability from tutors providing feedback, critiquing the lack of standardisation in the marking of their work. This issue was further compounded by the difficulty of identifying who had provided the feedback.

Some responses showed insight into difficulties surrounding feedback provision, acknowledging that it was important not to provide answers to part of a year group when another group had yet to sit an assessment. Others commented that there were insufficient assignments to obtain feedback.

 

Suggestions for improvement

Most student suggestions were quantitative rather than qualitative in nature, suggesting that it is the quantity of feedback provision which students feel is lacking. A large proportion of these suggestions related to universal access to feedback, for example by having a tutor available to discuss one’s performance. One comment suggested regular timetabled sessions for individual feedback, perhaps with a director of studies.

Some students suggested that large group sizes caused problems and that reducing groups to two or three students would be beneficial. Another student opined:

“The reasons given for not having better feedback are either because of "technical difficulties in doing so" or "time/staff issues" which in my opinion could very easily be overcome.”

Others indicated that staff should be held accountable for providing quality feedback, perhaps by providing tutors with a feedback proforma or minimum feedback requirements. An alternative suggestion was to double mark all assignments. In addition, students requested feedback on both weaknesses and strengths, and one student wished to have open access to all data regarding their performance. No comments mentioned the cost implications of enacting such requests.

These views were summarised well by one student, who stated:

“Good feedback is given soon enough after the assessment as to be relevant and inform practice while memories are fresh; is preferably given face-to-face; is comprehensive (e.g. a paragraph or two of written points); and discusses a number of strengths and weaknesses, and how to improve for next time.”

Discussion

The responses from students demonstrate limited engagement with the practical aspects of assessment and feedback delivery at medical school, and limited understanding of the context in which feedback is delivered. Some comments suggest that students feel problems related to feedback are ‘easily surmountable.’ It is recognised (Altbach et al. 2010) that higher education institutions worldwide are subject to financial pressures, leading to larger class sizes and the employment of part-time faculty instead of full-time academic staff. As the cost burden of higher education shifts from governments to students, many universities are being forced to diversify their revenue streams to ensure their financial survival (Johnstone 2004). Yet most student suggestions require substantial additional expenditure, would be impossible with typical staffing levels, or would require individual members of staff to develop expertise across multiple domains. There is little evidence that students are aware of the logistical difficulties of their suggestions within current budget limitations. They believe their requests are reasonable and are inevitably frustrated when they are not fulfilled. If medical schools do not raise universal resource and cost issues with students, such as the significant non-modifiable costs surrounding assessment processes (Reznick et al. 1993; Brown et al. 2015), it is not surprising that students’ lack of understanding leads to frustration.

Students displayed perceptive insight into reasons why feedback provision might be difficult – a few commented that the ratio of students to staff was too high, limiting tutors’ ability to familiarise themselves with individual students. However, there were no comments on why it might be challenging to reduce that ratio, especially in terms of cost.

Some suggestions, such as providing a proforma for feedback or tutor guidance, are already normal practice at the institution. Since students have no knowledge of their tutors’ job plans or commitments, they have limited understanding of the competing demands on tutors’ time beyond teaching (Cotten & Wilson 2006), whether in clinical care or administrative work. Teaching staff may therefore view students’ requests for more teaching time as unreasonable, given that the bulk of their contracted time is dedicated to non-teaching activities.

The increase in student to staff ratios (Parliamentary Select Committee on Education and Employment 2001), coupled with the use of part-time teaching staff (Altbach et al. 2010), has also made it difficult for tutors to familiarise themselves with their students. As such, tutors see little to no return on their investment in feedback: they are unlikely to encounter these students once their attachments are complete and cannot develop meaningful relationships with them, leaving students feeling disenfranchised (Watson 1999). One particularly insightful student comment noted that the quality of feedback received improved significantly when student and tutor had a shared goal, such as a journal publication. Giving tutors information on the return on their investment in feedback through anonymised student results, comparing tutor performance across taught units, or allowing students longer attachments with individual tutors to build familiarity may encourage tutors to deliver more meaningful and effective feedback. There is evidence of such efforts in the form of nominating individual tutors for excellence awards (Thompson & Zaitseva 2012), but such awards reward a limited number of tutors and rely on motivated students to nominate them.

O’Donovan et al. (2004) suggest that for students to develop a meaningful understanding of standards and criteria, they need to engage with or use those standards. Previous work (Gil et al. 1984; Duffield & Spencer 2002; Bevan & Badge 2008) has identified that students often find feedback to be lacking in quantity, in contrast to the opinions of staff members. We have expanded on these findings by asking students for constructive criticism with a view to improving feedback delivery. In the process, we have identified gaps in students’ knowledge of the current context of higher and medical education, which ultimately limit their ability to provide workable solutions.

 

Strengths and limitations

A number of factors add to the credibility and reliability of the study, including the collection of data from three separate year groups over consecutive years, resulting in a large dataset. Triangulation of responses across different year groups also adds to reliability. A large number of responses was obtained, although the overall response rate for these items was slightly below 40% of the population sampled. It should be noted, however, that the data collected here are free-text responses, requiring more time and thought than Likert-scale answers, and this may have lowered the response rate.

There are some limitations to this work. It was a single-centre study based on a single programme, within a specific context with respect to fees and the funding of higher education, and these factors may limit transferability. However, our focus on evaluating students’ perspectives on feedback delivery is broadly transferable across higher education, and students have been found to make similar requests in other areas of research (Doan 2013). The data were not linked to candidate performance, which would have allowed a more detailed analysis. As participation was voluntary, response bias may also have had an influence.

Reflexivity

We have attempted to overcome our own biases arising from our experiences as both students and teachers. Nevertheless, we have undoubtedly presented our own perspectives, and it is quite possible that these differ from what the students intended or from what others might find. The transferability of the findings can also be debated: some of the problems and potential solutions described will be familiar to medical schools internationally; others may not be equally applicable.

Conclusion

Students are dissatisfied with many aspects of feedback and make suggestions for change. However, they fail to demonstrate awareness of costs, which in turn limits their engagement with cost effectiveness as a key criterion for good feedback. If students lack this understanding, developments in feedback may ignore local student commentary and suggestions, or be poorly implemented due to resource limitations.

Medical schools should evaluate the cost effectiveness of their approaches to feedback, identify the local determinants of cost-effective feedback and train tutors in the most effective methods.

Future work should explore the ways in which tutors are incentivised to invest in their students, including by offering high quality individual feedback, and the impact such investment may have on students’ perceptions of feedback as well as their academic performance.

Staff need to help students develop their assessment literacy and, in particular, to engage them in the discourse of cost effectiveness in approaches to feedback, in order to develop students as informed partners in their own education and to create a shared vision of affordable, effective and enjoyable education.

Take Home Messages

  1. There is little evidence that students consider cost effectiveness when critiquing feedback.
  2. Students’ suggestions for improving feedback are sometimes logistically difficult to implement.
  3. Medical schools should develop means of measuring the cost effectiveness of their educational interventions.
  4. Medical schools should engage students in discussions around cost effectiveness.
  5. Future research should explore incentivising tutors to invest in their students.

Notes On Contributors

Mr Fahd Mahmood is a Specialist Registrar in Trauma & Orthopaedics and holds an MSc in Clinical Education.

Dr David Hope is a psychometrician whose work includes investigating the academic and personal correlates of feedback satisfaction.

Professor Helen Cameron is Dean of Medical Education at Aston University. She is particularly interested in assessment that encourages and supports effective learning. 

Acknowledgements

We would like to acknowledge and thank all the students who completed our questionnaires and provided us with insights about how to improve educational experiences in our programme, and colleagues who helped with the creation and distribution of the questionnaire.

Bibliography/References

Aleamoni, L. M. and Spencer, R. E. (1973) ‘The Illinois Course Evaluation Questionnaire: a Description of Its Development and a Report of Some of Its Results’, Educational and Psychological Measurement. SAGE Publications, 33(3), pp. 669–684. https://doi.org/10.1177/001316447303300316

Altbach, P. G., Reisberg, L. and Rumbley, L. (2010) Trends in Global Higher Education: Tracking an Academic Revolution. UNESCO Publishing.

Bevan, R. and Badge, J. (2008) ‘Seeing eye-to-eye? Staff and student views on feedback’, Bioscience Education, 12(1), pp. 1–15. https://doi.org/10.3108/beej.12.1

Boyatzis, R. E. (1998) Transforming Qualitative Information: Thematic Analysis and Code Development. SAGE Publications.

Brown, C., Ross, S., Cleland, J. and Walsh, K. (2015) ‘Money makes the (medical assessment) world go round: The cost of components of a summative final year Objective Structured Clinical Examination (OSCE)’, Medical Teacher, 37(7), pp. 653–659. https://doi.org/10.3109/0142159X.2015.1033389

Carless, D. (2006) ‘Differing perceptions in the feedback process’, Studies in Higher Education, 31(2), pp. 219–233. https://doi.org/10.1080/03075070600572132

ten Cate, O. T. J. (2013) ‘Why receiving feedback collides with self determination.’, Advances in health sciences education: theory and practice, 18(4), pp. 845–9. https://doi.org/10.1007/s10459-012-9401-0

Coffey, M. and Gibbs, G. (2010) ‘The Evaluation of the Student Evaluation of Educational Quality Questionnaire (SEEQ) in UK Higher Education’, Assessment & Evaluation in Higher Education. Taylor & Francis Group, 26(1), pp. 89–93. https://doi.org/10.1080/02602930020022318

Cohen, P. A. (1980) ‘Effectiveness of student-rating feedback for improving college instruction: A meta-analysis of findings’, Research in Higher Education. Kluwer Academic Publishers, 13(4), pp. 321–341. https://doi.org/10.1007/BF00976252

Cotten, S. R. and Wilson, B. (2006) ‘Student–faculty Interactions: Dynamics and Determinants’, Higher Education. Kluwer Academic Publishers, 51(4), pp. 487–519. https://doi.org/10.1007/s10734-004-1705-4

Davison, E. and Price, J. (2009) ‘How do we rate? An evaluation of online student evaluations’, Assessment & Evaluation in Higher Education. Routledge, 34(1), pp. 51–65. https://doi.org/10.1080/02602930801895695

Deci, E. L. (1971) ‘Effects of externally mediated rewards on intrinsic motivation.’, Journal of Personality and Social Psychology, 18(1), pp. 105–115. https://doi.org/10.1037/h0030644

Doan, L. (2013) ‘Is Feedback a Waste of Time? The Students’ Perspective’, Journal of Perspectives in Applied Academic Practice, 1(2), pp. 3–10. https://doi.org/10.14297/jpaap.v1i2.69

Duffield, K. E. and Spencer, J. A. (2002) ‘A survey of medical students’ views about the purposes and fairness of assessment.’, Medical education, 36, pp. 879–886. https://doi.org/10.1046/j.1365-2923.2002.01291.x

Ende, J. (1983) ‘Feedback in clinical medical education’, JAMA: the journal of the American Medical Association, 250(6), pp. 777–781. https://doi.org/10.1001/jama.1983.03340060055026

Finney, T. and Finney, R. (2010) ‘Are students their universities’ customers? An exploratory study’, Education + Training. Emerald Group Publishing Limited, 52(4), pp. 276–291. https://doi.org/10.1108/00400911011050954

Gibbs, G. and Simpson, C. (2004) ‘Conditions Under Which Assessment Supports Students’ Learning’, Learning and Teaching in Higher Education, (1), pp. 3–31.

Gil, D. H., Heins, M. and Jones, P. B. (1984) ‘Perceptions of medical school faculty members and students on clinical clerkship feedback.’, Journal of medical education, 59(11 Pt 1), pp. 856–64.

Goldberg, L. R. (1992) ‘The development of markers for the Big-Five factor structure.’, Psychological Assessment, 4(1), pp. 26–42. https://doi.org/10.1037/1040-3590.4.1.26

Hattie, J. (1999) Influences on student learning (Inaugural professorial address, University of Auckland, New Zealand). Available at: http://www.education.auckland.ac.nz/webdav/site/education/shared/hattie/docs/influences-on-student-learning.pdf (Accessed: 4 January 2014).

Hattie, J. and Timperley, H. (2007) ‘The Power of Feedback’, Review of Educational Research, 77(1), pp. 81–112. https://doi.org/10.3102/003465430298487

Hendry, G. D. and Dean, S. J. (2002) ‘Accountability, evaluation of teaching and expertise in higher education’, International Journal for Academic Development. Taylor & Francis, 7(1), pp. 75–82. https://doi.org/10.1080/13601440210156493

Johnstone, D. B. (2004) ‘The economics and politics of cost sharing in higher education: comparative perspectives’, Economics of Education Review, 23(4), pp. 403–410. https://doi.org/10.1016/j.econedurev.2003.09.004

Kember, D. and Leung, D. Y. P. (2008) ‘Establishing the validity and reliability of course evaluation questionnaires’, Assessment & Evaluation in Higher Education. Routledge, 33(4), pp. 341–353. https://doi.org/10.1080/02602930701563070

Kember, D., Leung, D. Y. P. and Kwan, K. P. (2002) ‘Does the Use of Student Feedback Questionnaires Improve the Overall Quality of Teaching?’, Assessment & Evaluation in Higher Education. Taylor & Francis Group, 27(5), pp. 411–425. https://doi.org/10.1080/0260293022000009294

Kluger, A. N. and DeNisi, A. (1996) ‘The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory’, Psychological Bulletin, 119(2), pp. 254–284. https://doi.org/10.1037/0033-2909.119.2.254

Knapp, J. C. and Siegel, D. J. (2009) The Business of Higher Education. Praeger.

Levin, H. (2001) ‘Waiting for Godot: Cost-Effectiveness Analysis in Education’, New Directions for Evaluation. Jossey‐Bass, A Wiley Company, 2001(90), p. 55. https://doi.org/10.1002/ev.12

Liberman, S., Liberman, M., Steinert, Y., McLeod, P., et al. (2005) ‘Surgery residents and attending surgeons have different perceptions of feedback.’, Medical Teacher, 27(5), pp. 470–2. https://doi.org/10.1080/01421590500129183

Lowry, S. (1992) ‘What’s wrong with medical education in Britain?’, BMJ (Clinical research ed.). BMJ Publishing Group, 305(6864), pp. 1277–80. https://doi.org/10.1136/bmj.305.6864.1277

Moustakas, C. (1994) Phenomenological Research Methods. SAGE Publications, Inc. https://doi.org/10.4135/9781412995658

O’Donovan, B., Price, M. and Rust, C. (2004) ‘Know what I mean? Enhancing student understanding of assessment standards and criteria’, Teaching in Higher Education. Taylor and Francis Ltd, 9(3), pp. 325–335. https://doi.org/10.1080/1356251042000216642

Parliamentary Select Committee on Education and Employment (2001) Higher education: Student retention. London.

Porter, S. R., Whitcomb, M. E. and Weitzer, W. H. (2004) ‘Multiple surveys of students and survey fatigue’, New Directions for Institutional Research. Wiley Subscription Services, Inc., A Wiley Company, 2004(121), pp. 63–73. https://doi.org/10.1002/ir.101

Prystowsky, J. B. and Bordage, G. (2001) ‘An outcomes research perspective on medical education: the predominance of trainee assessment and satisfaction’, Medical Education. Blackwell Science Ltd, 35(4), pp. 331–336. https://doi.org/10.1046/j.1365-2923.2001.00910.x

QSR International (2012) QSR NVivo. Available at: http://www.qsrinternational.com/products_nvivo.aspx

Reznick, R. K., Smee, S., Baumber, J. S., Cohen, R., et al. (1993) ‘Guidelines for estimating the real cost of an objective structured clinical examination.’, Academic medicine : journal of the Association of American Medical Colleges, 68(7), pp. 513–7. https://doi.org/10.1097/00001888-199307000-00001

Scott, S. V. (2014) ‘Practising what we preach: towards a student-centred definition of feedback’, Teaching in Higher Education. Taylor & Francis, 19(1), pp. 49–57. https://doi.org/10.1080/13562517.2013.827639

Seldin, P. (1989) ‘Using student feedback to improve teaching’, New Directions for Teaching and Learning. Wiley Subscription Services, Inc., A Wiley Company, 1989(37), pp. 89–97. https://doi.org/10.1002/tl.37219893711

Spencer, R. E. and Aleamoni, L. M. (1970) ‘A student course evaluation questionnaire’, Journal of Educational Measurement. Blackwell Publishing Ltd, 7(3), pp. 209–210. https://doi.org/10.1111/j.1745-3984.1970.tb00718.x

Taplin, R. H., Kerr, R. and Brown, A. M. (2013) ‘Who pays for blended learning? A cost–benefit analysis’, The Internet and Higher Education, 18, pp. 61–68.

Thompson, S. and Zaitseva, E. (2012) Reward and Recognition: Student Led Teaching Awards Report. Higher Education Academy.

Watson, N. A. (1999) ‘Mentoring today--the students’ views. An investigative case study of pre-registration nursing students’ experiences and perceptions of mentoring in one theory/practice module of the Common Foundation Programme on a Project 2000 course.’, Journal of advanced nursing, 29(1), pp. 254–62. https://doi.org/10.1046/j.1365-2648.1999.00881.x

Zabaleta, F. (2007) ‘The use and misuse of student evaluations of teaching’, Teaching in Higher Education. Taylor & Francis Group, 12(1), pp. 55–76. https://doi.org/10.1080/13562510601102131

Zendejas, B., Wang, A. T., Brydges, R., Hamstra, S. J., et al. (2013) ‘Cost: The missing outcome in simulation-based medical education research: A systematic review’, Surgery, 153(2), pp. 160–176. https://doi.org/10.1016/j.surg.2012.06.025

Appendices

None.

Declarations

There are no conflicts of interest.
This has been published under Creative Commons "CC BY-SA 4.0" (https://creativecommons.org/licenses/by-sa/4.0/)

Ethics Statement

Ethical approval for the study was obtained from the relevant University of Edinburgh Student Ethics Committee: The College of Medicine and Veterinary Medicine Committee on the Use of Student Volunteers. At the time of approval the committee did not issue reference numbers.

External Funding

This paper has not had any External Funding

Reviews


Megan Anakin - (16/02/2019)

Thank you for inviting me to review this article. My review is intended to build on the feedback provided by the other two reviewers and to offer the authors some suggestions for improvement. While I appreciate the focus on students’ understanding of feedback, there are a number of issues that the authors should consider addressing to improve their article.

In the paragraph about “reducing costs” and “cost effectiveness”, there is a shift in the introduction’s focus away from describing student views of feedback. The authors briefly link students’ views about feedback development with students’ understanding of cost, to outline that students have a limited view of the relevant factors related to their education when offering constructive feedback. To enhance the introduction, the authors may wish to consider defining the terms ‘feasibility’ and ‘cost effectiveness’ and making the relationship among these concepts and student feedback stronger.

The aim of the study was stated as an examination of how students understand feedback; however, the analysis focused on examining the feasibility and cost-effectiveness of the positive and negative views described by students. It would be helpful to present the research question so that the link between the aim and the methods can be better understood by the reader.

In the data collection section, the authors note that a second question was added later in the data collection process. It would be helpful to know how many students provided responses to questions 1 and 2.

To increase the trustworthiness of the data analysis procedures, it would be helpful for the authors to describe who conducted each round of coding and what assumptions they brought with them to this process. The authors might also consider describing how they applied the concepts of feasibility and cost-effectiveness to analyse their data and what steps they took to develop the core concepts from the positive and negative themes.
It is customary when presenting findings from qualitative analysis to describe all of the themes to allow the reader to judge the appropriateness of the interpretation of the data.
The authors may wish to consider reporting all of their findings and supporting each theme with two or three representative quotations from participants. In this way, the reader will be able to decide if the theoretical themes are relevant to informing the research question or not.
The authors might also like to consider how the paragraph about reflexivity could be strengthened. The discussion about reflexivity might be enhanced if specific decisions that the researchers made in the design and execution of their project were described earlier in the article.

At the end of this article, I am wondering how the authors may have used the findings from this study to inform the feedback collection and use practices in their institution. The authors may want to consider looking at feedback from a systems point of view. The work of Liz Molloy and David Boud might stimulate their thinking (for example see: Boud D. Feedback: ensuring that it leads to enhanced learning. Clin Teach. 2015;12(1):3-7. Boud D, Molloy E (Eds.). Feedback in higher and professional education: understanding it and doing it well. Routledge; 2013.). I look forward to reading about the next steps the authors take in addressing students’ understanding of the feasibility and cost-effectiveness of their feedback.
Tan Nguyen - (10/02/2019)

This paper has a major limitation in its interpretation, as noted by the previous review. Data arising from only two questions cannot adequately inform us about students’ views on the cost of feedback systems and do not robustly support the Take Home Messages. The term “cost-effectiveness” is confusing, primarily because the cost of current feedback systems has not been stated, which would provide a reference for comparison. In addition, the effectiveness of feedback is very difficult to determine quantitatively. There is also likely cognitive dissonance between students and tutors over what constitutes good feedback.
Gerens Curnow - (08/02/2019)

Thank you for submitting this article on the topic of students’ perceptions of feedback, especially as it relates to the costs incurred in providing it. This paper is unique (as far as I can see) in that it aims to investigate learners’ awareness of the costs and practicalities involved in providing high-quality feedback.

While this paper may demonstrate the need to engage students more with the costs involved in providing feedback, I feel it is fundamentally flawed in that the two questions asked – i.e. what is the biggest problem with the feedback you receive, and what are the best bits about it – do not inherently lend themselves to students reporting the reasons why these problems may exist. A student who had studied Medical Education with a PhD thesis in feedback provision could conceivably answer both questions thoroughly without mentioning the potential limitations on the side of the organisation (i.e. cost, staffing etc.), since these are not explicitly, or even implicitly, asked about.

Further, I feel the study itself is written from a strongly biased position, not in keeping with the requirements of good qualitative research. The starting point clearly appears to be that feedback cannot be improved without incurring significant cost to the organisation. I don't feel this is acknowledged within the article, and it is not necessarily in keeping with the current literature (e.g. Bartlett, M., Crossley, J. and McKinley, R. (2016) http://eprints.whiterose.ac.uk/103779/).

Overall, I think this study will be interesting to those looking to understand student perceptions of feedback, but it does not strongly support the argument that students are unaware of the potential costs incurred in improving the quality and quantity of feedback.