Research article
Open Access

Pilot study on the impact of lectures and introduction of digital technologies in the knowledge acquisition in medical students

Sérgio Teles[1], Inês Leal[2], David Sousa[2], Carlos Marques-Neves[2], Luís Abegão Pinto[2]

Institution: 1. Faculty of Medicine of Lisbon University, 2. Vision Sciences Study Centre, Faculty of Medicine of Lisbon University
Corresponding Author: Mr Sérgio Teles ([email protected])
Categories: Assessment, Educational Strategies, Teachers/Trainers (including Faculty Development), Teaching and Learning
Published Date: 11/01/2019

Abstract

Introduction: Academic teaching has been present in society for centuries; however, its methodology has not significantly changed. Although various articles have been published analysing students' satisfaction with teaching methods, data on the knowledge improvement associated with different methodologies remain scarce. Therefore, the endpoints of this research were to 1) analyse the impact of Sophistic lectures on knowledge acquisition in medical students and 2) examine knowledge assessment using new digital technologies compared with a more traditional paper-based method.

Methods: A repeated measures design was implemented in four classes of 4th year medical students, lectured by the same teacher on the same subject. A scientifically validated questionnaire was applied before and after each class, on paper in two classes and through the web-based tool Sli.do in the other two. Results were compared by means of descriptive statistics.

Results: 55 answers were obtained on paper and 34 in Sli.do. The paper questionnaires had slightly lower scores before and after class (46% and 74.2%, respectively) than Sli.do (52.4% and 82.9%, respectively). Although basal scores differed between methods, both revealed a similar relative knowledge improvement over the respective baseline (61.3% vs. 58.2%, respectively).

Discussion & Conclusions: The results showed that Sophistic classes are effective for learning, independently of the evaluation method, which reaffirms that the task of the teacher is important and effective. This study supports the use of digital-based tools to assess learning in classes, since they are more time-efficient, more ecological and logistically easier. Finally, assessing the information that is effectively being taught to students has several benefits for teachers, the university and ultimately the students: digital storage of the collected data makes it possible to carry out more effective internal audits over time, allowing improvement of areas with lower results and benefitting the entire academic community.

Keywords: Teaching methodology; Technology; Knowledge acquisition.

Introduction

Academic teaching has been present in our society for centuries. However, its methodology has not significantly changed with time, keeping the same organization based on thematic oral exposure - the Sophistic method (Fry, Ketteridge and Marshall, 2009).

Though various articles have recently been published analysing the satisfaction of students with the teaching method implemented, which is undoubtedly an important aspect, there is scarce data on the level of knowledge improvement associated with different methodologies, which is even more crucial in education (Kim, Kim and Getman, 2014).

This lack of information stems from the difficulty of measuring it. One reason is that students are already subject to a myriad of academic evaluations, ranging from OSCEs and clinical case discussions to oral and written exams, which provide the basis for the final grades in each discipline. The main caveat of these evaluations, however, is that it is not known how much of the knowledge assessed at the end of the semester depends on individual study prior to the exam rather than on the knowledge transmitted in the classroom (Barr and Tagg, 2012). This is a major point that should interest universities and faculties, since a deeper insight into this problem can help monitor, and if needed identify, areas for improvement at the pedagogical level.

One way of studying the impact of a lecture on students' knowledge acquisition is with a validated questionnaire, presented before and after the lecture, analysing the variation. Nevertheless, traditional questionnaires are cumbersome, as they are usually paper-based, and the time-consuming task of administering such an assessment twice in a classroom has so far made it unfeasible in regular classes (Cohen, Manion and Morrison, 2005).

With this line of thought, we also tried to understand whether the introduction of new digital tools in the classroom stimulates learning, considering that some of the most important components of effective teaching are students' attention, curiosity, interest and motivation (Barkley, 2009). The technology implemented was the web-based tool Sli.do (Bratislava, Slovakia), which allows real-time interactive questionnaires.

Therefore, the endpoints of this research were to 1) analyse the impact of Sophistic lectures on knowledge acquisition in medical students and 2) examine whether the introduction of new digital technologies in the classroom yields better results in performance analysis when compared with a more traditional paper-based method.

Methods

This was a prospective interventional study involving a repeated measures design, performed at the Faculty of Medicine of the University of Lisbon during the 2017-2018 academic year.

For logistical reasons and to improve the teacher-student ratio, our university divides the approximately 320 students enrolled in the 4th year into four classes of 80 students each, spread throughout the academic year. The class subject chosen was glaucoma, because baseline knowledge was deemed to be basic among all students and thus more suitable for detecting changes. Both the teacher and the content of these four classes were the same.

We presented the anonymous questionnaire, before and after each class, in paper format to two classes of students and in Sli.do format to the other two, and compared the results.

To validate the web-based tool under study (Sli.do) and its accessibility for the target population, we first tested it with a method called Dummy Procedures in a group of Ophthalmology residents. Afterwards, we specifically evaluated its accessibility to the target population by presenting the questionnaire to a small group of 4th year medical students.

We applied a scientifically validated and already published questionnaire, which was also validated for this specific target population - medical students with Portuguese as their native language (Martins et al., 2014). The questionnaire consisted of 11 questions about the epidemiology, risk factors, symptoms, diagnosis, treatment and consequences of glaucoma, all topics covered during the class. Answers were scored from 0 to 100% according to their correctness.

The results were analysed by means of descriptive statistics. We were not able to perform more specific statistical tests, such as an analysis of variance (ANOVA), because Sli.do does not provide results disaggregated by individual respondent.

Results/Analysis

From a population of 320 medical students enrolled in the 4th year, only 109 attended these classes, representing an attendance rate of 34%.

Combining the results from the four classes used as the sample, spanning an entire academic year, we obtained 55 answers to the questionnaire on paper, out of 55 attending students, and 34 in Sli.do, out of 54 (100% and 63% response rate, respectively). The number of responses before and after class was the same for both methods, meaning there were no dropouts.

The combined results showed that the group who answered on paper had slightly lower scores before and after class (46% and 74.2%, respectively) compared with the group who answered through Sli.do (52.4% and 82.9%). Although basal scores differed between methods, both revealed a similar relative knowledge improvement over the respective baseline (61.3% vs. 58.2%) (Figure 1).
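For clarity, the relative improvement figures appear to follow a simple ratio of the gain to the pre-class score; a minimal formulation, assuming the reported percentages are the mean group scores, is:

\[
\text{Relative improvement} = \frac{\text{Score}_{\text{after}} - \text{Score}_{\text{before}}}{\text{Score}_{\text{before}}} \times 100\%
\]

For example, \((74.2 - 46)/46 \times 100\% \approx 61.3\%\) for the paper group and \((82.9 - 52.4)/52.4 \times 100\% \approx 58.2\%\) for the Sli.do group, matching the values reported above.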

 

Figure 1 - Questionnaire results before and after class, for each knowledge-assessment method.

 

As previously mentioned, the questionnaire consisted of eleven questions, two of which have no right or wrong answer: 1) "Do you have someone in the family with glaucoma?" and 11) "Do you consider the knowledge acquired during the medical course enough to recognize a possible glaucoma case?". The remaining 9 of the 11 were direct questions with one or more correct answers, adequately indicated.

The results obtained before and after class, and the knowledge variation observed (positive or negative) relative to the respective baseline for each question, are shown in Table 1.

 

Table 1 - Questionnaire results before and after class, and the knowledge variation (Δ) observed for each question

 

Question | Paper: Before class | Paper: After class | Paper: Δ | Sli.do: Before class | Sli.do: After class | Sli.do: Δ
2. What is the most prevalent cause of irreversible blindness in the world? | 28% | 46% | 64% | 30% | 70% | 133%
3. Blindness associated with glaucoma is: | 44% | 95% | 116% | 47% | 97% | 106%
4. What's the most common type of glaucoma? | 41% | 91% | 122% | 29% | 89% | 207%
5. Which of the following are causes of primary open-angle glaucoma? | 41% | 68% | 66% | 37% | 86% | 132%
6. Which of the following are major risk factors for primary open-angle glaucoma? | 48% | 54% | 13% | 55% | 56% | 2%
7. Which of the following are signs and symptoms of primary open-angle glaucoma? | 30% | 84% | 180% | 51% | 79% | 55%
8. Which are the most common exams in glaucoma? | 62% | 68% | 10% | 74% | 88% | 19%
9. How can glaucoma be treated? | 52% | 92% | 77% | 63% | 100% | 59%
10. When glaucoma's treatment is effective it promotes: | 69% | 68% | -3% | 84% | 82% | -3%

Questions 2, 3, 4, 5, 7 and 10 have only one right answer, so the improvement represents exactly the proportion of students who did not know the answer before the class and responded correctly afterwards. Questions 6, 8 and 9 have multiple right answers (adequately indicated), so the improvement shown corresponds to an average of the number of correct answers.
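As an illustrative check (not from the source), and assuming each Δ in Table 1 is computed relative to the pre-class score, the value for question 2 in the paper group follows as:

\[
\Delta = \frac{46\% - 28\%}{28\%} \times 100\% \approx 64\%
\]

which matches the entry in Table 1; the same calculation reproduces the remaining Δ values.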

Discussion

The results showed that classes, specifically those based on the Sophistic method (oral exposure of contents by the teacher), are effective for learning, independently of the method used to evaluate knowledge improvement, since both groups revealed a similar increase in global knowledge acquisition on the subject under study (approximately 60%) relative to the respective baseline. These results reaffirm that the task of the teacher is important and effective, and should encourage students to attend theoretical classes more frequently.

However, of the entire population of 4th year medical students enrolled at the Faculty of Medicine of the University of Lisbon, attendance at these optional lectures was only 34%, which unfortunately supports the premise that, nowadays, facultative academic lectures have low attendance rates (Kelly, 2012).

As can be seen in the results, there was higher participation in the classes using paper than in those using Sli.do (100% and 63% response rate, respectively). This can be explained by the fact that, in the classes that answered on paper, there was inevitably a face-to-face rapport between interviewer and interviewee, which creates a sense of personal responsibility and makes it less likely for the interviewee to actively decline participation. While inherently more time-consuming, the advantage of paper-based questionnaires seems to be a higher rate of responders, which decreases the risk of participation bias and thus makes the results more generalisable to the intended population.

On the other hand, web-based questionnaires, such as Sli.do, imply a masked approach in which the faceless invisibility provided by technology can create a selection bias, since it is more likely that only students who are more interested and keen to participate will reply. Furthermore, technical issues inevitably associated with any internet-based option, such as requiring a charged mobile phone or computer connected to the university wireless network or with mobile data, could have decreased response rates.

The baseline knowledge of the two groups was not very different, being slightly higher with Sli.do (46% vs. 52.4%). This does not necessarily mean that the classes given the paper questionnaires had less overall knowledge of glaucoma than the classes using Sli.do. More probably, it indicates that the facultative use of digital technologies, as previously mentioned, selected a more motivated group of students, who were therefore more likely to have prepared for the class beforehand. However, it is important to point out that being more motivated does not directly correspond to being more knowledgeable about the subject (Soper, 2017).

Interestingly, despite the absence of notable differences in results between methods, this study supports the use of digital-based tools to assess and increase knowledge transfer during classes.

Although a selection bias with the web-based method is possible, the almost complete overlap of results between the two approaches suggests that the smaller number of participants is not associated with different outcomes. Considering this, there are important factors that support the change to a more digital form of evaluation without jeopardising the final outcome: it is more ecological, as using digital tools saves paper (reducing deforestation); it is more time-efficient, as data are collected and analysed faster; and it is logistically easier in the long term, since although it involves an initial effort to create the questionnaire on the platform, it can be reused every semester without being created again. Accordingly, interpretation and, if necessary, fine-tuning of the pedagogical approach would be the same for both types of assessment.

Another aspect that supports this change, although less pronounced, is that even though the relative values of knowledge improvement were similar, the starting point of both populations must be taken into consideration: since basal scores were higher with Sli.do, and that population therefore had less room to improve, students who engage with the class through digital tools appear to benefit in terms of knowledge improvement.

There are various possibilities for teachers and universities to consider so as not to disregard the non-participating students associated with web-based methods, involving the whole class in this beneficial teaching. For instance, before a compulsory lesson, the attendance list that is traditionally taken by signature on paper could be replaced by a record that the student answered the questionnaire on the platform; alternatively, access to the grade could require the student to have answered the questionnaire. These are just two examples of how to encourage student participation, and each professor or academic committee should analyse and discuss the best way to implement it for their specific student population.

Nevertheless, we encourage investigators to design and implement new studies to understand more accurately the real value of knowledge improvement when the whole class is involved.

On another subject, using a web-based tool such as Sli.do to assess the information that is effectively being taught to the students has several benefits for the teachers themselves, the university and ultimately the students: it allows professors to find gaps in their teaching in order to amend them and improve their own skills, and it allows universities to carry out an internal evaluation of the performance of their teachers, which consequently benefits the students.

A specific example of this benefit was observed in question 10, as both methods revealed a regression in knowledge. In this question, students showed a regression in their confidence that glaucoma, when effectively treated, can prevent blindness, questioning whether their basal knowledge (which was high - 69% on paper vs. 84% in Sli.do) was wrong. Although the regression in the correctness of the answers was very small (69% to 68% on paper and 84% to 82% in Sli.do, a relative decline of 3% in both), it demonstrates that this specific information was not effectively passed on to the students - not because of the evaluation method, since the variation was equal, but because of some failure in communication or because it was not mentioned in class.

Another example of how these in-class questions can help detect miscommunication during the teaching process is given by the replies to question 6. For this question, in both sets of questionnaires, a large proportion of students selected an incorrect option after the lesson (incorrectly indicating hyperopia as a risk factor for open-angle glaucoma - rising from 7% to 42% on paper and from 3% to 37% in Sli.do).

In both cases, the questionnaires would have allowed the teacher to detect what was not being properly understood by the audience.

At the opposite end of the scale, these in-class questions also allow teachers to perceive good results during the teaching process, as can be seen in question 3. For this question, in both sets of questionnaires, students showed a large increase in the correctness of their answers after class compared with before class (44% to 95% on paper, a relative improvement of 116%, vs. 47% to 97% in Sli.do, a relative improvement of 106%).

Since the introduction of new technologies seems to stimulate learning, we asked what possible alternatives, besides interactive questionnaires, could also be implemented in classrooms to benefit learning. We considered many hypotheses, but the most inclusive and comprehensive was that a change in the teaching panorama, from the Sophistic method to Flipped Classrooms, could be positive for knowledge improvement (Tune, Sturek and Basile, 2013).

Flipped Classrooms invert the usual organisational structure of the classroom by providing educational tools and contents, such as recorded multimedia lectures, PowerPoint presentations or other digital documents, before class, so that students can view and study them outside the classroom and at their own pace. This asynchronous approach allows more in-class time for student-centred learning activities, encouraging participation and motivation through debates, presentations, questionnaires and other dynamics (Flaherty and Phillips, 2015).

We hope that, with this study, we have opened a door and encouraged other investigators to give more importance to teaching methods and to finding better ways of reaching students, so that in the future we can all benefit from their excellence.

On a different matter, during the literature search we had access to various scientifically validated questionnaires. However, most presented some limitation with respect to our objective: they were not in the mother tongue of the population under study (Portuguese) or were not designed for it, being more appropriate for patients and their knowledge of the disease. We therefore chose the validated questionnaire that best matched our requirements.

Although we did our best to minimise the limitations of this study by executing it in the most impartial, professional and correct way possible, it is important to point out some of the limitations we encountered, which are an incentive for further studies. First, although we gathered a reasonable number of students (a total of 89), a larger sample would be more representative of the population under study. Second, although the same professor lectured all four classes and prepared himself to deliver the best and most consistent lecture possible, it is still impossible to recreate exactly the same four 50-minute classes spread over a year, which might explain some of the results already discussed. Finally, as previously mentioned, we were not able to perform more specific statistical tests, such as an analysis of variance (ANOVA), because Sli.do does not provide results disaggregated by individual respondent. It was therefore not possible to evaluate the statistical significance of the results, which were analysed by means of descriptive statistics.

Acknowledging these limitations is a necessary step towards designing future studies that address these aspects and consequently develop knowledge on this subject, to the benefit of all.

Conclusion

Academic lectures based on the Sophistic method are effective for learning, independently of the method used to evaluate knowledge improvement, resulting in an increase in knowledge of approximately 60% relative to the baseline results.

Web-based tools as a method of evaluating knowledge acquisition provide results similar to those of a more classic paper-based method. Moreover, since this approach carries clear additional advantages, its implementation is supported.

Additionally, assessing the information that is effectively being taught to the students allows teachers to monitor and adapt their pedagogical methods and universities to carry out more effective internal audits over time, benefitting the entire academic community.

Take Home Messages

- Sophistic lectures are effective as a teaching method.

- The task of the professor is important and valuable in learning.

- Web-based tools, as a method of evaluating knowledge acquisition, seem to produce results similar to those of more traditional methods and bring additional advantages.

- This before-and-after knowledge assessment method is beneficial for the whole academic community.

Notes On Contributors

Sérgio Teles is a medical student at Faculty of Medicine of Lisbon University, Portugal.

Inês Leal is an Ophthalmology resident at Hospital de Santa Maria, Lisbon, Portugal. She is also an assistant professor of Ethics at the Faculty of Medicine of Lisbon University and a member of the Vision Sciences Study Centre, Lisbon, Portugal.

David Sousa is an Ophthalmology resident at Hospital de Santa Maria, Lisbon, Portugal. He is also a member of the Vision Sciences Study Centre, Lisbon, Portugal.

Professor Carlos Marques-Neves is a senior Ophthalmologist and director of the University Clinic of Ophthalmology at the Faculty of Medicine of Lisbon University. He has a PhD in Medicine from the Faculty of Medicine of Lisbon University and is a member of the Vision Sciences Study Centre, Lisbon, Portugal.

Professor Luís Abegão Pinto is a senior Ophthalmologist at Hospital de Santa Maria, Lisbon, Portugal. He has a PhD in Medicine from the Faculty of Medicine of Lisbon University, is an assistant professor of Ophthalmology at the same university and is a member of the Vision Sciences Study Centre, Lisbon, Portugal. He currently serves on the Editorial Board of Acta Ophthalmologica and on the Executive Committee of the European Glaucoma Society.

Acknowledgements

None.

Bibliography/References

Barkley, E. (2009) Student Engagement Techniques. San Francisco: Jossey-Bass.

Barr, R. B. and Tagg, J. (2012) 'From Teaching to Learning - A New Paradigm for Undergraduate Education'. Change: The Magazine of Higher Learning. 27(6), pp. 12-25. https://doi.org/10.1080/00091383.1995.10544672

Cohen, L., Manion, L. and Morrison, K. (2005) Research Methods in Education. 5th edn. London: RoutledgeFalmer.

Flaherty, J. O. and Phillips, C. (2015) 'The use of flipped classrooms in higher education: A scoping review'. The Internet and Higher Education. 25, pp. 85-95. https://doi.org/10.1016/j.iheduc.2015.02.002

Fry, H., Ketteridge, S. and Marshall, S. (2009) A Handbook for Teaching and Learning in Higher Education. 3rd edn. New York: Routledge.

Kelly, G. E. (2012) 'Lecture attendance rates at university and related factors'. Journal of Further and Higher Education. 36(1), pp. 17-40. https://doi.org/10.1080/0309877x.2011.596196

Kim, M. K., Kim, S. M., Khera, O. and Getman, J. (2014) 'The experience of three flipped classrooms in an urban university: An exploration of design principles'. The Internet and Higher Education. 22, pp. 37-50. https://doi.org/10.1016/j.iheduc.2014.04.003

Martins, S. C., Mendes, M. H., Guedes, R. A. P., Guedes, V. M. P. and Chaoubah, A. (2014) 'Knowledge about primary open angle glaucoma among medical students'. Revista Brasileira de Oftalmologia. 73(5), pp. 302-307. https://doi.org/10.5935/0034-7280.20140064

Soper, T. (2017) 'Knowledge into learning: comparing lecture, e-learning and self-study take-home packet instructional methodologies with nurses'. Nursing Open. 4(2), pp. 76-83. https://doi.org/10.1002/nop2.73

Tune, J., Sturek, M. and Basile, D. (2013) 'Flipped classroom model improves graduate student performance in cardiovascular, respiratory, and renal physiology'. Advances in Physiology Education. 37(4), pp. 316-320. https://doi.org/10.1152/advan.00091.2013

Appendices

None.

Declarations

There are no conflicts of interest.
This has been published under Creative Commons "CC BY-SA 4.0" (https://creativecommons.org/licenses/by-sa/4.0/)

Ethics Statement

The current study was confirmed not to require formal research ethics approval by the Institutional Review Board of the University Clinic of Ophthalmology, Lisbon University, Portugal. Participants were all volunteers and gave their consent to complete the questionnaires. All the information collected was completely anonymous.

External Funding

This article has not received any external funding.

Reviews


Ken Masters - (17/06/2019)
A pilot designed to study the impact of lectures and introduction of digital technologies in medical students’ knowledge acquisition.

I was under the mistaken impression that the authors were going to perform a comparative assessment of knowledge gained in lectures versus knowledge gained through digital technologies, but this is not the case. The authors have evaluated the increase in knowledge of students on a course. The difference between the two groups is not in the method of instruction, but in the method of completing the questionnaire.

Issues with the paper that need addressing:
• The authors make the statement that “there is scarce data on the level of knowledge improvement associated with different methodologies, which is even more crucial in education” and supply (Kim, Kim and Getman, 2014) as a supporting reference. I’m not sure if the authors imply that the reference supports only the last clause of their sentence, or the entire statement (“there is scarce data on the level of knowledge improvement associated with different methodologies”). Either way, I think the authors should reconsider this statement, as there is a great deal of literature that compares the impact of different teaching methodologies, so to say that it is scarce is not entirely true.
• Far more information about the questionnaire needs to be given. The authors state that they used a “scientifically validated and already published questionnaire” and supply a single reference in Portuguese that also used the questionnaire. They need to supply more information about the scientific validation performed upon this questionnaire. In addition, given that they have supplied data based upon a translation of some of the questions, they should supply an English translation of the questionnaire.
• The authors don’t explain the significance of examining the methods of completing a questionnaire. Again, I could understand if this were an assessment of difference in teaching methods, but it is not: the only difference between the two groups is the different methods employed when completing the questionnaire. Why is this important, and why would one expect a possible difference?
• Along this line, the authors write that “Interestingly…this study supports the use of digital-based tools to assess and increase knowledge transfer during classes.” While digital tools can be used to assess knowledge, they were not used in this pilot for knowledge transfer, so how can the authors conclude from this study that it supports the use of digital technologies for knowledge transfer? This is particularly strange, given that the paper response rate was 100% and the online rate was only 63%.
• The very low response rate is significantly important, because we do not know what the impact would be of lectures on the students who were not able to complete the questionnaire.

While using digital technologies for questionnaires, in the mode of audience-response systems, is undoubtedly useful, the authors have not really explained the importance of the work. They had lectures, gave half the students the questionnaire on paper and the other half online, and compared the results, which, apart from the low participation rate, do not really show any differences, although the students’ overall knowledge did appear to increase. While this is important to the course, it could probably have been better assessed through a detailed examination.

So, the authors have gone to quite a bit of trouble, but I don’t see a case made either way, apart from the fact that the lecturer appears to have been reasonably successful.
Possible Conflict of Interest:

For Transparency I am an Associate Editor of MedEdPublish

Balakrishnan (Kichu) Nair - (13/01/2019)

This is an interesting study looking at knowledge improvement in lectures, comparing a paper-based evaluation against an electronic evaluation.

The learning style of the new generation is different, and we need to tailor the delivery and evaluation of teaching accordingly. The attendance at lectures was not ideal, and this may reflect the learners' preference for web-based learning, although it could have been influenced by other factors too. What this study has shown is that students are more likely to take part (or we get higher responses) with electronic evaluation systems.

The sample size was adequate and the conclusions are valid. I would like to see a description or definition of Sophistic teaching and Sli.do for the readership. The authors should also make it clear how the students were allocated to the paper or electronic feedback.