Research article
Open Access

A new model for teaching ophthalmoscopy to medical students

Paulo Henrique Lopes de Souza[1], Guilherme Mello Neiva Nunes[1], José Matheus Guerra de Alencar Bastos[1], Tiago Motta Fonseca[1], Vitor Cortizo[1]

Institution: 1. Federal University of Piauí
Corresponding Author: Mr José Matheus Guerra de Alencar Bastos ([email protected])
Categories: Medical Education (General), Research in Medical Education, Students/Trainees
Published Date: 06/11/2017


Background: Despite its relevance, direct ophthalmoscopy may be underused in the daily clinical examination performed by medical students. Current findings reinforce the need for new approaches to teaching this examination.

Objective: To evaluate the applicability of an eye simulator model for teaching direct ophthalmoscopy to medical students.

Methods: This study was designed for fourth-year medical students, 20 in total, randomized into two groups: one with an ophthalmoscopy training session on human volunteers and another with a training session on a simulator model. We used a plastic canister with a 7mm aperture to simulate a dilated pupil. A normal fundus photograph was affixed on the side opposite the aperture. The participants completed a pre-test to assess previous knowledge of ophthalmology. Afterwards, both groups performed ophthalmoscopy on human volunteers and completed post-tests.

Results: The mean objective score was higher in the simulator group (82%) than in the group trained on human volunteers (45%) (p = 0.01). There was no statistically significant difference between the groups in the degree of easiness or frustration in performing ophthalmoscopy, nor in the positive and negative feelings scores.

Conclusion: The simulator model proved to be effective in teaching direct ophthalmoscopy to medical students.

Keywords: Ophthalmoscopy / methods; Education, Medical / methods; Simulation; Models, Anatomic; Equipment Design


Direct ophthalmoscopy is a readily available tool for evaluation of the optic disc and retina, useful for diagnosing conditions that threaten vision, such as diabetic retinopathy, and life, such as papilledema in patients with intracranial hypertension (Schulz & Hodgkins, 2014). It is widely accepted that ophthalmoscopy is an important clinical skill that should be mastered by medical students (Bradley, 1999). However, little time is allocated to teaching this examination during undergraduate medical training, which helps explain why funduscopy is considered one of the propaedeutic skills hardest to acquire proficiency in (Akaishi et al., 2014).

A literature review listed the objectives to be achieved by undergraduate students in direct ophthalmoscopy training: the ability to identify the red reflex and the optic disc; to recognize signs of clinical emergencies in patients, mannequins or fundus photographs; and to know about other retinopathies, without necessarily identifying them (Benbassat, Polak & Javitt, 2011). Previous studies have identified a lack of confidence among final-year medical students in their ophthalmoscopy skills. Factors that may explain this finding include "absence of sufficient formal education" and "underexposure to ophthalmology during the course" (Schulz & Hodgkins, 2014).

Thus, various approaches have been employed to overcome the challenges of teaching direct ophthalmoscopy, for example, 3D computer simulations, polystyrene models, online lessons and the use of plastic containers to simulate the eye (Byrd, Longmire, Syme, Murray-Krezan & Rose, 2014).

This study aims to evaluate the applicability of a fundus simulator model, built by the authors, in teaching direct ophthalmoscopy to medical students.


This is a prospective, randomized study. The study was conducted from January to October 2015.

STUDY POPULATION: Participants were fourth-year medical students at the State University of Piauí. Inclusion criteria were: informed consent; enrolment in the ophthalmology discipline; better-eye visual acuity of at least 20/25; and attendance at the introductory classes and training sessions in ophthalmoscopy. The exclusion criterion was incomplete filling of any of the questionnaires.

PRE-TEST PHASE: Fourth-year medical students attended an introductory lecture on direct ophthalmoscopy lasting about 50 minutes. The lecture covered the handling of the direct ophthalmoscope and fundus changes relevant to ophthalmic and clinical practice. Subsequently, the participants filled out a 48-item pre-test that collected: general demographic information; prior exposure to ophthalmology; and baseline diagnostic skills in interpreting changes in four fundus photographs, with questions about the aspect of the papilla, retina and vessels. The final score was converted to a scale of 0 to 100.

TRAINING AND POST-TEST PHASE: For the remainder of the study, students were allocated by permuted randomization into 4 groups of 5 students. Each training session consisted of 5 minutes of instruction and demonstration on standardized use of the ophthalmoscope and 4 minutes of training on human volunteers (two groups) or on the left "eye" of the simulator (two groups). During the training session, the students were instructed in the proper ophthalmoscopy technique and any questions they had about ophthalmoscopy were answered. After training, all participants performed funduscopy on both eyes of human volunteers, in 8-minute sessions, and completed an objective post-test and a qualitative post-test. The objective post-test contained 24 items covering aspects of the papilla, retina and vessels of the human volunteers' eyes. The final score was converted to a scale from 0 to 100.

In the qualitative post-test, we asked the students to rate the degree of easiness in viewing the fundus with an ophthalmoscope on a 10-point scale (10 = easiest) and the degree of disappointment when performing ophthalmoscopy (10 = very disappointed). Students were also asked whether any disappointment was due to insufficient time to examine the individuals and whether they would incorporate ophthalmoscopy into their general physical examination in the future. In addition, a table of positive terms (trust, attention, interest, excitement and enthusiasm) and negative terms (annoyance, irritation, embarrassment, nervousness and tension), each graded on a five-point scale, documented the participants' feelings at the end of training. The scores were calculated as the simple sum of the ratings for each item. Hence, the positive feelings score ranged from 5 to 25, with higher scores representing higher levels of positive feelings; the same applies to the negative feelings score.
All tests were adaptations of the questionnaires used in the Teaching Ophthalmoscopy to Medical Students (TOTeMS) study (Kelly et al., 2013). The questionnaires were translated into Portuguese and then back-translated. They were developed to identify difficulties of medical students in ophthalmoscopy simulations.

EYEBALL SIMULATOR: The ocular simulator consists of a common plastic container (a medicine bottle) with a normal fundus photograph affixed to its cap, optically corrected by a +18D convex spherical lens (Carl Zeiss) measuring 5.5 x 3.5 x 1 cm. A 7mm hole in the central region of the bottom of the bottle simulates a dilated pupil. The lens was inserted through an upper opening in the side of the bottle and positioned next to the "pupil". The bottle was affixed inside a dummy head with two pairs of medium adhesive strips (Command - 3M), enabling easy removal and replacement of bottles. The dummy simulates the anatomical difficulties of the human face (Figure 1).

DATA ANALYSIS: Data were exported to Excel and then imported into IBM SPSS (Statistical Package for the Social Sciences) version 22. Means and medians were reported for continuous variables and percentages for categorical variables. The Mann-Whitney-Wilcoxon test was used for statistical analysis of differences between the groups, since the data did not follow a normal distribution and the sample size was small. A p value less than 0.05 was considered to indicate a statistically significant difference.
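
For readers who wish to reproduce this kind of comparison outside SPSS, the between-group test can be sketched in plain Python. The scores below are hypothetical placeholders, not the study data, and the normal approximation without tie correction is a simplification of what statistical packages compute:

```python
import math

def mann_whitney_u(x, y):
    """Mann-Whitney U statistic with average ranks for ties and a
    two-sided normal-approximation p value (no tie correction)."""
    combined = sorted([(v, 0) for v in x] + [(v, 1) for v in y])
    n = len(combined)
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j < n and combined[j][0] == combined[i][0]:
            j += 1
        avg_rank = (i + j + 1) / 2.0  # average of ranks i+1 .. j for ties
        for k in range(i, j):
            ranks[k] = avg_rank
        i = j
    r1 = sum(r for r, (_, g) in zip(ranks, combined) if g == 0)
    n1, n2 = len(x), len(y)
    u1 = r1 - n1 * (n1 + 1) / 2.0          # U for the first sample
    mu = n1 * n2 / 2.0                     # mean of U under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u1 - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))   # two-sided p value
    return u1, p

# Hypothetical post-test scores (percent), NOT the study data
simulator = [82, 90, 85, 88, 80]
human = [45, 35, 40, 50, 42]
u, p = mann_whitney_u(simulator, human)
```

With small samples such as these, an exact test (as reported by SPSS) is preferable to the normal approximation; the sketch is only meant to make the comparison concrete.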

This study was approved by the Research Ethics Committee of the Hospital Vargas (CAAE: 40609514.1.0000.5613). All participants provided written informed consent.


Twenty-four (24) fourth-year medical students at the State University of Piauí met all inclusion criteria, but only 20 (83%) remained in the study; 4 were excluded for incompletely filled questionnaires. Ten participants joined the group with ophthalmoscopy training on human volunteers (human group) and ten formed the group with simulator training (simulator group). Fifteen (15) of the 20 (75%) were male and the average age was 24 years. None of the students had previous training in ophthalmology.

There was no statistically significant difference (p = 0.49) between the pretest score obtained by the human group (mean = 61%; median = 65%) and the simulator group (mean = 68%; median = 65%).

The objective post-test scores clearly differed between the groups: students in the human group obtained an average score of 45% (median = 35%), while participants in the simulator group averaged 82% (median = 90%). The difference was statistically significant (p = 0.01) (Figure 2).

No statistically significant difference between the groups was found in the ratings of easiness and disappointment when taking the examination, or in the positive and negative feelings scores. Group means for these variables are shown in Table 1.

A strong positive correlation was identified between the degree of easiness and the objective post-test scores among simulator group participants (Pearson correlation = 0.92; p < 0.001), while the human group results did not show a significant correlation (Pearson correlation = 0.59; p = 0.07). The differences are illustrated in Figure 3.
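
The ease-versus-score association reported above can be illustrated with a plain-Python Pearson correlation. The ratings and scores below are hypothetical, not the study data:

```python
import math

def pearson_r(x, y):
    """Sample Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical easiness ratings (0-10) and post-test scores (%), NOT study data
easiness = [3, 5, 6, 8, 9]
scores = [40, 60, 70, 85, 95]
r = pearson_r(easiness, scores)
```

A value of r near 1, as in the simulator group, indicates that students who found the exam easier also scored proportionately higher.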

Among all participants, only one student (5%), from the human group, considered the examination time insufficient. Thirteen students (65%) would perform the exam in the future without any explicit request from patients: 7 from the human group and 6 from the simulator group.


In this study, the simulator model developed by the authors proved able to improve the learning of the direct ophthalmoscopy technique, as objectively demonstrated by the higher performance achieved by the participants who trained on the simulator. This is in agreement with most studies using simulators for teaching ophthalmoscopy techniques (Bradley, 1999; Akaishi et al., 2014; Larsen, Stoddart & Griess, 2014; Chung & Watzke, 2004; Levy & Churchill, 2003; Swanson, Ku & Chou, 2011). The better performance cannot be attributed to deeper ophthalmology knowledge in one group over the other, because none of the participants had any previous ophthalmologic training and the pre-test scores in both groups were similar, showing equal initial funduscopy ability among participants and the homogeneity of the groups. The time allotted for training and for answering the final exam also cannot be considered a contributing factor to the difference in performance between the two groups. However, the fact that a student in the "human" group considered the time insufficient to perform ophthalmoscopy may indicate that some students need more time when training on real patients.

Qualitatively, the students of the simulator group also had a better experience performing the test on humans, finding it easier and being less disappointed than the participants of the human group. However, these differences were not statistically significant, probably due to the small sample size.

In the "human" group, the subjective perception of easiness when performing funduscopy did not necessarily correlate with better post-test performance. In contrast, the perception of easiness among simulator group students did correlate with better performance, with proportionately higher objective post-test scores.

The simulator did not influence the positive feelings (such as trust) or negative feelings (such as embarrassment) while performing the examination. Training with real patients probably made the "human" group participants feel psychologically more comfortable performing the objective test on humans, unlike the simulator group; this is reflected in a small difference in the mean positive feelings scores. The negative feelings score was also slightly higher in the "human" group, probably reflecting the higher level of disappointment and the greater difficulty perceived in that group.

The use of the simulator also did not encourage the medical students to incorporate funduscopy into their future general physical examination routine. However, we found a higher percentage than that seen in the TOTeMS study, in which 49% of students indicated they would perform direct ophthalmoscopy as part of the general physical examination (Kelly et al., 2013).

A recent Japanese study found that a direct ophthalmoscopy teaching program should include a minimum of 99 examinations for students to achieve proficiency in this skill (Akaishi et al., 2014). The fundus simulator is an effective aid in learning ophthalmoscopy, providing benefits such as greater time availability for training, reduced discomfort to patients or volunteers from the ophthalmoscope's strong light during practice attempts, and greater technical ease at the beginning of ophthalmoscopy practice, because the simulator is less complex than the real eye.

Several approaches to teaching direct ophthalmoscopy have been reported in recent studies. One approach is to encourage students to perform direct ophthalmoscopy on each other, identifying which colleague's optic disc corresponds to photographs previously given to each student. With this method, 84% of participants improved their ophthalmic skills in just one week of training (Milani et al., 2013).

There are commercially available mannequins for simulating the fundus and training the ophthalmoscopic examination, such as the EYE (Kyoto Kagaku Co., Kyoto, Japan) and the Eyesi Indirect (VRmagic Holding AG, Mannheim, Germany). Leitritz et al. evaluated the effectiveness of the Eyesi Indirect for indirect ophthalmoscopy training and concluded that there was evidence that a single training session with augmented-reality indirect ophthalmoscopy is effective in increasing students' ophthalmoscopic skills (Leitritz et al., 2014). Researchers at the University of Nebraska used the EYE model and concluded that the ophthalmoscopy simulation experience increased participants' confidence and skills (Larsen, Stoddart & Griess, 2014).

Other studies reported the use of simulators developed by the researchers themselves, which are more affordable. Proposed models include: a styrofoam head with a holder for photographs; a table tennis ball with words printed on its inner rear surface; and plastic containers with retinal photographs at the bottom (Chung & Watzke, 2004). The TOTeMS study (Kelly et al., 2013) used plastic containers with fundus photographs adhered to the back of the container to evaluate direct ophthalmoscopy teaching to medical students. Its conclusion was not favorable to the use of simulators, because the students preferred learning with fundus photographs. A study at the University of California used similar plastic container models and achieved 94% correct answers in the post-training test in the simulator group, against 60% in the control group (Swanson, Ku & Chou, 2011).

The model proposed in this study combines features of some previous models and has properties similar to those found in commercial simulators. It allows easy exchange of the bottle cap for others containing different fundus photographs, allowing its use even in practical student assessments of funduscopy. It is optically corrected, requiring focus adjustment on the ophthalmoscope. It has a wider field of view than that observed when performing ophthalmoscopy on human eyes, which facilitates the identification of anatomical landmarks for the beginner. It also has the advantage of low cost, especially compared to commercial models.


The proposed simulator helped medical students learn the direct ophthalmoscopy technique, yielding higher objective results than initial training with real patients. In this study, there were no relevant differences in the subjective evaluations between training with the simulator and training with human volunteers. Most students would perform ophthalmoscopy in their general physical examination, independently of the use of simulators during training.

Take Home Messages

  • Direct ophthalmoscopy is an underused clinical examination skill, due to the difficulty of mastering it.
  • A new way of teaching direct ophthalmoscopy is needed to develop confidence and to incorporate ophthalmoscopy into the daily clinical routine.
  • The simulator model created is cheap, easy to replicate, and effective in teaching direct ophthalmoscopy to medical students.

Notes On Contributors

Paulo Henrique Lopes de Souza, MD, is an Ophthalmologist. He completed his residency at the Federal University of Piauí, Brazil.

Guilherme Mello Neiva Nunes is a Medical Student at Federal University of Piauí, Brazil.

José Matheus Guerra de Alencar Bastos is a Medical Student at Federal University of Piauí, Brazil.

Tiago Motta Fonseca, MD, is an Ophthalmology resident at Ruy Cunha Eyes Hospital, Brazil.

Vitor Cortizo, MD, PhD, is an Ophthalmologist. He is an Associate Professor in Ophthalmology at the Federal University of Piauí and the State University of Piauí, Brazil.



1. Schulz, C., & Hodgkins, P. (2014). Factors associated with confidence in fundoscopy. The Clinical Teacher, 11(6), 431-435.

2. Bradley, P. (1999). A simple eye model to objectively assess ophthalmoscopic skills of medical students. Medical Education, 33(8), 592-595.

3. Akaishi, Y., Otaki, J., Takahashi, O., Breugelmans, R., Kojima, K., & Seki, M. et al. (2014). Validity of direct ophthalmoscopy skill evaluation with ocular fundus examination simulators. Canadian Journal of Ophthalmology / Journal Canadien D'ophtalmologie, 49(4), 377-381.

4. Benbassat, J., Polak, B., & Javitt, J. (2011). Objectives of teaching direct ophthalmoscopy to medical students. Acta Ophthalmologica, 90(6), 503-507.

5. Byrd, J., Longmire, M., Syme, N., Murray-Krezan, C., & Rose, L. (2014). A Pilot Study on Providing Ophthalmic Training to Medical Students While Initiating a Sustainable Eye Care Effort for the Underserved. JAMA Ophthalmology, 132(3), 304.

6. Kelly, L., Garza, P., Bruce, B., Graubart, E., Newman, N., & Biousse, V. (2013). Teaching Ophthalmoscopy to Medical Students (the TOTeMS Study). American Journal of Ophthalmology, 156(5), 1056-1061.e10.

7. Larsen, P., Stoddart, H., & Griess, M. (2014). Ophthalmoscopy using an eye simulator model. The Clinical Teacher, 11(2), 99-103.

8. Chung, K., & Watzke, R. (2004). A simple device for teaching direct ophthalmoscopy to primary care practitioners. American Journal of Ophthalmology, 138(3), 501-502.

9. Levy, A., & Churchill, A. (2003). Training and testing competence in direct ophthalmoscopy. Medical Education, 37(5), 483-484.

10. Swanson, S., Ku, T., & Chou, C. (2011). Assessment of direct ophthalmoscopy teaching using plastic canisters. Medical Education, 45(5), 520-521.

11. Milani, B., Majdi, M., Green, W., Mehralian, A., Moarefi, M., & Oh, F. et al. (2013). The Use of Peer Optic Nerve Photographs for Teaching Direct Ophthalmoscopy. Ophthalmology, 120(4), 761-765.

12. Leitritz, M., Ziemssen, F., Suesskind, D., Partsch, M., Voykov, B., Bartz-Schmidt, K., & Szurman, G. (2014). Critical evaluation of the usability of augmented reality ophthalmoscopy for the training of inexperienced examiners. Retina, 34(4), 785-791.


Figure 1. Eye simulator used to teach direct ophthalmoscopy to medical students. Model human head attached to a Styrofoam base (left). Plastic container with the convex lens next to the "pupil" (middle). Bottle with the 7mm "pupil" and fundus photograph affixed to its cap (right).



Figure 2. Percentage score obtained in the objective post-test by students, in both groups (n = 20).


Figure 3. Correlation between the exam perception of easiness and the scores percentage at the post-test in the group trained with humans (Figure 3a) and in the group trained with simulator (Figure 3b).



                                Simulator group    Human group    P value
Easiness degree (a)
Disappointment degree (a)
Positive feelings score (b)
Negative feelings score (b)

Table 1. Average ratings of easiness, disappointment, positive feelings and negative feelings given by the participants, by group, and p-values for the differences between groups.

a Ratings on a scale of 0-10.

b Score consists of the sum of ratings given, ranging from 5-25.


There are no conflicts of interest.
This has been published under Creative Commons "CC BY-SA 4.0".



Trevor Gibbs - (07/11/2017)
The skill of ophthalmological examination of the eye is very important and remains, I believe, one of the key skills required of most if not all medical graduates. I am not sure that I concur with the authors that, because it is a difficult skill to master, it is often omitted from a skills programme.
At the moment there are many variants of good task trainers on the market, many of them relatively cheap, that play an important part in the training of the ophthalmic examination, but I suppose if money is an issue then we should try to find cheaper ways to facilitate this training. Hence, it was interesting to read this paper, which describes a seemingly very cheap means of simulation.
I did find the paper a little difficult to understand and had to read it several times to understand its real messages.
One main message that I thought the authors failed to convey was the point at which we introduce task trainers and when we move to human volunteers - a simple task trainer used in the fourth year seems a little late to me.
The other message that I felt rather confused about was the pre- and post-test scores. It appeared that the tests were of knowledge when in fact the model was designed for a skill - how was this skill measured?
Task trainers do tend to avoid the human element of this type of examination - head moving, blinking, not sitting still, etc. - was it the lack of this that the students were rating?
I am sorry but I cannot concur with my co-reviewers in their rating of this paper, but perhaps if it was laid out in a more logical way then I might be able to see its message much clearer.
Susmita Reddy Karri - (07/11/2017)
I like the idea. A very simple and cost effective way of teaching students. Would love to see this study being replicated on a larger scale.
Rukhsana Zuberi - (06/11/2017)
Very good and interesting article