New education method or tool
Open Access

A proposal for a grading and ranking method as the first step toward developing a scoring system to measure the value and impact of viewership of online material in medical education - going beyond “clicks” and views toward learning

Poh Sun Goh[1]

Institution: 1. National University of Singapore
Corresponding Author: Dr Poh Sun Goh ([email protected])
Categories: Medical Education (General), Research in Medical Education, Technology
Published Date: 09/12/2016

Abstract

This article will briefly examine the utility and value of “data analytics” (DA): the often freely available, potentially real-time viewership data collected from websites (for example, an online journal article). It will then propose a grading and ranking method as the first step toward eventually developing a scoring system to measure the value and impact of online viewership in medical education.

Keywords: eLearning; Technology enhanced learning; Data analytics

Quotes

“Online learning offers one distinct advantage over its face-to-face counterpart: tangible artefacts… instructors in an online classroom create teaching footprints, concrete evidence of each and every interaction”

from page 164, Data Analytics and Predictive Modelling: The Future of Evaluating Online Teaching, in Evaluating Online Teaching: Implementing Best Practices, by Thomas J. Tobin, B. Jean Mandernach, Ann H. Taylor, May 2015, Jossey-Bass

The proposal

“Analytics collects the pages they visited, the traffic sources that brought them to your site and their level of engagement with your content.”

from Beyond clicks: Keys to online-to-offline tracking and attribution discussed at SMX West, by Mark Traphagen, March 25, 2016

DA gives us excellent visibility of the popularity of a particular website, of specific pages and posts on that website, and even of specific content on each webpage, particularly if the webpage is designed to facilitate collecting this information. For example, if each piece of potentially useful content has its own webpage, then usage is easy to monitor and track: the teacher and website administrator can “see” how many site visitors there are at any time, when they visit, where they visit from, and how long they stay on each key webpage (i.e. which webpages are the most popular and most frequently visited, and where most time is spent).
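To make this concrete, here is a minimal illustrative sketch (in Python, and not from the original article) of how a raw pageview log could be aggregated into the per-page visit counts and dwell times that analytics dashboards report. The log format, field names, and page URLs are all hypothetical assumptions made for illustration.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical pageview log: (visitor_id, page_url, entry_time, exit_time).
# Real analytics platforms expose similar data through their reporting tools.
pageviews = [
    ("v1", "/anatomy/wrist", "2016-12-01T09:00:00", "2016-12-01T09:06:30"),
    ("v2", "/anatomy/wrist", "2016-12-01T10:15:00", "2016-12-01T10:16:00"),
    ("v1", "/physiology/renal", "2016-12-02T09:00:00", "2016-12-02T09:01:10"),
]

def aggregate(pageviews):
    """Summarise visit counts and total dwell time (in seconds) per page."""
    visits = defaultdict(int)
    dwell = defaultdict(float)
    for visitor_id, page, entry, exit_ in pageviews:
        seconds = (datetime.fromisoformat(exit_)
                   - datetime.fromisoformat(entry)).total_seconds()
        visits[page] += 1
        dwell[page] += seconds
    return visits, dwell

visits, dwell = aggregate(pageviews)
for page in sorted(visits, key=visits.get, reverse=True):
    print(f"{page}: {visits[page]} visit(s), {dwell[page]:.0f} s total dwell time")
```

Such a summary shows where time is spent, but, as the next paragraph argues, not why.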

Unfortunately, while we know where a viewer visits, how often, and for how long, we have much less information about why they visited a particular webpage, why more or less time was spent on it, why certain webpages are popular or repeatedly visited, and what a viewer does on each webpage, including what learning activity takes place there (e.g. passive reading versus active reading by taking notes, reflecting on prior knowledge, or attempting to link new knowledge to prior knowledge), unless specific learning exercises and interactive elements are embedded within each webpage.

Viewership data is further confounded when a viewer bookmarks a potentially useful webpage for later viewing, prints the page for later reading, is interrupted while viewing a webpage and leaves it “open”, or multitasks with many webpages open. Clearly, the gross number of “clicks” and the duration spent on a webpage do not necessarily mean that any part of the webpage was actually read.

What about “citation”, when a quote is attributed to an online resource, or when reference is made to an online sentence, paragraph, or illustration? To do this accurately and meaningfully, the student or viewer must at least have reflected on the value and significance of the passage or illustration. This is similar to the value that traditional academics place on “citations” as a measure of impact. A considered “review” or recommendation of online material is another indicator of higher-order viewer behaviour.

Let us then take this as the starting point for a proposed grading system showing a ranked order of online viewership behaviour, with the topmost entries ranked highest and of highest value:

A. Quoting, citing, or linking to the material in one’s own writing or webpage, either for a teaching/learning purpose or in an academic scholarly piece (easy to track and measure)

B. Recommending a website, and giving reasons for this (easy to track and measure), with a considered review of a website perhaps meriting a B+ or A

C. and D. Clicking or visiting: longer and repeated visits [= C] (logins are needed to capture this) rank above shorter, single visits [= D]; likewise, longer time spent on a collection of interrelated linked webpages, progressively visiting each linked subpage [= C], ranks above a short visit with one or two drill-down clicks [= D]. (Easy to track and measure with online and embedded webpage analytics; this has been referred to as “item-level dwell time”. A hypothetical classification sketch follows this list.)

E. Downloading or printing (easy to track and measure with appropriate webpage setup), or bookmarking a website (harder to track and measure automatically; this needs to be captured using another method, such as a survey or interview question)

(It can be argued that viewer behaviours ranked A and B include performance features similar to the higher levels of Miller’s triangle, above “knows”/knowledge and “knows how”/understanding, at “shows how” and “does”.)
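As a purely hypothetical illustration of how grades C and D might be assigned automatically from login-linked usage data, consider the sketch below. The thresholds (two visits, five minutes of total dwell time) are arbitrary choices made for this example and are not values proposed in this article.

```python
def classify_visit_grade(visit_count, total_dwell_seconds,
                         min_visits=2, min_dwell_seconds=300):
    """Assign grade C for longer/repeated engagement, grade D otherwise.

    Assumes logins allow visits and dwell time to be attributed to a
    single viewer; the thresholds are illustrative, not proposed values.
    """
    if visit_count >= min_visits or total_dwell_seconds >= min_dwell_seconds:
        return "C"
    return "D"

print(classify_visit_grade(visit_count=3, total_dwell_seconds=420))  # C
print(classify_visit_grade(visit_count=1, total_dwell_seconds=45))   # D
```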

We could then use this grading system as the first step in developing a scoring system to measure the value and impact of viewership of online material in medical education, perhaps by attributing 5 points for A, 4 for B, 3 for C, 2 for D and 1 for E. This could then be incorporated into performance monitoring software linked to online viewership and behaviour data.
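For illustration, a minimal sketch of such a scoring calculation, using the point weights suggested above (5 for A down to 1 for E), might look as follows. The data structure and example counts are invented for this sketch, and the weights would no doubt be refined in practice.

```python
# Point weights as suggested above; a B+ might be weighted between B and A.
GRADE_POINTS = {"A": 5, "B": 4, "C": 3, "D": 2, "E": 1}

def impact_score(behaviour_counts):
    """Total impact score for one resource, from counts of graded behaviours."""
    return sum(GRADE_POINTS[grade] * count
               for grade, count in behaviour_counts.items())

# Hypothetical example: 2 citations (A), 1 recommendation (B),
# 10 longer/repeated visits (C), 40 short visits (D), 5 downloads (E).
print(impact_score({"A": 2, "B": 1, "C": 10, "D": 40, "E": 5}))  # 129
```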

It is hoped (by this author) that the ranking and grading system proposed in this paper forms the basis for further discussion and active experimentation by educational scholars. It is very likely that the number and characteristics of the ranked levels, and the weighting of the numerical points awarded for different ranks and grades, will evolve as the system is applied by different educational scholars. What is potentially useful is the principle behind this proposal: a ranking method as the first step toward developing a scoring system to measure the value and impact of viewership of online material in medical education.

Take Home Messages

Notes On Contributors

POH SUN GOH, MBBS, FRCR, FAMS, MHPE, is an Associate Professor and Senior Consultant Radiologist at the Yong Loo Lin School of Medicine, National University of Singapore, and National University Hospital, Singapore. He is a graduate of the Maastricht MHPE programme, and current member of the AMEE eLearning committee. 

Acknowledgements

Bibliography/References


Appendices

Footnote:

An analogy can be drawn between online viewership and traditional book reading:

A. Quoting passages from a book in a meaningful manner to illustrate points 

B. Recommending a book, and giving reasons for this; with a considered book review perhaps meriting a B+ or A

C and D. Making notes from and on a book, underlining passages, dog-earing pages, well-worn and coffee-stained pages [= C] > an opened book with some signs that it has been read [= D]

E. Purchasing a book, or borrowing a book from a library (note that this does not mean that the book is fully read, partially read, or even opened at all!)

Declarations

There are no conflicts of interest.
This has been published under Creative Commons "CC BY-SA 4.0" (https://creativecommons.org/licenses/by-sa/4.0/)

Reviews


Ken Masters - (18/03/2019) Panel Member
The author identifies a very real problem: current data analytics can give us plenty of data, but the overall picture is sometimes missing. The author then proposes a very early-stage way of combining and grading various analytics to form a grading system.

To develop the system, though, I would recommend that the author take a few more steps:
• Ensure that this grading system is clear about what is being graded. For instance, if one site has an overall grade of 3.5 and another has a grade of 4.0, what does that mean? Bear in mind that a website may be a particularly poor example of something, and so be cited or back-linked many times, so the grade may not be an indication of quality; it might be an indication of importance in showing something bad (or notoriety).
• Would the scale be a simple arithmetic scale, or would there be the possibility that it is exponential? For example, would a difference in score between 3.5 and 4.0 be the same as a difference in score between 4.0 and 4.5, or would one consider that getting from 4.0 to 4.5 is much harder than getting from 3.5 to 4.0?
• Finally, the author has limited the number of elements. This does have the advantage of ease of understanding and ease of calculation, but it also runs the risk that important elements have been omitted. I would recommend that the author first conduct a detailed literature review of current analytics used by others. This will, undoubtedly, give a large array of variables, but they really are necessary in order to ensure that the final product can claim to be evidence-based. From there, a Delphi study would help to reduce the variables to a manageable rubric. The final step would be to take a sample (perhaps 100 sites), evaluate them against the rubric to see how well the rubric stands up to real-life work. That would be a useful paper indeed.

I look forward to seeing further developments on this.
Richard Hays - (04/01/2017) Panel Member
This paper raises an interesting issue - just how is the impact of academic publishing best measured? The author proposes a hierarchy and a new system for doing this but, while plausible, there is still not much evidence on which to base any system. I plan to review the available impact measures for MedEdPublish papers from our first year as a contribution to this discussion.