Practical tips and/or guidelines
Open Access

Guidelines for Evaluating Clinical Research Training using Competency Assessments

Elias Samuels[1], Phillip Anton Ianni[1], Haejung Chung[2], Brenda Eakin[1], Camille Martina[3], Susan Lynn Murphy[1], Carolynn Jones[4]

Institution: 1. Michigan Institute for Clinical and Health Research, University of Michigan, Ann Arbor, MI, 2. Tufts Clinical and Translational Science Institute, Tufts University, Boston, MA, 3. Clinical Translational Science Institute, University of Rochester, Rochester, NY, 4. Center for Clinical and Translational Science, The Ohio State University, Columbus, OH
Corresponding Author: Dr Elias Samuels ([email protected])
Categories: Assessment, Education Management and Leadership, Learning Outcomes/Competency, Curriculum Evaluation/Quality Assurance/Accreditation, Research in Health Professions Education
Published Date: 14/11/2019


Effective training programs in clinical and translational research (CTR) are critical to the development of the research workforce. The evolution of global CTR competency frameworks motivates many institutions to align their training offerings with these professional standards. Guidelines for integrating competency-based frameworks and assessments into rigorous program evaluations are needed to promote the quality and impact of these training programs. The guidelines presented here offer practical suggestions for ensuring that subjective and objective assessments of CTR knowledge and skill are effectively integrated into the evaluations used to improve these essential training programs. The approach involves the systematic and deliberate incorporation of these assessments into comprehensive evaluation plans. While the guidelines are broadly applicable to the work of all those charged with developing, administering and evaluating CTR training programs, they have been designed specifically for use by program directors.

Keywords: clinical and translational research; workforce development; competency-based assessment; competency framework; program evaluation; program improvement; logic model


Clinical and translational research in the United States is supported by numerous federal, industrial and academic organizations, and many other stakeholder groups (Callard, Rose and Wykes, 2012; Martinez et al., 2012; Trochim, Rubio and Thomas, 2013; Joosten et al., 2015). The NIH National Center for Advancing Translational Sciences (NCATS) is distinctive among them because it funds a broad network of research support centers, the Clinical and Translational Science Awards (CTSAs), embedded in over 50 research institutions located across the country (NCATS, 2018). A key strategic goal of these CTSAs is the development of the entire clinical and translational workforce through dedicated research training programs (NCATS, 2017). Clinical Translational Research (CTR) training programs provide highly valued instruction in relevant research skills, and demand for rigorous evaluations demonstrating their impact on the research workforce continues to grow (Bonham et al., 2012; Calvin-Naylor et al., 2017).

Rigorous evaluations of CTR training programs typically require the periodic measurement of the research skills, or competencies, of the participants (Misso et al., 2016). However, responsibility for evaluating CTR programs often rests with small groups of investigators, research managers and administrators who, despite their own research accomplishments, have little to no experience measuring research competencies. This work provides concrete steps they can take to integrate competency assessments into generalizable frameworks for developing and implementing rigorous evaluation plans for CTR training programs (Centers for Disease Control and Prevention, 1999; Trochim, Rubio and Thomas, 2013).

In this work we provide guidelines for CTR investigators and administrators responsible for evaluating research training programs in which the assessment of competence is a necessary component. The twelve guidelines discussed in this paper should inform the work of all these individuals, but have been tailored to the role of the CTR training program director. To ensure their relevance to this role, the authors carefully considered the demographics, job duties, motivations, skills and experiences of a typical administrator charged with guiding the evaluation and improvement of these training programs.

Guidelines for using competency assessments in program evaluation

Review team roles and expertise related to trainees' professional development

Responsibility for evaluating CTR training programs is often shared by personnel in a number of positions and roles in any given program. The collaborative review of these roles can be facilitated by creating personas, which are defined as archetypes with distinctive needs, goals, technical skills and professional characteristics (Adlin and Pruitt, 2010). Creating a persona that defines who will be conducting assessments can help program teams and stakeholders discuss and negotiate changes to the ways this work is distributed and coordinated. Table 1 provides examples of personas of clinical research professionals who are likely to share some responsibility for administering a CTR training program. This process can be carried out by CTR program leads and administrators to help focus collaborative efforts on measuring the research knowledge and skills of researchers in ways that enable a rigorous evaluation of a CTR training program to be conducted.

Table 1. Professional roles involved with evaluating CTR training programs

CTR Investigator
  • Associated professional responsibilities: Junior investigators or senior research fellows
  • Professional motivation: Wants to provide rigorous training for research teams who are required to complete research training
  • Understanding of best practices in evaluating learning: Expertise in program evaluation, use of logic models and postsecondary teaching
  • Responsibility for competency assessment administration, analysis and reporting: Identifying validated competency assessments and interpreting the results with stakeholders

CTR Training Program Director
  • Associated professional responsibilities: Research department supervisor or supervisor of training programs for research team members and junior investigators
  • Professional motivation: Wants to use assessments of clinical research skill to revamp educational programs
  • Understanding of best practices in evaluating learning: Understanding of learning outcome assessment, CTR competency frameworks and postsecondary teaching
  • Responsibility for competency assessment administration, analysis and reporting: Developing assessment forms, communicating with CTR trainees and developing results reports for stakeholders

CTR Training Program Administrator
  • Associated professional responsibilities: Research and regulatory support or program manager
  • Professional motivation: Wants to provide consistent training and professional development experiences for research teams
  • Understanding of best practices in evaluating learning: Understanding of survey administration, data management and use of observation checklists
  • Responsibility for competency assessment administration, analysis and reporting: Communicating instructions to CTR trainees and instructors, monitoring administration of assessment forms and management of resultant data

Note: CTR=Clinical Translational Research


Integrate competency frameworks into evaluation planning

Map the existing CTR training curriculum to competency-based education (CBE) frameworks (Dilmore, Moore and Bjork, 2013). It may be necessary to partner with subject matter experts who understand CBE when navigating the competency mapping process. There are multiple evidence-based competency frameworks applicable to CTR education and training (NCATS, 2011; Calvin-Naylor et al., 2017; Sonstein et al., 2018). Table 2 shows training opportunities that have been mapped to one domain of an established CTR competency framework (Joint Task Force, 2018).

Table 2. Sample Training Offerings for Scientific Concepts and Research Design

Developing Research Questions
  • Developing and writing research questions, aims and hypotheses
  • Formulating research questions, hypotheses and objectives

Choosing an Appropriate Study Design
  • Experimental and observational study designs
  • Introduction to clinical and translational research: Study population and study design
  • The qualitative research process: Study designs for health services research

Selecting Valid Instruments
  • Finding tests and measurement instruments: Library research guides
  • Measuring assessment validity and reliability
  • Community engaged approaches to measuring study team dynamics

Determining an Adequate Number of Study Participants
  • Hypothesis testing: Significance level, power, and basic sample size calculation
  • Introduction to power in significance tests
  • The use of hypothesis testing in the social sciences
  • Best practices in participant recruitment

The products of this mapping exercise should be shared with programmatic stakeholders to facilitate the collection of their feedback about the breadth and depth of existing or potential CTR training opportunities. Collecting stakeholder feedback about the content of CTR training programs is an essential first step in many guides to evaluating health research training programs, including the U.S. Centers for Disease Control and Prevention's (CDC) guide for the evaluation of public health programs (Centers for Disease Control and Prevention, 1999).
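A competency crosswalk of this kind can also be checked programmatically for thin coverage before it is shared with stakeholders. The sketch below uses a few offering titles in the spirit of Table 2; the crosswalk contents and the minimum-coverage threshold are illustrative assumptions, not part of any published framework.

```python
# Hypothetical crosswalk of training offerings to competencies; all titles
# and the minimum-coverage threshold are illustrative examples only.
crosswalk = {
    "Developing Research Questions": [
        "Developing and writing research questions, aims and hypotheses",
        "Formulating research questions, hypotheses and objectives",
    ],
    "Choosing an Appropriate Study Design": [
        "Experimental and observational study designs",
        "Introduction to clinical and translational research",
    ],
    "Selecting Valid Instruments": [
        "Finding tests and measurement instruments: Library research guides",
    ],
    "Determining an Adequate Number of Study Participants": [],
}

MIN_OFFERINGS = 2  # illustrative threshold, ideally set with stakeholders

def coverage_gaps(crosswalk, minimum=MIN_OFFERINGS):
    """Return competencies mapped to fewer offerings than the minimum."""
    return sorted(c for c, offerings in crosswalk.items()
                  if len(offerings) < minimum)

for competency in coverage_gaps(crosswalk):
    print(f"Needs additional offerings: {competency}")
```

Flagged competencies can then be raised with stakeholders as candidate areas for new training content.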

Engage stakeholders in identifying critical knowledge and skill outcomes 

Engage programmatic stakeholders in conversations focused on identifying the most important knowledge and skills imparted to CTR trainees as soon as work on an evaluation plan has begun. Partner with instructional designers and evaluators to develop valid and measurable lists of competencies that can be prioritized in collaboration with various stakeholder groups. When identifying which stakeholder groups to involve in this phase of the evaluation process, it is important to include those with divergent views about which CTR skills are in greatest need of development and assessment (Callard, Rose and Wykes, 2012; Martinez et al., 2012; Trochim, Rubio and Thomas, 2013; Joosten et al., 2015).

Diverse stakeholder feedback can be systematically collected and synthesized using a variety of methods, ranging from standard surveys to more intensive approaches such as interviews, focus groups or Delphi panels (Brandon, 1998; Geist, 2010). The opportunities created to collect this feedback can also be used to gather stakeholder opinions about other short-term outcomes, such as those regarding participant experiences and behaviors (Kirkpatrick and Kirkpatrick, 2006). The collection of stakeholder feedback on all types of programmatic outcomes, including the knowledge and skills accrued through the program, is necessary for the development of rigorous program evaluations (Centers for Disease Control and Prevention, 1999; Trochim, Rubio and Thomas, 2013).

Develop models depicting the links between program operations and outcomes

Logic models should be created in order to enrich and advance conversations with stakeholders and other administrators about the operation and impact of a CTR training program. Logic models are figures that typically depict the relationships between key programmatic (a) inputs, (b) activities, (c) outputs and (d) outcomes, often using itemized lists arranged into columns under each of these headers (McLaughlin and Jordan, 1999). Some also reference important contextual or environmental factors affecting all or select programmatic outcomes. The choice of which of these elements to represent in the figure should be motivated by the need to visualize the links between programmatic operations and outcomes that are likely to be of greatest interest to key stakeholders. Many funders ask that logic models be included in program proposals, and the production of these figures is standard practice in the evaluation of training programs across the health sciences (Centers for Disease Control and Prevention, 1999; Centers for Disease Control and Prevention, 2018).

Whenever possible, logic models for CTR training programs should identify short-, intermediate-, and long-term outcomes. The acquisition of critical research knowledge and skills is often represented as an output or short-term outcome of these training programs. In contrast, downstream impacts such as the production of research grants and peer-reviewed publications are often represented as intermediate- or long-term outcomes. To enhance the efficiency of this process, include lists of competency domains, each covering a set of related competencies, as outcomes, rather than enumerating numerous specific competencies. Exemplars of logic models that include lists of CTR competency domains have been published and can be used to inform the development of logic models for similar programs (Rubio et al., 2010). Figure 1 shows a logic model that can be used as a basic template for planning, implementing and evaluating a CTR training program.

Figure 1. Sample Logic Model for an Evaluation of a CTR Training Program
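One lightweight way to keep such a model reviewable alongside evaluation plans is to record it as a plain data structure. The sketch below is illustrative only; its entries are invented examples, not the contents of Figure 1.

```python
# Illustrative logic model captured as a plain data structure so that the
# outcome lists can be versioned, reviewed and shared with stakeholders.
# All entries are fabricated examples.
logic_model = {
    "inputs": ["CTSA funding", "instructional designers", "evaluation staff"],
    "activities": ["competency-mapped workshops", "mentored research projects"],
    "outputs": ["sessions delivered", "trainees completing the program"],
    "outcomes": {
        "short_term": ["gains on competency self-assessments"],  # formative use
        "intermediate_term": ["pilot grant submissions"],
        "long_term": ["independent research funding", "peer-reviewed publications"],
    },
}

def outcomes_for(model, horizon):
    """List the outcomes recorded for a given time horizon."""
    return model["outcomes"][horizon]

print(outcomes_for(logic_model, "short_term"))
```

A structure like this also makes it straightforward to footnote which outcomes serve formative versus summative purposes.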


Distinguish programmatic outcomes used for formative and summative evaluation

When outcomes are measured should be informed by whether the results will be used for formative or summative evaluation (Newman et al., 1995; Yudkowsky, Park and Downing, 2019). The results of formative evaluations are used to improve programs and projects during their implementation. Outcomes chosen for the purpose of formative evaluation are often represented as short- or intermediate-term outcomes in logic models. The results of summative evaluations are used to produce valid and objective measures of programmatic impact at the end of the implementation process. Footnotes can be added to logic models to differentiate the use of certain metrics for these two distinct purposes, as shown in the template above (Figure 1).

Measures of knowledge and skill can be used for both the formative and summative evaluation of CTR training programs. The results of relevant pre-program assessments, and of assessments conducted during the course of a program, enable formative evaluation when they are used to improve the experience of actively participating trainees. For example, the results of subjective or objective skill assessments can be shared with respondents to inform their own development of individualized training plans or used to inform modifications to training curricula to address perceived or objectively-measured gaps in knowledge and skill. The results of post-program skill assessments enable summative evaluation when compared to relevant benchmarks, including the results of pre-program skill assessments or measures of skill acquisition produced by similar training programs (Newman et al., 1995; Centers for Disease Control and Prevention, 1999).
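A minimal sketch of both uses, assuming paired pre- and post-program scores on a 5-point self-assessment scale; the scores and the benchmark below are fabricated for illustration.

```python
# Fabricated paired pre/post self-assessment scores (1-5 scale).
from statistics import mean

pre = [2.0, 2.5, 3.0, 2.0, 3.5]   # baseline competency ratings
post = [3.5, 3.0, 4.0, 3.0, 4.5]  # ratings after the program

# Formative use: flag trainees whose ratings did not improve, so training
# plans or curricula can be adjusted while the program is still running.
not_improved = [i for i, (a, b) in enumerate(zip(pre, post)) if b <= a]

# Summative use: compare the average gain against a benchmark, e.g. the
# mean gain reported by a similar program (the value here is invented).
mean_gain = mean(b - a for a, b in zip(pre, post))
benchmark_gain = 0.8
print(f"Mean gain {mean_gain:.2f} vs benchmark {benchmark_gain}")
```

The same comparison logic applies to objective test scores, which are the preferable basis for summative claims.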

Select validated assessments to measure critical knowledge or skills

The validation of a knowledge or skill assessment requires that evidence be marshalled to support the claim that the assessment actually measures what it was designed to measure (Kane, 1992). Peer-reviewed publications demonstrating the validity of a competency-based assessment will include test results suggesting that the assessment provides reliable and accurate measures of knowledge or understanding among the types of learners targeted by the training program. The use of validated competency-based assessments for program evaluation lends credibility to this work and typically requires fewer resources than the development of novel assessments of knowledge or skill, although both types of assessments can be used simultaneously.

A growing number of validated self-assessments of CTR skills for investigators and research professionals have been published in recent years (Bakken, Sheridan and Carnes, 2003; Streetman et al., 2006; Bates et al., 2007; Ellis et al., 2007; Mullikin, Bakken and Betz, 2007; Lowe et al., 2008; Cruser et al., 2009; Cruser et al., 2010; Lipira et al., 2010; Murphy et al., 2010; Poloyac et al., 2011; Robinson et al., 2013; Ameredes et al., 2015; Awaisu et al., 2015; Robinson et al., 2015; Sonstein et al., 2016; Jeffe et al., 2017; Patel et al., 2018; Hornung et al., 2019). When choosing among validated assessments, it is critical to select those most closely aligned with the competency framework chosen for a given CTR program and validated using learners with credentials and research experience similar to those of the program's participants. Be sure to obtain all necessary permissions from the creators of any validated assessment before using the instrument for evaluation purposes.

The design and purpose of a CTR training program may require the use of both subjective and objective assessments. Subjective assessments, through which participants rate their own knowledge or skills, can provide valid measures of self-confidence in one's abilities, but their results have not been shown to correlate with those of objective measures (Hodges, Regehr and Martin, 2001; Davis et al., 2006). Subjective and objective assessments of CTR knowledge and skill can be used simultaneously, but only the results of the latter should be used to make inferences about the knowledge and skills that participants actually possess.
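The degree of agreement between the two kinds of measures can be checked directly. The sketch below computes a Pearson correlation between hypothetical self-ratings and objective test scores; all data are fabricated for illustration.

```python
# Fabricated data: subjective self-ratings (1-5) and objective test scores (%)
# for the same seven trainees.
from statistics import mean, pstdev

self_rated = [4, 5, 3, 4, 2, 5, 3]
objective = [60, 55, 70, 58, 65, 62, 50]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired score lists."""
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (pstdev(xs) * pstdev(ys))

r = pearson_r(self_rated, objective)
# A weak or negative r cautions against substituting self-ratings for
# objective measures when making claims about actual skill.
print(f"r = {r:.2f}")
```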

CTR training programs that confer certifications formally recognized by professional institutions or organizations may be required to use objective assessments to verify the actual research capabilities of their graduates. In these cases, the specific assessments to be used may already have been identified by the associated professional groups. When multiple or conflicting assessments are required by these groups, conversations with programmatic stakeholders will be needed before any final determination about the use of a competency-based assessment can be made.

Estimate the time and effort required for implementing an evaluation plan

Evaluation plans take many different forms, but all detail how evaluation data will be collected, analyzed, reported and used (Trochim, Rubio and Thomas, 2013). The costs of implementing rigorous evaluation plans can be substantial, so it is essential that the resources available for this work be accurately estimated and budgeted. Some evaluation activities, such as administering publicly available skill assessments on free online platforms, have comparatively low costs; others, such as focus groups, can cost considerably more.

The effort required for each step of the evaluation plan can be estimated in a table (Table 3). When reviewing an evaluation plan, carefully consider the risks and benefits of proposed assessments and choose those that are feasible to administer given the resources available. Collaborate with stakeholders to ensure that key evaluation activities are aligned with other project timelines and plans that guide the allocation of financial, human, and institutional resources needed to implement a CTR training program (Centers for Disease Control and Prevention, 1999).

Table 3. Example evaluation activities and time required for an evaluation of CTR training

Evaluation Planning
  • Develop competency crosswalk for program components
  • Draft logic model with short-, intermediate- and long-term outcomes
  • Draft and submit IRB application

Data Collection
  • Institutional records of participant affiliations
  • Competency assessment administration
  • Focus group administration
  • Focus group transcription

Data Analysis
  • Cleaning and management of all quantitative data
  • Quantitative analysis of competency assessment data
  • Qualitative coding of focus group data
  • Qualitative coding of participant research projects

Reporting
  • Draft stakeholder reports

Total: 116 hrs. (~3 weeks)
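Subtotals and totals of this kind are easy to tally programmatically once estimates are in hand. In the sketch below the per-activity hours are invented for illustration; only the 116-hour total echoes Table 3.

```python
# Hypothetical effort estimates (hours) for the activity categories in
# Table 3; every per-activity figure below is invented for illustration.
effort = {
    "Evaluation Planning": {
        "Develop competency crosswalk": 8,
        "Draft logic model": 6,
        "Draft and submit IRB application": 10,
    },
    "Data Collection": {
        "Institutional records of participant affiliations": 6,
        "Competency assessment administration": 12,
        "Focus group administration": 10,
        "Focus group transcription": 12,
    },
    "Data Analysis": {
        "Cleaning and management of quantitative data": 10,
        "Quantitative analysis of assessment data": 14,
        "Qualitative coding of focus group data": 12,
        "Qualitative coding of participant research projects": 8,
    },
    "Reporting": {"Draft stakeholder reports": 8},
}

subtotals = {phase: sum(tasks.values()) for phase, tasks in effort.items()}
total = sum(subtotals.values())
for phase, hours in subtotals.items():
    print(f"{phase}: {hours} hrs")
print(f"Total: {total} hrs (~{total / 40:.1f} weeks)")
```

Keeping the estimates in one structure makes it easy to revise them with stakeholders as the plan evolves.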


Train evaluation team members to collect assessment data in reliable ways

Once an evaluation plan has been developed and a formal evaluation team assembled, it is important that team members understand the steps required to collect data reliably using competency-based assessments. For example, use of a CTR assessment of regulatory compliance may require that all persons administering the assessment be perfectly consistent in their use of the instrument and in their subsequent scoring of individuals' performance. Even objective scoring systems carry risks of subjective interpretation (Van der Vleuten et al., 2010). Research has also shown that individuals in apparent positions of power may dissuade respondents from giving honest responses on tests of their knowledge or skills (Taut and Brauns, 2003; Van der Vleuten et al., 2010). When evaluating CTR training programs, it is essential that the appropriate members of the evaluation team receive training in, and demonstrate their understanding of, the relevant procedures.

Use technology platforms that best facilitate data collection, analysis and reporting

To maintain the consistency, accuracy and accessibility of assessment results, use one platform to administer, analyze and report survey results whenever possible. For example, common online survey platforms such as REDCap, Qualtrics and SurveyMonkey provide automatically generated results reports that include options for basic analyses, such as descriptive statistics and cross-tabulations of response options. If necessary, the resultant data can also be extracted from these platforms so that further analyses can be performed.
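The kinds of reports these platforms generate automatically can also be reproduced on exported data. The sketch below, using invented records and only Python's standard library, computes descriptive statistics and a simple cross-tabulation of role by rating.

```python
# Invented survey export: each record is (trainee role, self-rating 1-5).
from collections import Counter
from statistics import mean

records = [
    ("coordinator", 3), ("coordinator", 4), ("investigator", 2),
    ("investigator", 3), ("coordinator", 5), ("investigator", 4),
]

# Descriptive statistics across all respondents
ratings = [score for _, score in records]
print(f"n={len(ratings)}, mean={mean(ratings):.2f}")

# Cross-tabulation of role by rating, as platforms report automatically
crosstab = Counter((role, score) for role, score in records)
for (role, score), count in sorted(crosstab.items()):
    print(f"{role:>12} rated {score}: {count}")
```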

The deliberate, coordinated use of a platform for data collection, management and analysis can also facilitate further tests of the validity of measures of CTR knowledge and skill. Many standard quantitative validation tests are included in statistical software packages such as STATA, SAS and R, including exploratory and confirmatory factor analysis (Levine, 2005; Osborne and Costello, 2005), which are commonly used to identify and validate the competency domains that structure many competency-based assessments. While there are many other valuable validity tests (Kane, 1992), these are the ones most commonly used to validate skill assessments. Software for qualitative analysis, such as Dedoose or NVivo, can be used to code transcripts from focus groups and interviews in ways that reveal patterned relationships in the content of participant responses.
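Factor analyses of the kind described above are best run in dedicated statistical packages, but one widely used internal-consistency statistic, Cronbach's alpha, can be sketched in plain Python. The item scores below are fabricated for illustration; this is a companion reliability check, not a substitute for full validation.

```python
# Cronbach's alpha: internal consistency of the items within one
# competency domain. All scores below are fabricated examples.
from statistics import pvariance

def cronbach_alpha(item_scores):
    """item_scores: one list per item, each holding respondents' scores."""
    k = len(item_scores)
    item_vars = sum(pvariance(item) for item in item_scores)
    totals = [sum(scores) for scores in zip(*item_scores)]
    return (k / (k - 1)) * (1 - item_vars / pvariance(totals))

# Three items from one hypothetical competency domain, five respondents each
items = [
    [4, 3, 5, 2, 4],
    [4, 2, 5, 3, 4],
    [5, 3, 4, 2, 4],
]
alpha = cronbach_alpha(items)
print(f"alpha = {alpha:.2f}")  # values above ~0.7 are conventionally acceptable
```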

Consult with subject matter experts to interpret assessment results

The results of competency-based assessments of CTR skill may not be readily interpretable, particularly when no established criteria or rubrics are associated with the assessment. Indeed, many validated assessments do not prescribe how the resultant scores should be interpreted, whether by respondents seeking to understand their own training needs or by administrators seeking to improve a program. In these cases, it is important to consult subject matter experts in clinical and translational research, psychometrics and statistical analysis when analyzing assessment results.

The need to consult these subject matter experts is particularly acute when subjective and objective assessments are both used to evaluate CTR training programs. Subjective tests of knowledge and skill have been shown to correlate poorly with objective measures (Hodges, Regehr and Martin, 2001; Davis et al., 2006). There is evidence that while subjective measures of CTR knowledge and skill often increase between pre- and post-program tests, scores on objective tests do not increase at a similar rate (Ellis et al., 2007; Cruser et al., 2010). Measurement and educational experts can help ensure that assessment results are interpreted in ways that are justified by the design and administration of the assessment instrument.

Collect stakeholder feedback about options for programmatic improvement

An essential step of program evaluation involves sharing of evaluation results with stakeholder groups in order to facilitate the collection of their feedback about programmatic improvement (Wandersman et al., 2000). For example, in the four overlapping and iterative phases of the Plan, Do, Check, and Act (PDCA) quality improvement cycle, the third stage typically involves studying the outcomes of a given initiative in ways that enable the articulation of what was learned through the implementation process (Juran and DeFeo, 2010; Kleppinger and Ball, 2010). The involvement of stakeholders in this step of the process is critical to the rigorous evaluation of any CTR training program (Trochim, Rubio and Thomas, 2013).

Reports of evaluation results should be customized to speak to stakeholder subgroups whenever it is not possible or productive to share the same report with all of them. For example, stakeholders with distinctive interests in the scientific content or pedagogical approach of a CTR training program may be most interested in reports showing how the results of competency-based assessments are being used to help participants identify and address their personal research learning challenges (Chatterji, 2003). In contrast, stakeholders who value training programs as an institutional resource enabling the CTR enterprise may be more interested in the research careers or achievements of the participants (Frechtling and Sharp, 2002). Whenever possible, thoroughly document stakeholder feedback so that it can be used to inform future discussions about programmatic improvement and impact.


The guidelines presented here are intended to support the work of all clinical research professionals who are charged with the administration and evaluation of CTR training programs. In particular, this work fulfills a need for guidelines that clinical research investigators and administrators can follow to integrate competency assessments into their evaluation plans. Doing so will better enable research centers to collaborate with programmatic stakeholders efficiently and effectively in order to measure and improve the quality and impact of CTR training using the results of competency-based assessments of research knowledge and skill.

Take Home Messages

  • Effective training programs in clinical and translational research are critical to the development of the research workforce.
  • Guidelines for integrating competency-based frameworks and assessments into program evaluations are needed to promote the quality and impact of research training programs.
  • The stakeholders of clinical and translational research training programs should be routinely consulted throughout evaluation processes that involve competency frameworks and assessments.
  • The systematic incorporation of competency-based approaches into evaluation plans facilitates the work of those developing, administering and evaluating research training programs.
  • The use of validated competency assessments for programmatic evaluation is essential to the collection of reliable and relevant performance metrics.

Notes On Contributors

All of the co-authors contributed to the development of the guidelines presented in this work, informed the conclusions it advances and participated in all rounds of revisions required for submission.

Elias Samuels PhD, is the Manager of Evaluation at the Michigan Institute for Clinical and Health Research at the University of Michigan. ORCID ID:

Phillip Anton Ianni PhD, is a Postdoctoral Research Fellow at the Michigan Institute for Clinical and Health Research at the University of Michigan. ORCID ID:

Haejung Chung MA, is the Manager of Instructional Design and Technology at the Tufts Clinical and Translational Science Institute at Tufts University.

Brenda Eakin MS, is an Instructional Designer at the Michigan Institute for Clinical and Health Research at the University of Michigan. ORCID ID:

Camille Martina PhD, is a Research Associate Professor in the departments of Public Health Sciences and of Emergency Medicine at the University of Rochester. ORCID ID:

Susan Lynn Murphy ScD OTR/L, is an Associate Professor in the Department of Physical Medicine and Rehabilitation at the University of Michigan. ORCID ID:

Carolynn Jones, DNP, MSPH, RN, FAAN is Associate Clinical Professor in the College of Medicine at The Ohio State University and Co-Director of Workforce Development for The Ohio State Center for Clinical Translational Science. ORCID ID:


This work was made possible through the thoughtful guidance of Vicki L. Ellingrod, Pharm.D., Sarah E. Peyre, Ed.D., and the Development, Implementation and Assessment of Novel Training in Domain-Based Competencies (DIAMOND) study team. All tables and figures included in the manuscript were created by the authors.


Adlin, T. and Pruitt, J. (2010) The essential persona lifecycle: Your guide to building and using personas. Burlington, MA: Morgan Kauffman Publishers.

Ameredes, B. T., Hellmich, M. R., Cestone, C. M., Wooten, K. C., et al. (2015) 'The Multidisciplinary Translational Team (MTT) Model for Training and Development of Translational Research Investigators', CTS: Clinical and Translational Science, 8(5), pp. 533-541.

Awaisu, A., Kheir, N., Alrowashdeh, H. A., Allouch, J. N., et al. (2015) 'Impact of a pharmacy practice research capacity-building programme on improving the research abilities of pharmacists at two specialised tertiary care hospitals in Qatar: A preliminary study', Journal of Pharmaceutical Health Services Research, 6(3), pp. 155-164.

Bakken, L., Sheridan, J. and Carnes, M. (2003) 'Gender differences among physician-scientists in self-assessed abilities to perform clinical research', Acad Med, 78(12), pp. 1281-6.

Bates, I., Ansong, D., Bedu-Addo, G., Agbenyega, T., et al. (2007) 'Evaluation of a learner-designed course for teaching health research skills in Ghana', BMC Med Educ, 7, p. 18.

Bonham, A., Califf, R., Gallin, E. and Lauer, M. (2012) Appendix E Discussion Paper: Developing a Robust Clinical Trials Workforce. In: Envisioning a Transformed Clinical Trials Enterprise in the United States: Establishing an Agenda for 2020: Workshop Summary. Washington DC: Institute of Medicine, National Academies Press.

Brandon, P. R. (1998) 'Stakeholder participation for the purpose of helping ensure evaluation validity: Bridging the gap between collaborative and non-collaborative evaluations.', American Journal of Evaluations, 19(3), pp. 325-337.

Callard, F., Rose, D. and Wykes, T. (2012) 'Close to the bench as well as the bedside: Involving service users in all phases of translational research', Health Expectations, 15(4), pp. 389-400.

Calvin-Naylor, N., Jones, C., Wartak, M., Blackwell, K., et al. (2017) 'Education and training of clinical and translational study investigators and research coordinators: a competency-based approach', Journal of Clinical and Translational Science 1(1), pp. 16-25.

Centers for Disease Control and Prevention (1999) 'Framework for program evaluation in public health', Morbidity and Mortality Weekly Report, 48(RR-12).  Available at:  (Accessed: September 17, 2019).

Centers for Disease Control and Prevention (2018) Step 2B: Logic Models. Available at: (Accessed: July 19, 2019).

Chatterji, M. (2003) Designing and using tools for educational assessment. Boston: Allyn & Bacon.

Cruser, D., Brown, S. K., Ingram, J. R., Podawiltz, A. L., et al. (2010) 'Learning outcomes from a biomedical research course for second year osteopathic medical students', Osteopathic Medicine and Primary Care, 4(4).

Cruser, D., Dubin, B., Brown, S. K., Bakken, L. L., et al. (2009) 'Biomedical research competencies for osteopathic medical students', Osteopath Med Prim Care, 3, p. 10.

Davis, D., Mazmanian, P. E., Fordis, M., Van Harrison, R., et al. (2006) 'Accuracy of physician self-assessment compared with observed measures of competence: A systematic review', JAMA, 296(9), pp. 1094-1102.

Dilmore, T. C., Moore, D. W. and Bjork, Z. (2013) 'Developing a competency-based educational structure within clinical and translational science', Clinical and Translational Science, 6(2), pp. 98-102.

Ellis, J. J., McCreadie, S. R., McGregory, M. and Streetman, D. S. (2007) 'Effect of pharmacy practice residency training on residents' knowledge of and interest in clinical research', American Journal of Health-System Pharmacy, 64(19), pp. 2055-2063.

Frechtling, J. and Sharp, L. (2002) User-friendly handbook for project evaluation. National Science Foundation. Available at: (Accessed: September 17, 2019).

Geist, M. (2010) 'Using the Delphi method to engage stakeholders: A comparison of two studies', Evaluation and Program Planning, 33(2), pp. 147-54.

Hodges, B., Regehr, G. and Martin, D. (2001) 'Difficulties in recognizing one's own incompetence: Novice physicians who are unskilled and unaware of it', Academic Medicine, 76, pp. S87-89.

Hornung, C., Ianni, P. A., Jones, C. T., Samuels, E. M., et al. (2019) 'Indices of clinical research coordinators' competence', Journal of Clinical and Translational Science, in press.

Jeffe, D., Rice, T. K., Boyington, J. E. A., Rao, D. C., et al. (2017) 'Development and evaluation of two abbreviated questionnaires for mentoring and research self-efficacy', Ethnicity and Disease, 27(2), pp. 179-188.

Joint Task Force (2018) Joint Task Force for Clinical Trial Competency Blog. Available at: (Accessed: August 31, 2018).

Joosten, Y., Israel, T. L., Williams, N. A., Boone, L. R., et al. (2015) 'Community engagement studios: A structured approach to obtaining meaningful input from stakeholders to inform research', Academic Medicine, 90(12), p. 1646.

Juran, J. and DeFeo, J. (2010) Juran's Quality Handbook: The complete guide to performance excellence. 6th edn. New York: McGraw Hill.

Kane, M. (1992) 'An argument-based approach to validity', Psychological Bulletin, 112(3), pp. 527-535.

Kirkpatrick, D. and Kirkpatrick, J. (2006) Evaluating training programs: The four levels. Berrett-Koehler Publishers.

Kleppinger, C. and Ball, L. (2010) 'Building quality in clinical trials with use of a quality system approach', Clinical Infectious Diseases, 51(S1), pp. S111-16.

Levine, T. (2005) 'Confirmatory factor analysis and scale validation in communication research', Communication Research Reports, 22(4), pp. 335-338.

Lipira, L., Jeffe, D. B., Krauss, M., Garbutt, J., et al. (2010) 'Evaluation of clinical research training programs', Clinical and Translational Science, 3(5), pp. 243-248.

Lowe, B., Hartmann, M., Wild, B., Nikendei, C., et al. (2008) 'Effectiveness of a 1-year resident training program in clinical research: a controlled before-and-after study', Journal of General Internal Medicine, 23(2), pp. 122-8.

Martinez, L., Russell, B., Rubin, C. L., Leslie, L. K., et al. (2012) 'Clinical and translational research and community engagement: Implications for researcher capacity building', Clinical and Translational Science, 5(4), pp. 329-32.

McLaughlin, J. and Jordan, G. B. (1999) 'Logic models: a tool for telling your program's performance story', Evaluation and Program Planning, 22, pp. 65-72.

Misso, M., Ilic, D., Haines, T. P., Hutchinson, A. M., et al. (2016) 'Development, implementation and evaluation of a clinical research engagement and leadership capacity building program in a large Australian health care service', BMC Medical Education, 16(1), p. 13.

Mullikin, E., Bakken, L. and Betz, N. (2007) 'Assessing research self-efficacy in physician scientists: The Clinical Research Appraisal Inventory', Journal of Career Assessment, 15(3), pp. 367-387.

Murphy, S., Kalpakjian, C. Z., Mullan, P. B. and Clauw, D. J. (2010) 'Development and Evaluation of the University of Michigan's Practice-Oriented Research Training (PORT) Program', American Journal of Occupational Therapy, 64(5), pp. 796-803.

NCATS (2011) Core Competencies for Clinical and Translational Research. Available at: (Accessed: September 16, 2019).

NCATS (2017) Strategic Goal 3: Develop and foster innovative translational training and a highly skilled, creative and diverse translational science workforce. Available at: (Accessed: September 16, 2019).

NCATS (2018) About NCATS. Available at: (Accessed: 2019).

Newman, D., Scheirer, M. A., Shadish, W. R. and Wye, C. (1995) 'Guiding principles for evaluators', New Directions for Program Evaluation, 66, pp. 19-26.

Osborne, J. and Costello, A. (2005) 'Best practices in exploratory factor analysis: Four recommendations for getting the most from your analysis', Practical Assessment, Research, and Evaluation, 10(7), pp. 1-9.

Patel, M., Tomich, D., Kent, T. S., Chaikof, E. L., et al. (2018) 'A program for promoting clinical scholarship in general surgery', Journal of Surgical Education, 75(4), pp. 854-860.

Poloyac, S. M., Empey, K. M., Rohan, L. C., Skledar, S. J., et al. (2011) 'Core competencies for research training in the clinical pharmaceutical sciences', American Journal of Pharmaceutical Education, 75(2), p. 27.

Robinson, G., Moore, C., McTigue, K., Rubio, D., et al. (2015) 'Assessing competencies in a master of science in clinical research program: The comprehensive competency review', Clinical and Translational Science, 8, pp. 770-775.

Robinson, G., Switzer, G. E., Cohen, E., Primack, B., et al. (2013) 'A shortened version of the Clinical Research Appraisal Inventory: CRAI-12', Academic Medicine, 88(9), pp. 1340-45.

Rubio, D., Schoenbaum, E. E., Lee, L. S., Schteingart, D. E., et al. (2010) 'Defining translational research: implications for training', Academic Medicine, 85(3), pp. 470-5.

Sonstein, S., Brouwer, R. N., Gluck, W., Kolb, R., et al. (2018) 'Leveling the joint task force core competencies for clinical research professionals', Therapeutic Innovation and Regulatory Science.

Sonstein, S., Silva, H., Jones, C., Calvin-Naylor, N., et al. (2016) 'Global self-assessment of competencies, role relevance, and training needs among clinical research professionals', Clinical Researcher, 30(6), pp. 38-45.

Streetman, D. S., McCreadie, S. R., McGregory, M. and Ellis, J. J. (2006) 'Evaluation of clinical research knowledge and interest among pharmacy residents: survey design and validation', American Journal of Health-System Pharmacy, 63(23), pp. 2372-7.

Taut, S. and Brauns, D. (2003) 'Resistance to evaluation: A psychological perspective', Evaluation, 9(3), pp. 703-19.

Trochim, W., Rubio, D. and Thomas, V., Evaluation Key Function Committee of the CTSA Consortium (2013) 'Evaluation guidelines for the Clinical and Translational Science Awards (CTSAs)', Clinical and Translational Science, 6(4), pp. 303-9.

Van der Vleuten, C., Schuwirth, L. W., Scheele, F., Driessen, E. W., et al. (2010) 'The assessment of professional competence: Building blocks for theory development', Best Practice & Research Clinical Obstetrics and Gynaecology, 24(6), pp. 703-19.

Wandersman, A., Imm, P., Chinman, M. and Kaftarian, S. (2000) 'Getting to outcomes: a results-based approach to accountability', Evaluation and Program Planning, 23(3), pp. 389-395.

Yudkowsky, R., Park, Y. and Downing, S. (2019) Assessment in health professions education. 2nd edn. Routledge.




There are no conflicts of interest.
This has been published under the Creative Commons "CC BY-SA 4.0" license.

Ethics Statement

This work does not constitute human subjects research, so no IRB review was required. The work contains only the opinions and individual work of the authors.

External Funding

This work was funded by the National Center for Advancing Translational Sciences – NIH (1-U01TR002013-01).



Trevor Gibbs - (16/03/2020)
An interesting although rather difficult paper to read. I personally found it difficult due to the interchange of the words evaluations and assessments, which to me have very distinct meanings.
I would agree with all of my co-reviewer's comments and would add that I feel the main message(s) from this paper, i.e. the list of tips is lost in the complexity and lack of flow in the paper.
This paper does cover an important area and one that has not been satisfactorily addressed, so I do congratulate the authors in attempting to apply logic to the question of how competency-based assessments used in Clinical and Translational Research (CTR) training can help inform the overall evaluation of a CTR programme. I would wonder how much importance one should place on the product outcomes as an evaluation measure of the programme.
I do believe that this is a paper to recommend to those involved in evaluating research-training programmes, despite the reservations.
Possible Conflict of Interest:

For transparency, I am one of the Associate Editors of MedEdPublish

David Bruce - (30/11/2019)
I thought that this was a complex and interesting paper where the authors consider how competency-based assessments used in Clinical and Translational Research (CTR) training can help inform the overall evaluation of a CTR programme. Guidelines for the evaluation of CTR programmes have been developed and propose that the professional development and training within the programmes for all staff and their level of knowledge and skills should be considered when such programmes are evaluated. The authors propose 12 guidelines to help administrators and training programme directors use competency-based assessment in their evaluations.

The background to CRT programmes in the United States and funding across 50 research institutions (intuitions!) is outlined and the need for guidelines to help administrators and programme directors incorporate and make sense of competency based tests is proposed.

At this stage in the paper I felt as a reader that some definitions and more clarity about CTR training programmes and who the CRT trainees are likely to be would have been helpful. As an example of what I mean about definitions, medical education programmes in the UK (undergraduate and postgraduate) are subject to quality control (by the providers) and quality management and quality assurance by regulators. This ensures that the standards for training are being met. I was unclear how this differed from the evaluations that this paper discusses. In respect of the training programmes within the CTR centres – I would like to have known the staff members in the training programmes and indeed if different training programmes were in place for different staff members. I assume some will be clinicians and some will be scientists. Are all in the one training programme / are the programmes clearly defined (learning outcomes defined and teaching and assessment programmes in place)? All this may be obvious to those involved in CTR programmes – but not clear to non-specialist readers.

The paper then discusses each of the proposed guidelines. I counted 11 guidelines and not 12 as stated. Some of the guidelines proposed were general in nature - mapping of roles, data collection and use of IT platform - which matched previous guidelines for overall programme evaluation.

A number of specific guidelines looked at competency based assessments. I felt that this part of the paper was less clear and more explanation was needed. The task for the administrators and training programme directors appeared to be considering what outcomes the programmes should deliver and the selection of validated competency assessment instruments. It now appeared to me as a reader that perhaps ad hoc training might be happening within the programmes and the evaluators were fitting competency assessments into this process. This may not be the case – but more explanation of the actual training programmes would be required to help explain these guidelines.

I was also aware that the other factors normally considered when looking at the quality of an educational programme, such as the learning environment and how learners and their trainers are supported, did not seem to feature in these guidelines.

This paper will be of interest to those working in CTR centres and those involved in CTR education and training. I think that for the general reader the authors need to provide more background and explanation of their proposals.