Research article
Open Access

A needs assessment survey for clinical competency committee (CCC) faculty development

Anita V. Shelgikar[1], Douglas J. Gelb[1], Mollie McDermott[1], Russel S. Hathaway[1], J. Sybil Biermann[1], Zachary N. London[1]

Institution: 1. University of Michigan
Corresponding Author: Dr Anita V. Shelgikar ([email protected])
Categories: Assessment, Learning Outcomes/Competency, Teachers/Trainers (including Faculty Development)
Published Date: 11/04/2019


Background. The Accreditation Council for Graduate Medical Education (ACGME) requires training programs to have a clinical competency committee (CCC) that regularly reviews each house officer’s performance.

Objective. To identify areas for CCC faculty development via a survey of CCC faculty.

Methods. An anonymous online survey was sent to CCC faculty of ACGME-accredited residency and fellowship programs at our large academic institution.

Results. The response rate was 42% (177 of 417 identified CCC members). Respondents were from residency (51%) and fellowship (49%) programs. Fifty-nine percent (104 of 177) reported that more time during CCC meetings was spent discussing low performers, whereas 41% (73 of 177) responded that equal time was spent discussing high and low performers. Fifty-four percent of respondents (95 of 177) reported being unaware of CCC resources available through the ACGME, and 59% (104 of 177) indicated that faculty development sessions on CCC resources and best practices would be helpful. Seventy-two percent of respondents (128 of 177) thought that incorporation of a CCC had a positive impact on house officer assessment.

Conclusions. A survey of CCC members at our institution indicates that faculty believe CCC review is a valuable component of house officer assessment. Further faculty development is needed to increase familiarity with available CCC resources and best practices.

Keywords: Clinical competency committee; needs assessment; faculty development; graduate medical education


Introduction

In 1998, the Accreditation Council for Graduate Medical Education (ACGME) began to shift from process-based accreditation toward an outcome-based system. Clinical training program accreditation requires evaluation of house officer competence in six core domains (patient care, medical knowledge, practice-based learning and improvement, interpersonal and communication skills, professionalism, and systems-based practice).(Swing, 2007)

In 2012, Nasca and colleagues proposed the Next Accreditation System, which includes milestone-based assessment of each house officer as determined by a clinical competency committee (CCC).(Nasca et al., 2012) Each ACGME-accredited program must have a CCC that meets at least semi-annually to perform a milestone-based assessment of each house officer.(Accreditaton Council for Graduate Medical Education, 2017b) CCCs are composed of program faculty who use multiple assessment tools to assign competency-based milestone ratings to each house officer.(Accreditaton Council for Graduate Medical Education, 2017a) CCCs are given considerable latitude to decide the weight given to each assessment tool, the definition of competence, and the procedures to follow in ensuring competence and remediating deficiencies.

A first step in evaluating the effectiveness of CCCs is to determine how much variability exists among CCCs with respect to how they have approached these decisions and how much variability exists among CCC faculty members with respect to how they perceive their CCC responsibilities. The literature offers some guidance to CCC faculty about their roles,(French, Dannefer and Colbert, 2014) and one cardiovascular fellowship program director has described how his faculty spent substantial time on CCC meeting preparation and attendance,(Hong, 2015) but a broader survey of CCC members from a variety of medical specialties is needed. Prior work has suggested that CCCs tend to use a problem identification approach rather than a developmental approach to performance reviews.(Hauer et al., 2015) Further study is required to help identify the specific types of tools and data inputs used during CCC reviews.

The goals of this study were to (1) conduct a single-site survey of CCC members to understand responsibilities involved with CCC service, (2) review the number and type of assessment tools used by CCCs across a broad range of medical specialties, and (3) assess the need for ongoing faculty development for CCC members to better understand CCC roles, responsibilities, and activities.


Methods

Survey. We developed a self-administered survey to collect information from CCC faculty at the University of Michigan, a large academic medical center, about their CCC experience. Because no preexisting survey specific to CCC faculty perspectives was available, this survey was created with input from program directors (AS and ZL), clinical faculty (DG and MM), and our designated institutional official (JB). The survey included a combination of multiple-choice questions and free-text answers. Responses were collected anonymously. The survey instrument is provided as supplemental content.

Data Collection and Analysis. An online inquiry form was sent to all program directors and program coordinators (n=106) of the ACGME-accredited graduate medical education programs at Michigan Medicine, asking for their program name, program type (core residency vs. fellowship) and number of CCC members. One week later, a link to the anonymous online survey instrument was sent to the same group of program directors and program coordinators with a request to share this link with their respective CCC members. Two mailings of the survey link were sent over two weeks to maximize response. Data were stored on a password-protected network drive and in the secure survey account. Descriptive analyses were performed to understand respondents’ characteristics and their responses to the survey items.

Ethical approval. This study received an exemption from the University of Michigan Institutional Review Board, identification number HUM00110834.


Results

Respondents. The response rate to the initial inquiry form was 62% (66 of 101 programs), corresponding to 417 CCC members. The survey response rate was 42% (177 of 417 identified CCC members); respondent characteristics are noted in Table 1.

Table 1. Survey Respondent Characteristics (n=177)

                                              n (%)
Faculty Rank
   Instructor                                 12 (7)
   Assistant Professor                        74 (42)
   Associate Professor                        45 (26)
   Professor                                  46 (26)
Program Type
   Residency                                  90 (51)
   Fellowship                                 87 (49)
Role on CCC
   CCC Member/Chair, not Program Director*    110 (61)
   Program Director*                          67 (39)

*Includes Assistant and Associate Program Directors

Assessment tools. Respondents identified a number of assessment tools used by their respective CCCs during review of house officer performance (Figure 1). Evaluation forms with ACGME milestone terminology and faculty evaluations of house officers were the most commonly used data for CCC review.

Figure 1. Tools used by Clinical Competency Committee members to determine house officers’ milestone ratings. HO = house officer. Totals exceed 100% because respondents could select multiple choices.

Time allocation. Sixty percent of CCC members (106 of 177) spent up to 1 hour per house officer independently reviewing data in preparation for the CCC meeting(s) in a single review cycle (e.g., mid-year review). Forty-one percent of respondents (72 of 177) reported spending equal CCC discussion time on high- and low-performing house officers, while 59% (105 of 177) reported that more discussion time was devoted to house officers with low performance. With respect to the amount of CCC meeting time spent discussing house officer remediation, 75% of respondents (132 of 177) indicated <25% of meeting time, 19% (33 of 177) indicated 25-50%, and 3% (6 of 177) indicated >50%; the remaining 3% (6 of 177) did not reply to this question.

The majority of CCC members (76%, 135 of 177) reported they did not have protected time for their CCC involvement. Program directors, including assistant and associate program directors, were more likely to receive protected time for CCC responsibilities (41%, 18 of 44) compared to non-program director CCC members (19%, 25 of 133).

Impressions of CCC review. Overall, CCC members reported a favorable impression of how CCCs have affected the review of house officer performance (Figure 2). Fifty-four percent of respondents (95 of 177) reported being unaware of CCC resources available through the ACGME and 59% (104 of 177) indicated that faculty development sessions on CCC resources and best practices would be helpful.

Figure 2. Clinical competency committee (CCC) members’ impression of how house officer assessment has changed with the incorporation of CCCs. Panel A shows detailed responses. Panel B summarizes the responses. Totals exceed 100% because respondents could give more than one response. * = negative impact; ** = no change. Panel C includes examples of how respondents perceive CCCs to improve house officer performance reviews.


Discussion

Our survey of CCC members at our large academic medical center showed an overall favorable impression of CCC activities. Respondents identified a multitude of assessment tools (for example, chart review, simulation center performance, and evaluation forms) used to inform CCC assignment of competency-based milestone ratings to house officers. Future surveys of CCC members should include more detailed inquiry on how CCCs select assessment tools, how often these tools are used, and how the resultant data are incorporated into CCC review of house officer performance. Most respondents found that incorporation of CCCs was a valuable component of house officer assessment, although some found that CCC review made the overall process more time-consuming and complicated. Identification and dissemination of best practices may facilitate more streamlined workflow for CCC members, particularly those who do not receive protected time for CCC responsibilities.

This survey identified three areas of need. First, the majority of respondents were not aware of ACGME resources available for CCCs. Second, many CCC members would like more faculty development sessions related to CCC activities. Third, CCC members need protected time to accomplish their CCC responsibilities. We have shared these results during a program directors’ meeting at our institution as part of our effort to increase awareness of national and local resources available for CCCs. At this meeting, program directors shared their experiences and challenges related to CCC service. Our institution is now working to share clinical assessment tools across programs. Program directors and other CCC members are encouraged to engage with medical education groups within their specialties to advance their understanding of CCC resources and best practices.

Our program response rate of 62% is similar to other needs assessment surveys in medical education.(Curran et al., 2012) The survey response rate of 42% was lower than the program response rate. Despite this limitation, we were able to collect information from CCC members to inform our needs assessment and guide next steps for faculty development. A potential opportunity identified through this work is the creation of a list of CCC members across programs, perhaps even across specialties and/or institutions, to facilitate future communication and research related to CCC responsibilities. CCCs could work to develop procedures to maintain the benefits indicated in our results (e.g., process standardization and thoroughness of reviews) that would also mitigate the complexity in using the Next Accreditation System.


Conclusions

This survey of CCC members at one institution showed that CCCs are perceived to be helpful in assessing house officer performance. Ongoing faculty development is required to enhance awareness of CCC-related resources. CCC members desire protected time to fulfill their responsibilities in this often unfunded mandate for house officer assessment and program accreditation. Opportunities for future study include efforts to facilitate communication and share best practices among institutional, regional, and national CCCs.

Take Home Messages

1. Faculty find implementation of clinical competency committees (CCCs) to be valuable in the assessment of house officer performance.

2. Many CCC faculty are unaware of central resources available to guide CCC efforts.

3. Institutional, regional, and national CCCs need better ways to communicate and share best practices.

Notes On Contributors

1. Anita Valanju Shelgikar, MD, MHPE is a Clinical Associate Professor, Department of Neurology, University of Michigan, Ann Arbor, MI, USA.

2. Douglas J. Gelb, MD, PhD is a Professor, Department of Neurology, University of Michigan, Ann Arbor, MI, USA.

3. Mollie McDermott, MD, MS is an Assistant Professor, Department of Neurology, University of Michigan, Ann Arbor, MI, USA.

4. Russel S. Hathaway, PhD is a Graduate Medical Education Research Specialist, Graduate Medical Education, University of Michigan, Ann Arbor, MI, USA.

5. J. Sybil Biermann, MD is the Associate Dean Graduate Medical Education and Designated Institutional Official, University of Michigan, Ann Arbor, MI, USA.

6. Zachary N. London, MD is a Clinical Associate Professor, Department of Neurology, University of Michigan, Ann Arbor, MI, USA.




References

Accreditation Council for Graduate Medical Education (2017a) Clinical Competency Committees: A Guidebook for Programs (2nd Edition). Available at: (Accessed: March 26, 2018).

Accreditation Council for Graduate Medical Education (2017b) Common Program Requirements. Available at: (Accessed: February 24, 2018).

Curran, D., Xu, X., Dewald, S., Johnson, T. R., et al. (2012) 'An alumni survey as a needs assessment for curriculum improvement in obstetrics and gynecology', J Grad Med Educ, 4(3), pp. 317-21.

French, J. C., Dannefer, E. F. and Colbert, C. Y. (2014) 'A systematic approach toward building a fully operational clinical competency committee', J Surg Educ, 71(6), pp. e22-7.

Hauer, K. E., Chesluk, B., Iobst, W., Holmboe, E., et al. (2015) 'Reviewing residents' competence: a qualitative study of the role of clinical competency committees in performance assessment', Acad Med, 90(8), pp. 1084-92.

Hong, R. (2015) 'Observations: We Need to Stop Drowning-A Proposal for Change in the Evaluation Process and the Role of the Clinical Competency Committee', J Grad Med Educ, 7(3), pp. 496-7.

Nasca, T. J., Philibert, I., Brigham, T. and Flynn, T. C. (2012) 'The Next GME Accreditation System - Rationale and Benefits', N Engl J Med, 366(11), pp. 1051-1056.

Swing, S. R. (2007) 'The ACGME outcome project: retrospective and prospective', Med Teach, 29(7), pp. 648-54.




Declarations of Interest

There are no conflicts of interest.
This has been published under the Creative Commons licence "CC BY-SA 4.0".

Ethics Statement

This study received an exemption from the University of Michigan Institutional Review Board, identification number HUM00110834.

External Funding

This article has not had any external funding.


Reviews

Subha Ramani - (02/05/2020)
The authors have embarked on a large survey-based study to explore how CCCs discuss residents' performance, what data are collected to inform the level of performance, and how much time is allocated to strong vs challenging residents.
Their main conclusions were that a variety of assessment tools are used to collect data, which I can understand. I am pleased to note that there are multiple sources of data, including self-assessment, staff assessment, direct observation, etc. Patient assessment is missing, and I hope this important source will gradually be included.
Given how important CCCs are to resident assessment and decisions about progression, members should have time protected for this valuable and mandatory activity. It is not surprising that many are unaware that there are resources at the ACGME level.
As CCCs are generally composed of members with varying levels of experience in assessment and education, I agree that faculty development is essential.
The time spent on discussing remediation plans is inadequate, as a single advisor may find it challenging to make plans without collective input and even knowledge about what resources are available at the program level.

Overall, a useful study that informs educators involved in postgraduate training and assessment. The results bring out some deficient areas that can inform program directors where and how to invest resources in a variety of essential assessment activities: communicating learning outcomes with trainees and assessors, faculty development of assessors, ensuring multi-source assessments, faculty development of CCC members in interpreting various data, identifying trainees with challenges, and formulating remediation plans.
One thing I missed was how the discussion at the CCC is communicated back to the trainee and how remediation plans are carried out.
I agree with Professor Masters' comments and will not repeat.
I am also curious to see more results and hope the authors are intending a next phase of the study.
If so, qualitative narratives from a smaller sample could be very valuable to understanding the challenges in CCC functioning.
Ken Masters - (22/07/2019)
The paper deals with a needs assessment survey at an institution for clinical competency committee (CCC) faculty development; the aim is to understand responsibilities, assessment tools used, and faculty development needs.

Overall, a useful paper, with some small issues that need to be addressed:
• A little more clarity on the online form system (Google forms, SurveyMonkey, etc.) is required.
• Similarly, the software and version used for the statistics (Excel, SPSS, etc.)
• Figure 1: Not a major issue, but it would be better if the order of the tools were reversed, showing the most commonly used first, especially as there is an “Other” category. (It might also be a good idea to state explicitly that these are listed in order of decreasing frequency: some readers may raise a concern that most people simply indicated those options that were asked first. Yes, a strange criticism, but I speak from personal experience!) Similarly, for Figure 2.
• Time allocation data might be better displayed as a table.
• Although there is some discussion about what has been done in light of these results, I would like to see a little more. On the other hand, if the authors are intending to leave that for a much larger research project (as they hint at when they speak of “future surveys”), it would be useful if they give a little more detail about their intentions.
• As the survey form was created from scratch, it would be useful if the researchers could share their experience of the form, especially changes that they would make if they were to conduct the survey again, so that others may be able to use a strengthened (at the least, confirmed) version of the form for their work.

So, a relatively small and localised paper, but still of value to most readers.
Possible Conflict of Interest:

For Transparency: I am an Associate Editor of MedEdPublish

Felix Silwimba - (12/04/2019)
This study is timely and speaks to the situation in medical education in sub-Saharan Africa. A good number of children from wealthy African families are pursuing graduate medical education outside African environments, and this has raised questions about their clinical competencies. Therefore, clinical competency committees should apply beyond national boundaries, and they should be the focus of discussion among national accreditation and regulatory agencies. This fits in well with the intentions of FAIMER.