An accrediting agency’s credibility is judged, in part, by consistency in decision-making. To this point, the United States Department of Education’s regulations for recognized accrediting agencies stipulate in 602.18(b) that the agency “has effective controls against the inconsistent application of the agency’s standards” (United States Department of Education, 2012). As noted by Hunt et al., “Standards are the lens through which accreditation views the world, creating the framework for consistency in review across programs” (Hunt et al., 2016).
A key factor in reliable and accepted educational program accreditation is that accreditation standards are judged consistently within and across institutions over time (Barzansky, Hunt, and Busin, 2013). A criticism of accreditors has been a perception of inconsistency in how agency standards are applied to individual programs (Greenfield et al., 2013). For reasons that will be discussed, a perceived and potential source of inconsistency in the application of accreditation standards, particularly non-prescriptive standards, occurs at the level of the survey team.
Despite the importance of consistency in the application of standards between programs, little work has been published on the processes used to achieve consistency or on their effectiveness. This manuscript describes the steps a programmatic accreditor has taken toward achieving consistency in decision-making between survey teams and the decision-making body, and provides outcomes data on the effectiveness of the process.
Description of the agency’s accreditation process
The initial steps in a full accreditation survey are a self-study by the program and completion of a directed-inquiry data collection instrument (DCI) that includes questions related to each Element. The DCI is formatted to align with the structure of the team report. The survey team reviews the DCI, performs an on-site survey, and makes one of the following recommendations regarding the program’s performance for each of the Elements: Satisfactory, Satisfactory with a need for Monitoring, or Unsatisfactory (see Appendix for definitions). The team report, which includes quantitative and qualitative information for each Element, provides justification for each recommendation in the form of data or descriptions of findings. The survey report therefore provides the basis for performance decisions by the decision-making body of the agency (the Committee) for each of the Elements. The Committee reviews the team’s recommendations and the accompanying data and report, and either agrees with the survey team or changes the recommendation for the final determination for each Element.
Steps used by the agency to promote consistency
Survey team member selection, survey team organization, standardization of reporting formats, survey team training, and agency support have been shown to be factors affecting consistency in decision-making for an accrediting agency (Greenfield et al., 2009, 2015, 2016). Given the largely non-prescriptive and non-quantitative nature of the agency’s Elements, consistency requires that survey teams and the Committee have similar expectations of what constitutes satisfactory performance in the Elements. Several processes are in place within the agency to support consistency, as noted in the following paragraphs.
Team composition and training: Team composition and team member duties are designed to provide consistency in on-site evaluation. Survey teams typically consist of five or six members, selected from a pool of more than 100 volunteer surveyors. Most teams include at least one member of the Committee, two or three volunteer professional members with experience on survey teams, and an experienced team secretary. Inexperienced members receive additional supervision and mentoring by the team chair and team secretary. Team members are required to attend an annual team-training webinar, which includes discussions on how to evaluate a school’s performance for many of the Elements. Team members are evaluated after the survey visit on their knowledge of accreditation expectations, preparation, writing ability, and contributions to the team.
Team secretaries are responsible for assembling and editing survey reports, and are professional agency staff, Committee members, or professional educators who are contracted for this purpose after demonstrating exceptional team and writing skills in previous team assignments. They perform multiple surveys each year, receive additional training, and may attend Committee meetings. The team secretaries also meet annually to discuss the application of the Elements and to provide feedback to the agency on the visit process and the interpretation of Elements. Through these steps, the team secretaries contribute to consistency through their experience, their interpretation of data, and the rationales they provide for team recommendations on performance in Elements. The presence of an accreditation Committee member and/or a professional staff member on the survey team is intended to improve the consistency and credibility of the recommendations by giving team members insight into how the Committee reviews the reports, views data, and interprets the accreditation Elements. After accreditation action on a survey report, team members are sent a communication indicating any changes the Committee made to the team’s findings. This helps team members understand the expectations of the Committee and “calibrate” their responses in future surveys.
Draft report review: Before the Committee takes action on the final survey reports, draft versions of the reports are reviewed at multiple levels. The team secretaries compile and edit the team members’ writing assignments for accuracy, formatting, and completeness. The team secretaries’ drafts are then reviewed by two or more members of the agency’s professional staff for completeness and internal consistency, and to ensure that adequate documentation is included to support the team’s findings. Draft reports with comments are returned to the team secretary for review and editing as the team deems appropriate, and are also reviewed by the leadership of the surveyed educational program for correction of factual errors.
Committee structure: The 17 professional and public Committee members serve staggered three-year terms, with the option of reappointment to a second term, which almost all members accept. This provides significant “institutional memory” on the Committee. Each Committee member will review more than 100 survey reports and thousands of Element recommendations during their tenure on the Committee.
Informational publications: Numerous publications for schools, survey teams, and Committee members aim to create consistency in the format of reports, the process of visits, and the interpretation of the intent of Elements. These documents are available on the agency’s website. Both the DCI and the survey team report have standardized templates to ensure that teams and the Committee have a uniform set of information to inform the decision for each Element. For some Elements, the agency has produced guidance documents that explain the intent of specific accreditation Elements and how the Committee applies them to programs. These documents, which are posted on the website, serve to guide the schools, survey teams, reviewers, and Committee members as they prepare and review reports.