The 19-person ACE Task Force began in Spring 2018 as an effort to revise the end-of-semester Assessing the Classroom Environment (ACE) survey, but the scope of the effort grew. After reviewing initial objectives, the committee agreed that the guiding principle should be promoting high-quality instruction and its continual improvement. This principle, coupled with the desire to increase student input and minimize well-documented biases in student ratings, demanded that we expand the measures used to assess teaching.

Following 18 months of research, consultation, and discussion, we offer six broad recommendations:

Revise end-of-course student ratings. A shorter set of items is recommended for use across all courses taught at the University. The survey will include six Likert-scale items: three instructor-focused and three course-focused. In keeping with student requests to keep the survey short, collegiate units may add up to three additional rating items, but central institutional support will not be offered for any additional items. In keeping with faculty requests, three open-ended items with clear prompts were added. Items were selected to be simple and as objective as possible to minimize the impact of implicit bias about instructors. Using these strategies, the desired outcome is higher end-of-course response rates with less bias, which will offer more useful feedback to instructors.

Encourage ongoing student feedback. Formative assessment of teaching is valuable for faculty and students, but it is not used consistently across campus, so we offer guidelines and tools to increase its use. We suggest that student feedback results outside of the revised ACE remain confidential to the instructor but may be used (if desired) to document teaching effectiveness in personnel reviews.

Promote systematic peer feedback. We offer guidelines and a tool for instructor-initiated use. Peer observation is helpful because it incorporates data distinct from student ratings; peers may have biases, but these differ from those of students, and peers have greater expertise to assess the effectiveness of course content and pedagogical practices. Currently, peer observation is used for promotion and tenure rather than as a routine part of improving instruction. Even so, we discovered that it is conducted in different ways across units, some not systematic. We propose that peer observation be used more widely and more consistently.

Utilize existing campus expertise. We encourage faculty to make more frequent use of campus offices including (but not limited to) the Center for Teaching, Distance and Online Education, and the Office of Consultation and Research in Medical Education. These offices can offer feedback to improve the design and delivery of courses.

Offer comprehensive educational resources. Video-based instruction for students, faculty, and administrators is suggested to increase response rates and reduce bias in responses. Videos will target different audiences based on the way they interact with student ratings. Student resources, for example, will explain how ratings are used and how to provide constructive feedback that faculty can use. Faculty and administrator resources will explain the nature of implicit bias and the limitations of any one source of data about instructor effectiveness.

Build a supportive culture and infrastructure. We propose moving responsibility for all teaching assessment efforts to the Provost’s Office, acquiring software with greater functionality, and marketing the changes proposed in this report. The current software, supported out of Information Technology Services (ITS), lacks important functionality.