Student Evaluations—Learn more about student evaluations of instruction and midsemester feedback.

Feedback Matters!

As noted on our Teaching Observations page, meaningful feedback provides a useful mechanism for improving teaching and learning through critical self-reflection. The goal is not for observers to be critical, per se --or for us to begin with an assumption that anything is wrong-- but rather to be constructive, curious, and collaborative; to consider whether the strategies we use in class genuinely connect our teaching to learning and to our learners. In our discussion of Small Group Instructional Diagnoses (SGIDs), we introduced student feedback into the equation. This practice is consistently and positively associated with marked improvements in teaching effectiveness and in students' perceptions of instruction. In this section, we address two additional aspects of student feedback: one required by state and institutional policies (student evaluations of teaching [SETs]), and another encouraged by, yet seldom used at, the University of Idaho --midterm evaluations.

Student Evaluation of Teaching (SETs)—A Piece of a Larger Puzzle

In recent years, a considerable body of research has demonstrated the limitations of SETs, most notably involving:
bias
validity
In short, while well-intentioned and efficient, end-of-semester student evaluations possess and reveal inherent and potentially serious validity challenges. Whether due to completion rates (too few students completing evaluations yields statistically invalid results; see "Validity and Reliability of IDEA Teaching Essentials," The IDEA Center, 03-2015) or the fundamental question of whether the instruments measure what they are genuinely intended to measure, questions of validity abound. It is possible, however, to at once recognize these limitations and consider ways that we can make student evaluations of instruction meaningful. Our purpose on this page is not to revise institutional instruments, but to consider ways that they --and related strategies-- can be used more effectively. This ultimately requires an alignment of individual and institutional perceptions of value, validity, and purpose. Our goal here is to put SETs and midterm evaluations into perspective and to build upon five guideposts:
Making it Matter—The Road to Meaningful Feedback

If there is an unspoken agreement, or even just an assumption, that student evaluations don't matter, neither students nor instructors will take them seriously. Even if they do matter to the instructor --even if she/he/they agree that student feedback can be useful-- how do they get students to buy in? The answer is that we have to make it matter. In our teaching portfolios and practices, we need to show that the student experience --and their feedback-- matters. With regard to SETs, one way to do this is to think about ways we can motivate students to (1) complete the evaluation and (2) make it meaningful. If we can prove that it matters to us, it will matter to them; we need to make the value proposition clear. This begins with communication: telling our students why evaluations matter (individually and institutionally), the purposes for which they are used (individually and institutionally), and how all of this helps us refine our craft and continually improve the academic experience of our students. Here are a few tips to get started:
Midterm Matters—Midsemester Evaluation Tips

Among the many reasons midterm evaluations are so useful is that they give us an opportunity to:
Midterm evaluations help establish a culture of meaningful feedback and strengthen the bond between professor and student in a way that can positively affect student performance, retention, and success, as well as end-of-semester SETs. While there are numerous ways to elicit and use midterm feedback, CETL presently advocates a model akin to the process used by Yale University's Center for Teaching and Learning. Using a Midsemester Feedback tool in Canvas, students address four well-crafted questions (plus any additional questions faculty members would like to include) that get to the heart of the matter in an effective, welcoming, and efficient manner:
There are alternatives, such as the Stop, Start, and Continue method, which can be modified to assess what, in the students' opinions, the instructor should stop doing, start doing, and continue doing; this method, however, carries a less palatable tone. Faculty may also wish to solicit feedback on specific assignments, pedagogical strategies, and technologies used in class in an effort to refine their tools and techniques. A search of available instruments --all of which U of I faculty could replicate and deliver through Qualtrics-- would reveal a continuum of complexity. While we strongly encourage and support faculty in their efforts to achieve their desired goals in this regard, we point to the four questions above because they frame key areas of inquiry in a way that helps students provide meaningful, actionable feedback while also encouraging them to assess what they themselves can do to improve the learning experience and environment. The middle of the semester is a vulnerable time for all of us. Reality has set in, classes have gotten more challenging, and the temptation to call it quits is stronger than you might expect, especially a year into a global pandemic. It never hurts to set aside time --in class or through a survey-- to ask "how's it going?" If you would like to learn more about making evaluations matter, or for assistance in designing surveys, instruments, and framing language to yield meaningful feedback, please contact Brian Smentkowski, Director of the Center for Excellence in Teaching and Learning.
Instructors should be mindful of policies regarding FERPA.