As the semester comes to a close, students will devote twenty minutes of a class period to filling out course evaluations. The purpose of these evaluations is to assess the structure of courses and to inform a professor's tenure candidacy; however, these evaluations could do so much more. Without compromising the original mission of the surveys, the feedback could also be used by students to inform their course selections.
Currently, students turn to review websites such as Rate My Professor to learn more about their potential instructors. While students may find the occasional helpful review, the extra effort required to submit one in the first place ensures that entries reflect strong feelings: either glowing or terrible assessments. The other option available to students curious about professors is asking their friends. The efficacy of this strategy, however, relies on knowing someone who has taken a course with the professor in question.
Despite the helpfulness of having a plethora of reviews available, the systemic biases in evaluating professors should be noted. According to a recent NPR report, teacher evaluations are “better mirrors of gender bias than of what they are supposed to be measuring: teaching quality.” Another study, conducted by Colgate University, found similar disadvantages for faculty members of color. If Skidmore were to decide to make anonymous evaluations public to students, derogatory evaluations should certainly not be released. If students were made aware of these biases, they could take them into consideration when reading the evaluations.
The current system of evaluations, in which students complete one standard college-wide evaluation for the Dean of Faculty and an additional department-specific survey, has merit. This plan provides data that compares departments while giving Department Chairs the freedom to gather any additional information that is helpful for their hiring decisions. With the exception of a few vague questions on the standard evaluation, namely the lack of specific anticipated-grade options and the ambiguity of whether time spent in class counts toward devoted class time, the surveys are comprehensive.
The way the assessments are distributed and collected, however, could be improved by requiring that students complete the surveys through an online host. Some students feel uncomfortable handwriting unfavorable reviews out of fear that professors will be able to identify them. Completing the evaluations online could remedy this problem by making it impossible for professors to identify students who may enroll in their future classes.
Moreover, the requirement that professors read evaluations only after releasing that semester’s grades is already an acknowledgement of the possibility of bias in grading. While postponing professors’ access to the reviews until after grades are released may keep bias from affecting that semester’s grades, it does not protect students’ anonymity in future semesters.
Distributing evaluations online may also improve the quality of responses. Currently, many students complete their surveys at the end of a class period and rush their responses in order to leave class early. And because professors leave the room while students fill out the assessments, there is a possibility that students will discuss their responses. Having students complete course assessments online, however, could solve these issues by reducing the urge to finish quickly and by allowing for more confidentiality. Completion could also be easily enforced by placing holds on the accounts of students who do not submit their surveys.
Every semester, Skidmore gathers roughly ten thousand professor evaluations. Administering the surveys online would likely improve the quality of the responses. The data is already helpful in hiring decisions, but could also further serve the student body by informing course selection.