Student evaluations of courses are one important tool for improving the quality of teaching and learning at the Harvard Chan School. Course instructors value students’ honest and constructive feedback, and the school uses these evaluations to make improvements to courses and to the overall quality of education here.
Harvard Chan Course Evaluations
What is the best way to make sense of student evaluations of your teaching? Making sense of student feedback can be challenging, so we offer the following resources for examining and responding to evaluations.
Questions to Consider When Reviewing Your Evaluations
Was my syllabus clear? Did I provide a clear understanding of the learning outcomes, their alignment with degree program competencies, expectations of students, teaching methods, evaluation/assessment criteria, and reading requirements?
Is my course attracting the students I expected? How can I assess readiness or prior knowledge that may hinder or block learning? Is there a difference in ratings and expectations among students from different degree programs? Are there differences across other demographic variables?
Are students’ expected grades accurate? Did I provide enough clarity around grading procedures? Do I offer mid-course feedback or other mechanisms for feedback prior to the end of the course? How can I use focused feedback early and often to help students achieve the objectives of the course?
Are students attending the course regularly? If not, do I engage in active teaching strategies, address the readings, and engage students in class discussions? Is the content duplicative of other courses in the school?
Do students feel comfortable in the classroom? Have I created an inclusive environment? Did I engage the students in creating norms for cross-cultural dialogue?
Do students find the out-of-class assignments valuable? Are the readings addressed in the classroom? Are they addressed in any of the assessments: exams, group projects, simulations, discussions?
Does my own assessment of my teaching match that of my students? If not, why not?
What do the scores in each category tell me?
Is there a pattern among the comments that tells a story about what is going on in the classroom and/or course overall? Are there some simple changes that might improve my course and teaching? Or are there more substantial issues I’d like to tackle in my course design and teaching methods?
For consultation and coaching on instructional design, Canvas, teaching methods, and classroom observation, please contact Sejal Vashi and our Team of Learning Designers or Nancy Kane, Professor of Management, HPM.
Faculty Focus Article
Four Horsemen of the Teaching Apocalypse
Four problems account for the lion’s share of serious teaching problems:
- Misalignment
- Expert blind spot
- Content overload
- Projection bias
An overstatement? Perhaps, but over the many years we’ve worked with faculty in a wide range of disciplines, we’ve seen these issues undermine students’ learning, motivation, and morale in insidious ways. Easy to fall prey to, they compromise the effectiveness of even seasoned teachers. Here’s some advice on recognizing the problems, avoiding them, and preventing the host of headaches they can cause.
Misalignment
Three important elements characterize any well-designed course: objectives (what students should know or be able to do by the end of the course), assessments (the means used to gauge students’ progress toward those objectives), and instruction (the methods and materials employed to help students acquire the knowledge and skills articulated in the objectives). A solid course design requires that these elements be aligned, each dovetailing with and supporting the others.
Misalignment occurs when these three elements are not in sync, in particular when the knowledge and skills being taught are not the same as those being assessed. “I’d never do that!” you might be thinking. And of course, no one ever means to. But it can happen more easily than most of us realize. All too often, we teach the whats (terms, definitions, formulae) and the whys (concepts, principles) of a subject but assess the hows (procedures, methods) and the whens (conditions of application).
Think of how we learn to drive. We study the whats and whys of the rules of the road in order to pass a written test. Then we have to pass a test of actual driving. Would studying for the written test prepare us adequately for the driving test? Of course not. Actual driving requires other knowledge, skills, and practice, for which road rules are necessary but not sufficient. If we prepared for the driving test solely by learning road rules, it would constitute misalignment: the instruction and assessment don’t line up.
While obvious in the context of driving, misalignment isn’t always easy to spot in an academic context. Consider a statistics course where students learn statistical tests and the mechanics of calculation (the whats and hows) but then are asked on exams to select the appropriate tests for a problem and justify their choices (the whens and whys). Have they practiced the skills required to do this? Or think about a history course in which students analyze the significance of key historical events during class discussions (the hows and whys) yet are tested only on their recall of facts and dates (the whats). Were the instruction and the assessment well aligned? When students are taught one type of knowledge and skills but are assessed on others, it leads to poor performance and lingering resentment. Indeed, the classic student complaint, “We never learned this!” is often (though by no means always) a frustrated response to undiagnosed misalignment.
Alignment is not about “teaching to the test”; it’s about making sure students have sufficient opportunities to practice using the knowledge and skills we’re assessing. Does this mean that our tests should only include the identical problems or questions students have encountered in class or on homework? Absolutely not. Nor does it mean that our assessments shouldn’t stretch students or make them think in new ways. What it does mean is that we need to stop conflating skills: using a specified statistical test to solve a problem is not the same skill as selecting the right test for a new problem; knowing historical facts is not the same skill as analyzing their meaning in historical context; applying a formula is not the same skill as explaining a mathematical principle. Our goal, then, should be to identify all the skills we want students to develop and make sure we’ve given them ample time to learn and practice them before we assess them.
Tip: Take a look at the verbs in the course learning objectives (e.g., analyze? solve? compare? design?) and then make sure the instruction you provide reinforces those skills and gives students ample opportunities to practice them. Also, make sure that the same verbs appear in the exams and assignments. When the verbs don’t line up, chances are there’s a misalignment problem.
Expert Blind Spot
Experts know more than novices; that’s fairly obvious. What’s less evident is that their knowledge is organized differently, consolidated into “meaningful chunks” that help them retrieve it more quickly, use it with greater facility, and make far more rapid connections between ideas and applications (Ambrose et al., 2010). These attributes of expertise are a plus when doing research but a double-edged sword when it comes to teaching relative novices. They create an “expert blind spot” (EBS) that can compromise learning. A schema from Sprague and Stuart (2000) on the development of mastery helps to explain.
In this progression, the novice begins in a state of unconscious incompetence: she doesn’t know what she doesn’t know. As she starts to acquire knowledge and experience, she begins to realize how much she doesn’t know, and proceeds to a state of conscious incompetence. With more time and practice, she develops greater knowledge and skill and reaches a state of conscious competence: she has developed mastery yet remains aware of what she learned along the way. Then — and this is the kicker — as a highly knowledgeable and skillful expert, she reaches a state of unconscious competence, where she functions smoothly and dexterously, but has forgotten how much knowledge, skill, time, and experience it took to get there and is often unable to clearly explain how she does what she does. Think of an expert chef who gives instructions like “add spices to taste” or “mix until ready.” That advice is virtually useless to a novice, who doesn’t know which spices to use, what the dish should taste like, or when the mixture is “ready.”
The pitfalls of expertise are endemic in higher education where instructors are experts (sometimes the experts) in their disciplines. Unaware of their EBS, they skip quickly through the steps of complex procedures. They speak in a kind of shorthand, omitting key pieces of information. They jump rapidly from idea to idea, seeing relationships and connections that a novice may not. This fluency can create problems when teaching novices. Avoiding EBS takes constant vigilance and the ability to put yourself in your students’ shoes.
Tip: To combat EBS, break complex skills (e.g., writing) into component skills (e.g., articulating an argument, enlisting evidence, organizing ideas). Ask yourself what your students know, don’t know, and need to know next, and make sure they’ve learned and practiced all the relevant skills. Ask someone who is “consciously competent” (e.g., a graduate student or sophisticated undergraduate) to review lesson plans or lecture notes and see whether any steps have been skipped.
Content Overload
In 2010, Craig Nelson wrote a seminal article called “Dysfunctional Illusions of Rigor,” which outlines the many things teachers do to reassure themselves that their courses are rigorous, but that do not actually promote learning. One of these illusions is the idea that more content coverage equals more learning. Entirely too often (and we’re all guilty) we convince ourselves that jamming in more information, more topics, more lectures, and more readings means students come out of the course knowing more.
Unfortunately, this input-output model fails to take into account how students actually learn. It leaves out what we know is most important for deep learning and retention: opportunities for students to engage meaningfully with the material. When there’s too much content in a course (or workshop or seminar…), there’s no time for learners to ask and answer questions, to discuss ideas, and apply concepts to problems and cases. Students get a shallow exposure and leave without a deep understanding. In other words, rather than ensuring rigor, content overload jeopardizes it.
Content overload is difficult to avoid. We love what we teach! Everything is interesting, important, and necessary. We can’t possibly leave anything out! Over time, though, the list of essential content grows and the course is bursting at the seams. At this point, content has crowded out the time students needed to grapple with what they are learning.
Combatting content overload begins with the recognition that less is often more. Rather than trying to cover everything, prioritize. What is most essential? What skills and knowledge will students absolutely need for downstream courses? What content is nice-to-have but not necessary? Cut content that is not essential, then use the space created to incorporate active learning: opportunities for students to wrestle with, discuss, and apply the material. Will it be easy? No. But keep this in mind: lightening content is not about lightening the learning or “dumbing down” the course; it’s actually the opposite. Shifting from content coverage to active learning shifts responsibility from you to your students, who will now have to demonstrate and apply what they know.
Tip: When faculty encounter an interesting activity that would fit perfectly in the course, but find themselves thinking, “I’d like to try that, but I don’t have time if I’m going to get through X, Y, and Z…” that’s often a good indication that the course is overloaded and content should be pruned.
Projection Bias
Absent evidence to the contrary, we humans have a tendency to believe that others think and feel the same way we do. In behavioral economics it’s called projection bias. The tendency to project is strongest when we identify with a particular group of people – as we do with our students. Our students often remind us of ourselves when we were first discovering the disciplines we now love. We relish the idea of introducing them to the powerful ideas that changed our lives.
However, seeing ourselves in our students can be a hazard. The fact is, college professors are not like the vast majority of students who take their courses. Think about it: we opted for the rigors of graduate school and obtained an advanced degree. We’re teachers and often researchers. But our students are still exploring, not sold on the subject and unlikely to major in it or to do professional work related to it.
We can’t assume that our students share our interests, goals, and priorities. When we do, we often fail to make a compelling case for the value and relevance of what we teach. We assume the value is apparent, because it is to us. Moreover, over-identifying with students makes us prone to disillusionment. When they fail to share our passion or enthusiasm, they disappoint us. We make unwarranted assumptions: they’re unintelligent, they lack curiosity, they’re unmotivated, they’re lazy.
When we accept that students aren’t us, though, we can begin to explore their interests and experiences, and thus develop more robust strategies to motivate and inspire them. One of the most potent ways to generate enthusiasm among students is to explicitly communicate what we find so entrancing about our fields – rather than assuming the appeal is obvious. Explicitly pointing out the applications of key concepts, theories, and methods to real-world issues also goes a long way in helping students to better understand the modern relevance of our disciplines and the intersections with other domains.
Tip: Conduct a pre-course assessment with your students to identify what influenced their decisions to enroll in your course, their perceptions, assumptions and misconceptions about the topic, and their interests and values. Use this information to help frame your course and connect students to the topic.
We know that the Four Horsemen do not explain every teaching problem. However, we still believe that many of the problems that are subtlest and hardest to diagnose, that affect learning, performance, and motivation most profoundly, and that can be most demoralizing to both students and faculty can be avoided through awareness of and attention to these four issues.
Ambrose, S.A., Bridges, M.W., DiPietro, M., Lovett, M.C., & Norman, M.K. (2010). How Learning Works: Seven Research-Based Principles for Smart Teaching. San Francisco: Jossey-Bass.
Nelson, C. (2010). Dysfunctional illusions of rigor: Lessons from the scholarship of teaching and learning. In L.B. Nilson & J.E. Miller (Eds.), To Improve the Academy: Resources for Faculty, Instructional, and Organizational Development, vol. 28.
Sprague, J., & Stuart, D. (2000). The Speaker’s Handbook. Fort Worth, TX: Harcourt College Publishers.
Talking with Students About Course Evaluations – Vanderbilt University Center for Teaching
Mid-Term Course Evaluation Ideas – Berkeley Center for Teaching and Learning
Tips for Making Sense of Student Evaluations – Vanderbilt University Center for Teaching
Making Sense of Student Evaluations – Lehigh University Center for Teaching and Learning