The Harvard Q Guide, along with the supplementary CS50 tool built on top of the Q's data, is the primary non-human resource that Harvard students use to plan their academic trajectories and understand the university's academic offerings. As useful as these tools are, they were designed to help students select courses, not to help students actually understand the academic system at Harvard.
To some extent, the administration has incentives to keep this data private. If most of the value of a Harvard education lies in its intangibles, measuring its tangible aspects can only lower its perceived quality. The worst thing that could happen is that statistical comparisons could show that most Harvard classes are fundamentally the same as classes at any other university, a truth only really understood by Harvard students themselves.
Perhaps this, and not a desire to improve surveying techniques, is the reason the university took over the administration of course surveying from the Crimson, which started the project in 1925. When the university took over, students were assured that student editors would retain complete control over the publication of the Q Guide. It was revealed much later that the administration had secretly instructed students on what they were allowed to print. Since then, the university has taken complete control of the Q Guide. When I reached out to the Office of Institutional Research, I was told that it was not their policy to give students raw versions of the data.
If we are going to be asked to front a sticker price of $60,000 and thousands of hours of our time, I argue that we should at least know what we're getting out of our Harvard education. There are two fundamental reasons that students choose to go to colleges like Harvard: to learn and to earn an impressive credential. To the extent that Harvard seeks to educate, this data is useful for understanding what students are studying and where Harvard is allocating its resources. To the extent that college serves to rank and measure students, this data is valuable for establishing students' merits and work ethic, especially as grade inflation degrades the signal behind students' nominal grades.
When Harvard hides information about grade distributions, all that really does is obfuscate merit and benefit the (often wealthy) students who know how to work the system for higher grades.
It's equally important from a societal perspective to understand the nature of a Harvard education. As an educational nonprofit, Harvard is largely supported by public funds. Most of these funds come in the form of tax breaks and research grants, but other sources of funding, like Pell Grants and federal student loans, directly fund student tuition. As tuition increases (driven largely by top schools like Harvard) take a serious toll on students and their families, the public is rightfully questioning the effectiveness of colleges at facilitating social mobility. Additionally, with Harvard facing the dual allegations of (1) discrimination in its admissions process and (2) not doing enough to support low-income students, it is absurd that there appears to be negligible effort to statistically evaluate hypotheses about how aspiring students will do once at Harvard. Unfortunately, the information within the Q Guide isn't enough to answer any of these questions, since no demographic information is associated with student response data.
The Harvard Q Guide has been scraped several times in the past, each effort drawing out different elements of its vast data. The most comprehensive recent scrape was done by Roger Zou '17 in 2016, who used a Python scraper and published some interesting Excel visualizations on the student-centered education blog My Student Voices. Other interesting projects have been done by Patrick Pan '19, Ryan Kerr '17, and perhaps others. Like other stabs at understanding this dataset, the following analysis does not claim to be complete. There seems to be no limit to the number of interesting patterns in the data, and I hope that others will be interested in this project and continue where I left off.
The data scraping for this project was done using Web Scraper, a Chrome extension. I could not get Zou's code to work directly, probably because of the Two-Factor Authentication that came to the Q in late 2016. A description of how Sara Valente and I scraped this information can be found in a Medium post, with code available on GitHub. Only students with HarvardKey authentication credentials will be able to scrape the raw data themselves, though I am happy to pass on the cleaned data to any Harvard student who emails me from their college address.
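For anyone who wants to replicate the scrape in code rather than through the extension, one common workaround for two-factor authentication is to reuse the cookies of a browser session that has already logged in. The endpoint and cookie name below are placeholders, not the Q's real ones; see the GitHub repo for what we actually ran.

```python
# Hypothetical sketch: reuse an authenticated browser session to fetch Q pages,
# since HarvardKey two-factor authentication blocks fully automated logins.
# The URL and cookie name below are placeholders, not the Q's real endpoints.
import requests
from bs4 import BeautifulSoup

session = requests.Session()
# Copy the session cookie out of your browser's dev tools after logging in.
session.cookies.set("JSESSIONID", "<paste-session-cookie-here>")

resp = session.get("https://q.example.harvard.edu/course_evaluation_reports")
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")
for link in soup.find_all("a"):
    print(link.get_text(strip=True), link.get("href"))
```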
We'll start with the information that everyone wants to see first: rating scores. These scores are generally the first thing students reference when comparing schedules, and the best-rated courses can be explored in the interactive below. We notice (showing the top 30 concentrations) that Math, Computer Science, Physics, and Chemistry tended to have the most demanding courses going as far back as the data goes.
Check out rating data from a chosen department, sorted by the metric of your choice.
* The in-class standard deviation is the average standard deviation of responses within classes in the chosen department, weighted by class size. The between-class standard deviation is the standard deviation of class means in the chosen department, weighted by class size.
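To make those definitions concrete, here is a minimal sketch of the computation, assuming a pandas DataFrame `df` with hypothetical columns `class_id` and `workload` (one row per student response), and using response counts as a stand-in for class size:

```python
# Sketch of the footnote's two quantities, assuming a pandas DataFrame `df`
# with hypothetical columns "class_id" and "workload" (one row per student
# response) and using response counts as a proxy for class size.
import numpy as np
import pandas as pd

def decompose_sd(df: pd.DataFrame, value: str = "workload"):
    groups = df.groupby("class_id")[value]
    sizes = groups.size()         # responses per class (proxy for class size)
    within = groups.std(ddof=0)   # standard deviation inside each class
    means = groups.mean()         # each class's mean

    # In-class: average of the per-class standard deviations, weighted by size.
    in_sd = np.average(within, weights=sizes)

    # Between-class: weighted standard deviation of the class means.
    grand_mean = np.average(means, weights=sizes)
    between_sd = np.sqrt(np.average((means - grand_mean) ** 2, weights=sizes))
    return in_sd, between_sd
```

Weighting by class size keeps one enormous lecture from counting the same as a ten-person seminar.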
Among the metrics above, workload is the most talked about, both by students trying to find easy classes and by students trying to justify the impossibility of their academic loads. Telling students how much work different classes take is a somewhat controversial idea. In fact, the Q Guide used to feature measures of both workload and difficulty, until a 2014 Faculty Council decision stripped away the difficulty metric, which had enabled students to systematically select classes that were easy A's. Students fought hard to keep the workload metric on the Q Guide, arguing that it was crucial to their efforts to balance their schedules and maintain a healthy, predictable amount of work.
The glaring conclusion from the visualization above is that classes in STEM fields carry more work, on average, than classes in other departments. Most people are probably aware of this trend, but the pattern is somewhat deceiving, since STEM fields also have much higher workload variance than non-STEM fields. Courses in my field of Mathematics, for example, had a standard deviation of over seven hours of work per week.
It's natural to ask whether this variance stems from in-class variance or between-class variance (pun intended). High in-class variance would be a symptom of the same course costing its students vastly different amounts of time, perhaps depending on ability. High between-class variance would be a symptom of a department housing courses of vastly different difficulties under the same umbrella. An intro-stats understanding of variance decomposition shows that these two sources of variance together account for the entire variance.
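Formally, this is just the law of total variance. Writing $W$ for a student's reported weekly workload and $C$ for the class the student is reporting on,

$$\operatorname{Var}(W) = \underbrace{\mathbb{E}\left[\operatorname{Var}(W \mid C)\right]}_{\text{in-class}} + \underbrace{\operatorname{Var}\left(\mathbb{E}[W \mid C]\right)}_{\text{between-class}},$$

so the in-class and between-class components necessarily sum to a department's total workload variance.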
The data clearly show that workload variance in STEM fields comes overwhelmingly from between-class variance. This might be explained by the tendency of STEM classes to be sequential and ability-segregated in a way that courses in other departments are not. The decomposition might also reflect the recent creation of easier courses and pathways within STEM fields that didn't exist before. In 2017, Flyby reviewed the 10 easiest classes at Harvard, which included classes like "Engineering Sciences 139: Innovation in Science and Engineering," "Neurobiology 95hfj: The Sleeping Brain," and "Organismic and Evolutionary Biology 59: Plants and Human Affairs." From personal experience, the easiest class I have taken at Harvard was a graduate math class about modeling cancer.
Select a department to view information on its courses, sorted by a chosen metric, from a chosen semester.
* Rows like that for Spring 2017 Math 55B are null because they are null in the official, Harvard-maintained guide. They are included here rather than discarded so readers know that those courses were indeed offered; their sort order above is not based on data.
Check out how course enrollment is distributed between classes, both overall and through time. See the detail view for more about enrollment.
At first this chart seems a bit uninteresting: there isn't the kind of dramatic expansion of enrollment in Statistics and Computer Science that might have been expected (though those trends are significant and certainly visible on a larger timescale). Enrollment is higher in the fall semester than in the spring semester because more people take time off during the second semester than the first.
This is the 2011–2017 aggregated enrollment histogram for a chosen department, from either a student-level or a class-level perspective.¹
¹ The student-level median is the class size that a typical student should expect to be enrolled in. The class-level median is the class size that a typical professor should expect to teach.
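The difference between the two perspectives is easy to see in code. A minimal sketch, using made-up class sizes:

```python
# Sketch: class-level vs. student-level median class size.
# The enrollments below are made up purely for illustration.
import numpy as np

sizes = [8, 12, 15, 40, 300]  # hypothetical enrollments, one entry per class

class_level = np.median(sizes)                      # each class counts once
student_level = np.median(np.repeat(sizes, sizes))  # each student counts once

print(class_level, student_level)  # 15.0 vs. 300.0
```

Because large classes hold a disproportionate share of the students, the student-level median is never smaller than the class-level one, which is why the two perspectives can paint such different pictures of the same department.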
Students' experiences in class vary greatly by department. This is reflected in the Q Guide data through the starkly different median class sizes between departments. General Education classes (in departments like Aesthetic and Interpretive Understanding, Science of Living Systems, Science of the Physical Universe, United States in the World, and Societies of the World) had by far the largest class sizes, followed by STEM classes. Despite the attention Harvard supposedly pays to its General Education program, it's a shame that these classes are so often administered in depersonalized ways.
A strong pattern emerges when we look at workload versus course rating. In STEM fields, workload correlates negatively with course rating: the highest-rated STEM classes are those that require the least work. At the other end of the spectrum, General Education classes are actually rated higher when they carry higher workloads. Other areas fell somewhere in the middle, with workload a weaker predictor of overall course rating.
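This pattern is straightforward to check. A minimal sketch, assuming course-level data with hypothetical columns `department`, `workload`, and `rating` (the DataFrame below is a made-up stand-in for the real data):

```python
# Sketch: workload vs. rating correlation within each department.
# The DataFrame below is a made-up stand-in for the real course-level data.
import pandas as pd

courses = pd.DataFrame({
    "department": ["Math", "Math", "Math", "Gen Ed", "Gen Ed", "Gen Ed"],
    "workload":   [12.0,   6.0,    9.0,    3.0,      5.0,      4.0],
    "rating":     [3.5,    4.5,    4.0,    3.8,      4.4,      4.1],
})

corrs = (
    courses.groupby("department")[["workload", "rating"]]
    .corr()                             # 2x2 correlation matrix per department
    .xs("workload", level=1)["rating"]  # keep the off-diagonal entry
    .sort_values()
)
print(corrs)  # negative for the made-up Math rows, positive for Gen Ed
```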
The scatter shows course data (workload versus overall rating) for classes in a chosen department.
* The scatter dots representing courses are sized by enrollment.
Perhaps the most valuable information to consult in the Q Guide is the comments. Comments are valuable because they allow for specific anecdotes, associations, and comparisons. We can get a taste of this value by simply looking at how words are distributed across comments.
The scatter shows the word distribution of comments about classes. You can choose whether to filter out boring words; a sketch of this word counting follows the footnote below.
* A list of the boring words, along with everything else used in the project, is on my GitHub.
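Mechanically, the boring-word filter is just a stop-word list applied during counting. A minimal sketch, with a placeholder stop-word set and made-up comments (the real list lives in the repo):

```python
# Sketch: count word frequencies in Q comments, optionally dropping "boring"
# (stop) words. The comments and the stop-word set here are placeholders;
# the real boring-word list is in the project's GitHub repo.
import re
from collections import Counter

BORING = {"the", "a", "an", "and", "of", "to", "is", "this", "class"}

def word_counts(comments: list[str], filter_boring: bool = True) -> Counter:
    counts: Counter = Counter()
    for comment in comments:
        words = re.findall(r"[a-z']+", comment.lower())
        counts.update(w for w in words if not (filter_boring and w in BORING))
    return counts

print(word_counts(["This class was great!", "Great lectures, brutal psets."]).most_common(3))
```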
The scatter shows how certain words correlate with a chosen course metric. Toggle the display options you'd like, and interact with the legend to show different subcategories of words.
* This visualization uses data for all available semesters and all available courses.