Gradually Growing Sophistication in How Colleges are Evaluated

In “False Advertising for College is Pretty Much the Norm,” I argue:

Colleges and universities claim to do two things for their students: help them learn and help them get jobs. …

On job placement, the biggest deception by prestigious colleges and universities is to claim credit for the brainpower and work habits that students already had when they arrived. Another important deception is to obscure the difference between the job-placement accomplishments of students who graduate from technical fields such as economics, business, engineering or the life sciences, and the lesser success of students in the humanities, in fields like communications, and in most social sciences.

In Money’s “The Best Colleges in America, Ranked by Value,” the situation is getting a little better. A lot of Money’s focus is on giving prospective students a more accurate idea of how much a college costs, which I didn’t focus on in my diatribe. But some components of their evaluation, described in their methodology explanation, move in the direction I called for:

Major-adjusted early career earnings (15%). Research shows that a student’s major has a significant influence on their earning potential. To account for this, we used College Scorecard program-level earnings of students who graduated in 2017 and 2018. These are the earnings of graduates from the same academic programs at the same colleges, one and two years after they earned a degree. We then calculated an average salary, weighted by the share of the bachelor’s degree completers in each major (grouped by CIP program levels). Then, we used a clustering algorithm to group colleges that have a similar mix of majors and compared the weighted average against colleges in the same group. Colleges whose weighted average was higher than that of their group scored well. This allows us to compare schools that produce graduates who go into fields with very different earnings potentials.
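To make the arithmetic concrete, here is my own sketch of the weighted-average comparison Money describes, not their actual code; every number, major share, and the peer-group figure below are hypothetical.

```python
# Hypothetical program-level data for one college:
# major -> (share of bachelor's completers, median early-career earnings)
programs = {
    "economics":   (0.20, 65_000),
    "engineering": (0.30, 75_000),
    "english":     (0.25, 42_000),
    "sociology":   (0.25, 45_000),
}

def weighted_avg_earnings(programs):
    """Average graduate earnings, weighted by each major's share of completers."""
    return sum(share * earnings for share, earnings in programs.values())

college_avg = weighted_avg_earnings(programs)

# Compare against the average of a (hypothetical) cluster of colleges
# with a similar mix of majors; a positive gap scores well.
peer_group_avg = 55_000
score = college_avg - peer_group_avg
```

The key design point is that the comparison is made within a cluster of colleges with a similar major mix, so an engineering-heavy school is not rewarded simply for being engineering-heavy.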

Value-added early career earnings (15%). We analyzed how students’ actual earnings 10 years after enrolling compared with the earnings predicted for students at schools with similar student bodies.
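The “value-added” idea above can be sketched as a regression residual: predict each college’s graduate earnings from the characteristics of its entering students, and call the gap between actual and predicted earnings the college’s value added. This is my illustration with made-up data, using average entering SAT as a stand-in for “similar student bodies”; Money’s actual model is surely richer.

```python
import numpy as np

# Hypothetical data: average entering-student SAT score and actual median
# earnings 10 years after enrollment, for five colleges.
sat      = np.array([1000., 1100., 1200., 1300., 1400.])
earnings = np.array([42_000., 46_000., 52_000., 55_000., 62_000.])

# Predict earnings from student characteristics with a simple linear fit.
X = np.column_stack([np.ones_like(sat), sat])
coef, *_ = np.linalg.lstsq(X, earnings, rcond=None)
predicted = X @ coef

# "Value added" here is the gap between actual and predicted earnings:
# positive means graduates out-earn what their entering credentials predict.
value_added = earnings - predicted
```

With an intercept in the regression, the value-added figures average to zero across colleges by construction; the measure is purely relative.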

Economic mobility (10%). Think tank Third Way recently published new economic mobility data for each college, based on a college’s share of low- and moderate-income students and a college’s “Price-to-Earnings Premium” (PEP) for low-income students. The PEP measures the time it takes students to recoup their educational costs given the earnings boost they obtain by attending an institution. Low-income students were defined as those whose families make $30,000 or less. We used this as an indicator of which colleges are helping promote upward mobility. This data is new this year, and replaces an older mobility rate we had been using from Opportunity Insights.
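The Price-to-Earnings Premium reduces to simple back-of-the-envelope arithmetic: total net cost divided by the annual earnings boost gives the years needed to recoup the cost. The dollar figures below are invented for illustration; they are not Third Way’s numbers.

```python
# Hypothetical figures for one college's low-income students.
net_cost_of_degree   = 48_000.0  # total out-of-pocket cost of the degree
earnings_with_degree = 45_000.0  # typical earnings of the college's graduates
earnings_without     = 29_000.0  # typical earnings of comparable non-attendees

# Annual earnings boost from attending, and years to recoup the cost (PEP).
earnings_boost = earnings_with_degree - earnings_without
pep_years = net_cost_of_degree / earnings_boost
```

A lower PEP is better: a college that is cheap and delivers a large earnings boost pays for itself quickly.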

Adjusting earnings for major and for the characteristics of students when they enter college is a step forward. I would like to have (a) seen these components emphasized more and (b) had the main table distinguish between earnings for different majors, so prospective students could see how much they sacrifice in earnings to pursue less-well-remunerated majors.

In “False Advertising for College is Pretty Much the Norm,” I also argue for direct measurement of learning:

To demonstrate that a school helps students learn, schools should have every student who takes a follow-up course take a test at the beginning of each semester on what they were supposed to have learned in the introductory course. The school can get students to take it seriously enough to get decent data — but not seriously enough to cram for it — by saying they have to pass it to graduate, but that they can always retake it in the unlikely event they don’t pass the first time. To me, it is a telling sign of how little most colleges and universities care as institutions about learning that so few have a systematic policy to measure long-run learning by low-stakes, follow-up tests at some distance in time after a course is over.

Some readers might argue that earnings are the ultimate goal, so measuring the intermediate outcome of learning is unnecessary. But I’ll bet that other unmeasured factors affect earnings to a greater degree than other unmeasured factors affect learning of the specific types of knowledge that colleges purport to teach. So measuring learning is a way to focus on an intermediate outcome more likely to be causally attributable to college itself. And learning may be of some value to students in their lives beyond what it contributes to measured earnings. For example, college graduates are healthier than those who don’t graduate from college, may make better consumer decisions and may get more enjoyment out of inexpensive entertainments such as reading books. Some colleges may do a better job at helping students learn things that will help their health, their decision-making and their enjoyment. As an even bigger deal, some colleges may do a better job at helping students choose life goals that those students will find meaningful in the future.

I describe some of my economics department’s efforts to measure learning in “Measuring Learning Outcomes from Getting an Economics Degree.” I discuss measurement of learning further in “How to Foster Transformative Innovation in Higher Education,” where I argue that measuring skill acquisition unlocks new, improved possibilities for higher education:

I want to make the radical claim that colleges and universities should, first and foremost, be in the business of educating students well. One implication of this radical claim is that colleges’ and universities’ performance should be measured by value added—by graduation rates and how much stronger graduating students are academically than they were at matriculation. By this standard, bringing in students who were impressive in high school raises the standards for what one should minimally expect a college’s or university’s students to look like at graduation, and colleges and universities become truly impressive if they help weak students become strong.

… The key to allowing alternative forms of higher education to flourish is to replace the current emphasis on accreditation, which tends to lock in the status quo, and instead have the government or a foundation with an interest in higher education develop high-quality assessment tools for what skills a student has at graduation. Distinct skills should be separately certified. The biggest emphasis should be on skills directly valuable in the labor market: writing, reading carefully, coding, the lesser computer and math skills needed to be a whiz with a spreadsheet, etc. But students should be able to get certified in every key skill that a college or university purports to teach. (Where what should be taught is disputed, as in the Humanities, there should be alternative certification routes, such as a certification in the use of Postmodernism and a separate certification for knowledge of what was conceived as the traditional canon 75 years ago. The nature of the assessment in each can be controlled by professors who believe in that particular school of thought.)

Having an assessment that allows a student to document a skill allows for innovation in how to get to that level of skill. …

To the extent colleges and universities claim to be teaching higher-order thinking, an assessment tool to test higher-order thinking is needed. One might object that testing higher-order thinking would be expensive, but it would take an awful lot of money to amount to much compared to the cost of four or more years of college tuition. And colleges and universities should be ashamed to expect us to take them seriously if they claim that what they teach is so ineffable that a student could never demonstrate the skill in a structured test situation.

Conclusion: Outcome measurement matters. If you want to change the world for the better using your statistical or test-designing skills, figuring out how to do better outcome measurement for colleges and universities is a high leverage activity.