
The U.S. News & World Report college ranking over the years

By Wilder Perkins

U.S. News & World Report has published a ranking of the "Best Colleges" since 1983, and during that time, they've made many changes to how they calculate their rankings.

This interactive website visualizes those formula changes and offers a few insights into the reasons behind some of them.

You can explore the graph by moving your mouse, and read more details in the text above the graph.

Click the red X button to begin.

The U.S. News ranking formula in... 2021

What’s new (most recent changes first, going back to the first edition):

  • Added a new “Graduate indebtedness” factor, which aims to reward colleges that leave their graduates with less student debt.
  • Graduation rate performance: now based on a 2-year average, continuing the trend of smoothing out the data used for the rankings.
  • Test-blind schools are now allowed to be ranked, but all schools must now report a 6-year graduation rate.
  • Faculty resources: salary is now based on “full-time faculty salaries” over 1 year, rather than a 2-year average of “both salaries and nonsalary compensation.”
  • Selectivity (from 10% to 7%): including test scores (from 7.75% to 5%) and class rank (from 2.25% to 2%). Both remaining data points in this section continue to decline in relevance in today’s college application landscape, but they remain in the formula.
  • Grad. rate performance: prediction formula now includes % of students who are first-generation students (helps colleges that are better for first-gen students); uses “multiyear average instead of a single cohort” for calculating % of students on Pell grants.
  • Reputation (still 20%): No more high school counselor surveys. U.S. News says they did this because they “had greater confidence in the data and the significantly increased response rates from the peer assessment surveys”. Look at the downward trend in response rates on the high school survey and you’ll understand why they dumped it. The peer assessment survey now accounts for all 20% of this category’s overall weight.
  • Faculty resources: Switched to “open-source data from the Bureau of Economic Analysis” for faculty salaries. This data is presumably more reliable than what the colleges were feeding to U.S. News.
  • Selectivity: If class rank is available for <10% of students, school gets an estimate instead of a penalty. However, schools still get a penalty if the data is available for 10–20% of students; this may be because some schools don’t report class rank data for any of their students, placing them in the <10% category. Also, schools get a footnote in the ranking table if class rank is available for <20% of students (was <50%). Before this change, 79% of schools in the top 50 had a footnote about their class rank data; after this change, only 8% of top-50 schools had a footnote.
  • Substitute data from the Department of Education’s Integrated Postsecondary Education Data System was used if schools do not report data in some areas. This suggests some schools were not self-reporting data (or reporting inaccurate data) in these areas.
  • For the 2019 ranking, U.S. News made the most significant formula changes they’d made in a while. This is interesting timing to say the least, given that the year before, Politico and The Washington Post both released articles criticizing many aspects of the ranking methodology.
  • Some categories have been given new names on the “what’s new” page. Some of these names (“Student Excellence” for “Selectivity”; “Expert Opinion” for “Reputation”) give off the impression that U.S. News is trying to “sell” the usefulness of these factors to readers.
  • Added a new “Social Mobility” factor.
  • Selectivity (“Student Excellence”): Eliminated acceptance rate from the formula.
  • Reputation (“Expert Opinion”): the weight of high school counselors’ ratings has been lowered from 7.5% to 5% overall. (The survey of college administrators remains weighted at 15%.) Is this because of low response rates?
    • Only 35.5% of college administrators responded to the reputation survey (down about 5% from the year before). Unlike in previous years, U.S. News did not publish the response rate for the high school counselor survey. Given that only 7% of counselors responded the year before, there’s a good chance U.S. News didn’t publish this information because they were embarrassed about how low the response rate was.
  • Percentages unchanged.
  • The “graduation rate performance” prediction formula now accounts for what percentage of degrees granted are STEM degrees (because STEM students graduate at lower rates). A rough sketch of the predicted-versus-actual comparison behind this factor appears after this list.
  • Every year, U.S. News publishes the response rates for their reputation surveys, allowing us to see how much these rates have declined over the years. In 2017, 39% of college administrators filled out the peer assessment survey (down from 65% in 1997), and 7% of high school counselors filled out their survey (down from 21% in 2011). This reflects the declining popularity of the rankings among people with ties to higher education.
  • Faculty resources: class size still counts for 40%, but there are now more gradations of “credit” (in descending order: <20; 20–29; 30–39; 40–49; and ≥50 gets no credit). This may reduce the advantage gained if a school tries to game the system like Clemson did in 2009. Clemson, and presumably other schools, tried to improve their class size score by reducing a class with, say, 22 students down to 19, while allowing large classes to get even larger. (A sketch of the bracketed-credit idea follows this list.)
  • Percentages unchanged.
  • “2 small changes” to Reputation: peer assessment ratings are now based on the last 2 years, and high school counselor ratings are now based on the last 3 years.
  • If less than 75% of applicants submitted test scores, a 15% penalty is applied to the school’s test scores (see the sketch after this list). This has the potential to punish test-optional schools, though U.S. News changed the rule once the pandemic prompted many top-ranked schools to go test-optional. This is the first ranking where they mention the penalty, but they say it was applied the year before as well.
  • Methodology unchanged.
  • Selectivity: Class rank is deemphasized in the Selectivity calculation because “as each year passes, the proportion of high school graduates with class rank on their transcripts is falling” making the data they have less representative. (Class rank decreases from 6% to 3.125%; test scores increase from 7.5% to 8.125%; acceptance rate decreases from 1.5% to 1.25%.)
  • “One small methodology change” in Reputation: High school counselor reputation ratings are now averaged from the last 2 years, rather than being based on 1 year.
  • Accredited for-profit colleges that give bachelor’s degrees are now ranked. Shoutout to the University of Phoenix: as of the most recent edition it sits in the bottom 25%, so it doesn’t get an official numerical rank, but it places last in almost every category, so it would clearly be last if it did get a number.
  • Reputation: “In order to reduce the impact of strategic voting by respondents,” the 2 highest and 2 lowest scores were removed for each school (a trimmed mean; see the sketch after this list). This may be another response to the “strategic voting” by Clemson and other schools (in addition to the change made in 2017).
  • Reputation: added high school counselor survey (33.3%; original peer assessment survey weighted at 66.7%) — the magazine says high school counselors have been asking to be included for a long time. Counselors rate colleges on the same 1–5 scale as college administrators.
  • Grad. rate performance: increased (from 5% to 7.5%) because it’s “well received by many higher education researchers” (Does that mean their other metrics aren’t well received by researchers?)
  • Top 75% have printed numerical ranks (was previously top 50%). U.S. News says this was by popular demand, but it raises the question of how accurate/distinguishable the rankings of these schools really are.
  • Percentages unchanged.
  • Selectivity: test scores calculation now includes both SAT and ACT (previously only counted whichever one was submitted by more students).
  • Added “Up-and-Coming Schools” ranking “in response to criticism that our academic peer assessment survey is too slow to pick up improvements at colleges.”
  • Methodology unchanged.
  • Percentages unchanged.
  • Test-blind schools are now unranked. “Other schools were unranked for the following reasons: a total enrollment of fewer than 200 students; a vast proportion of nontraditional students; no first-year students (these are sometimes called upper-division schools). We did not rank private, for-profit universities; nor did we rank a few specialized schools in arts, business, or engineering.” (Quote from 2009 edition where the same rule was in effect.) This may be because test scores are a key part of the “selectivity” factor (counted for 7.5% overall at the time).
  • Grad. rate performance: “% of students on Pell grants” is now part of the prediction formula.
  • Methodology unchanged (for 3 years in a row).
  • Methodology unchanged (for 2 years in a row).
  • Methodology unchanged.
  • Methodology article opens with a story about someone who was trying to decide between 2 colleges and chose the one that was ranked higher. Scary stuff.
  • Selectivity: Yield rate dropped from the formula, as U.S. News finally gives in to outside pressure to do so. For the other selectivity subfactors, the overall weights are now 1.5% accept rate (down from 2.25%), 6% class rank (up from 5.25%), and 7.5% test scores (up from 6%).
  • Methodology unchanged (for 2 years in a row).
  • Methodology unchanged.
  • No changes other than clarification on how class size is calculated/weighted.
  • Caltech was #1 in this ranking — the only time the #1 college has not been Harvard, Yale, Princeton, or Stanford (all of which have held or tied for the top spot multiple times). How long will it be before another college joins the club?
  • U.S. News changed how they weighted some categories so that large disparities between schools are reflected in the scores (for example, the numbers for “financial resources” might now look like $200k, 100k, 50k, where they would previously have been treated as 1, 2, 3; a rescaling sketch follows this list). This might make the rankings more accurate according to the criteria U.S. News sets out, but it could also disproportionately benefit the institutions with larger endowments.
  • In the middle of the methodology, there’s a paragraph that attempts to address accusations that by including yield rate in their formula, they were encouraging colleges to admit more students through early decision programs. Yield rate is the percentage of accepted students who choose to enroll, and students accepted through early decision are required to enroll, raising a school’s yield rate. This hurts low-income applicants because they lose the ability to compare the financial aid packages offered by the colleges that accept them. U.S. News doesn’t make any changes as a result of this criticism, but in the next few years, they continue to be attacked for their use of yield rate...
  • Reputation: Survey now uses a 5-point scale (instead of the previous 4-point scale) — a system still used to this day.
  • Financial resources: Now only includes educational spending, not the “other spending” category present in the 1998 ranking. U.S. News cited “changes in reporting rules for private colleges and universities” as their reason for the change. But it also removes the spending that’s less related to academics. When a school spends a lot of money on, say, building maintenance, that doesn’t really say anything about the school’s academic quality, which is what U.S. News purports to measure.
  • What’s new in 1998 and earlier...
  • 1998: First year U.S. News publishes percentage weights for subfactors (e.g. acceptance rate, class size)
  • 1997: First year U.S. News publishes the percentages they use to weight the main factors (e.g. selectivity, faculty resources)
  • 1994:
    • Added class size as a subfactor under “faculty resources” (schools with smaller classes on average get more points)
    • Added alumni giving rate as a new factor
  • 1993: Financial resources now based on education spending per full-time student plus other spending (e.g. research, maintenance) per student
  • 1991:
    • Added yield rate to Selectivity
    • Simplified Financial Resources to total spending ÷ total enrollment
  • 1990:
    • Added state and local government funding per student to Financial Resources
    • Switched to 5-year graduation rate for Retention
  • 1989: Ranking switches from biannual to annual and now uses the following 5 criteria:
    • Selectivity: acceptance rate, SAT/ACT, and class rank (% of freshmen in top 10% of high school class)
    • Faculty resources: % of faculty with doctorates, student:faculty ratio, and instructional budget per student (incl. faculty salaries)
    • Financial resources: endowment per student and library budget per student
    • Retention: % of freshmen who return as sophomores and 4-year graduation rate
    • Reputation survey: of “college presidents, deans and admissions officers”; asked to rank colleges on scale from 1 to 4
  • 1984: First edition of rankings; based entirely on reputation survey
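
The “graduation rate performance” items above describe comparing a school’s actual graduation rate with a rate predicted from its student body. As a rough illustration of that idea only (the predictor variables such as Pell share, first-generation share, and STEM share come from the items above, but the model form, coefficients, and sample numbers below are hypothetical, not U.S. News’s published formula), here is a sketch in Python:

```python
# Hypothetical sketch of a "graduation rate performance" calculation.
# U.S. News compares a school's actual graduation rate with a rate predicted
# from student-body characteristics; the exact model and weights are not
# public, so the coefficients and sample inputs below are made up.

def predicted_grad_rate(pell_share, first_gen_share, stem_degree_share):
    """Toy linear prediction: more Pell, first-generation, and STEM students
    lead to a lower predicted graduation rate (hypothetical coefficients)."""
    baseline = 0.90
    return (baseline
            - 0.30 * pell_share
            - 0.20 * first_gen_share
            - 0.10 * stem_degree_share)

def grad_rate_performance(actual_rate, **student_body):
    """Positive values mean the school graduates more students than predicted."""
    return actual_rate - predicted_grad_rate(**student_body)

# A school serving many Pell and first-generation students that still graduates
# 80% of them beats its prediction and scores well on this factor.
print(grad_rate_performance(0.80, pell_share=0.40,
                            first_gen_share=0.30, stem_degree_share=0.20))
```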
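
The bracketed class-size rule is easy to picture in code. A minimal sketch, under assumptions: the bracket boundaries (<20, 20–29, 30–39, 40–49, ≥50) come from the item above, but U.S. News does not publish the credit attached to each bracket, so those values are hypothetical.

```python
# Sketch of per-class "credit" under the bracketed class-size rule.
# The bracket boundaries come from the ranking description; the credit
# assigned to each bracket is not published, so the values are illustrative.

def class_size_credit(size):
    if size < 20:
        return 1.00   # full credit
    elif size < 30:
        return 0.75   # hypothetical partial credit
    elif size < 40:
        return 0.50
    elif size < 50:
        return 0.25
    else:
        return 0.00   # classes of 50 or more get no credit

def class_size_score(class_sizes):
    """Average credit across a school's class sections."""
    return sum(class_size_credit(s) for s in class_sizes) / len(class_sizes)

# Gaming illustration: trimming a 22-student class to 19 gains one bracket,
# but letting another class grow from 45 to 80 loses one, so with more
# gradations the net benefit of shuffling students around is smaller than
# under a single under-20 / over-20 cutoff.
print(class_size_score([22, 45]), class_size_score([19, 80]))
```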
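
The test-score submission rule amounts to a simple discount. A minimal sketch, assuming the “15% penalty” is applied as a flat multiplier on the school’s test-score value (the exact mechanics beyond the 75% threshold and the 15% figure are an assumption here):

```python
# Sketch of the test-score submission penalty, assuming the "15% penalty"
# is a flat 15% discount on the school's test-score value.

def adjusted_test_score(test_score_value, share_submitting):
    """Discount the test-score factor when under 75% of entrants submitted scores."""
    if share_submitting < 0.75:
        return test_score_value * 0.85
    return test_score_value

# A test-optional school where only 60% of students submitted scores has its
# (often selectively high) average discounted by 15%.
print(adjusted_test_score(1450, share_submitting=0.60))
```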
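
Dropping the 2 highest and 2 lowest reputation ratings before averaging is a trimmed mean. A minimal sketch (the sample ratings are illustrative, not real survey data):

```python
# Trimmed mean of peer-assessment ratings: drop each school's 2 highest and
# 2 lowest ratings before averaging, blunting the effect of strategic votes.

def trimmed_reputation_score(ratings):
    if len(ratings) <= 4:
        raise ValueError("need more than 4 ratings to trim 2 from each end")
    kept = sorted(ratings)[2:-2]
    return sum(kept) / len(kept)

# Illustrative ratings on the 1-5 scale; the two 1s and the two 5s
# are dropped before averaging.
print(trimmed_reputation_score([1, 1, 3, 4, 4, 3, 5, 5, 4, 3]))  # 3.5
```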
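
The change from ordinal positions to actual values can be shown with a tiny rescaling example. The min-max scaling below is an assumption made for illustration; U.S. News does not publish the exact transformation it used.

```python
# Rank-based versus value-based scoring for a factor like financial resources.
# With ordinal ranks, $200k / $100k / $50k per student are just 1st / 2nd / 3rd;
# with value-based scaling (min-max here, as an assumption), the size of the
# leader's gap actually shows up in its score.

def rank_scores(values):
    order = sorted(values, reverse=True)
    return [order.index(v) + 1 for v in values]    # 1 = best

def minmax_scores(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]  # 1.0 = best

spending = [200_000, 100_000, 50_000]
print(rank_scores(spending))    # [1, 2, 3]: the gap sizes are invisible
print(minmax_scores(spending))  # [1.0, 0.33..., 0.0]: the gaps matter
```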


Key

Click on the Help icons to learn more about each part of the U.S. News formula.

  • Graduate debt
  • Social mobility
  • Alumni giving
  • Grad. rate performance
  • Financial resources
  • Selectivity
  • Faculty resources
  • Retention
  • Reputation
