The various college rankings are based on variables that the organizations doing the rankings believe are important or reflect quality. There is no universal set of variables on which everyone agrees, and the disagreements about what matters can be quite energetic. It is not unusual, therefore, for a college to appear near the top of one list and near the bottom of another.

Variation in the rigor with which the underlying information is collected can also produce dizzying changes in placement from one year to the next.

For some organizations, the focus is on factors that create significant differentiation among schools along a small, carefully circumscribed set of dimensions. Others-most notably the federal government-tend to whitewash differences not only among colleges but across higher education sectors and settle on a set of metrics that ostensibly apply to all-public, private, residential, online, vocational, liberal arts, selective, open-but in fact apply, in the aggregate, to none.

So, reader-unfriendly though it may be, methodology matters. In what is often a sincere effort to assist prospective students and their families as they navigate the overwhelming amount of information on an increasingly complex array of higher education options, correspondingly complex data are reduced to summary measures that, at their least harmful, mask important context or, at their most harmful, exploit it.

In combination, these two considerations-organizational agenda and the winnowing of complex data into sound bites-can be a disservice to the very students the ranking organizations wish to help.

Rankings reflect opinions about what matters in higher education and how well colleges meet presumed obligations. They are also, candidly, often a vehicle to draw attention-and web clicks-to organizations' agendas, which run the gamut from educational to political to commercial to controversy for controversy's sake. (When US News started its rankings, for example, they were a small item in the print magazine. The print magazine is long defunct, but the web site draws more than 10 million visitors for whom the first stop is most often one of the ranking lists, the number of which grows almost annually.) Methodology runs the gamut as well, from data-driven algorithms to anecdotes offered by individuals who may or may not be students, and may or may not even attend the school in question. The result of such variety in both philosophy and rigor is that no school-not a single one-appears at or near the top of every ranking, nor is there any guarantee that doing well one year means doing well the next.

Colleges have a choice. They can dismiss all rankings on methodological grounds. They can expend time and resources improving placement on a select few. Or they can focus on aspects of the various lists that, overall rank notwithstanding, resonate with campus mission and priorities. Which of these metrics matter to us? What can we learn from this comparison with our peers? With whom do we share proximity on the list?

Davidson asks these questions every time a rankings list is released. We take a critical look at methodology, and we assess the reliability of what is being measured. We take particular note of where Davidson is relative to schools with similar missions and resources. We are mindful that prospective students, donors, and alumni are unlikely to have delved into the often hard-to-find explanations of data collection and calculation, and that there is a fine line to walk between shining an objective light on those issues and defensively downgrading their impact.

The most productive approach Davidson can take to the proliferation of rankings is to have a solid understanding of what each of these organizations hopes to accomplish, a firm grasp on campus priorities and whether or not they are truly reflected in any particular ranking, the means for conveying accurate information to its various constituencies, and the standing to give invalid conclusions only the weight they deserve.

This last is critical. Schools that have complained most publicly about their own rankings have been, almost without exception, schools whose company Davidson does not wish to keep. The top schools, on the other hand, can say, "this is not a list to which we aspire," or greet inclusion among highly ranked schools with "we are pleased to be recognized as part of this group." Davidson is privileged to be counted among the schools able to dedicate time and resources not to what a rankings organization tells us matters but to what the college knows matters: to its mission, to its aspirations, and to its students.

That said, prospective Davidson students are not helped by a dry reiteration of methodology. Nor does the college want to appear to be dismissive or defensive when asked about rankings that grab the attention of the media. 

The Rankings

An alphabetical listing of the better-known rankings appears below, accompanied by brief descriptions.

American Council of Trustees and Alumni (ACTA)

Background: ACTA's mission includes ensuring that "the next generation receives a philosophically rich, high-quality education at an affordable price." This translates into some strongly held beliefs about what constitutes an appropriate course of study for college students. Colleges that do well on the ACTA rankings have core curricula or general education requirements that include U.S. history, economics, and survey courses (that is, an introductory literature class rather than one focused on a particular literary topic, genre, or author). For a composition requirement to receive credit, it must be at the introductory level and include grammar; writing-intensive and writing-across-the-curriculum courses are explicitly excluded. ACTA only recently removed a required course in Shakespeare as a criterion of excellence. Schools are assigned letter grades that reflect how closely they meet the ACTA definitions of quality.

Because the criteria for doing well on the ACTA rankings are rather prescriptive, they are also inconsistent with some aspects of Davidson's educational philosophy. It would require a fundamental shift in academic requirements to receive a grade of A from ACTA (and, in fact, only 22 of the 1,091 schools rated received one). It is also worth noting that Davidson's peers, the Ivies, and most of the colleges and universities widely recognized for their academic quality tend to do very poorly. Amherst, Bowdoin, and Berkeley were among the schools receiving an F from ACTA in 2013. Williams and Harvard received a D. Dartmouth joined Davidson with a grade of B. Schools that qualify for an A grade tend to be less selective and more fundamentally religious. All that counts toward the ACTA grade is a school's core curriculum or general education requirements. It is possible for a school to receive a grade of A while graduating less than 30% of its students, as do Kennesaw State University and Colorado Christian University.

Forbes

Background: Although the rankings are published in Forbes, they are created by the Center for College Affordability and Productivity (CCAP). CCAP's reliance on self-reported, unvetted data and non-representative sources explains a large part of the wide variation in rank from year to year (swings that apply to virtually all of the colleges ranked).

CCAP prides itself on using "outcomes/satisfaction-based criteria, not input/reputation criteria used by some other rankings services." Presumably they are referring to US News and its inclusion of standardized test scores, selectivity, and peer assessment. However, CCAP's focus on satisfaction is driven by this philosophy, which figures prominently in its explanation of methodology: "Asking students what they think about their courses is akin to what some agencies like Consumers Report or J.D. Powers do when they provide information on various good and services." Note that CCAP views this analogy as positive.

Also positive in the CCAP philosophy are high salaries and high profiles. The problems with the way these data are collected are myriad, but the larger issue is that salary, and the inclusion of alumni on lists that tend to weight fame more heavily than societal contribution, are at odds with what Davidson considers to be markers of success.

Within the individual components from which overall rank is derived are measures that matter to Davidson, including retention, graduation rates, debt, and loan default rates. Unfortunately, Forbes provides no comparative information in its presentation of the rankings, leading to perplexed ruminations about why, for example, Cornell College can be ranked well above Cornell University one year and drop to the bottom 50% the next, or how some schools that graduate less than 50% of their students do well on this list while NYU barely edges into the top 100.

Money Magazine

Background: Money Magazine's rankings include some measures that are important to Davidson (graduation rate, net cost of a degree), some that are gathered from questionable sources (payscale.com), and some that reflect priorities that are not necessarily aligned with Davidson's (alumni income). Money's unique contribution is its attempt to statistically control for large enrollments in academic programs that lead to high-paying careers. Money also makes some other welcome adjustments to measures that have historically been misleading; average debt at graduation, for example, takes into account the percentage of students graduating with debt.

Money is less than forthcoming about how some of these adjustments are made, and it offers no information on how it calculates some of its predicted values. As a result, it is at once unsurprising and puzzling that schools that tend to cluster on other ranking lists are widely dispersed on this one. The ranks of Davidson and its peers range from 14 to 150. The contrast between institutions given an identical rank is often noteworthy. Johns Hopkins, for example, is tied at rank 107 with the College of Our Lady of the Elms, a small college that accepts 80% of its applicants and is not listed among the top 100 schools in US News even when its category is limited to schools in the North region.
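Because Money does not disclose how its predicted values are calculated, the sketch below is purely illustrative: a generic value-added adjustment of the kind the magazine describes, with invented school names and figures and a single hypothetical predictor (the share of graduates in high-paying majors). It is not Money's actual model.

```python
# Purely illustrative: a generic "value added" adjustment of the kind Money
# describes. The school names, figures, and the single predictor used here
# are hypothetical; Money's actual model and inputs are not public.
import numpy as np

# Hypothetical data: share of graduates in high-paying majors and median
# early-career alumni earnings, in dollars.
schools = ["College A", "College B", "College C", "College D"]
high_pay_share = np.array([0.10, 0.45, 0.25, 0.60])
actual_earnings = np.array([52000, 68000, 61000, 70000])

# Fit a simple linear model: predicted earnings as a function of major mix.
slope, intercept = np.polyfit(high_pay_share, actual_earnings, 1)
predicted = intercept + slope * high_pay_share

# "Value added" is the gap between actual and predicted earnings. A school whose
# graduates out-earn the prediction gets credit even if its raw earnings are
# lower than those of a school dominated by high-paying majors.
value_added = actual_earnings - predicted
for name, va in zip(schools, value_added):
    print(f"{name}: value added = {va:+,.0f}")
```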

Princeton Review

Background: It must be said: The topic lists can be a lot of fun. It must also be said: The Princeton Review has no relationship to Princeton University. The Princeton Review began life as a college admissions preparatory company that happened to be located, at the time, in Princeton, New Jersey.

The lists that make up the Princeton Review rankings are based on a survey completed by visitors to the Princeton Review site (presumed to be students, and specifically students at the college they are rating, but no verification is requested). Given the small number of survey completions per campus, and the absence of demographic analysis, there is no reason to assume responses are representative.

Most of the lists are based on a single survey question. The Great Financial Aid list, for example, is based on the question, "How satisfied are you with your financial aid package?" Note that inclusion on this list is not based on actual financial aid data. A few of the lists-and they tend to be the more creatively labeled-are based on more than a single survey question. The Future Rotarians and Daughters of the American Revolution list, and its counterpart, the Birkenstock-Wearing, Tree-Hugging, Clove-Smoking Vegetarians list, are based on respondents' political identification and their perceptions of drug use on campus, the popularity of student government, acceptance of the LGBT community, and how religious the school is. There is so much volatility in the lists that a school could appear on the Birkenstock-Wearing... list one year and the Future Rotarians... list the next.

The Best 379 Colleges compilation, released alongside the lists each year, does contain vetted and data-supported information for prospective students and their families. Colleges are not ranked; rather, ratings are assigned for selectivity, academic quality, cost, and extracurricular activities. Davidson consistently does well on all of these ratings.

U.S. News and World Report

Background: The US News ranking has been criticized-not unreasonably-for summarizing myriad and nuanced data points into a single, over-simplified number that is unduly influenced by reputation. It has been accused-not unfairly-of giving undue weight to higher budgets, higher selectivity, and higher profiles. As a way to increase interest, "movement" is built into the rankings-through changes in factors, weights, and scales or, more insidiously, through compressed variables and the way tied scores are handled-that is disproportionate to any effect campus changes would likely create. To be fair, on this point even US News is on record with the admonishment that schools should not use language about "rising" or "falling" in the rankings.

US News is also on record regarding the subjectivity of the ranking's algorithm. Robert Morse, the editor in charge of the rankings since their inception, was quoted in the February 14, 2011, issue of The New Yorker: "We're not saying that we're measuring educational outcomes," he explained. "We're not saying we're social scientists, or we're subjecting our rankings to some peer-review process. We're just saying we've made this judgment."

Yet more than virtually any other ranking, US News demonstrates its professed commitment to providing information to prospective students and their families through its detailed presentation of the rankings themselves and a robust search feature that enables comparison along dimensions that are important to those students and families. They can learn about the academic preparation of students at different colleges by looking at standardized test means and selectivity. They can get a sense of the academic experience by looking at class size and student-faculty ratio. They can see how well a school retains and graduates its students, a decent measure both of how well the admission process creates a good fit between applicants and the school and of the educational experience of the students. They can get a sense of how satisfied alumni are by looking at the alumni giving rate. Hundreds of data points are collected beyond those used in the rankings, making US News one of the best sources of information on colleges, especially for prospective Davidson students for whom class size, faculty interaction, and the academic challenge of other bright students in the classroom are important.

Where schools tend to err is in giving undue attention to inconsequential changes in rank from one year to the next. A school can increase its overall score and still maintain the same rank; it can increase its score and go down or up in rank. Within a range of approximately five ranks in either direction, most movement is a result of methodological maneuvering.
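To make that point concrete, the sketch below uses invented scores (none of these figures or rules come from US News) to show how score compression (rounding) and the handling of tied scores, rather than any change on campus, can determine movement in rank.

```python
# A minimal, invented illustration (no real US News data or methodology):
# compressed (rounded) scores plus tie handling, not campus change,
# can determine movement in rank.
def rank_schools(raw_scores, decimals=0):
    """Round (compress) raw scores, then rank; tied scores share the better rank."""
    compressed = {s: round(v, decimals) for s, v in raw_scores.items()}
    ordered = sorted(compressed.items(), key=lambda kv: -kv[1])
    ranks, prev_score, prev_rank = {}, None, 0
    for i, (school, score) in enumerate(ordered, start=1):
        prev_rank = prev_rank if score == prev_score else i
        ranks[school], prev_score = prev_rank, score
    return ranks

year_1 = {"School A": 78.4, "School B": 78.2, "School C": 77.4, "School D": 76.8}
year_2 = {"School A": 78.9, "School B": 78.2, "School C": 77.4, "School D": 76.8}  # only A improves

print(rank_schools(year_1))              # {'School A': 1, 'School B': 1, 'School C': 3, 'School D': 3}
print(rank_schools(year_2))              # A's gain leaves A at rank 1 but drops B to 2, though B did not change
print(rank_schools(year_2, decimals=1))  # a finer scale (a methodology tweak) moves D from 3 to 4
```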

Washington Monthly

Background: The Washington Monthly rankings are particularly problematic for schools like Davidson because they are ostensibly driven by service and research, key words that mean something to Davidson's constituencies. However, the ways the Washington Monthly defines service and research reflect a very particular perspective that is, in many ways, at significant odds with Davidson's mission and priorities.

Some of the on-campus service measures make sense for Davidson: student participation in community service, academic courses that incorporate service, the number of staff supporting community service. Here, the problem is not the measures but the data source. The Washington Monthly does not request the data from the schools but pulls them from applications made to the Corporation for National and Community Service for the President's Higher Education Community Service Honor Roll. If a school did not submit such an application the year the rankings were calculated, it would get no credit for eight of the ten measures of service.

The remaining two measures of service are rather arbitrary: the percentage of alumni who join the Peace Corps and the percentage of students who serve in ROTC. Note that participation in no other service organization qualifies; in spite of the Washington Monthly's focus on "what is in [America's] public interest," organizations that focus on work in the U.S. are excluded.

The research measures reflect some of the most significant biases in the Washington Monthly rankings. Unlike the service measures, where some adjustment for enrollment size is made, the primary research measure, research expenditures, includes no such adjustment. The Washington Monthly's response to this bias in favor of large universities is as follows: "...our research score rewards large schools for their size. This is intentional. It is the huge numbers of scientists, engineers, and PhDs that larger universities produce, combined with their enormous amounts of research spending, that will help keep America competitive in an increasingly global economy...This year's guide continues to reward large universities for their research productivity." When asked why PhDs in the sciences and engineering received greater weight than other fields, the Washington Monthly went on record with this statement: "obviously people working in those fields provide the most benefit to society."

Wall Street Journal

Background: The WSJ college ranking is a mixture of defined data equally available for all schools and data that are subjective, self-reported, or applicable disproportionately to a particular type of institution. There is a reason the list is heavy on research universities (the first liberal arts college appears at rank 22). Major factors include graduation rate and some other straightforward metrics, as well as "value-added" measures for salary 10 years after graduation and for loan repayment. These are taken from College Scorecard data, meaning they are available only for graduates who took out federal student loans.

Academic reputation is based entirely on a survey that the ranking's co-sponsor, Times Higher Education, has conducted for a number of years for another set of rankings. The survey is sent to "only experienced, published scholars, who offer their views on excellence in research and teaching within their disciplines and at institutions with which they are familiar." The "experienced, published scholars" are those listed in the Elsevier database of publications. Research productivity of the faculty is measured by the number of research papers listed in Elsevier. Student engagement depends on responses to the US Student Survey. Nothing about that survey inspires confidence in its representativeness and, in any case, it is based on perception and self-report.