Wednesday, August 14, 2013

Are Students Really Academically Adrift? Rethinking the Assessment of “Limited Learning” on College Campuses

Four years ago I attended a presentation at the annual meetings of the American Sociological Association (ASA) in which Richard Arum and Josipa Roksa previewed the findings of Academically Adrift, their influential book published in 2011. In a column for the Chronicle of Higher Education, I wrote that this “cool study” was producing some interesting results. Most importantly, I reported that the learning gains identified by the research “didn’t look like much.” I was concerned, for sure, and thus wasn’t surprised when the authors eventually subtitled their book “Limited Learning on College Campuses.”

Fast forward: after attending a presentation at this year’s ASA in New York last week, I’ve come to question my assessment, and theirs. At the time, I was looking at percentage-point gains over time, and these are a poor way to assess effect sizes because they ignore how much scores vary in the sample. Once the gains are standardized, Arum and Roksa find that students tested twice, four years apart, improve their scores on the Collegiate Learning Assessment (CLA) by an average of 0.46 standard deviations. Now that’s a number we can begin to consider seriously.
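To make the standardization concrete, here is a minimal sketch in Python. The scores are synthetic, chosen only to mimic the reported 0.46, and dividing by the standard deviation of the entering cohort’s scores is my assumption rather than the authors’ exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic CLA-style scores for the same 2,000 students tested twice,
# four years apart (illustrative numbers, not Arum and Roksa's data).
freshman = rng.normal(1100, 150, size=2000)         # entering scores
senior = freshman + rng.normal(70, 120, size=2000)  # four years later

# A raw point gain means little until it is scaled by how much scores vary.
raw_gain = senior.mean() - freshman.mean()

# Standardized gain (an effect size): mean gain divided by the
# standard deviation of the entering cohort's scores.
effect_size = raw_gain / freshman.std(ddof=1)

print(f"raw gain: {raw_gain:.1f} points")
print(f"standardized gain: {effect_size:.2f} sd")
```

With these made-up numbers the standardized gain comes out near 0.46; the point is simply that the same raw gain looks large or small depending on the spread of scores.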

Is a gain of 0.46 standard deviations evidence of “limited learning,” or is it nothing to sniff at? As I said back in 2009, we need a frame of reference in order to assess this. In the abstract, an effect size means little, if anything at all.

For their part, the authors point to a review of research by Ernie Pascarella and Pat Terenzini indicating that, on the tests given at the time, students in the 1980s gained about 1 standard deviation. Doesn’t that mean students learn less today than they once did, and that this is a problem? Actually, no.

Scores cannot simply be compared across different tests. The scales of tests differ and can only be linked by administering the same test to comparable people. Clearly, the CLA was not administered to students attending college in the 1980s. Nor, for that matter, were students in that era demographically comparable to students today, nor were the conditions of testing the same.

Certainly, the authors know this, and it is why they seek to replicate their findings with different tests and samples. There is some evidence that the effect size of about 0.44-0.47 holds up. But the purpose of the replication, their main focus, is to check whether the magnitude of the gains is the same; they are not using replication to ask whether the effect size is large or small.

So, let’s go back to that critical question. It was during Josipa Roksa’s ASA presentation last week that I felt we finally had a reasonable answer. Her talk focused on inequality in learning, and she showed several achievement gaps. This is one good way to benchmark an effect size, and it’s commonly done in K-12 research. When educational interventionists seek to gauge the size of a program’s impact on achievement, they often compare it to the magnitude of the black-white achievement gap in math, which is about one standard deviation.

The same exercise with the impact of four years of college on learning, as measured by the CLA, is illuminating. The parental-education learning gap among first-year students in the Academically Adrift sample (i.e., the difference in CLA scores between students with a high-school-educated parent and students whose parents completed graduate school) is 0.47 standard deviations. The black-white gap is 0.79 standard deviations. These are highly relevant comparisons, given that the posited benefits of college are thought to be especially strong and important for students facing greater disadvantage.

Thus, the learning gains made during college are equivalent in size to the advantage that a student from an educationally advantaged family holds over a first-generation student. They are also roughly three-fifths the size of the black-white gap. In this sense, these are sizable gains.
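The benchmarking amounts to a pair of ratios; here is a quick back-of-the-envelope check using the effect sizes reported above:

```python
# Effect sizes reported in the post, all in standard-deviation units.
college_gain = 0.46      # average CLA gain over four years of college
parental_ed_gap = 0.47   # first-year gap by parental education
black_white_gap = 0.79   # first-year black-white gap

# Benchmark the college gain against each inequality gap.
print(round(college_gain / parental_ed_gap, 2))  # 0.98: nearly the whole gap
print(round(college_gain / black_white_gap, 2))  # 0.58: roughly three-fifths
```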

However, advantaged students make bigger gains during college, so that the parental-education gap grows to 0.54 standard deviations four years later. This is entirely unsurprising, given the body of evidence indicating that colleges and universities prioritize the desires of elite students over the real educational needs of those for whom college is essential to social mobility.

Social inequalities are very hard to close—we won’t be reassigning children to new parents anytime soon. But four years of college clearly raises student achievement, and it is an intervention we can promote and can afford.  The findings from Academically Adrift tell a very different story than its subtitle suggests.  On average, college is transformative for learning, and the real tragedy is that higher education does not focus more attention on the neediest students in order to close the gaps that affect the stability and fabric of our everyday lives.