Reviewing the range of responses to President Obama's plan to reduce college costs, and the questions that are being raised on Twitter, it seems important that the Administration clarify a few things sooner rather than later.
1. This effort to reduce college costs is a first step and thus it is not intended to solve all problems. The President should say something more specific about the ultimate goal and what it would look like in practice. Are we working towards a free community college education? Are we trying to close achievement gaps? What is the intended outcome down the road?
2. This is not NCLB for higher education. The President needs to assure the public that he is not calling for standardized testing, the end of professorial tenure, or a focus on specific fields or majors. He is trying to help more Americans access the quality postsecondary education they seek, not water down quality or redefine what matters.
3. This is an effort to protect public higher education, not destroy it. This needs to be said loud and clear, and the President's commitment to community colleges in particular must be emphasized. Too many community college leaders are distressed at the roll-out of these plans, and I did not think that was intended.
4. This is also not an attempt to end for-profit or private higher education. The purpose is to ensure that Title IV is spent in ways that support national needs, not to define the entire range of opportunities that can exist. It is certainly possible to support private and for-profit educational providers without insisting that the federal government should also subsidize them.
5. The President is not insisting that everyone must go to college-- he is trying to help make the American Dream a reality by decoupling family income from educational opportunities.
Now, if I'm correct that these are all statements the President and his Administration can agree with, let's move on to figuring out how to take aim at the underlying inefficiencies in the current financial aid system using institutional accountability.
I think it would be a mistake to subject all institutions to metrics anytime in the near future. Most colleges and universities are good actors, keeping college costs down as long as states do their part. What we need to do as a starting point is to get a handle on (a) the bad actors and (b) federal investments that are ineffective and unnecessary.
Which schools fall into those categories? Here's a start.
BAD ACTORS
1. Institutions whose primary revenue source is Title IV -- say, those that get at least 75% of their funding from Pell grants and/or student loans. These schools aren't operating based on market demand but rather are propped up by federal aid.
2. Institutions with selective admissions (say less than 75% admitted) and low average graduation rates (less than 50% over 5 years).
INEFFECTIVE, UNNECESSARY INVESTMENTS
1. Institutions with large endowments per student.
2. Institutions serving very few Pell recipients (regardless of whether this is due to admissions practices, costs, or a decision to simply be small).
If we could ensure that federal student aid no longer supported these schools, we would see fewer students attend these schools, their prices would likely fall (or they would close), and/or at minimum we'd save money that could be spent elsewhere.
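For concreteness, the screens described above could be applied mechanically to institution-level data. Here is a rough sketch in Python; the field names and the endowment and Pell-share cutoffs are my own placeholders (the text above only specifies the 75% and 50% thresholds), so treat this as an illustration, not a proposal.

```python
# Sketch of the screens described above. Field names and the endowment /
# Pell-share cutoffs are hypothetical; the 75% and 50% thresholds come
# from the lists in the text.

def flag_institution(inst):
    """Return the list of screens an institution trips."""
    flags = []
    # Bad actor 1: primary revenue source is Title IV (>= 75% from Pell/loans).
    if inst["title_iv_revenue_share"] >= 0.75:
        flags.append("title_iv_dependent")
    # Bad actor 2: selective admissions (under 75% admitted) but a
    # five-year graduation rate under 50%.
    if inst["admit_rate"] < 0.75 and inst["grad_rate_5yr"] < 0.50:
        flags.append("selective_low_completion")
    # Ineffective investment 1: large endowment per student (cutoff assumed).
    if inst["endowment_per_student"] > 500_000:
        flags.append("large_endowment")
    # Ineffective investment 2: very few Pell recipients (cutoff assumed).
    if inst["pell_share"] < 0.05:
        flags.append("few_pell_recipients")
    return flags

example = {
    "title_iv_revenue_share": 0.80,
    "admit_rate": 0.40,
    "grad_rate_5yr": 0.45,
    "endowment_per_student": 20_000,
    "pell_share": 0.30,
}
print(flag_institution(example))  # ['title_iv_dependent', 'selective_low_completion']
```

The point of writing it this way is that the lists become transparent and contestable: anyone could re-run the screens against public IPEDS data and argue about the cutoffs rather than the process.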
If that were the first stage, then the Department of Education could begin by publishing these lists of problematic schools, issuing a warning that they have three years to get off the list or lose Title IV.
The other big issue is how to get states back to the table. There could be a separate list of states that are put on probation based on a failure to match federal investments in higher education with state investments. All colleges and universities in those states should be put at risk of losing Title IV-- including the privates and for-profits-- and given 5 years to address the problems.
None of this is perfect, of course, but these ideas get us thinking about a more targeted, incremental approach to reform. What do you think? What would you include?
Saturday, August 24, 2013
Friday, August 23, 2013
How to Prevent Creaming in Higher Education Performance Regimes?
One of the most prominent concerns raised about President Obama's proposed performance-based funding plan for higher education is that it could reduce access by encouraging creaming. In other words, what's to stop colleges and universities from simply raising the bars for entry, tightening their admissions policies, in order to improve graduation rates and lower default rates?
Good question.
I'd like to make a few points and then open this up for discussion. It's one of the big areas that needs bright minds thinking hard in search of solutions, and I hope you'll jump in with good ideas. We're going to have to look far and wide for solutions, as we can expect that folks in education probably don't have all the answers.
1. The problem already exists. The number of colleges raising their admissions requirements over time tells this story. So let's not pretend like we're creating a new problem. The question is whether we're making it worse.
2. NCLB approached this challenge through the use of value-added modeling. It didn't work there and it's probably not going to work here either, especially since it's hard to believe that we can possibly account for all inputs that are external to college in order to isolate the gains made by the college itself. Now, I know many people will disagree with me on this, including my former student Robert Kelchen, so be sure to read up on their work on the topic.
3. A weaker version of value-added modeling is risk-adjusted metrics, a regression-based approach to accounting for initial student differences when looking at outcomes. I'm not sure this is going to fly either, and the Left doesn't like it since it seems to perpetuate the idea that we should "expect" students from disadvantaged families to do worse in college. No one actually wants that, and so we try things NCLB-style, demanding growth in graduation rates for subgroups of students. But that too doesn't prevent colleges from admitting fewer students from a given subgroup.
4. Prohibition of creaming via more metrics. Let's say we stipulate terms regarding enrollment and admissions, in addition to outcomes. These may have to be differentiated according to college type. For example, in order to receive Title IV aid, a college must:
- Enroll at least 100 students who are Pell-eligible (all colleges) and
- Maintain a % Pell that meets or exceeds the state average among high school graduates (all public colleges and universities)
- Admit at least 50% of Pell applicants (all private colleges and universities)
- OR: Use a lottery process for admission into at least 50% of the entering class
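As a thought experiment, those conditions can be written down as an explicit eligibility test. A sketch with hypothetical field names, reading the lottery option as an alternative way for a private institution to qualify:

```python
# Sketch of the eligibility conditions listed above; field names are
# hypothetical, and the thresholds are the ones from the bullets.

def title_iv_eligible(college):
    # All colleges: enroll at least 100 Pell-eligible students.
    if college["pell_enrolled"] < 100:
        return False
    if college["sector"] == "public":
        # Public: % Pell must meet or exceed the state average among
        # high school graduates.
        return college["pell_share"] >= college["state_hs_grad_pell_share"]
    # Private: admit at least 50% of Pell applicants, OR fill at least
    # half of the entering class by lottery.
    return (college["pell_admit_rate"] >= 0.50
            or college["lottery_share_of_class"] >= 0.50)

public = {"sector": "public", "pell_enrolled": 800,
          "pell_share": 0.35, "state_hs_grad_pell_share": 0.30}
private = {"sector": "private", "pell_enrolled": 150,
           "pell_admit_rate": 0.20, "lottery_share_of_class": 0.60}
print(title_iv_eligible(public), title_iv_eligible(private))  # True True
```

Writing the rules out this way exposes the design questions fast: who computes the state average, how Pell applicants are counted, and how a lottery would be audited.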
Let's talk this one through further.
5. Prohibition and regulation. Schools could be selected for an audit based on troubling trends in their admissions data. If they were found guilty of creaming, they could be put on probation and monitored for a period of time. If they failed that, they'd be kicked out of Title IV.
We must also ask, how much creaming might be tolerable? If the use of performance standards forced more colleges to help low-income students graduate, while reducing access for some other students, at what point would this become intolerable?
Ok, enough from me-- what are your great ideas?
Labels: accountability, creaming, metrics
Wednesday, August 10, 2011
Measuring Up? The Trouble with Debt to Degree
The following is a guest blog post by Robert Kelchen, graduate student in Educational Policy Studies at UW-Madison, and a frequent co-author of mine. --Sara
I was pleased to see the release of Education Sector’s report, “Debt to Degree: A New Way of Measuring College Success,” by Kevin Carey and Erin Dillon. They created a new measure, a “borrowing to credential ratio,” which divides the total amount of borrowing by the number of degrees or credentials awarded. Their focus on institutional productivity and dedication to methodological transparency (their data are made easily accessible on the Education Sector’s website) are certainly commendable.
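Since the measure is simple arithmetic, it helps to state it precisely before critiquing it. A sketch with made-up numbers, not figures from the report:

```python
# The "borrowing to credential" ratio as described in the report:
# total borrowing divided by the number of degrees or credentials
# awarded. The inputs below are made up for illustration.

def borrowing_to_credential(total_borrowing, credentials_awarded):
    return total_borrowing / credentials_awarded

# A college whose students borrow $50 million in total and which
# awards 2,500 degrees/credentials:
print(borrowing_to_credential(50_000_000, 2_500))  # 20000.0
```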
That said, I have several concerns with their report. I will focus on two key points, both of which pertain to how this approach would affect the measurement of performance for 2-year and 4-year not-for-profit (public and private) colleges and universities. My comments are based on an analysis in which I merged IPEDS data with the Education Sector data to analyze additional measures; my final sample consists of 2,654 institutions.
Point 1: Use of the suggested "borrowing to credential" ratio has the potential to reduce college access for low-income students.
The authors rightly mention that flagship public and elite private institutions appear successful on this metric because they have a lower percentage of financially needy students and more institutional resources (thus reducing the incidence of borrowing). The high-performing institutions also enroll students who are easier to graduate (e.g., those with higher entering test scores and better academic preparation), which increases the denominator of the borrowing to credential ratio.
Specifically, the correlations between the percentage of Pell Grant recipients (averaged over the 2007-08 and 2008-09 academic years from IPEDS) and the borrowing to credential ratio are 0.455 for public 4-year and 0.479 for private 4-year institutions, compared to 0.158 for 2-year institutions. This means that the more Pell recipients an institution enrolls, the worse it performs on this ratio.
Even though Carey and Dillon focus on comparing similar institutions in their report (for example, Iowa State and Florida State), it is very likely that in real life (e.g., the policy world) the data will be used to compare dissimilar institutions. The expected unintended consequence is “cream skimming,” in which institutions have incentives to enroll either high-income students or low-income students with a very high likelihood of graduating. (Sara and I have previously raised concerns about “cream skimming” with Pell Grant recipients in other work.)
The graphs below further illustrate the relationship between the percentage of Pell recipients and the borrowing to credential ratio for each of the three sectors.
There is also a fairly strong relationship between a university’s endowment (per full-time equivalent student) and the average borrowing to credential ratio. Among public 4-year universities, the correlation between per-student endowment and the borrowing to credential ratio is -.134, suggesting that institutions with higher endowments tend to have lower borrowing to credential ratios. The relationship at private four-year universities is even stronger, with a correlation of -.346. For example, Princeton, Cooper Union, Caltech, Pomona, and Harvard are all in the top 15 for lowest borrowing to credential ratios.
The relationship between borrowing to credential ratios and standardized test scores is even stronger. The correlations for four-year public and private universities are -.488 and -.589, respectively. This suggests that low borrowing to credential ratios are in part a function of student inputs, not just factors within an institution’s control. In other words, the metric does not solely measure college performance.
It is critical to note that the average borrowing to credential ratio should be lower at institutions with more financial resources that enroll more students who can afford to attend college without borrowing. However, institutions that enroll a large percentage of Pell recipients should not be let off the hook for their borrowing to credential ratios. These two examples highlight the importance of input-adjusted comparisons, in which statistical adjustments are used so that institutions can be compared based more on their value-added than on their initial level of resources. The authors should be vigilant in making sure their work gets used in input-adjusted rather than unadjusted comparisons. Otherwise, institutions with fewer resources will be much more likely to be punished even if they are successfully graduating students with relatively low levels of debt.
Point 2: The IPEDS classification of two-year versus four-year institutions does not necessarily reflect a college’s primary mission.
IPEDS classifies a college as a 4-year institution if it offers at least one bachelor’s degree program, even if the vast majority of students are enrolled in 2-year programs. Think of Miami Dade College, where more than 97% of students are in 2-year programs but the institution is classified as a 4-year institution.
For the purposes of calculating a borrowing to credential ratio, the Carnegie basic classification system is more appropriate. Under that system an institution is classified as an associate’s college if bachelor’s degrees make up less than ten percent of all undergraduate credentials. The Education Sector report classifies 60 institutions as four-year colleges that are Carnegie associate’s institutions.
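The Carnegie rule is mechanical enough to write down directly. A sketch, with illustrative numbers rather than any institution's actual figures:

```python
# The Carnegie basic-classification rule described above: an institution
# counts as an associate's college when bachelor's degrees are under 10%
# of all undergraduate credentials awarded.

def carnegie_class(bachelors_awarded, total_undergrad_credentials):
    share = bachelors_awarded / total_undergrad_credentials
    return "associate's" if share < 0.10 else "bachelor's or above"

# Roughly the Miami Dade situation: a handful of bachelor's degrees
# among mostly two-year credentials (numbers are illustrative).
print(carnegie_class(200, 10_000))   # associate's
print(carnegie_class(3_000, 4_000))  # bachelor's or above
```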
This classification decision has important ramifications for the borrowing to credential comparisons. The average borrowing to credential ratio by sector is as follows:
Two-year colleges, Carnegie associate’s: $6,579 (n=942)
Four-year colleges, Carnegie associate’s: $13,563 (n=60)
Four-year colleges, Carnegie bachelor’s or above: $23,166 (n=1,421)
Ten of the top twelve and 20 of the top 40 four-year colleges with the lowest borrowing to credential ratios are classified as Carnegie associate’s institutions. For example, Madison Area Technical College is 54th on the Education Sector’s list of four-year colleges, but is 564th of 1,002 associate’s-granting institutions. These two-year institutions with a small number of bachelor’s degree offerings should either be placed with the other two-year institutions or in a separate category. Otherwise, anyone who wishes to rank institutions based on their classification would be comparing apples to oranges.
In conclusion: the effort in this report to measure institutional performance is a laudable one. But the development and use of metrics is challenging precisely because of their potential for misuse and unintended consequences. Refining the proposed metrics as described above may make them more useful.