FAFSA completion rates matter: But mind the data

FAFSA season has just ended — the final deadline to fill out the 2017-18 Free Application for Federal Student Aid (FAFSA) was June 30. This year, as every year, many students who are eligible for aid will have failed to complete the form. This means many miss out on financial aid, which can have a serious impact on postsecondary enrollment, persistence, and completion. As many as one in seven students eligible for financial aid who enroll in college do not complete the FAFSA.

FAFSA completion is positively associated with college enrollment, and FAFSA completion rates can be important early indicators of postsecondary access and success. But FAFSA completion rates can be tricky to track and to compare across time or between places, and are prone to misinterpretation. Overall completion rates in particular schools or districts may also disguise important divides by socioeconomic background.  Differences in filing rates will almost by definition be related to differences in financial need (though, as we will discuss, the correlation may not be in the expected direction). 

In theory, calculating FAFSA completion rates should be child’s play: after all, you just need a numerator (FAFSA completions) and a denominator (12th-grade enrollment). As we show below, however, neither piece of that equation is as straightforward as it seems, which can lead to inconsistent or incomparable estimates. These measurement challenges are important to address: to the extent that FAFSA completion rates matter, measuring them as accurately as possible matters too.
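
In its simplest form, the calculation looks like the sketch below, a minimal illustration with invented numbers standing in for the completion counts and enrollment figures discussed in the sections that follow:

```python
# Minimal sketch of the "simple" calculation, using invented numbers.
# Real counts would come from the FSA completion data (numerator) and
# 12th-grade enrollment data (denominator) discussed below.
completions = 180   # FAFSAs completed by seniors at a hypothetical high school
seniors = 240       # 12th-grade enrollment at the same school

completion_rate = completions / seniors
print(f"FAFSA completion rate: {completion_rate:.1%}")   # 75.0%
```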

Specifically, anyone trying to calculate FAFSA completion rates must address the following questions:

The Numerator: Which FAFSAs count?

FAFSA completions were once measured using only self-reported survey data, which are likely inaccurate. Federal Student Aid (FSA), an office of the U.S. Department of Education, now provides tallies from actual FAFSA submissions. This is a marked improvement over student self-reports. But a few special considerations are in order when using the public use (school- or district-level) dataset to calculate completion rates.

For one thing, the FAFSA does not ask applicants to report whether they are seniors in high school, so calculating the completion rate among high school seniors requires some assumptions about who counts as a senior. Until mid-April of 2017, FSA identified high school seniors as first-time filing applicants no older than 18, but this age limit has since been changed to 19. Researchers must be clear and consistent about which data series they are using. Additionally, FSA suppresses data for schools with under 5 applications. In aggregate, it does not matter much if we exclude these schools from the dataset or assume that each has, say, 2.5 completions. At a more granular level, though, this choice could have a nontrivial impact on FAFSA completion rates, particularly in districts with many small schools.
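
The suppression issue is easy to illustrate. The sketch below, with invented counts, compares the two choices, dropping suppressed schools versus imputing 2.5 completions for each of them:

```python
import numpy as np
import pandas as pd

# Hypothetical district: FSA suppresses completion counts for schools with
# fewer than 5 applications, represented here as missing values (NaN).
schools = pd.DataFrame({
    "school":      ["A", "B", "C", "D", "E"],
    "completions": [210, 95, np.nan, np.nan, 340],   # NaN = suppressed
    "seniors":     [260, 130, 6, 4, 420],
})

# Option 1: drop suppressed schools from both numerator and denominator.
kept = schools.dropna(subset=["completions"])
rate_excluded = kept["completions"].sum() / kept["seniors"].sum()

# Option 2: impute 2.5 completions (the midpoint of 0-4) per suppressed school.
rate_imputed = schools["completions"].fillna(2.5).sum() / schools["seniors"].sum()

print(f"Rate excluding suppressed schools: {rate_excluded:.1%}")   # about 79.6%
print(f"Rate imputing 2.5 completions:     {rate_imputed:.1%}")    # about 79.3%
```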

It’s important not to mix up artificial differences created by the definitional change or by the handling of small schools with actual changes in FAFSA completions. Actual changes may also be confused with concurrent changes in the timing of FAFSA filing in 2016-17 due to an executive order that (1) allowed students to start applying for aid in October rather than January, and (2) allowed students to use tax information from an earlier year so they do not have to wait until tax filing season to apply for aid. Identifying true increases in FAFSA completions is key to understanding the extent to which policy changes like this one can improve access to financial aid.

The Denominator: How many students are enrolled?

This may sound like the easy part, since the National Center for Education Statistics (NCES) publishes official school-level enrollment data in the Common Core of Data. But as of June 2018, the most recent enrollment data are for the 2015-16 school year. By contrast, FAFSA numbers are updated monthly.

One option for getting up-to-date completion rates is to use enrollment data for a previous year as a proxy for current enrollment. Another is to get more recent enrollment data from state statistical agencies. But not all states measure enrollment the same way. Some report membership on a given day, or average daily membership over a month, or cumulative enrollment over the year (not subtracting those who leave the rolls). Further, since enrollment fluctuates over the school year, it is important to measure enrollment on a reasonably consistent date, so as to enable comparisons across place and time. The Common Core of Data reports membership on October 1.
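
Whatever the source, the calculation itself is a join of the completion counts to an enrollment file. The sketch below uses hypothetical file names and column layouts, with prior-year October 1 membership standing in as a proxy for current 12th-grade enrollment:

```python
import pandas as pd

# Hypothetical file names and layouts; the real FSA and CCD files use their
# own identifiers and need cleaning before they will merge.
fafsa = pd.read_csv("fsa_completions_2017_18.csv")   # columns: school_id, completions
ccd = pd.read_csv("ccd_membership_2015_16.csv")      # columns: school_id, grade12_oct1

# The most recent CCD year lags the FAFSA cycle, so prior-year October 1
# membership serves as the proxy denominator.
rates = fafsa.merge(ccd, on="school_id", how="inner")
rates["completion_rate"] = rates["completions"] / rates["grade12_oct1"]
```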

What matters most, of course, is comparing apples to apples. The results of a FAFSA Completion Challenge run by the National College Access Network (NCAN), an organization experienced in calculating accurate FAFSA completion rates, were distorted because a grantee in Greensboro, North Carolina (Say Yes to Education) reported cumulative enrollment through the ninth month of the school year for the first year of the challenge. That is not what NCAN asked for, which was the enrollment figure reported to the Common Core of Data: membership on October 1.

Greensboro took the prize for both the highest overall FAFSA completion rate and the biggest increase in FAFSA completion, when in fact (by our calculations) the award for the biggest increase should have gone to Cheyenne, Wyoming. The first-place prize was $75,000, and Cheyenne still received $50,000 for coming in second. Small beer, financially speaking, but the warning is clear: FAFSA completion rates have to be calculated very carefully and consistently, especially if they are used to compare different places, or changes over time.
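
A hypothetical example makes the mechanics concrete (the numbers below are invented, not Greensboro's or Cheyenne's actual figures):

```python
# How a mismatched denominator distorts a year-over-year comparison.
# All numbers are invented for illustration.
completions_y1, completions_y2 = 1_800, 1_900

membership_oct1_y1 = 3_000   # point-in-time membership, year 1
membership_oct1_y2 = 3_000   # point-in-time membership, year 2
cumulative_y1 = 3_600        # cumulative enrollment over year 1 (includes leavers)

# Consistent denominators: a gain of about 3 percentage points.
consistent_gain = completions_y2 / membership_oct1_y2 - completions_y1 / membership_oct1_y1

# Mixed denominators (cumulative in year 1, October 1 membership in year 2):
# the baseline rate is deflated, so the apparent gain roughly quadruples.
mixed_gain = completions_y2 / membership_oct1_y2 - completions_y1 / cumulative_y1

print(f"Gain with consistent denominators: {consistent_gain:.1%}")   # 3.3%
print(f"Gain with mixed denominators:      {mixed_gain:.1%}")        # 13.3%
```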

For its 2018-19 challenge, NCAN is requiring districts to “have access to weekly student-level FAFSA completion data provided by the Office of Federal Student Aid’s FAFSA Completion website, and the ability to match that with student-level demographic data from the district’s student information system.”
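
With that kind of access, the match itself is a routine join; a sketch, assuming hypothetical file names and column layouts, might look like this:

```python
import pandas as pd

# Hypothetical student-level files; the real FSA completion feed and a
# district's student information system will have their own layouts.
fafsa = pd.read_csv("fsa_student_completions.csv")   # columns: student_id, fafsa_completed
sis = pd.read_csv("district_sis_seniors.csv")        # columns: student_id, low_income

seniors = sis.merge(fafsa, on="student_id", how="left")
seniors["fafsa_completed"] = seniors["fafsa_completed"].fillna(0)

# Completion rates by subgroup, not just the district-wide average.
print(seniors.groupby("low_income")["fafsa_completed"].mean())
```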

Distributional Impact: Who is completing FAFSAs?

Pushing up FAFSA completion rates is an important policy goal: the aim is to maximize the number of eligible students who receive support. As things stand, of the 30 percent of undergraduate students who did not apply for federal student aid in 2011-12, roughly a third were likely eligible for Pell Grants (though we should note that Pell Grant eligibility is difficult to estimate from survey data). For the purpose of awarding need-based aid, what matters most is increasing financial aid applications among those most likely to be eligible for financial aid. Driving up completion rates by inducing more students from affluent families to fill out the FAFSA is close to pointless unless we are primarily interested in raising the number of students who apply for student loans.

We might think (or hope) that FAFSA completion rates would be highest in school districts with the greatest need. But students in relatively affluent districts are probably more likely to have access to the kind of one-on-one assistance that is key to getting more students to submit the FAFSA, enroll in college, and receive more financial aid.

A recent NCAN study found that school districts with higher child poverty levels have lower FAFSA completion rates: on the order of 3 percentage points lower for every 10-percentage-point difference in the child poverty rate. This relationship varies across states: four states (Alabama, California, Minnesota, and Montana) see slightly higher rates of FAFSA completion in low-income districts (those at the 90th percentile of the national district-level poverty distribution) than in high-income districts (those at the 10th percentile).
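
That slope is the kind of relationship one could estimate from a district-level file with a simple regression; the sketch below uses placeholder arrays rather than NCAN's actual data:

```python
import numpy as np

# Placeholder district-level data: child poverty rate and FAFSA completion
# rate, both in percentage points. NCAN's actual file is far larger.
poverty_rate = np.array([5, 10, 15, 20, 25, 30, 35, 40], dtype=float)
completion_rate = np.array([62, 61, 58, 57, 55, 54, 52, 51], dtype=float)

# Ordinary least squares fit: the slope is the change in completion rate
# (in points) for each 1-point increase in child poverty.
slope, intercept = np.polyfit(poverty_rate, completion_rate, 1)
print(f"Change per 10-point rise in child poverty: {slope * 10:.1f} points")   # about -3.2
```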

We should note that despite within-state gaps in FAFSA completion rates between high- and low-poverty districts, high-poverty districts in Tennessee and Maine still have higher completion rates than many of the low-poverty districts in other states. In fact, Tennessee regularly has the highest overall rate of FAFSA completion of any state. This is no accident: students must complete the FAFSA to apply for Tennessee’s HOPE and Promise scholarships.

Ideally, we would like to know how many of the students who file the FAFSA as a result of these place-based scholarships are eligible for aid, and how many fill out the form simply to “check a box” on the path to obtaining a non-need-based scholarship. Wider access to data on FAFSA completion among demographic subgroups could help us determine whether these programs are increasing FAFSA completion among those who are most in need of aid. Another potential approach is to compare completion rates across schools serving students with similar socioeconomic characteristics (say, majority low-income).
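
A rough version of that comparison needs nothing more than school-level completion rates and a poverty measure; the sketch below, with invented figures, groups schools into socioeconomic bands before comparing them:

```python
import pandas as pd

# Hypothetical school-level table; column names are placeholders.
schools = pd.DataFrame({
    "school_id":       [1, 2, 3, 4, 5, 6],
    "pct_low_income":  [0.82, 0.78, 0.55, 0.40, 0.22, 0.15],
    "completion_rate": [0.58, 0.63, 0.60, 0.66, 0.71, 0.74],
})

# Compare schools within broadly similar socioeconomic bands rather than
# pooling everything into one statewide or national average.
schools["band"] = pd.cut(
    schools["pct_low_income"],
    bins=[0, 0.25, 0.5, 0.75, 1.0],
    labels=["low poverty", "moderate", "high", "majority low-income"],
)
print(schools.groupby("band", observed=True)["completion_rate"].median())
```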

The good news is that FAFSA completion rates can be raised by providing one-on-one personal assistance, holding “FAFSA completion nights,” or just by making the application process simpler. But it is important for policymakers, scholars, and practitioners to ensure they are working with consistent and comparable data sources. As in so many areas, data really matters here.
