College Readiness Predictors Don't Have to Be a Mystery

SDP Fellow Eric Vanden Berk of Minneapolis Public Schools created a data system to track and measure factors that signal college readiness in high school students.

Early warning systems have proliferated in school districts across the country. Many districts now use the ABCs (attendance, behavior, and course performance) to predict who is at risk of dropping out of high school, but SDP Fellow Eric Vanden Berk took things a step further. Instead of simply predicting whether a student is likely to graduate from high school, Eric wanted to know whether Minneapolis was preparing students to go to college and whether those colleges were high-quality matches.

What Is College Readiness?

Vanden Berk realized that Minneapolis Public Schools (MPS), like many other districts, was defining college readiness from a deficit perspective. In other words, schools were relying on early warning indicators that signaled risk factors rather than success factors.

But as important as early detection may be for identifying those students who will drop out or graduate late, deficit data tells educators very little about who will succeed after high school graduation. Vanden Berk wanted to “flip” that data to look at the best indicators for success rather than failure. “It was a way of looking at the same system for a different outcome,” Vanden Berk says.

Using longitudinal data, Vanden Berk set out to build an “on-track system” that would allow counselors and educators to rely on data-driven information for meeting MPS’s mission of ensuring that every student is “College and Career Ready.”

Vanden Berk and MPS began by identifying an operational definition of college readiness, one that focused on measurable outcomes. That definition included two key components:

1. Students go on to college immediately upon graduation, and

2. Students attend institutions that match their ability in terms of institutional quality and selectivity.

What Are Valid and Reliable Indicators?

Data from eight cohorts of graduating students from MPS schools between 2009 and 2016 became the basis for the system. The data set included information about students’ college enrollment date (if they enrolled), their GPA, their ACT score, the number of AP and IB classes they took, their attendance record, their disciplinary records, their socioeconomic status, and whether they completed the FAFSA.

Since Vanden Berk also wanted to look at the competitiveness of the colleges students chose, he and a colleague created a ranking system modeled on Barron’s Selectivity Rankings. They could then see which students were most likely to attend highly ranked colleges.
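Concretely, such a data set lends itself to a simple tabular layout. The sketch below, written in Python with pandas, shows what one cohort and a Barron’s-style selectivity mapping might look like; the column names, tier labels, and college-to-tier assignments are illustrative assumptions, not MPS’s actual schema.

```python
import pandas as pd

# Illustrative columns for one graduating cohort; the real MPS schema
# is not public, so these names and values are assumptions.
cohort = pd.DataFrame({
    "student_id":       [101, 102, 103],
    "grad_year":        [2015, 2015, 2015],
    "gpa":              [3.7, 2.9, 3.2],
    "act":              [27, 19, 22],
    "ap_ib_courses":    [4, 0, 2],
    "attendance_rate":  [0.96, 0.88, 0.93],
    "fafsa_completed":  [True, False, True],
    "enrolled_college": ["Carleton College", None, "U of Minnesota"],
})

# A Barron's-style selectivity scale collapsed to an ordinal rank
# (higher = more selective). The college-to-tier assignments are made up.
SELECTIVITY = {
    "Most Competitive": 6, "Highly Competitive": 5, "Very Competitive": 4,
    "Competitive": 3, "Less Competitive": 2, "Noncompetitive": 1,
}
college_tier = {"Carleton College": "Most Competitive",
                "U of Minnesota": "Very Competitive"}

# Students who did not enroll anywhere get no rank (NaN).
cohort["selectivity_rank"] = (
    cohort["enrolled_college"].map(college_tier).map(SELECTIVITY)
)
print(cohort[["student_id", "fafsa_completed", "selectivity_rank"]])
```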

What Best Predicts College Readiness?

Statistical analysis yielded a prime predictor that surprised everyone: whether a student fills out a FAFSA application. As Vanden Berk puts it, “If you filled out a FAFSA application, it really meant that your intention was to go to college.”

Because demonstrating the intention to go to college proved critically important, the MPS district “raised the profile of FAFSA applications,” says Vanden Berk. The best predictor of whether a student would attend a highly ranked college was perhaps less surprising: students with higher GPAs are far more likely to attend selective institutions.

Other predictors offered useful insight into how educators and counselors might better prepare students for college. For example, students who take multiple AP or IB classes are more likely to attend competitive colleges, and a higher GPA can compensate for a lower ACT score. A GPA above 3.5 signals an 89% chance that a student will attend college.
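The article doesn’t specify the statistical model behind these estimates, but predictors like FAFSA completion, GPA, ACT score, and AP/IB course counts are commonly combined in a logistic regression. A minimal sketch with scikit-learn, trained on fabricated data purely for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per student: [gpa, act, ap_ib_courses, fafsa_completed].
# This training data is fabricated; in practice the model would be fit
# on the district's historical cohorts.
X = np.array([
    [3.8, 28, 4, 1],
    [2.5, 17, 0, 0],
    [3.2, 21, 1, 1],
    [3.6, 24, 3, 1],
    [2.1, 16, 0, 0],
    [3.9, 30, 5, 1],
])
y = np.array([1, 0, 1, 1, 0, 1])  # 1 = enrolled in college after graduation

model = LogisticRegression().fit(X, y)

# Estimated enrollment probability for a hypothetical student with a
# 3.5 GPA, a 22 ACT, two AP/IB courses, and a completed FAFSA.
p = model.predict_proba([[3.5, 22, 2, 1]])[0, 1]
print(f"Predicted enrollment probability: {p:.2f}")
```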

Educators and counselors who have access to data-backed predictors can begin to challenge conventional ideas about how best to help students reach college and prepare them to attend even the most highly selective schools.

“The good news is that we’re still using this data,” said Vanden Berk. “We have used this work to build out some on-track reporting tools that show points like ACT scores, FAFSA completion, and advanced course taking to expand our understanding of which students are college ready.”

Questions to Consider Before Building a Predictive Database

Districts or states that want to create a customized database for identifying college-readiness predictors should consider four key questions before beginning:

1. What are your definitions of college readiness?

Start by agreeing on a specific and measurable definition of college readiness that satisfies all relevant stakeholders. Recognize that your district’s or state’s definition of college readiness may be as simple as earning a high school diploma. College selectivity may not factor into your definition; instead, you may be more interested in college retention or some other outcome.

2. What data points can you access?

Identify reliable, valid data points that you can easily access for all or most of your district’s or state’s high schoolers. Look at your current data system to see what kind of longitudinal data you already have; that may be where you begin.

3. What noncognitive measures might you use?

Think about measures that are not directly related to GPA or ACT scores. Those measures could include student engagement, volunteer work, or extracurricular activities. Determine how you might consistently collect that data for graduated students and for future students.

4. What is your plan for long-term validation?

Determine a plan for evaluating and recalibrating your cut points over time to ensure that your predictions and tools remain accurate and useful.
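As a sketch of what that recalibration might look like in practice, the snippet below checks how well a GPA cut point flags students who actually enrolled in a newer cohort, then scans nearby thresholds. The cohort data and the choice of precision as the metric are illustrative assumptions, not a prescribed method.

```python
import numpy as np

def evaluate_cut_point(gpa, enrolled, cut):
    """Share of students at or above the GPA cut who actually enrolled
    (precision of the 'on-track' flag)."""
    flagged = gpa >= cut
    return enrolled[flagged].mean() if flagged.any() else float("nan")

# Fabricated follow-up cohort: GPAs and observed enrollment outcomes.
gpa = np.array([3.9, 3.6, 3.4, 2.8, 3.7, 2.5, 3.1, 3.8])
enrolled = np.array([1, 1, 1, 0, 0, 0, 1, 1])

# Re-check the existing cut point, then scan alternatives to see whether
# the threshold still tracks the outcome on newer data.
for cut in (3.0, 3.25, 3.5, 3.75):
    print(f"cut {cut}: precision {evaluate_cut_point(gpa, enrolled, cut):.2f}")
```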


Eric Vanden Berk is a Data Scientist for Minneapolis Public Schools.