How SDP Fellow Brittany Mauney used data to understand a controversial teacher bonus program in Delaware.
This month, Denver public school teachers went on strike. Much of the controversy revolved around a pioneering and reportedly complicated pay-for-performance system developed in partnership with the local union. The original vision was that teacher bonus programs would help retain high-quality teachers, incentivize performance, and contribute to educator professionalism.
Yet the research on teacher incentives is mixed, and states and districts have swung back and forth on them for more than a decade. On the front lines of this debate in Delaware was SDP Data Fellow Brittany Mauney, who headed an initiative to measure the efficacy of the Delaware Talent Cooperative (DTC). Created under a Race to the Top grant, the DTC awarded Highly Effective educators $5,000–$10,000 over two years to remain in a participating high-need school—and whether the program worked was an open question.
Developed on a bedrock of good intentions, the DTC was designed to reward effective teaching behavior, attract and retain high-quality educators in high-need schools, and provide those educators with the opportunity to serve as teacher leaders in their schools. Yet sentiments toward the program were mixed. While the program was generally looked upon favorably by participating principals and teachers, the political climate surrounding the program wasn’t without nuance. To cut through the political rhetoric, Mauney turned to the data. She asked the fundamental question: Is participation in the Delaware Talent Cooperative associated with higher educator retention rates?
After digging into the data, Mauney found that the results were as mixed as sentiments toward the program. Because Delaware is known for its progressive approaches to education reform, Mauney was able to examine a large body of data on teacher employment, school type, performance, and demographics over a two-year period. After controlling for school and individual characteristics, she found that teachers in DTC schools were in fact retained at higher rates, but that those higher rates could not be attributed to the DTC itself. And as Mauney's work coincided with significant budget cuts, the program was discontinued the following budget season.
Mauney’s work with Delaware points to the significant impact of data science in controversial political climates. From the outset, the DTC was perceived in mixed ways at the state level. Disagreements over performance-based pay in general, as well as over how the program would affect teacher recruitment and relationships within schools, shrouded the program in scrutiny and uncertainty. Yet many teachers and leaders supported the program and saw the additional compensation as helpful in bolstering the educational climate. Mauney’s project was able to reconcile these varied sentiments, adding a level of concreteness to a sometimes political and emotional conversation. Where a decision to discontinue the DTC might once have been based on intuition alone, decision makers were able to temper their assessment with evidence.
“While this project did not allow us to pinpoint what causes high-quality teachers to remain in high-need schools, it does create the opportunity to dig deeper,” Mauney reflects. “The result showed that the money was not the reason that teachers stayed at their schools. However, there was something about the schools that chose to participate in the program that was associated with higher teacher retention. It is possible (and in line with other research) that these differences can be attributed to things like an excellent school leader or strong school culture.”
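Mauney’s finding is a classic confounding story: the schools that opted into the program may have differed in ways, such as leadership or culture, that themselves drive retention. A minimal sketch in Python, using invented counts and a hypothetical "school culture" stratifier (neither is from the actual study), shows how a raw retention gap can vanish once you compare like with like:

```python
# Hand-constructed, illustrative counts (NOT real Delaware data).
# Premise: strong-culture schools both joined the program more often
# AND retained teachers at higher rates, regardless of the bonus.
# (culture, in_program) -> (teachers, teachers retained)
counts = {
    ("strong", True):  (80, 72),   # 90% retention
    ("strong", False): (20, 18),   # 90% retention
    ("weak",   True):  (20, 12),   # 60% retention
    ("weak",   False): (80, 48),   # 60% retention
}

def retention_rate(in_program, culture=None):
    """Retention rate for program vs. non-program teachers,
    optionally restricted to one culture stratum."""
    total = retained = 0
    for (c, p), (n, r) in counts.items():
        if p == in_program and (culture is None or c == culture):
            total += n
            retained += r
    return retained / total

# Naive comparison: program teachers look ~18 points more likely to stay.
raw_gap = retention_rate(True) - retention_rate(False)

# Within each stratum, the program makes no difference at all.
strong_gap = retention_rate(True, "strong") - retention_rate(False, "strong")
weak_gap = retention_rate(True, "weak") - retention_rate(False, "weak")

print(f"raw gap: {raw_gap:.2f}, within strong: {strong_gap:.2f}, within weak: {weak_gap:.2f}")
```

In this toy setup the raw 18-point gap disappears within each stratum, mirroring the pattern Mauney describes: something about the participating schools, not the money, was associated with retention. A real analysis would control for many characteristics at once, typically with a regression model rather than simple stratification.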
Navigating contention with data
Given her findings and examples of similar programs in the literature, Mauney couldn’t say with confidence that money alone is sufficient to entice teachers to stay in a school. However, her project reveals two key takeaways for those making decisions in controversial climates.
#1 - Clearly define outcomes and priorities prior to performance pay policy implementation.
When implementing a new program, it is essential to identify the desired outcomes and measures before launch. This allows the program to be evaluated against clear and (hopefully) agreed-upon outcomes from the outset, which builds alignment on what matters most and buy-in when the results come in.
#2 - Manage expectations on how data will be used to inform practice.
Data can be used in a number of ways and can be manipulated to communicate different messages. Misalignment on how to use data can cause tension, confusion, and misrepresentation among users and intended audiences. Setting norms and managing expectations for the use of data among all relevant stakeholders early on can clarify issues that arise from misalignment on data use.
Small adjustments like these at the outset can have significant impact on the trajectory of a program for the entirety of its duration.
Mauney reminds us that schools, districts, and states have a wealth of data and can use it to measure whether new policies actually keep high-performing teachers in the classroom. “Now we can really dig into the interventions being implemented, understand how they impact teachers, and determine what works...” We just need to be willing to ask the hard questions about our investments and listen to what the numbers tell us.
Brittany Mauney is a former SDP Fellow and currently serves as Chief of Staff at the Summit Learning Program.