The Value-Added Teacher Modeling Conundrum
“Value-added” teacher models are falling out of use even as evidence of their usefulness grows, according to a new research brief out today from the American Institute for Economic Research (AIER).
Value-added models measure the difference between how students actually perform on a standardized test and how they were expected to perform. Such models attempt to capture how much value a particular teacher adds to his or her students’ achievement.
Such measures rightly raise questions about teaching to the test, said the brief’s author, AIER research fellow Patrick Coate.
But he points to several studies showing that value-added models are more useful when a teacher’s scores are averaged over three or more years. When a teacher’s students score consistently higher over a period of years, that establishes a pattern of better learning, Coate said.
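As a rough illustration of the averaging Coate describes, the sketch below computes a single-year value-added score as the mean gap between students’ actual and expected test scores, then averages those yearly scores over three or more years. This is a hypothetical simplification, not the method used in the studies he cites: real value-added models estimate “expected” performance statistically from prior achievement and other factors, whereas here the expected scores are simply taken as given.

```python
# Illustrative, simplified value-added calculation. Real models estimate
# expected scores via regression on prior achievement and demographics;
# here they are supplied directly for the sake of the example.

from statistics import mean

def value_added(actual_scores, expected_scores):
    """One year's score: average gap between actual and expected student scores."""
    gaps = [a - e for a, e in zip(actual_scores, expected_scores)]
    return mean(gaps)

def multi_year_value_added(yearly_scores, min_years=3):
    """Average a teacher's single-year scores, requiring three or more years,
    per the multi-year averaging the brief recommends."""
    if len(yearly_scores) < min_years:
        raise ValueError(f"need at least {min_years} years of scores")
    return mean(yearly_scores)

# Example: one teacher, three years of (actual, expected) class scores.
years = [
    value_added([78, 85, 90], [75, 80, 88]),  # year 1
    value_added([82, 79, 95], [80, 78, 90]),  # year 2
    value_added([88, 91, 84], [85, 89, 80]),  # year 3
]
print(round(multi_year_value_added(years), 2))  # → 3.0
```

A single noisy year can swing the one-year score sharply; averaging dampens that noise, which is why the cited studies find multi-year scores more informative.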
These results can be cross-checked against other forms of teacher evaluation, such as student surveys and video classroom observations, to guard against teaching to the test, Coate said.
Studies have also shown that teachers with higher value-added scores improve their students’ economic outcomes, Coate said. A pair of 2014 studies showed that students of high value-added teachers were more likely to attend college and earned higher wages as adults.
And yet, these models are coming under increasing public scrutiny, as well as lawsuits, as individual states wrestle with how to use value-added models in evaluating teachers. Ohio, for example, decided last August not to use value-added teacher data from the last two years.
“A balance must be found where the information in value-added models can be used without distorting teachers’ incentives enough that they prioritize test scores to students’ detriment,” Coate wrote.
His full brief is available free of charge here.