Varied student backgrounds get too little weight

Originally published at LoHud, October 15, 2013. The study can be read at Review and Analysis of New York State Growth Model.

New York’s first attempt to grade teachers on their students’ progress was flawed in several key ways, a new study commissioned by the region’s superintendents says.

The state’s formula gave less credit to teachers serving disadvantaged students, judged some teachers on the performance of too few students, failed to measure key variables such as student mobility and did not clearly signal how schools can assist teachers or students, the study found.

“Our fears were realized,” said Harrison Superintendent Louis Wool, who was president of the Lower Hudson Council of School Superintendents when the study was started in the spring. “The first round of assessments did not accurately measure the value of teachers whose students are in poverty, in special education or speak limited English. We are concerned that we have spent countless hours and millions and millions of dollars to produce results that are not comparable across the state and do not inform teacher practice or student learning.”

Tom Dunn, spokesman for the state Education Department, said the overall study is generally complimentary of New York’s approach, calling it a “positive step forward in educator evaluation.” Several issues raised in the study have been or will be addressed, he said.

Thirty-five school districts from the Lower Hudson Valley and Long Island contributed anonymous data on 1,700 teachers and 46,800 students for the study.

[Photograph of Brenda Myers]

The superintendents group chose the Value-Added Research Center at the University of Wisconsin-Madison to study New York’s first round of teacher scores, released in 2012 for a limited number of school districts. The center researches how to measure the impact of teachers on student growth, a calculation known as the “value added,” and also markets its own models.

The study is a solid analysis that repeats concerns raised last year by the state’s own consultants, said Bruce Baker, a Rutgers University professor and expert on school finance.

“I give these superintendents credit for following through on this, raising significant concerns about potential biases in these measures,” he said. “For the state to continue to enforce these measures in the face of contradictory evidence is over-the-top ridiculous.”

Under New York’s evaluation system, 20 percent of teachers’ overall ratings is based on their impact on student progress.

Sixty percent is based on classroom observations and the remaining 20 percent on locally chosen assessments.
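The weighting described above amounts to a simple weighted average. Here is a minimal sketch of that arithmetic; the component names and the 0–100 point scale are illustrative assumptions, not the state's actual scoring bands:

```python
# Illustrative sketch of New York's 20/60/20 evaluation weighting.
# The component names and 0-100 scale are assumptions for illustration;
# the state's actual scoring bands and point conversions differ.

WEIGHTS = {
    "student_growth": 0.20,      # state growth score
    "observations": 0.60,        # classroom observations
    "local_assessments": 0.20,   # locally chosen assessments
}

def composite_rating(scores: dict) -> float:
    """Weighted average of the three evaluation components (each 0-100)."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

example = {"student_growth": 70, "observations": 85, "local_assessments": 75}
print(composite_rating(example))  # 0.2*70 + 0.6*85 + 0.2*75 = 80.0
```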

Valhalla Superintendent Brenda Myers said the study confirmed that New York’s formula was flawed and did not provide information that would help districts improve teacher or student performance.

“After all the time and energy we put into this, we get a score on a teacher that doesn’t tell us anything,” she said. “Are we improving student learning? This is the question that keeps us up nights.”

But Dunn said the system was designed to assist districts in helping teachers improve.

“Districts and educators are responsible for using information from the multiple sources of evidence in the evaluation system to provide feedback and development support to educators,” he said.

New York agreed to implement a new evaluation system and the Common Core learning standards when accepting $700 million in federal Race to the Top funds in 2010.

The state seeks to measure every student’s growth, in part by comparing students who are similar in terms of academic history and factors like poverty and English proficiency.

The state then looks at the growth of all of a teacher’s students.
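The comparison of "similar students" described above can be sketched in simplified form. This is an illustration under strong assumptions (students grouped only on prior-year score, growth measured as a percentile rank within the group); the state's actual model conditions on more factors and uses a different statistical method:

```python
# Simplified sketch of a "similar students" growth comparison.
# Assumptions: students are grouped only by prior-year score, and growth
# is a percentile rank within that group. New York's actual model also
# conditions on factors such as poverty and English proficiency.

from collections import defaultdict

def growth_percentiles(students):
    """students: list of (student_id, prior_score, current_score) tuples.
    Returns {student_id: percent of same-prior-score peers scoring below
    that student this year}."""
    groups = defaultdict(list)
    for _, prior, current in students:
        groups[prior].append(current)
    result = {}
    for sid, prior, current in students:
        peers = groups[prior]
        below = sum(1 for s in peers if s < current)
        result[sid] = 100.0 * below / len(peers)
    return result

def teacher_growth(roster, percentiles):
    """Average growth percentile across the students on one teacher's roster."""
    return sum(percentiles[sid] for sid in roster) / len(roster)
```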

But the study found that New York did not adequately weigh factors like poverty when measuring students’ progress.

“We find it more common for teachers of higher-achieving students to be classified as ‘Effective’ than other teachers,” the study said. “Similarly, teachers with a greater number of students in poverty tend to be classified as ‘Ineffective’ or ‘Developing’ more frequently than other teachers.”

Andrew Rice, a researcher who worked on the study, said New York was dealing with common challenges that arise when trying to measure teacher impact amid political pressures.

“We have seen other states do lower-quality work,” he said.

The state last month released its second round of teachers’ growth scores to school districts to be used in fashioning evaluations for the 2012-13 school year.

The superintendents council plans to study the second round of teacher grades as well.

South Orangetown Superintendent Ken Mitchell, president of the council, said state officials and legislators produced an unrealistic evaluation system that is too closely tied to testing and will produce a “narrow curriculum.”

“We need a mid-course correction before more damage is done,” he said.

Twitter: @garysternNY