from Fairness and Accuracy in Reporting (EXTRA!, April 2011, "Not Much
Value in 'Value-Added' Evaluations," by Daniel Denvir):
> Sam Dillon, who regularly reports on education policy for the [New York] 
> Times, wrote (9/01/10) that the “federal Department of Education’s own 
> research arm warned in a study that value-added estimates ‘are subject to 
> considerable degree of random error,’” and a National Academies expert panel 
> wrote a letter to Education Secretary Arne Duncan expressing “significant 
> concern” that Race to the Top put “too much emphasis on measures of growth in 
> student achievement that have not yet been adequately studied for the purposes 
> of evaluating teachers and principals.”

> Dillon quoted Stanford professor Edward Haertel, a co-author of an August 
> 2010 Economic Policy Institute study criticizing value-added measures, saying 
> the system was “unstable.” University of Wisconsin–Madison professor Douglas 
> Harris described how taking different student characteristics into account 
> can produce different outcomes. Dillon detailed more problems: students 
> changing classes mid-year and thus being associated with the wrong teacher; 
> the impossible-to-discern influence of a given teacher or tutor, since they 
> teach overlapping skills; a “ceiling effect” limiting the measure’s 
> sensitivity to gains amongst the highest-performing students.

> Sharon Otterman (12/27/10), who covers New York schools for the Times, 
> reported that “the rankings are based on an algorithm that few other than 
> statisticians can understand, and on tests that the state has said were too 
> narrow and predictable.” She also pointed out that “a promising correlation 
> for groups of teachers on the average may be of little help to the individual 
> teacher, who faces, at least for the near future, a notable chance of being 
> misjudged by the ranking system.”

> Otterman quoted a Brooklyn elementary school principal, “Some of my best 
> teachers have the absolute worst scores.” She cited a July 2010 U.S. 
> Department of Education study that found a teacher would probably be misrated 
> 35 percent of the time with one year of data taken into account, and 25 
> percent of the time with three years. With 10 years of data, the error rate 
> still stood at a stubborn 12 percent.

> Making things all the more complicated, Otterman pointed out, is the fact 
> that standardized tests are adjusted from time to time, making it difficult 
> to compare one year’s test scores with the next—and New York’s were just 
> toughened.
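
For a feel for where misrating figures like the 35/25/12 percent numbers above
come from, here is a minimal simulation sketch in Python. It assumes a
deliberately simple setup that is not the Education Department's actual model:
each teacher has a fixed true effect, each yearly value-added estimate is that
effect plus independent noise, the noise-to-signal ratio is an arbitrary
assumption, and "misrated" means a teacher flagged in the estimated bottom
quartile whose true effect is not actually in the bottom quartile. The
percentages it prints will not match the cited study, which defined
misclassification differently; the point is only the qualitative pattern that
averaging more years of noisy estimates shrinks, but does not eliminate, the
error rate.

# Illustrative sketch only -- not the Education Department's model.
# Assumptions: fixed true teacher effect, independent yearly noise,
# and an arbitrary noise-to-signal ratio chosen for illustration.
import numpy as np

rng = np.random.default_rng(0)

n_teachers = 100_000
true_sd = 1.0     # spread of true teacher effects (assumed)
noise_sd = 1.5    # year-to-year estimation noise (assumed)

# True effects and the "true" bottom quartile.
true_effect = rng.normal(0.0, true_sd, n_teachers)
truly_low = true_effect <= np.quantile(true_effect, 0.25)

for years in (1, 3, 10):
    # Average `years` worth of noisy yearly estimates per teacher.
    noise = rng.normal(0.0, noise_sd, (years, n_teachers)).mean(axis=0)
    estimate = true_effect + noise

    # Flag the estimated bottom quartile.
    flagged_low = estimate <= np.quantile(estimate, 0.25)

    # Share of flagged teachers whose true effect is not bottom-quartile.
    misrated = np.mean(~truly_low[flagged_low])
    print(f"{years:2d} year(s) of data: ~{misrated:.0%} of flagged teachers misrated")

The error rate falls slowly because averaging only cuts the noise standard
deviation by the square root of the number of years: going from one year to
ten reduces it by a factor of about three, not ten.
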
-- 
Jim Devine / "Segui il tuo corso, e lascia dir le genti." (Go your own
way and let people talk.) -- Karl, paraphrasing Dante.
