The Value-Added Approach

The value-added approach has many pitfalls: it fails to control for how students are assigned to particular classes, it is subject to sampling error, and it overlooks the limits of standardized tests as a measure of student progress.

Here are just a few voices from opponents of the value-added approach:

Stephen Krashen

Professor emeritus, USC's Rossier School of Education

Value-added evaluations of teachers assume that higher test scores are always the result of teaching. Not so. Test scores are influenced by other factors.

We can generate higher scores by teaching "test preparation" strategies for getting higher scores without students learning anything. We can generate higher scores by testing selectively, making sure that low scorers are not in school the day of the test. And of course we can generate higher scores by direct cheating, sharing information about specific test questions with students.

Teachers who prepare students for higher scores on tests of specific procedures and facts are not teaching; they are simply drilling students with information that is often soon forgotten. Moreover, research shows that value-added evaluations are not stable year to year for individual teachers, and that different reading tests will give you different value-added scores for the same teacher.

Marco Petruzzi

Chief executive, Green Dot Public Schools

Value-added measures work very well for elementary schools but are more complicated to use in high school, where students sometimes take only a single course in a subject (chemistry, for example). You can and should still use the methodology, but it should be complemented with other measures of teacher effectiveness, such as feedback from students and parents, evaluation of student portfolios, classroom observations, and attitude, to get a fuller picture.

John Rogers

Associate professor, UCLA Graduate School of Education and Information Studies and director of the Institute for Democracy, Education and Access

Consider the well-documented estimates that 25% of value-added assessments are likely to be in error. Even the best teachers will want to avoid grades three, four and five.

Value-added methods are a limited and underdeveloped tool. By focusing narrowly on standardized tests, these analyses ignore much learning that matters to students, parents and teachers, and they cannot stand alone as a measure of "effectiveness." The National Academy of Sciences has identified several of the problems posed by value-added methods.

First, the National Academy of Sciences notes that student assignments to schools and classrooms are rarely random. It is not possible to determine definitively whether higher or lower student test scores result from teacher effectiveness or are an artifact of how students are distributed.

Second, you can't compare the growth of struggling students with the growth of high performers. In technical terms, standardized tests do not form equal-interval scales: helping students move from the 20th percentile to the 30th is not the same as helping students move from the 80th to the 90th.

Third, estimates of teacher effectiveness can range widely from year to year. In recent studies, 10% to 15% of teachers in the lowest category of effectiveness one year moved to the highest category the following year, while 10% to 15% of teachers in the highest category fell to the lowest tier.

The National Academy of Sciences concluded that value-added methods "should not be used as the sole or primary basis for making operational decisions because the extent to which the measures reflect the contribution of teachers themselves, rather than other factors, is not understood."