Charges that adults in the Atlanta school system systematically altered student answers on Georgia’s annual high-stakes tests have made front pages nationwide. It’s not a new story: Federal and state high-pressure accountability based on narrow quantitative measures, with bonuses for some and pink slips for others, has had the unintended effect of allegedly leading some educators across the country to fix test results.
Atlanta is the most famous example because of the intensity of the investigation and the number of indictments, reaching from teachers and principals to former Superintendent Beverly Hall. Those indicted in Atlanta deserve their day in court. If they are guilty, they deserve to be appropriately penalized.
But something else was going on in Atlanta during Hall’s tenure, and it appears at odds with the picture of Atlanta painted by investigators and the media. Atlanta’s scores on the National Assessment of Educational Progress, the Nation’s Report Card, also rose very substantially between 2003 and 2011, when Hall was school chief.
Others have pointed to Atlanta’s gains on the NAEP but have not noted how extraordinary those gains were, nor asked how Atlanta gained so much while other districts did not.
The NAEP assessments, which are overseen by the federal government and are carefully designed and administered by independent contractors, provide a valuable way to compare achievement across time and among states and districts.
Shortly after the original news of the Atlanta scandal broke, the federal government conducted its own investigation into whether the NAEP scores from Atlanta were valid. Investigators found no evidence of cheating. Because of the careful procedures used by the independent contractors who oversee the administration and transportation of the assessments, it would be practically impossible for teachers and principals to alter students’ work on these tests.
Students in Atlanta and nine other urban districts — New York, Los Angeles, Boston, Chicago, Cleveland, Houston and Washington, D.C., among them — took the NAEP mathematics and reading assessments at grades 4 and 8 in 2003 and then every two years until 2011, years in which all 50 states also took the NAEP.
Atlanta students improved more on the fourth- and eighth-grade reading assessments than students in every one of the other nine districts from 2003 to 2011. They also gained more than every one of the 50 states. In mathematics, Atlanta’s fourth graders improved more than students in seven of the other nine districts, and their gains matched or exceeded those of 48 states. Atlanta’s eighth-grade gains easily topped those of all nine districts and all 50 states.
How big were the gains? In fourth-grade reading, Atlanta youngsters gained 15 points, considerably more than a grade level; in eighth grade, they gained 13 points. In fourth-grade math, Atlanta students gained 12 points, a little over a grade level, and in eighth grade, 22 points, almost two grade levels.
Atlanta’s gains exceeded Georgia’s in every comparison. In eighth grade, Atlanta’s gains exceeded Georgia’s by almost a grade level in reading (9 points) and by well over a grade level in math (14 points). Eighth-grade gains are an important measure of the cumulative effect of eight years of schooling and of students’ readiness for high school. Atlanta still has a long way to go. But these gains from 2003 to 2011 are substantial.
What was going on in Atlanta during these years? Clearly, the students in 2011 knew more than their predecessors. Moreover, they must have been motivated to do as well as they did on the NAEP, a test that is not consequential for them.
If federal and state high-stakes testing and accountability were the reason for the NAEP gains, why didn’t all of the districts and states succeed as well as Atlanta? Or maybe the reason is that Atlanta has a high poverty rate and so had more room to gain? But Atlanta’s poverty rate is about the same as that of many districts that gained less on the NAEP.
Continuous improvement is the pattern of Atlanta’s NAEP gains from 2003 to 2011. The American Association of School Administrators’ national award to Hall in 2009 was based on criteria including quality of leadership, communication, professionalism and community involvement. The association did not base its decision solely on test scores.
Contrast that with the view of one of the investigators quoted in the press: “Under Dr. Hall’s leadership, there was a single-minded purpose, and that is to cheat.”
Yet children in the Atlanta school district achieved the largest gains in the country on an independent assessment during that same period. What’s happening here? The investigators were looking only for evidence of cheating. Perhaps a broader investigation would yield a better understanding of what was going on.
Marshall S. Smith is a former undersecretary in the U.S. Department of Education and a former dean of the Graduate School of Education at Stanford University.