An hour after I finished reading my AJC colleague Heather Vogell’s yearlong investigation into testing errors, my 14-year-old son showed me an answer sheet from his machine-graded, multiple-choice social studies test. He pointed out that he had gotten three questions wrong, which, according to the point scale, was an A. But the machine assigned the test a B.
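To show the arithmetic at issue, here is a minimal sketch of how a point scale might convert wrong answers into a letter grade. The question count and the grade cutoffs are assumptions for illustration only; they are not taken from my son's actual test.

```python
# Hypothetical sketch: mapping a raw multiple-choice score to a letter grade.
# The 40-question test length and the 90/80/70/60 cutoffs are assumed for illustration.

def letter_grade(num_questions: int, num_wrong: int) -> str:
    """Convert a raw score to a letter grade using assumed percentage cutoffs."""
    score = 100 * (num_questions - num_wrong) / num_questions
    if score >= 90:
        return "A"
    elif score >= 80:
        return "B"
    elif score >= 70:
        return "C"
    elif score >= 60:
        return "D"
    return "F"

# Under these assumptions, three wrong answers is 92.5 percent -- an A, not a B.
print(letter_grade(40, 3))  # -> "A"
```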
My son was quite surprised that the machine had mismarked his test, but Vogell's reporting revealed that such mistakes occur more often than most of us probably realize.
The reliability of tests is becoming a critical question as students, schools and teachers are held to ever-higher levels of accountability. And that accountability is measured through test scores.
I am curious whether any of you have encountered mistakes on answer sheets. Vogell found that answer-sheet scanners do malfunction. She also found bad questions in which none of the answer choices was correct.
Chris Domaleski had a problem and its name was Andrew Lloyd Webber.
Question 42 on Georgia’s sixth-grade social studies test had asked whether Webber was a playwright, painter, sculptor or athlete. The famous composer of Broadway musicals, however, was none of those things. But what should Domaleski, the state’s testing director, do?
Testing was over. Scrapping the question would delay test results at least 10 days, inviting complaints about one of the state’s most politically sensitive undertakings. Rushing a re-scoring would also heighten the chance of error. Yet counting the question would mean penalizing tens of thousands of students for someone else’s mistake.
Domaleski’s predicament illustrates the cascade of problems flawed questions cause when they slip past layers of review and appear on standardized exams.
In a yearlong national investigation, the newspaper examined thousands of pages of test-related documents from government agencies – including statistical analyses of questions, correspondence with contractors, internal reports and audits.
The examination scrutinized more than 100 testing failures and reviewed statistics on each of nearly 93,000 test questions given to students nationwide.
The reporting revealed vulnerabilities at every step of the testing process. It exposed significant cracks in a cornerstone of one of the most sweeping pieces of federal legislation to target American schools: the No Child Left Behind Act of 2001.
Test-based accountability began its march across the nation’s classrooms more than a decade ago. Yet no one in that time, the newspaper found, has held the tests themselves accountable.
While lawmakers pumped up the repercussions of lagging scores, schools opened exam booklets to find whole pages missing. Answer-sheet scanners malfunctioned. Kids puzzled over nonsensical questions. Results were miscalculated, again and again.
Most tests are fine, test company executives and state officials point out. They also say the field is working to improve itself. Yet mishaps continue to disrupt tests and distort scores, the newspaper found, and damage-control judgment calls like Domaleski’s lurk at every turn. The vast majority of states have experienced testing problems – some repeatedly.
“If someone hasn’t had an error,” said Robert Lee, chief analyst of Massachusetts’ testing program, “either they’re extremely lucky or they haven’t reported it.”
Even as testing companies received public floggings for errors, lawmakers and education officials failed to address why the tests were derailing or how government contributed to breakdowns.
Some industry executives acknowledge their immense challenges, which include an unprecedented volume of test-takers and demanding federal and state timelines for reporting scores.