Opinion: A flawed school-rating system is worse than none at all

The Georgia school report card is based largely on state standardized test scores. (AP Photo/Alex Brandon)
By Kyle Wingfield
April 28, 2017

We are more than a decade into the age of measurable accountability in education. You might have thought we’d have the hang of it by now.

That would mean having metrics we stick with, because they make sense. Instead, Georgia has been prone to piling new metrics on top of earlier ones that have hardly been perfected.

Take our state’s “Financial Efficiency Star Rating.” The Legislature ordered up this new metric in 2012 and, five years later, education officials are preparing to release the first school-level results (expected in early 2018). We already have two years of district-level results.

The premise sounds good enough. Perennially, one of the biggest education fights in Georgia concerns whether we're spending enough on our public schools. A number of fiscal conservatives, myself included, question the wisdom of simply pouring money into the system. And so, voila! — a rating system to tell us how schools and districts are doing.

What we got, however, was a blunt instrument: a rating system of 0.5 to 5 “stars” comparing schools’ and districts’ spending across the state to their performance on the College and Career Readiness Performance Index, or CCRPI.

There’s something to be said for simplicity, as the public is served well by a scale that’s easily understood. One district got four out of five stars, and another only two? The average Georgian can grasp that.

But the average Georgian might also expect all the districts receiving four stars to be getting a similar, or proportionate, bang for their buck. In reality, metro Atlanta’s district ratings tell a different story.

How does the state arrive at such seemingly illogical conclusions? The biggest reason is the matrix used to determine the star ratings. If a district is in the bottom 20 percent of the state in terms of spending, the worst rating it can get is 2.5 stars — no matter how badly its students perform in the classroom. If a district is in the top 20 percent, the best rating it can get is 3 stars — no matter how well its students perform.

The matrix sets up a preposterous possibility: District A spends little money and none of its students pass the state’s standardized tests, District B spends a lot of money and 89 percent of its students pass, and both districts receive the same rating of 2.5 stars.
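
To make the arithmetic concrete, here is a minimal sketch, in Python, of a quintile-style lookup matrix like the one described above. Only two facts come from the state’s rules as reported: the 2.5-star floor for the lowest-spending quintile and the 3-star cap for the highest. Every other cell value, along with the function names, is an invented placeholder, not the Georgia Department of Education’s actual matrix.

    # A minimal sketch of a quintile-matrix star rating. Only two facts
    # here come from the column: the bottom spending quintile is floored
    # at 2.5 stars, and the top spending quintile is capped at 3. Every
    # interior cell value is an invented placeholder, not the Georgia
    # Department of Education's actual matrix.

    # Rows: spending quintile (0 = lowest spending, 4 = highest).
    # Columns: CCRPI performance quintile (0 = lowest, 4 = highest).
    STAR_MATRIX = [
        [2.5, 3.0, 3.5, 4.0, 5.0],  # bottom 20% of spenders: 2.5-star floor
        [2.0, 2.5, 3.0, 3.5, 4.5],
        [1.5, 2.0, 2.5, 3.0, 4.0],
        [1.0, 1.5, 2.0, 2.5, 3.5],
        [0.5, 1.0, 1.5, 2.5, 3.0],  # top 20% of spenders: 3-star cap
    ]

    def quintile(percentile: float) -> int:
        """Map a 0-1 percentile to a quintile index, 0 through 4."""
        return min(int(percentile * 5), 4)

    def star_rating(spending_pct: float, performance_pct: float) -> float:
        """Look up the star rating for a district's spending and CCRPI standing."""
        return STAR_MATRIX[quintile(spending_pct)][quintile(performance_pct)]

    # The column's scenario: District A, a bottom-quintile spender whose
    # students all fail, ties District B, a top-quintile spender with
    # strong results.
    print(star_rating(0.05, 0.00))  # District A -> 2.5 (the floor)
    print(star_rating(0.95, 0.70))  # District B -> 2.5 (with these placeholders)

Even with placeholder numbers, the two rules do the work: the floor guarantees District A at least 2.5 stars no matter its results, while the cap keeps even a high-performing, top-quintile spender pinned near the same mark.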

Does that really tell us anything about their respective “financial efficiency”?

I can appreciate the rhetorical value, at least, of a metric that counters the familiar refrain that more money can cure what ails public education. But if that metric doesn’t tell us anything real or credible about how schools and districts compare, it will eventually undercut the very idea of trying to gauge how effectively they spend their money.

If this is what legislators intended when they passed that 2012 bill, they ought to explain what they were thinking. If it's not, they ought to schedule some time — before those school-level ratings are released — to ask the state officials who built this metric what they were thinking.
