University of Georgia professor Peter Smagorinsky always gives us something to think about in his guest columns for AJC Get Schooled. No exception today with this fascinating discussion of how we define and measure literacy.

He begins with a question: How can a country have a 95 percent literacy rate in one ranking, yet land at the very bottom of another? (This also speaks to the problem with international educational rankings and comparisons: Who is being measured and how?)

This is great discussion fodder.

By Peter Smagorinsky

It is very common to hear people refer to “literacy” as a desirable human capability. “Literacy rates” are often reported to measure how advanced a society is, and literacy is often treated as something people either have or do not have. But what exactly does “literacy” refer to? How do we know literacy when we see it? On the surface, the level at which policy seems to work, literacy is a simple and unambiguous concept. Yet in reality, literacy is complex and contested.

I'll use the Mexican context, where I contribute to a literacy program in Guadalajara, to illustrate. On the one hand, according to Statista, Mexico has a 95 percent literacy rate. Very impressive! However, UNESCO reports that Mexico ranks 107th of 108 countries in reading proficiency. How can literacy in one nation be both near-universal and ranked among the world's lowest?

The problem follows from differences in what people mean by the term “literacy.” What does it mean to be literate? By what means is literacy measured and determined? Is a person either literate or not? Or can a person be sort of literate, or half literate, or mostly literate, or fully literate? Can a single form of measurement be uniformly applied to all of the world’s people for comparative purposes in assessing literacy rates?

Do national, ethnic, and regional contexts mean that a centrally developed means of measurement tends to position some people better than others in the evaluation, and consider everyone else to be in deficit? Is literacy something that can only be demonstrated through engagement with an alphabetic script? If so, how are character-driven scripts, such as those in Asian calligraphy symbol systems, interpreted in measures of literacy? Is it possible to develop a valid and reliable way of determining comparative literacy rates when different nationalities use different symbol systems to construct their texts?

These are all perplexing questions, complicated by the ways in which contexts shape how literacy is interpreted from place to place, situation to situation, culture to culture. A context-sensitive perspective suggests there is no way in which one test can validly compare literacy rates when nations employ different symbolic forms for representing thinking in texts, or even speak in languages that lack similar structures and meaning systems.

How, then, can international rankings be determined? What is this thing we call literacy, a construct that is employed to produce rankings of the degree to which a nation may be considered advanced and prominent on the world stage?

I will confine my attention today to the ways in which literacy rates tend to be measured. On U.S. standardized tests, reading achievement measures assume that everyone agrees on what it means to “comprehend” a written text. Comprehension is typically measured by students’ ability to answer multiple-choice questions devised by researchers or assessment specialists in response to a given passage.

This narrow means of determining comprehension is, however, problematic for many reasons. Primarily, it assumes the questions included on reading assessments are uniquely capable of producing information about what students do and do not understand about what they have read.

However, readers may find meaning in texts quite different from what a test designer, teacher, or researcher might consider important, a fact that has recurred in my own research and that of many others. This meaning often comes through a student's empathy with literary characters' emotions and experiences, something not available through multiple-choice questions posed by someone else, because such responses tend to be informed by personal experiences and are not reducible to correct answers.

Literature is written to be ambiguous and open to interpretation. What of “informational” texts of the sort prized in the Common Core State Standards? If such texts had inherent, testable meaning, then Supreme Court justices would always agree on how to interpret the very explicit text of the Constitution. Yet their ideologies inform their reading to produce different understandings of the law, as becomes evident whenever conservative and liberal administrations make their judicial appointments and senators vote on confirmations.

This standardized means of measurement further assumes every reader reads the test items in the same way. This assumption is inattentive to human variation. Yet in a world driven by the need for standardization, the tests are constructed, and deemed valid and reliable, as if the humans taking them were standardized too.

Virtually any investigation into cultural differences and individual human variation, however, demonstrates the futility of accepting that assumption. Standardized test items do not map onto human diversity, instead giving an advantage to those whose cultural experiences correspond best to those of the test designers. Most other people are doomed to deficit interpretations of their ability to engage fruitfully with the written word.

Finally, each of these conceptions relies on an autonomous view of literacy, i.e., one that takes literacy out of its social and cultural context and views it as a discrete skill. Similarly, these tests assume the texts used on reading tests are autonomous in that all meaning resides in the text itself, rather than in how readers not only decode words but encode them with meaning.

This assumption is central to U.S. policies governing how students and teachers are assessed and rests on an easily disconfirmed belief that texts themselves have meaning independent of readers' constructive activity.

President Bill Clinton helped to accelerate the current emphasis on testing when he declared, "We must do more . . . to make sure every child can read well by the end of the third grade." This belief framed the Reading Excellence Act originating in his administration and taken up with increasing frenzy in subsequent presidencies' Departments of Education.

This act promises to "provide professional development for teachers based on the best research and practice" and a testing apparatus to produce "accountability." Yet reading specialists have profound differences on what constitutes the "best research and practice," as evidenced by the highly contentious and divisive Reading Wars over both federal funding and the stature and wealth that follow from a federal endorsement.

Given that researchers cannot agree on which evidence suggests a person can read, which research most usefully identifies this ability, and which instruction is most likely to produce it, “literacy” commands no consensus even among people considered to be experts. No wonder Mexico is both highly literate and widely illiterate, depending on the source consulted.

We are incapable as a profession or nation of agreeing on what it means to be literate. Reducing the concept of literacy to answering multiple-choice questions is, in my view, a big part of the problem, given how such tests are fraught with misconceptions and how they advantage people whose experiences resemble those of the test developers.

I think the whole movement toward standardization is badly misguided, given its reductive tendencies and its glorification of statistics, no matter how misrepresentative they are of complex phenomena. As long as citizens allow this farce to continue, students and teachers will continue to be mismeasured and punished, because oversimplification is so much easier to achieve than real efforts to help students develop fluency with a multifaceted act like reading.