Sunday, November 3, 2013

Lexiles and the Fetishization of Quantification

A year and a half or so ago, I wrote about an inane report about how the level of discourse of our Congresscritters has degenerated in recent years. And, of course, there was “objective” analysis, according to which the decline had nothing to do with renewed fervor for partisan hackery, increasing disregard for the accuracy of one’s statements, or the apparently undeniable siren song of equating one’s political opponents with Hitler. No, the scourge blighting American political discourse was an insidious combination of monosyllabic words and simple sentences.

You see, there’s something called the Flesch-Kincaid score, which bases an objective (albeit worthless) ranking of the sophistication of one’s rhetoric on exactly two factors: syllables per word and words per sentence. The F-K score is alleged to correspond to the grade level of the linguistic sophistication of the piece under consideration. So what are we to make of the fact that Martin Luther King’s “I Have a Dream” speech, one of the greatest displays of rhetorical brilliance ever created, checks in at freshman-in-high-school level? Or that the most famous line from that speech, “I have a dream today,” gets a score of 0.52? You and I, Gentle Reader, see a problem with the methodology.
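(For the curious: that 0.52 is easy to reproduce. The published Flesch-Kincaid Grade Level formula is 0.39 × (words per sentence) + 11.8 × (syllables per word) − 15.59. Here’s a minimal Python sketch; the crude syllable counter is my own stand-in, since real implementations lean on dictionaries or fancier heuristics.)

```python
import re

def count_syllables(word: str) -> int:
    """Crude estimate: count vowel groups, discount a silent final 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and not word.endswith(("le", "ee")) and count > 1:
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text: str) -> float:
    """F-K Grade Level = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59"""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

print(round(flesch_kincaid_grade("I have a dream today."), 2))  # 0.52
```

Five words, one sentence, six syllables: 0.39(5) + 11.8(6/5) − 15.59 = 0.52. A measure that puts Dr. King’s most famous sentence below first-grade level has, shall we say, a validity problem.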

Not so, however, the hack journalists on both sides of the political divide, who pounced on the alleged “evidence” to show that The Other Guy really is as ignorant as My Side says. Exhibit A, Fox News: “Obama’s SOTU Written at 8th Grade Level for Third Straight Year.” Exhibit B, The Inquisitr: “Congress Officially Dumber, Study Finds, Republicans in Particular Showing Marked Decline.”

In writing about this pseudo-evidence, Curmie exclaimed, he hoped rhetorically, “Seriously, is there a study that means less than this one?” Alas, someone apparently took that as a challenge. Worse, the answer matters this time, as this particular round of pseudo-scientific idiocy actually has consequences.

The concept of “Lexiles” has apparently been around for a generation, and I suppose I knew that, but the first time the term really registered with me was in a rather foam-flecked article on the New Republic site. Yes, the faux terror of “federal bureaucrats” is hyperbolic at best, and author (and University of Iowa professor) Blaine Greteman casually ignores the fact that Lexiles are only one means of determining “readability” levels. For all that, this is scary stuff, indeed.

Let’s face it. If it is even possible to begin a story the way Greteman does, the supposedly objective criteria being employed are worse than worthless:
Here’s a pop quiz: according to the measurements used in the new Common Core Standards, which of these books would be complex enough for a ninth grader? 
a. Huckleberry Finn
b. To Kill a Mockingbird
c. Jane Eyre
d. Sports Illustrated for Kids Awesome Athletes!
The only correct answer is “d,” since all the others have a “Lexile” score so low that they are deemed most appropriate for fourth, fifth, or sixth graders.

Kid stuff.

Wow. I was one of the top couple of students in my 10th grade “intensive” English class when we read Huckleberry Finn, and to say it was plenty “complex” for me would be to err on the side of understatement rather than hyperbole. “Complexity” has more to it than vocabulary and sentence structure… you know, like subject matter and levels of ambiguity and symbology and… well, other things that actually matter.

Let’s face it: if you’ve got a mechanism that says that Mr. Popper’s Penguins is a tougher read than Hemingway, Steinbeck, or Graham Greene, you’ve got a system so flawed that it deserves to be promptly shit-canned. Naturally, the Common Core geniuses institutionalized it.
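The actual Lexile formula is proprietary, but MetaMetrics has said publicly that it rests on two predictors: sentence length and word frequency. Here’s a toy score in that spirit; to keep it self-contained I’ve substituted word length for word rarity (common words tend to be short), and the weights are invented, so treat it as a sketch of the shape of such measures, not the real thing:

```python
import math
import re

def toy_readability(text: str) -> float:
    """Toy score in the Lexile spirit: longer sentences and longer
    (a stand-in for rarer) words push the score up. Invented weights."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    mean_sentence_len = len(words) / len(sentences)
    mean_word_len = sum(len(w) for w in words) / len(words)
    return 100 * math.log(mean_sentence_len) + 50 * mean_word_len

terse = "The old man fished alone. He had caught nothing. The boy missed him."
windy = "Popper, who lived in a small house on Proudfoot Avenue, dreamed constantly about distant polar expeditions."
print(toy_readability(terse) < toy_readability(windy))  # True
```

Hemingway-flavored prose, all short declarative sentences and plain words, loses to a single meandering sentence from a children’s story every time. That’s the whole trick, and the whole problem.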

There’s a good discussion by Rider University prof Russ Walsh, in which he references the actual CCSS (Common Core State Standards):
Appendix A of the CCSS defines text complexity. It is very important that all teachers understand this definition because it is useful and empowering. Text Complexity is made up of three dimensions: quantitative, qualitative, and reader and task. Quantitative measures include the useful, but limited “reading level” measures like the Fry Readability Graph and Lexile scores. The CCSS has focused on Lexile scores and has recalculated them to reflect the push for greater complexity. Qualitative measures include such issues as considerateness of text (clear structure, coherence, audience appropriateness), knowledge demands, use of figurative language, vocabulary, etc. Reader and task measures concern themselves with the cognitive abilities, motivation and experience of students.
Walsh does describe the Lexiles as “useful,” but that’s because he’s nicer than me, not because we really disagree. Lexiles are also “limited” in Walsh’s world; more importantly, he “[worries] that simplistic readings of this call for greater text complexity will lead to disempowerment of teachers and inappropriate reading assignments for students.” The likelihood of that happening, from my perspective, borders on ontological certitude.

That’s because the fix is already in. Here’s Greteman again:
As the Common Core Standards Initiative officially puts it, “until widely available quantitative tools can better account for factors recognized as making such texts challenging, including multiple levels of meaning and mature themes, preference should likely be given to qualitative measures of text complexity when evaluating narrative fiction intended for students in grade 6 and above.” But even here, the final goal is a more complex algorithm; qualitative measurement fills in as a flawed stopgap….

… I believe that many educational leaders will hear the call for more complex texts and will try to force teachers to use texts that are simply harder. Remember “rigor” is not in the difficulty of the text, but in the vigor of the instruction. Texts that are too difficult for a particular reader are not rigorous; they are just hard. I also believe that publishing companies, under the guise of the publishers’ guidelines provided them by the authors of the CCSS for ELA, will soon be on the market with anthologies purported to “meet the standards” that will not meet the needs of many students.
Heady intellectual fare.

In other words: the CCSS folks aren’t going to budge on using their stupid rubric for grades 5 and below, and the expertise of teachers will become subservient to their divinely inspired claptrap as soon as they can develop quantifiable criteria that are merely inaccurate instead of So Freaking Imbecilic That Nobody Could Take Them Seriously. Well, nobody but the likes of The Atlantic’s Eleanor Barkhorn and whoever wrote that headline: “Teachers Are Supposed to Assign Harder Books, but They Aren't Doing It Yet.”

Implicit here is the double-whammy of shoddy journalism: the assumption that anything the corporatists behind the Common Core require is legitimate, and the corollary belief that the problem isn’t bullshit requirements, but teachers’ intransigence. Of course, teachers, principals, superintendents, and school boards have not only the right but the responsibility to reject nonsense like Lexiles and to substitute their own… wait for it… expertise. (Anyone who has read Curmie’s page for a while knows my take on the two meanings of authority.)

Of course, using completely nonsensical means to measure literature isn’t new. Well over 2400 years ago, Aristophanes concluded his long farcical confrontation between Aeschylus and Euripides in The Frogs by having Dionysus challenge them to recite their “weightiest” verses. Aeschylus wins every round because rivers are “heavier” than cattle, death than persuasion, chariots than clubs, etc. “Weight,” then, no longer means “profundity of thought or expression,” but rather, “the gravitational attraction or solemnity of the subject matter.” It’s a joke, of course, and the overwhelming majority of the audience in late 5th-century BC Athens would have caught on. Somehow, however, I suspect that there was someone named Βιλλγάτες or Αρνεδύνκαν who saw nothing wrong with the test. It was objective, after all.

The attempt to scientize art got a huge jolt in the mid- to late 19th century, with the rise of naturalism, a movement headed by Émile Zola but ultimately inspired by social Darwinism. Zola, Balzac, and their contemporaries generated some stark portrayals of life as lived, especially in urban areas during the Industrial Revolution. They also propagated some really bad theatre (and quasi-theatre): since art was supposed to replicate life, the ultimate expression of theatrical naturalism was to charge admission to watch an actual butcher carve up a beef carcass. Really. No wonder Henrik Ibsen supposedly described the difference between realism and naturalism by tersely observing, “I go to the sewers to clean them; Zola, to bathe.”

The earnest desire to make art more scientific (Curmie resists the urge to use the phrase “reduce art to the level of science”) also manifested in the work of François Delsarte, best known for his voluminous commentaries on elocution, most especially his System of Oratory. Delsarte writes expansively about virtually every conceivable vocal inflection and gesture, seeking a communication system that is efficient and transparent. In one sense, this is a noble goal; in another, it’s transcendently naïve. The fact is that in life and especially in art, ambiguity is not merely ubiquitous but desirable—and this is true whether we subscribe to the modernist model of artists’ embedding meaning to be discovered by the spectator or the post-positivist schema in which the artist’s function is to catalyze meaning which is ultimately constructed by the observer. Delsarte was a significant player in the development of modernism, he influenced a number of important figures in theatre history (Steele MacKaye, for example), and he is still highly regarded in much of the dance community. Theatre people today, however, regard him (rightfully, I’d suggest) as the quintessence of wrong-headed reductionism.

This idea that objectivity often creates precisely the opposite result of that which was intended serves as one of the themes of The Memorandum, one of the best plays by the late, great Václav Havel. Shortly after Havel’s death two years ago I dug up a piece I’d originally written for the Cedar Rapids Gazette back in 1990, and re-printed it on my other blog. I therefore quote myself from 23 years ago:
In The Memorandum (1968), the central character, Gross, heads an unnamed government agency. After authorizing the circumvention of a silly regulation, he allows himself to be half blackmailed, half cajoled, by an amoral underling into approving a new synthetic language called Ptydepe. Said to make communication more “efficient,” Ptydepe mandates that every word must differ from every other word of equal length by at least sixty per cent of its letters. Further, words are assigned to concepts by the relative frequency of their use: the more often a word is used, the shorter it will be. Hence, the shortest word in the language, “ng,” means “whatever”; the word for “wombat” has 319 letters.
I should add that in Ptydepe, vowels and consonants are used in random sequence, so a word with a half dozen consecutive consonants is not uncommon. Moreover, every variation of a conventional word, including whether it’s used in jest, or to surprise the listener, gets its own Ptydepe word—defeating the purpose of language play altogether. Eventually, of course, Ptydepe proves to be an unsuccessful experiment, so it is eliminated… to be replaced not by the perfectly workable normative language that people can understand, but by a new, totally different synthetic language with a completely different but equally inane set of rules.
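Havel’s rule is absurd, but it’s concrete enough to code up. Here’s a minimal sketch of the sixty-percent requirement for same-length words (the function names and the toy lexicon are mine; the play, mercifully, specifies no implementation):

```python
def differs_enough(word: str, other: str, threshold: float = 0.6) -> bool:
    """Ptydepe's rule: two words of equal length must differ in at
    least `threshold` (sixty percent) of their letter positions."""
    assert len(word) == len(other)
    differing = sum(a != b for a, b in zip(word, other))
    return differing >= threshold * len(word)

def admissible(candidate: str, lexicon: list[str]) -> bool:
    """A new word is admissible only if it clears the rule against
    every existing word of the same length."""
    return all(
        differs_enough(candidate, w)
        for w in lexicon
        if len(w) == len(candidate)
    )

lexicon = ["ng"]                  # "ng" means "whatever," per the play
print(admissible("nh", lexicon))  # False: differs in only 1 of 2 letters
print(admissible("yb", lexicon))  # True: differs in both letters
```

The combinatorics collapse quickly: the rule disqualifies most short strings almost immediately, which is presumably how the wombat ends up with 319 letters.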

You see where this is going, don’t you, Gentle Reader? Objectivity for its own sake is inherently idiotic. Eliminating ambiguity provides a false efficiency, but it destroys nuance, and ultimately both language and art suffer irreparable damage. Everybody knows that. Well, everybody but the educationists. They sort of think that an educational system based on the philosophical principles advocated by the Communist apparatchiks in post-Stalinist Eastern Europe is the way to go. How very “American” of them.

1 comment:

Jody said...

For your reading pleasure, a Leader from The Economist, 2004:
"English
Out with the long
“Short words are best”, said Winston Churchill, “and old words when short are the best of all”

And, not for the first time, he was right: short words are best. Plain they may be, but that is their strength. They are clear, sharp and to the point. You can get your tongue round them. You can spell them. Eye, brain and mouth work as one to greet them as friends, not foes. For that is what they are. They do all that you want of them, and they do it well."
Full text searchable on Web (I don't dare try to post a link).