The College Board's New Essay Reverses Decades of Progress Toward Literacy
By DENNIS BARON
Those of us who took the SAT remember what we got on that test, even if we took it long ago. Few of us, on the other hand, recall our high-school GPA, and the permanent record that was supposed to follow us forever has vanished without a trace. But the score that propelled us into the college of our choice, or kept us out, stays with us, surfacing at cocktail parties or when our children or our students ask, "Whadja get?" as they begin to worry about whether the SAT will open the same doors for them.
The SAT hasn't changed dramatically since I took it 45 years ago (my scores are none of your business), but now and then it has been overhauled, at least on the surface. On March 12 some 330,000 American high-school students took the newly revised SAT, expanded to include a writing sample. Exit polls report the following responses to the latest version of America's high-stakes college-entrance exam:
1. The College Board found strong student enthusiasm for the test, with few complaints from parents.
2. In contrast, Stanley Kaplan saw widespread discontent over the SAT's increased length and difficulty, with the essay proving particularly worrisome.
3. FairTest, a longtime test watcher, charged that the new SAT is little more than the old test grown a size larger so that the multimillion-dollar nonprofit College Board, which owns the test, can raise prices, exploit proctors and graders, and increase the salaries and bonuses paid to its executives.
It doesn't take a psychometrician -- that's a vocabulary word you won't find on the SAT; it means "someone who cooks up standardized tests" -- to tell you that comments like those correlate very highly with the interests of the speakers who make them, proving that in testing, as in crime, we can get the right answer by asking cui bono? -- literally "who benefits?" but loosely rendered into the English of late capitalism as "follow the money."
The College Board benefits from changing its test to keep up with changing educational times, while plugging in to the national testing frenzy to protect its market share. In addition, the Kaplans and the Princeton Reviews and the rest of the test-prep industry scare test takers into cram courses on essay writing, raising the companies' bottom line while they help students raise their scores. And critics who argue that the SAT assesses neither knowledge nor potential, that it reinforces social stratification instead of creating avenues for mobility, benefit from catching the brief media flurry surrounding the rollout of a new test, even though those critics must also acknowledge that testing has such a stranglehold on the American consciousness that no one really cares whether their objections are valid.
The comments of those most directly involved in education, teachers and students, are also predictable. Teachers seldom agree about curriculum, and those interviewed about the new SAT seem evenly divided for or against the essay. It either does or does not give students an opportunity to think critically and write the kind of impromptu, timed theme on a surprise topic that the College Board seems to think they will encounter regularly in college and later on in their careers.
And students, whose lot is ever to complain, gripe that the test is too long, the breaks too short, and there isn't enough time to finish. Some test takers liked the essay, others didn't. Many weren't used to writing by hand, and most agreed that the general nature of the topic -- on March 12 students on the East Coast had to discuss majority rule, while students in the West dealt with the role of creativity in the modern world -- together with the time constraint of 25 minutes for planning and execution all but guaranteed a response that was both formulaic and unreflective.
Because I write and teach about writing, I have my own concerns about the SAT essay test. It makes up only 30 percent of the new and "improved" writing section -- that section is worth 800 of the new SAT's total score of 2400 points -- and is nothing more than the old, optional SAT II writing test repackaged as a mandatory part of the SAT I. Furthermore, more than two-thirds of a student's "writing" score comes not from writing prose but from identifying sentence and paragraph errors by way of multiple-choice questions. That method is no different from the SAT's earlier attempts to gauge writing knowledge indirectly.
The students are right that responses to general essay prompts in the new test are almost certain to be formulaic, and that those essays that don't fit the five-paragraph mold are likely to be rated down by graders looking for an easy peg to hang a score on. SAT test-preparation guides, whether online or in print, stress the importance of a simple four- or five-paragraph structure. They encourage students to begin with a catchy opener; to demonstrate their literacy by offering supporting examples from literature, not pop culture or personal experience; and to dazzle graders by throwing in a few obscure vocabulary words.
Such advice is counterproductive, since (1) formulas like the five-paragraph essay, while common enough "in vitro," in school, and on standardized tests, rarely occur "in vivo," in the more natural world of personal and on-the-job writing; (2) literary examples may demonstrate that the writer is also a reader, but they may not always be the best examples to support an argument; and (3) the average student can't deploy sesquipedalian words appropriately.
To be fair, the College Board insists that the essay assessment won't be formula driven, and one essay-test developer claims that, despite the explicit instruction to write a persuasive essay, students could earn high scores with a story or a poem. Maybe so. After all, a student who skips the essay entirely but fills in enough correct bubbles with a No. 2 pencil to get a perfect score on the grammar and usage questions can still come away with a respectable 650 points out of a possible 800 on the "writing" section.
However, few students will dare to blow off the essay, and with only five minutes to plan and 20 for drafting, most students aren't going to risk pushing the boundaries of form or writing outside the box. Instead, they'll stick with what they've been told by their teachers and coaches is safe: an introductory paragraph, three examples -- consisting of one paragraph each -- and a conclusion, all of which makes a tidy five-part package. Even that may be hard to cram into the time allotted. Many who took the test on March 12 complained they couldn't finish, adding that when they have to write in class they get twice as much time to deal with a topic whose subject matter they already know, since it derives not from thin air but from the work they've done in class.
Then there's the problem of grading the essays. According to the College Board, graders must have a B.A. and preferably will have taught English recently. They must also own a personal computer connected to the Internet, even if that computer is at an all-night Kinko's or an Internet cafe in Bangalore. But that's not all. Checking the college transcripts and work records of so many graders proved so onerous that now applicants are asked simply to affirm that they meet the minimal criteria.
The pool of graders, who are paid $17 an hour, is expected to process about 2.5 million handwritten essays each year. They are trained not in groups led by assessment specialists but through an interactive DVD that they can watch in their free time on their computers. The DVD shows them how to assess essays holistically using a six-point scale, working alone at their Web browsers. In the assembly-line online marathon necessary to process all the tests, the average grader must read 300 essays over a 10-day grading period. Given a six-hour workday and a high-speed Internet connection for downloading essays and uploading scores, that comes to 12 minutes per essay, assuming no breaks to get another latte, check e-mail, or call a grading supervisor with a question. It is not clear that graders who work at that pace, or on such a scale, can reliably evaluate what they read.
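For readers who want to check the workload figures above, here is a minimal back-of-the-envelope sketch; all numbers come from the paragraph itself, and the six-hour workday with no breaks is the assumption stated there:

```python
# Back-of-the-envelope check of the SAT essay-grading workload described above.
# Figures from the article; the six-hour, no-break workday is its stated assumption.
essays_per_grader = 300   # essays each grader reads in one grading period
grading_days = 10         # length of the grading period
hours_per_day = 6         # assumed workday

total_minutes = grading_days * hours_per_day * 60
minutes_per_essay = total_minutes / essays_per_grader
print(f"{minutes_per_essay:.0f} minutes per essay")  # 12 minutes per essay
```

At that rate a grader has roughly the length of a coffee break to score each handwritten essay, which is the scale problem the paragraph is pointing at.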
That leads me to think that the College Board won't be exploiting essay graders for long. It already markets WritePlacer Plus, a service that offers colleges machine grading of student-placement essays, and its online SAT-prep course uses the same software to give students instant feedback on sample SAT essays they must write and submit. In addition, Pearson, which holds the contract for scoring the SAT, is actively pushing another machine-grading package, the Intelligent Essay Assessor, which promises to eliminate human subjectivity from the process altogether. Once that happens, writers will be compelled to become even more formulaic in their quest to craft an essay that matches the computer's highest-score algorithm.
Our educational system trains writers to address human readers, not machines, and switching to machines to process millions of writing samples won't send students the message that writing is important. Those students who don't like to write will like it even less if they're being graded by silicon, and those who think of themselves as literary will only feel more alienated than they normally do. In addition, whether the SAT essays are graded by humans or machines, it is not clear that the scores assigned to them indicate anything beyond the ability of high-school juniors to hit the ground writing. Even in the information age, that's not a skill that will get anyone very far.
Surely basing a critical decision like where a student goes to college on a single writing sample is a precarious thing to do. But one of the biggest complaints about standardized tests like the SAT, the ACT (which includes an optional essay), and the numerous state-mandated assessments now in place -- many of which already include essays -- is that they force educators to teach to the test. Of course the testing companies, making lemonade out of their lemons, insist that this is the whole point of assessment. Gaston Caperton, president of the College Board, cheerfully predicts that the new SAT will bring about a much-needed revolution in the public schools, with writing instruction at its center.
But writing isn't even at the center of the SAT's new 800-point writing section. What Caperton's revolution is really promising is to marginalize writing still further by promoting the five-paragraph theme from an educational curiosity into something more like the National Writing Report Card. Doing that guarantees leaving even more schools behind than does the government's controversial No Child Left Behind policy. Writing is harder to master than reading, and schools where writing is deemed deficient will be forced to adopt a mechanical, building-blocks approach to it, just as schools with low reading scores are steered toward phonics and direct instruction in reading. Those methods will reverse decades of progress in literacy instruction and ultimately turn students into intellectual automatons.
More specifically, the five-paragraph theme, or any other formulaic approach to writing, will not help improve the writing of either high-school or college students: It won't help those who can't produce intelligible written sentences to form them better, and it won't teach those not used to thinking analytically to analyze either their writing or the subject that they're writing about.
The fact that the new SAT's writing section values correct English more than competent writing will have a negative impact on the teaching of grammar and usage in our schools. Correctness in language is not learned through memorization. It evolves through complex choices conditioned by the social and rhetorical context of specific acts of communication. The SAT's idea that questions about language can be answered a, b, c, d, or "none of the above" promotes the mistaken notion that there is only one right answer when it comes to good English, and thus will force language instruction to revert to simplistic, one-size-fits-all grammar drills. As a result, the new SAT will widen the gap between high and low achievement for speakers of nonstandard English and for those who speak English as a second language.
Instead of providing colleges with a more accurate measure of students' writing ability and linguistic knowledge, the new SAT will further disadvantage those students who are already educationally disadvantaged. Today's high schoolers will remember their SAT score not as the opener of educational doors but as the certificate of membership in, or exclusion from, the elite club of those whose writing and grammar already match the College Board's idea of what is correct.
Dennis Baron is a professor of English and linguistics at the University of Illinois at Urbana-Champaign.
The Chronicle of Higher Education
Section: The Chronicle Review
Volume 51, Issue 35, pp. B14-B15