Why not ask for more?

A couple of days ago, in a successful attempt to sabotage my own efforts to meet a deadline, I decided to look into the Google Book Settlement. The settlement is an agreement, hammered out last fall between Google and the Authors Guild, about how Google will share with authors some of the money it hopes to make from its digitization of books in copyright. The agreement itself is very long (you can download it here) and rather complicated. It isn't set in stone quite yet, but the cement is hardening. In order to opt out, you have to notify the settlement administrator by 5 May 2009. You can also stay in the settlement but object to some of its terms, if you make your objections by 5 May 2009. That's only a few months away, so it's not too early to start forming an opinion.

I haven't yet read the agreement all the way through. I didn't think I was going to need to, because I have warm, fuzzy feelings both about Google and the Authors Guild. Also, the site that the settlement administrator has set up for authors to claim their work looks streamlined and friendly and is in fact very easy to use. But now that I've used it, I have some questions, and I'm not sure how to answer them.

For one thing, I'm pretty sure that I filled out the online claims form "wrong," but I felt that I had little choice if I wanted to protect my rights. Then again, I may not have filled it out "wrong"; I'm not sure. Here are some of the dilemmas I found myself facing.

First, under the terms of the settlement, I allegedly don't have rights to my published work unless it was registered with the U.S. Copyright Office. The settlement's fine print claims that this is in conformity with a court decision. I don't think this fine print matters much in my case, because I suspect that most of my published work was copyrighted on my behalf by my publishers, but if it did matter, it would be more than a little enraging. When I started life as a writer, the law of the land rendered it unnecessary to register one's work with the U.S. Copyright Office in order to own copyright in it. In fact, the consensus was that only fussbudgets bothered to. Copyright of one's expression was a common-law claim that didn't need bureaucratic imprimatur; if challenged, you only needed to be able to prove that you and no one else had written the words in question. Listed in Google's database, though not yet digitized, is my undergraduate thesis on Nelson Algren. I know I never registered the copyright. I'm also fairly sure that there are only two surviving copies of it, one on my bookshelf here at home and another in the bowels of Widener Library at Harvard. But it's nonetheless distressing to imagine that if Google were to digitize it, I might not be able to control what happened to it, or make money off it if suddenly a great number of people wanted to know what I thought about Chicago realism when I was twenty. I've also never registered the copyright to any of my magazine articles, ever, but I've felt confident until this week that I owned copyright in them nonetheless, and continued to own copyright when they were reprinted in books, and would not lose that copyright if someone scanned and uploaded them.

Another problem is the settlement's division of the literary world into books and "inserts." An "insert," in the terms of the settlement, is a part of a book that an author owns a right to. For example, the introduction and notes to the Modern Library edition of Royall Tyler's Algerine Captive are copyrighted in my name, so they're my "inserts" in that edition. Since the book is still in print, I told Google that Modern Library still has the rights, and I presume this means the Modern Library will get the lump-sum cash payment for its digitization, not me. But an article that I wrote on Milan Kundera for the magazine Lingua Franca was reprinted in the anthology Quick Studies, which is now out of print, so presumably I will get some money off of that. Not as much as I think I deserve, though. Google is offering to reimburse authors in several ways: first through lump-sum payments for digitization, and later through revenue sharing, based on the money Google makes by selling subscriptions to its database to libraries and colleges, by placing ads on webpages that display the digitized material, and perhaps by selling downloads of books otherwise out of print. As an insert, my old Lingua Franca article will bring me a $15 lump-sum payment and later, perhaps, a $50 payment for inclusion in databases that Google sells to libraries and colleges. But according to Attachment C of the settlement agreement, my insert will bring me nothing from any of Google's other revenue-sharing programs. If Google sells ads next to my Kundera article, or sells someone a download of it, I get zilch. Since Quick Studies is an anthology, it consists entirely of inserts. So who's this revenue going to be shared with? The magazine Lingua Franca, by the way, is defunct. As a writer, I've made far more money off of magazine articles than books at this stage of my career, and I still make money off the reprinting of some of them. 
It seems to me that excluding "inserts" from substantial revenue sharing is an element of the settlement agreement worth objecting to.

A confusing element of the system: multiple digitized versions. Google's database seems to know that it has scanned both the hardcover and the paperback versions of a short story collection that I helped to translate, Josef Skvorecky's The Tenor Saxophonist's Story. I claimed inserts in both versions, even though the instructions told me not to, because I figured Google would be able to recognize that they were the same book. I claimed both of them for a reason: how else am I to be sure that Google knows that I have a rights claim (in this case, as a translator, a pretty limited rights claim, but still, something) to both versions? For some reason, Google has scanned two versions of my book American Sympathy, and its database doesn't seem to know they're the same book. Moreover, it also has a reference to what seems to be a free-standing copy of one of my book's chapters, not yet digitized, which I never published separately. I claimed that, too. And I claimed an "insert" in a scholarly anthology that reprints a journal article that overlaps a great deal with one of the book's chapters. I know for a fact that no one else has any right to that insert. Google's instructions say that if an insert reprints material also published in a book, the author should claim either the book or the insert, but not both. Well, that makes sense as far as the lump payments go. But if Google is later going to sell ads on webpages or sell downloads, it doesn't make sense. The income that Google will be making off my content will be split between the various versions of my work that are in its databases, and I should be able to claim revenue from all versions they hold of everything I've written. (By the way, this book, too, remains in print, so as I understand it, I won't be getting any lump-sum payments for it no matter how I fill out the forms.)

I'll end by saying that this agreement is so complex that it seems destined to have unintended consequences, and I welcome corrections if I've misunderstood anything here. I look forward to learning other writers' reactions to the agreement and the claims process, because my sense is that most of us in the rank and file have yet to weigh in on them.

Why I remain pessimistic

The National Endowment for the Arts has just released a new report, Reading on the Rise, which contains good news: In a 2008 survey administered by the Census Bureau, Americans reported higher rates of literary reading than they did in 2002. In earlier reports, the decline of readers between the ages of eighteen and twenty-four had been a subject of special concern. Between 2002 and 2008, however, the proportion in that age group who reported that they had read a work of literature in the previous year jumped by 8.9 percentage points.

I've had an interest in American reading rates for some time, and I wrote an article on the subject for the New Yorker that was published in December 2007, so I read the New York Times article on the report and the report itself with great curiosity. I'm happy that there's some good news on the topic. I nonetheless remain pessimistic about the trend of reading in America overall, and it might be worth explaining why.

Before I do, though, I'll waste a paragraph or three on the Sisyphean task of trying to clear up a common misconception about the NEA-sponsored surveys. It is often claimed that in earlier surveys, the NEA undercounted literary reading because it failed to ask about reading that happened online. This is not exactly true. Starting in 1982, the survey asked, "During the last 12 months, did you read any a) plays, b) poetry, or c) novels or short stories?" Respondents weren't prompted to think about the internet, but they weren't told not to think about it, either, and the NEA has always said that a poem read online counted just as much as a poem read in a book. Moreover, the NEA asked a nearly identical question in 2008 (the only change was a variation in the order of the genres). So if the new report does show that reading rates have indeed recovered, it isn't because the NEA has only just now gotten around to asking about the internet. That, nonetheless, is the latest misconception about the NEA in circulation. For example, on Monday morning, Publisher's Lunch, a publishing insider's newsletter, made fun of NEA chairman Dana Gioia for claiming that his agency was partly responsible for the rise and suggested that the props should instead go to the internet:

Aside from the yeoman efforts of the NEA chairman, what could possibly explain the sudden change? "In 2008, the survey introduced new questions about reading preferences and reading on the Internet."

The quote within the quote is true but the juxtaposition is somewhat misleading. Yes, the 2008 survey for the first time asked about reading preferences (i.e., it asked whether those who read prose preferred mysteries, thrillers, romance, or "other" fiction) and about internet habits. But the central measure—that of literary reading—came from the question I quoted above, the wording of which neither included nor excluded online reading. In other words, the improved results can't be explained by a shift in the NEA's methodology about the internet.

There's another way of reading the Publisher's Lunch passage: PL might have been trying to imply that Internet use has itself spurred the appreciation of literature in the past six years. That, more or less, is the claim made in a similar mocking report in the blog Valleywag. It's a pretty bold claim. It doesn't seem outright impossible to me, because as I noted in my New Yorker article last year, there's some evidence that internet use and literacy go hand in hand. But relatively few of the internet's electrons are devoted to poetry and fiction, and it seems to me on the face of it unlikely that the internet could have caused significantly more people to take an interest in those genres. (Yes, I know about fan fiction, and despite the existence of it I stand by these hunches.)

But enough about the internet. Why aren't I celebrating the new numbers about the reading of literature? First, the numbers are good, but they're not that good. The proportion of Americans who said in 2008 that they read some literature in the previous twelve months may be higher than it was in 2002, but it's lower than it was in 1992, 1985, and 1982. Moreover, the same is true of the rates in the eighteen- to twenty-four-year-old bracket. Over the longer span, we're still talking about a decline.

Second, another of the NEA's measures shows a continued, stubborn decrease. To the question "With the exception of books required for work or school, did you read any books during the last 12 months?" the proportion of respondents saying yes dropped from 56.6 percent in 2002 to 54.3 percent in 2008. Here the internet may be relevant, because the word "book" is generally understood as referring to the ink-and-paper object. But even if the internet is the culprit, I'm still dismayed. Nationwide, there aren't yet that many e-book readers; it's simply not yet possible for very many people to read the electronic equivalent of books. Online substitutions may be taking place, but they're probably not "literary," so I doubt it's good news if the proportion who say they read books for pleasure continues to decline.

Okay, but a piece of good news is still good news, even if it's not great news, and even if another piece of news is bad. Right? Yes, but even of the limited good news I'm skeptical, because I can think of three reasons why the results might not be as good as they seem. First, there's a chicken-and-egg-like measurement difficulty. In the readiness to make fun of Gioia for crediting his own agency with the turnaround, the critics have so far missed a trick. It may be that Gioia does deserve credit, but not for what he thinks he does. The sticky part about the measurement of reading, sociologically, is that reading is a prestige activity. People tend to lie and say they do more of it than they do. As the afterword to the new report points out, the NEA in the last few years has reached out to millions of Americans with brand-new, well-funded programs to encourage reading. In the fall of 2007 it released a report on reading's decline that got lots of attention from journalists like me. Thanks in part to the NEA, literacy was a big news story in 2007 and 2008. I even saw it referred to on television, and I don't watch much television. All of this is worthy and to the good. But it's possible that in raising people's awareness of the importance of reading, the NEA encouraged them to exaggerate their reading habits. With a survey like the NEA's, which relies on self-reporting, there's no way to know for sure whether reading habits themselves were changed. It's as if there were a kind of Heisenberg uncertainty principle at work here. A government agency can either measure reading habits or intervene in them, but if it tries to do both, it runs the risk of measuring no more than the spread of its intervention message. (As I wrote in my article, the best way to measure reading is with a time-use survey, which is harder for respondents to fudge.)

Second, the new survey took place in a different month. In 2002, the NEA's survey took place in August. In 2008, it took place in May. If people had steel-trap minds, that wouldn't matter, but twelve months is longer than most people remember very accurately. I suspect that when you ask people about behavior over such a broad timespan, especially when you're focusing on the subset of people who are the liminal case—that is, on people for whom it was a toss of the coin whether they did or didn't read a work of literature in the past year—respondents may sometimes extrapolate from their sense of themselves in the present, rather than answer according to a comprehensive memory. If that's the case, then the month will influence their answer, if the month happens to have any relation to the aspect of themselves relevant to the question. And in this case it does. Like it or not, literary culture in America is largely keyed to the school year. Search Google Trends for the word "literary," in fact, and you'll see a curve that has acute declines every summer and every Christmas. (You'll also see that it slumps slowly over time, but that's not relevant to my point here.) You'll see the same shape if you search for "poem" and "short story". (Intriguingly, the novel has some but not all of this characteristic wiggle, perhaps because it hasn't altogether surrendered to the academy.) I don't know in which month the 1982, 1985, or 1992 surveys took place. But it's possible that the improvement between 2002 and 2008 owes something to the difference between May and August.

Third, and finally, there's the matter of the years. When Americans were asked in August 2002 about their reading habits over the preceding twelve months, they were of course being asked about the year immediately following 9/11. That was a period when everyone's media intake was wildly disrupted. If you look at the NEA's graph of the percentage of adults who read literature between 1982 and 2008, the outlier isn't 2008. It's 2002.

NEA, Percentage of Adults Who Read Literature, 1982-2008, from Reading on the Rise (2009), p. 3

In fact, if you ignore the 2002 results, you're looking at a gentle but almost uncannily straight descending line. One possible explanation of the graph above: Reading has been declining in America for decades, but the 2002 results were worse than they ought to have been, because in the aftermath of September 11 the nation was panicked out of its usual literary diversions. Between 2002 and 2008, the interruption of 9/11 corrected itself, and many people returned to literature, but not all of them. The underlying decline, the part owed to secular causes, continued.

These are mere speculations, and I'm grateful to the NEA for providing the data that I'm reluctant to accept. Time will tell whether I'm just being a curmudgeon; inshallah, my pessimism will be belied. But right now I think it's too soon to unpop the corks. If other indicators were favorable—in particular, if there were better prospects for newspapers or book publishers, or if anyone had figured out how to make enough money off of writing on the internet to subsidize lots of high-caliber investigative reporting—I might be willing to partake in the festivities. As things stand, though, I think the fate of reading is still a matter of concern.

The story so far

In October, the Christian Science Monitor announced that as of April it will no longer be printed on paper. The Newark Star-Ledger announced a 40 percent staff cut. Radar closed, for what seemed like the fourteenth time, and Culture and Travel closed for the first and probably only time. Time, Inc. announced it would be laying off six hundred staffers, and the Gannett news chain announced it would be laying off 10 percent of its workforce. Condé Nast shrank Men's Vogue into a Vogue supplement, pruned Portfolio down to ten issues a year, and asked its other magazines to cut budgets by 10 percent.

In November, the publisher Houghton Mifflin Harcourt said it was not going to purchase any new manuscripts in the foreseeable future. U.S. News and World Report announced that it, too, would go all-web, except for consumer guides.

In December, Fine Books & Collectibles said it was trading in its print magazine for an electronic newsletter, and the Rare Book Review ceased publication altogether. On so-called Black Wednesday, Simon & Schuster laid off thirty-five staffers, Penguin and Harper Collins froze salaries, and Random House underwent a massive consolidation, turning five divisions into three, a change expected to lead to many more layoffs. A few days later, the New York Times quietly announced it was putting up its new building as collateral for a cash loan. Then Tribune Company, the owner of the Los Angeles Times and the Chicago Tribune, filed for bankruptcy. And today Macmillan, owner of FSG, Picador, and St. Martin's, joined Penguin and Harper in a salary freeze.

Does media violence lead to real violence, and do video games impair academic performance?

Cross-posted from the University of Michigan Press blog.

"Twilight of the Books," an essay of mine published in The New Yorker on 24 December 2007, has been honored by inclusion in The Best of Technology Writing 2008, edited by Clive Thompson. When The New Yorker published my essay, I posted on my blog a series of mini-bibliographies, for anyone who wanted to dig into the research behind my article and try to answer for themselves whether television impaired intellect or whether literacy was declining (here's an index/overview to all these research posts). A month or so ago, when the University of Michigan Press, the publisher of The Best of Technology Writing 2008, invited me to write about my essay for their blog, I was afraid I didn't have any more to say. Also, alas, I was under deadline. But I have a breather now, and looking over my year-old notes, I realize that there were a couple of categories of research that I never posted about at the time, because the topics didn't happen to make it into my article's final draft.

This research tried to answer the questions, Does exposure to violence on television or in video games lead to aggressive behavior in the real world? and Do video games impair academic performance? I still think the questions are very interesting, though I must now offer my summaries with the caveat that they are somewhat dated. In fact, I know of some very interesting research recently published on the first question, some of which you can read about on the blog On Fiction. I'm afraid I haven't kept up with video games as closely, but I'm sure there's more research on them, too. I hope there is, at any rate, because when I looked, I found very little. (By research, in all cases, I meant peer-reviewed studies based on experimental or survey data, and not popular treatments.)

A few words of introduction. The historian Lynn Hunt has suggested in her book Inventing Human Rights that in the eighteenth century, the novel helped to change Europe's mind about torture by encouraging people to imagine suffering from the inside. As if in corroboration, some of the research summarized below suggests that the brain responds less sympathetically when it perceives violence through electronic media. As you'll see, however, there is some ambiguity in the evidence, and the field is highly contested.

1. Does exposure to violence on television or in video games lead to aggressive behavior in the real world?

  • In a summary of pre-2006 research, John P. Murray pointed to experiments in the 1960s by Albert Bandura, showing that children tend to mimic violent behavior they have just seen on screen, and to a number of studies in the early 1970s that found correlations between watching violence and participating in aggressive behavior or showing an increased willingness to harm others. In 1982, a panel commissioned by the Surgeon General to survey existing research asserted that "violence on television does lead to aggressive behavior," and in 1992, a similar panel commissioned by the American Psychological Association reported "clear evidence that television violence can cause aggressive behavior." One mechanism may be through television's ability to convince people that the world is dangerous and cruel, in what is known as the "mean world syndrome." Murray claims that a twenty-two-year longitudinal study in Columbia County, New York, run by Huesmann and Eron, which was begun under the auspices of the Surgeon General's office, has linked boys' exposure to television violence at age eight to aggressive and antisocial behavior at age eighteen and to involvement in violent crime by age thirty; in fact, a 1972 study by Huesmann et al. did link boys' exposure at eight to aggressive behavior at eighteen, but the 1984 study cited by Murray linked violent crime at age thirty to aggressive behavior at age eight and said nothing about exposure to televised violence. In an unrelated study, when television was introduced in Canada, children's levels of aggression increased. [John P. Murray, "TV Violence: Research and Controversy," Children and Television: Fifty Years of Research, Lawrence Erlbaum Associates, 2007. L. Rowell Huesmann, Leonard D. Eron, Monroe M. Lefkowitz, and Leopold O. Walder, "Stability of Aggression Over Time and Generations," Developmental Psychology, 1984. For a synopsis of Huesmann's 1972 study, see Steven J. Kirsh, Children, Adolescents, and Media Violence: A Critical Look at the Research, Sage Publications, 2006, p. 208.]
  • A longitudinal study of 450 Chicago-area children was begun in 1977 when the children were between six and eight years old, and continued in 1992-1995, when they were between twenty-one and twenty-three years old. As children, the subjects were asked about their favorite television programs, whether they identified with the characters, and how true-to-life they thought the shows were. Fifteen years later, it emerged that watching violent shows, identifying with aggressive characters of the same sex, and believing that the shows were realistic correlated with adult aggression, including physical aggression. The effect was present even after controlling for such factors as initial childhood aggression, intellectual capacity, socioeconomic status, and parents' level of emotional support. (Note that in the opinion of the researchers, the Six Million Dollar Man was considered a "very violent" show, and that the heroine of the Bionic Woman was considered an aggressive character.) [L. Rowell Huesmann, Jessica Moise-Titus, Cheryl-Lynn Podolski, and Leonard D. Eron, "Longitudinal Relations between Children's Exposure to TV Violence and Their Aggressive and Violent Behavior in Young Adulthood, 1977-1992," Developmental Psychology, 2003. Cf. Kirsh, p. 209.]
  • In a 2006 textbook about the relation between media violence and aggressive behavior, author Steven J. Kirsh notes that a 1994 meta-analysis of the link between television violence and aggression estimated the size of the effect to be r = .31. "The effect sizes for media violence and aggression are stronger than the effect sizes for condom use and sexually transmitted HIV, passive smoking and lung cancer at work, exposure to lead and IQ scores in children, nicotine patch and smoking cessation, and calcium intake and bone mass," Kirsh wrote. A 2004 meta-analysis found that the correlation between video game violence and aggressive behavior was r = .26. To put the effect sizes in perspective, Kirsh notes that they are greater than the link between testosterone levels and aggression, but weaker than the link between having antisocial peers and delinquency. In surveying the research on video games, Kirsh makes the point that there is little research as yet, and that most of it was done in what he calls the "Atari age," when the games were fairly innocuous; almost no one has experimentally tested the effects on children and teens of the new-generation, highly realistic and gory first-person shooter games. [Steven J. Kirsh, Children, Adolescents, and Media Violence: A Critical Look at the Research, Sage Publications, 2006.]
  • In a 2007 summary of research, three scientists asserted that there was "unequivocal evidence that media violence increases the likelihood of aggressive and violent behavior in both immediate and long-term contexts," and noted that the link between television violence and aggression had been proved by studies in both the laboratory and the field, and by both cross-sectional and longitudinal studies. Video games were not as well documented, but in the opinion of the scientists, the preliminary evidence suggested that their effect would be similar. Playing violent video games has been shown to increase physiological arousal. Measurements of skin conductance and heart rate show that people have less of an aversion to images of real violence, if they have previously been exposed to violent television or violent video games. Measurements of event-related brain potentials (ERPs) and functional magnetic resonance imaging (fMRI) allow researchers to look with new precision at the magnitude of brain processes that occur at particular times and at the activation of specific regions of the brain. A 2006 study by Bartholow et al., for example, showed that exposure to violent video games reduces aversion to scenes of real violence, as measured by a blip of voltage that typically occurs 300 milliseconds after sight of a gory image. A 2006 study by Murray et al. (see below) showed that violent scenes of television activated parts of the brain associated with emotion, memory, and motor activity. Yet another 2006 study, by Weber et al., showed that while players were engaged in violence during a video game, a brain region associated with emotional processing was suppressed, and one associated with cognitive processing was aroused, perhaps in order to reduce empathy and thereby improve game performance. In a 2005 study by Matthews et al., chronic adolescent players of violent video games scored the same as adolescents with disruptive behavior disorders on a test designed to assess a brain region responsible for inhibition and error correction. Attempting to explain the results of the various studies under review, the authors write: "Initial results suggest that, although video-game players are aware that they are engaging in fictitious actions, preconscious neural mechanisms might not differentiate fantasy from reality." [Nicholas L. Carnagey, Craig A. Anderson, and Bruce D. Bartholow, "Media Violence and Social Neuroscience," Current Directions in Psychological Science, 2007.]
  • While a functional magnetic resonance imaging (fMRI) device monitored their brain activity, eight children watched a video montage that included boxing scenes from Rocky IV and part of a National Geographic animal program for children, among other clips. The violent scenes activated many brain regions that the nonviolent scenes did not, mostly in the right hemisphere. These regions have been associated by other researchers with emotion, attention and arousal, detection of threat, episodic memory, and fight or flight response. The authors of the study speculate that "though the child may not be aware of the threat posed by TV violence at a conscious level . . . a more primitive system within his or her brain (amygdala, pulvinar) may not discriminate between real violence and entertainment fictional violence." In the activation of regions associated with long-term memory, the researchers saw a suggestion that the television violence might have long-term effects on the viewer. [John P. Murray, et al., "Children's Brain Activations While Viewing Televised Violence Revealed by fMRI," Media Psychology, 2006.]
  • In a 2005 study, 213 video-game novices with an average age of twenty-eight were divided into two groups, and one group spent a month playing an average of 56 hours of a violent multi-player fantasy role-playing video game. Participants completed questionnaires to assess their aggression-related beliefs before and after the test month, and were asked before and after whether they had argued with a friend and whether they had argued with a romantic partner. The data showed no significant correlation between hours of game play and the measures of aggression, once the results were controlled for age, gender, and pre-test aggression scores. The authors note that there might be an effect too small for their study to detect, and that adults might be less sensitive to the exposure than children or adolescents. [Dmitri Williams and Marko Skoric, "Internet Fantasy Violence: A Test of Aggression in an Online Game," Communication Monographs, June 2005. Andrea Lynn, "No Strong Link Seen Between Violent Video Games and Aggression," News Bureau, University of Illinois at Urbana-Champaign, 9 August 2005.]
  • A 2007 book presented three studies of video-game violence's effect on school-age children. In the first study, 161 nine- to twelve-year-olds and 354 college students were asked to play one of several video games—either a nonviolent game, a violent game with a happy and cartoonish presentation, or a violent game with a gory presentation—and then to play a second game, during which they were told they could punish other players with blasts of noise (the blasts were not, in fact, delivered). Those who played violent games, whether cartoonish or gory, were more likely to administer punishments during the second game; playing violent games at home also raised the likelihood of punishing others. Children and college students behaved similarly. In the second study, 189 high school students were given questionnaires designed to assess their media usage and personality. The more often the students reported playing violent video games, the more likely they were to have hostile personalities, to believe that violence was normal, and to behave aggressively, and the less likely they were to feel forgiving toward others. The correlation between game playing and violent behavior held even when the researchers controlled for gender and aggressive beliefs and attitudes. The more time that students spent in front of screens (whether televisions or video games), the lower their grades. In the third study, 430 elementary school children were surveyed twice, at a five-month interval, about their exposure to violent media, beliefs about the world, and whether they had been in fights. Students were asked to rate one another's sociability and aggressiveness, and teachers were asked to comment on these traits and on academic performance. In just five months, children who played more video games darkened in their outlook on the world, and peers and teachers noticed that they became more aggressive and less amiable. The effect was independent of gender and of the children's level of aggression at the first measurement. Screen time impaired the academic performance of these students, too; they only became more aggressive, however, when the content they saw during the screen time was violent. [Craig A. Anderson, Douglas A. Gentile, and Katherine E. Buckley, Violent Video Game Effects on Children and Adolescents: Theory, Research, and Public Policy, Oxford University Press, 2007.]

2. Do video games impair academic performance?

  • In a 2004 survey of 2,032 school-age children, there were statistically significant differences in print and video-game use between students earning As and Bs and those earning Cs and below. On average, A-B students had read for pleasure 46 minutes and played video games for 48 minutes the previous day; C-and-below students had read for pleasure 29 minutes and played video games for 1 hour 9 minutes. Television watching was roughly the same in both groups. [Donald F. Roberts, Ulla G. Foehr, and Victoria Rideout, Generation M: Media in the Lives of 8-18 Year-Olds, The Henry J. Kaiser Family Foundation, March 2005, page 47.]
  • A 2007 book presented results of a study in which 189 high school students were given questionnaires designed to assess their media usage and personality. The more time that students spent in front of screens (whether televisions or video games), the lower their grades. In a related study, 430 elementary school children were surveyed twice, at a five-month interval, and screen time impaired the academic performance of these students, too. [Craig A. Anderson, Douglas A. Gentile, and Katherine E. Buckley, Violent Video Game Effects on Children and Adolescents: Theory, Research, and Public Policy, Oxford University Press, 2007.]

UPDATE (27 Feb. 2009): For ease in navigating, here's a list of all the blog posts I wrote to supplement my New Yorker article "Twilight of the Books":

Notebook: "Twilight of the Books" (overview)
Are Americans Reading Less?
Are Americans Spending Less on Reading?
Is Literacy Declining?
Does Television Impair Intellect?
Does Internet Use Compromise Reading Time?
Is Reading Online Worse Than Reading Print?
I also later talked about the article on WNYC's Brian Lehrer Show and on KUER's Radio West.
And, as a bonus round: Does media violence lead to real violence, and do video games impair academic performance?

“Twilight of the Books” reprinted

Best of Technology Writing 2008

My essay “Twilight of the Books,” about how a decline in reading might be affecting the culture, has just been reprinted in The Best of Technology Writing 2008, edited by Clive Thompson, available from the University of Michigan Press and Amazon, among others. Also featuring the brilliant Emily Nussbaum, John Seabrook, Jeffrey Rosen, Cass Sunstein, and more.

My essay was originally published in the 24 December 2007 issue of The New Yorker, and at the time I put up a multi-part annotated bibliography on this blog, organized by topic:

Notebook: “Twilight of the Books”
Are Americans Reading Less?
Are Americans Spending Less on Reading?
Is Literacy Declining?
Does Television Impair Intellect?
Does Internet Use Compromise Reading Time?
Is Reading Online Worse Than Reading Print?

I also later talked about the article on WNYC’s Brian Lehrer Show and on KUER’s Radio West.