Trends in reading

Minutes per day spent on reading for pleasure by Americans age 15 or older

A few weeks ago, I started writing “The Disenchantment of Literature in the Age of the Hit Counter,” a talk that I’m going to deliver at Reed College on March 30 and at the University of Portland on March 31. I found myself wondering whether there was a way to get a quick update of some of the statistics on literacy and reading in America that I collected in 2007, when I wrote an article called “Twilight of the Books” for The New Yorker, and I turned to the American Time-Use Survey (ATUS), which I remembered as one of the most solid sets of data, least subject to the very old-fashioned problem of respondents who lie and say they read more than they actually do. ATUS began in 2003, and it now has a decade of data.

The result is the chart above. In order to compile it, I had to do some arithmetic, which may not be entirely bulletproof, so let me explain. For some reason, in 2003 ATUS reported separate results for time spent reading by men and time spent reading by women, but didn’t report an average for the general population, so to come up with a single number, I weighted those results by what seems to have been the gender balance in America that year, 0.51 men to 0.49 women. In later years, ATUS reported separately time spent reading on weekdays and time spent reading on weekends and holidays, so to get a single average in those years I weighted the results by the ratio of 0.7 weekdays to 0.3 weekends and holidays. (I wondered whether ATUS was properly measuring reading on the internet, so I looked up ATUS’s coding rules for computer activity: “Code the activity the respondent did as the primary activity. For example, if the respondent used the computer to search for work, code as Job Search and Interviewing.” Presumably this means that if the respondent was using the computer to read, the time would be coded as reading, or rather, Leisure/Reading for Personal Interest.)
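The weighting described above amounts to a pair of weighted averages. Here is a minimal sketch of that arithmetic; the weights (0.51/0.49 and 0.7/0.3) come from the paragraph, but the minute figures are hypothetical placeholders, not ATUS data:

```python
def weighted_average(values, weights):
    """Collapse subgroup averages into one figure using population weights."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights should sum to 1"
    return sum(v * w for v, w in zip(values, weights))

# 2003: ATUS reported men and women separately; weight by gender balance.
men_minutes, women_minutes = 20.0, 30.0  # hypothetical figures
overall_2003 = weighted_average([men_minutes, women_minutes], [0.51, 0.49])

# Later years: weekdays vs. weekends/holidays, weighted 0.7 to 0.3.
weekday_minutes, weekend_minutes = 18.0, 28.0  # hypothetical figures
overall_later = weighted_average([weekday_minutes, weekend_minutes], [0.7, 0.3])
```

With the placeholder figures above, the 2003 estimate works out to 20.0 × 0.51 + 30.0 × 0.49 = 24.9 minutes, and the later-year estimate to 18.0 × 0.7 + 28.0 × 0.3 = 21.0 minutes.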

As you can see, what seems to be happening is a very slow, stately sinking. This is entirely consonant with a much longer-term Dutch time-use study that tracked the time spent reading in the Netherlands for the first forty years after the introduction of television. I don’t know of an equivalent American study, but I imagine that the pattern in America resembled the one in the graph below.

Hours spent reading vs. watching television as a primary activity, weekends and weekday evenings, by Dutch citizens 12 and older

A retrospective glance

The New Yorker, as you may have heard, has redesigned its website, and is making all articles published since 2007 free, for the summer, in hopes of addicting you as a reader. Once you’re hooked, they’ll winch up the drawbridge, and you’ll have to pay, pay, pay. But for the moment let’s not think about either the metaphor I just mixed or its consequences, shall we?

A self-publicist’s work is never done, and it seemed to behoove me to take advantage of the occasion. So I googled myself. It turns out that I’ve been writing for The New Yorker since 2005 and that thirteen articles of mine have appeared in the print magazine over the years. All seem to be on the free side of the paywall as of this writing (though a glitch appears to have put several of the early articles almost entirely into italics). Enjoy!

“Rail-Splitting,” 7 November 2005: Was Lincoln depressed? Was he a team player?
“The Terror Last Time,” 13 March 2006: How much evidence did you need to hang a terrorist in 1887?
“Surveillance Society,” 11 September 2006: In the 1930s, a group of British intellectuals tried to record the texture of everyday life
“Bad Precedent,” 29 January 2007: Andrew Jackson declares martial law
“There She Blew,” 23 July 2007: The history of whaling
“Twilight of the Books,” 24 December 2007: This is your brain on reading
“There Was Blood,” 19 January 2009: A fossil-fueled massacre
“Bootylicious,” 7 September 2009: The economics of piracy
“It Happened One Decade,” 21 September 2009: The books and movies that buoyed America during the Great Depression
“Tea and Antipathy,” 20 December 2010: Was the Tea Party such a good idea the first time around?
“Unfortunate Events,” 22 October 2012: What was the War of 1812 even about?
“Four Legs Good,” 28 October 2013: Jack London goes to the dogs
“The Red and the Scarlet,” 30 June 2014: Where the pursuit of experience took Stephen Crane

Flip you for it

If the recession lasts another year, the New York Times will probably survive. If it lasts two more years, analysts aren't sure. That's one way of reading the latest reporting about the paper's future from the New York Times itself. By borrowing $250 million in January from a Mexican billionaire at what it calls "punishing terms," the paper, according to analysts, "has positioned itself well to ride out another year of recession, maybe two." The trouble is that the analysts also say that the Times accepted the punishing terms because it expects that loan offers will only get worse as the recession progresses. "Maybe two" years isn't a comfortingly distant horizon.

Another official revelation in the article: Somewhat morbidly, the long-term health of the New York Times is now understood by those who guide it to be conditional on the death of other newspapers across America. "There is a feeling among analysts that there is merit to the last-man-standing strategy," the Times reports. In 2010 or 2011, one analyst suggests, "there could be dramatically fewer newspapers," and absent those competitors, the Times should be able to prosper. To me this sounds a little bit like saying that in the event of a plague, there will be proportionally speaking a lot of canned food left over for survivors.

Why I remain pessimistic

The National Endowment for the Arts has just released a new report, Reading on the Rise, which contains good news: In a 2008 survey administered by the Census Bureau, Americans reported higher rates of literary reading than they did in 2002. In earlier reports, the decline of readers between the ages of eighteen and twenty-four had been a subject of special concern. Between 2002 and 2008, however, the proportion in that age group who reported that they had read a work of literature in the previous year jumped by 8.9 percentage points.

I've had an interest in American reading rates for some time, and I wrote an article on the subject for the New Yorker that was published in December 2007, so I read the New York Times article on the report and the report itself with great curiosity. I'm happy that there's some good news on the topic. I nonetheless remain pessimistic about the trend of reading in America overall, and it might be worth explaining why.

Before I do, though, I'll waste a paragraph or three on the Sisyphean task of trying to clear up a common misconception about the NEA-sponsored surveys. It is often claimed that in earlier surveys, the NEA undercounted literary reading because it failed to ask about reading that happened online. This is not exactly true. Starting in 1982, the survey asked, "During the last 12 months, did you read any a) plays, b) poetry, or c) novels or short stories?" Respondents weren't prompted to think about the internet, but they weren't told not to think about it, either, and the NEA has always said that a poem read online counted just as much as a poem read in a book. Moreover, the NEA asked a nearly identical question in 2008 (the only change was a variation in the order of the genres). So if the new report does show that reading rates have indeed recovered, it isn't because the NEA has only just now gotten around to asking about the internet. That, nonetheless, is the latest misconception about the NEA in circulation. For example, on Monday morning, Publishers Lunch, a publishing insider's newsletter, made fun of NEA chairman Dana Gioia for claiming that his agency was partly responsible for the rise and suggested that the props should instead go to the internet:

Aside from the yeoman efforts of the NEA chairman, what could possibly explain the sudden change? "In 2008, the survey introduced new questions about reading preferences and reading on the Internet."

The quote within the quote is true but the juxtaposition is somewhat misleading. Yes, the 2008 survey for the first time asked about reading preferences (i.e., it asked whether those who read prose preferred mysteries, thrillers, romance, or "other" fiction) and about internet habits. But the central measure—that of literary reading—came from the question I quoted above, the wording of which neither included nor excluded online reading. In other words, the improved results can't be explained by a shift in the NEA's methodology about the internet.

There's another way of reading the Publishers Lunch passage: PL might have been trying to imply that internet use has itself spurred the appreciation of literature in the past six years. That, more or less, is the claim made in a similar mocking report in the blog Valleywag. It's a pretty bold claim. It doesn't seem outright impossible to me, because as I noted in my New Yorker article last year, there's some evidence that internet use and literacy go hand in hand. But relatively few of the internet's electrons are devoted to poetry and fiction, and it seems to me on the face of it unlikely that the internet could have caused significantly more people to take an interest in those genres. (Yes, I know about fan fiction, and despite its existence I stand by these hunches.)

But enough about the internet. Why aren't I celebrating the new numbers about the reading of literature? First, the numbers are good, but they're not that good. The proportion of Americans who said in 2008 that they read some literature in the previous twelve months may be higher than it was in 2002, but it's lower than it was in 1992, 1985, and 1982. Moreover, the same is true of the rates in the eighteen- to twenty-four-year-old bracket. Over the longer span, we're still talking about a decline.

Second, another of the NEA's measures shows a continued, stubborn decrease. To the question "With the exception of books required for work or school, did you read any books during the last 12 months?" the proportion of respondents saying yes dropped from 56.6 percent in 2002 to 54.3 percent in 2008. Here the internet may be relevant, because the word "book" is generally understood as referring to the ink-and-paper object. But even if the internet is the culprit, I'm still dismayed. Nationwide, there aren't yet that many e-book readers; it's simply not yet possible for very many people to read the electronic equivalent of books. Online substitutions may be taking place, but they're probably not "literary," so I doubt it's good news if the proportion who say they read books for pleasure continues to decline.

Okay, but a piece of good news is still good news, even if it's not great news, and even if another piece of news is bad. Right? Yes, but even of the limited good news I'm skeptical, because I can think of three reasons why the results might not be as good as they seem. First, there's a chicken-and-egg-like measurement difficulty. In their readiness to make fun of Gioia for crediting his own agency with the turnaround, the critics have so far missed a trick. It may be that Gioia does deserve credit, but not for what he thinks he does. The sticky part about the measurement of reading, sociologically, is that reading is a prestige activity. People tend to lie and say they do more of it than they do. As the afterword to the new report points out, the NEA in the last few years has reached out to millions of Americans with brand-new, well-funded programs to encourage reading. In the fall of 2007 it released a report on reading's decline that got lots of attention from journalists like me. Thanks in part to the NEA, literacy was a big news story in 2007 and 2008. I even saw it referred to on television, and I don't watch much television. All of this is worthy and to the good. But it's possible that in raising people's awareness of the importance of reading, the NEA encouraged them to exaggerate their reading habits. With a survey like the NEA's, which relies on self-reporting, there's no way to know for sure whether reading habits themselves were changed. It's as if there were a kind of Heisenberg uncertainty principle at work here. A government agency can either measure reading habits or intervene in them, but if it tries to do both, it runs the risk of measuring no more than the spread of its intervention message. (As I wrote in my article, the best way to measure reading is with a time-use survey, which is harder for respondents to fudge.)

Second, the new survey took place in a different month. In 2002, the NEA's survey took place in August. In 2008, it took place in May. If people had steel-trap minds, that wouldn't matter, but twelve months is longer than most people remember very accurately. I suspect that when you ask people about behavior over such a broad timespan, especially when you're focusing on the subset of people who are the liminal case—that is, on people for whom it was a toss of the coin whether they did or didn't read a work of literature in the past year—respondents may sometimes extrapolate from their sense of themselves in the present, rather than answer according to a comprehensive memory. If that's the case, then the month will influence their answer if the month happens to have any relation to the aspect of themselves relevant to the question. And in this case it does. Like it or not, literary culture in America is largely keyed to the school year. Search Google Trends for the word "literary," in fact, and you'll see a curve that has acute declines every summer and every Christmas. (You'll also see that it slumps slowly over time, but that's not relevant to my point here.) You'll see the same shape if you search for "poem" and "short story." (Intriguingly, the novel has some but not all of this characteristic wiggle, perhaps because it hasn't altogether surrendered to the academy.) I don't know in which month the 1982, 1985, or 1992 surveys took place. But it's possible that the improvement between 2002 and 2008 owes something to the difference between May and August.

Third, and finally, there's the matter of the years. When Americans were asked in August 2002 about their reading habits over the preceding twelve months, they were of course being asked about the year immediately following 9/11. That was a period when everyone's media intake was wildly disrupted. If you look at the NEA's graph of the percentage of adults who read literature between 1982 and 2008, the outlier isn't 2008. It's 2002.

NEA, Percentage of Adults Who Read Literature, 1982-2008, from Reading on the Rise (2009), p. 3

In fact, if you ignore the 2002 results, you're looking at a gentle but almost uncannily straight descending line. One possible explanation of the graph above: Reading has been declining in America for decades, but the 2002 results were worse than they ought to have been, because in the aftermath of September 11 the nation was panicked out of its usual literary diversions. Between 2002 and 2008, the interruption of 9/11 corrected itself, and many people returned to literature, but not all of them. The underlying decline, the part owed to secular causes, continued.

These are mere speculations, and I'm grateful to the NEA for providing the data that I'm reluctant to accept. Time will tell whether I'm just being a curmudgeon; inshallah, my pessimism will be belied. But right now I think it's too soon to pop the corks. If other indicators were favorable—in particular, if there were better prospects for newspapers or book publishers, or if anyone had figured out how to make enough money off of writing on the internet to subsidize lots of high-caliber investigative reporting—I might be willing to partake in the festivities. As things stand, though, I think the fate of reading is still a matter of concern.

Does media violence lead to real violence, and do video games impair academic performance?

Cross-posted from the University of Michigan Press blog.

"Twilight of the Books," an essay of mine published in The New Yorker on 24 December 2007, has been honored by inclusion in The Best of Technology Writing 2008, edited by Clive Thompson. When The New Yorker published my essay, I posted on my blog a series of mini-bibliographies, for anyone who wanted to dig into the research behind my article and try to answer for themselves whether television impaired intellect or whether literacy was declining (here's an index/overview to all these research posts). A month or so ago, when the University of Michigan Press, the publisher of The Best of Technology Writing 2008, invited me to write about my essay for their blog, I was afraid I didn't have any more to say. Also, alas, I was under deadline. But I have a breather now, and looking over my year-old notes, I realize that there were a couple of categories of research that I never posted about at the time, because the topics didn't happen to make it into my article's final draft.

This research tried to answer the questions, Does exposure to violence on television or in video games lead to aggressive behavior in the real world? and Do video games impair academic performance? I still think the questions are very interesting, though I must now offer my summaries with the caveat that they are somewhat dated. In fact, I know of some very interesting research recently published on the first question, some of which you can read about on the blog On Fiction. I'm afraid I haven't kept up with video games as closely, but I'm sure there's more research on them, too. I hope there is, at any rate, because when I looked, I found very little. (By research, in all cases, I meant peer-reviewed studies based on experimental or survey data, and not popular treatments.)

A few words of introduction. The historian Lynn Hunt has suggested in her book Inventing Human Rights that in the eighteenth century, the novel helped to change Europe's mind about torture by encouraging people to imagine suffering from the inside. As if in corroboration, some of the research summarized below suggests that the brain responds less sympathetically when it perceives violence through electronic media. As you'll see, however, there is some ambiguity in the evidence, and the field is highly contested.

1. Does exposure to violence on television or in video games lead to aggressive behavior in the real world?

  • In a summary of pre-2006 research, John P. Murray pointed to experiments in the 1960s by Albert Bandura showing that children tend to mimic violent behavior they have just seen on screen, and to a number of studies in the early 1970s that found correlations between watching violence and participating in aggressive behavior or showing an increased willingness to harm others. In 1982, a panel commissioned by the Surgeon General to survey existing research asserted that "violence on television does lead to aggressive behavior," and in 1992, a similar panel commissioned by the American Psychological Association reported "clear evidence that television violence can cause aggressive behavior." One mechanism may be through television's ability to convince people that the world is dangerous and cruel, in what is known as the "mean world syndrome." Murray claims that a twenty-two-year longitudinal study in Columbia County, New York, run by Huesmann and Eron, which was begun under the auspices of the Surgeon General's office, has linked boys' exposure to television violence at age eight to aggressive and antisocial behavior at age eighteen and to involvement in violent crime by age thirty; in fact, a 1972 study by Huesmann et al. did link boys' exposure at eight to aggressive behavior at eighteen, but the 1984 study cited by Murray linked violent crime at age thirty to aggressive behavior at age eight and said nothing about exposure to televised violence. In an unrelated study, when television was introduced in Canada, children's levels of aggression increased. [John P. Murray, "TV Violence: Research and Controversy," Children and Television: Fifty Years of Research, Lawrence Erlbaum Associates, 2007. L. Rowell Huesmann, Leonard D. Eron, Monroe M. Lefkowitz, and Leopold O. Walder, "Stability of Aggression Over Time and Generations," Developmental Psychology, 1984. For a synopsis of Huesmann's 1972 study, see Steven J. Kirsh, Children, Adolescents, and Media Violence: A Critical Look at the Research, Sage Publications, 2006, p. 208.]
  • A longitudinal study of 450 Chicago-area children was begun in 1977, when the children were between six and eight years old, and continued in 1992-1995, when they were between twenty-one and twenty-three years old. As children, the subjects were asked about their favorite television programs, whether they identified with the characters, and how true-to-life they thought the shows were. Fifteen years later, it emerged that watching violent shows, identifying with aggressive characters of the same sex, and believing that the shows were realistic correlated with adult aggression, including physical aggression. The effect was present even after controlling for such factors as initial childhood aggression, intellectual capacity, socioeconomic status, and parents' level of emotional support. (Note that the researchers classified the Six Million Dollar Man as a "very violent" show and the heroine of the Bionic Woman as an aggressive character.) [L. Rowell Huesmann, Jessica Moise-Titus, Cheryl-Lynn Podolski, and Leonard D. Eron, "Longitudinal Relations between Children's Exposure to TV Violence and Their Aggressive and Violent Behavior in Young Adulthood, 1977-1992," Developmental Psychology, 2003. Cf. Kirsh, p. 209.]
  • In a 2006 textbook about the relation between media violence and aggressive behavior, author Steven J. Kirsh notes that a 1994 meta-analysis of the link between television violence and aggression estimated the size of the effect to be r = .31. "The effect sizes for media violence and aggression are stronger than the effect sizes for condom use and sexually transmitted HIV, passive smoking and lung cancer at work, exposure to lead and IQ scores in children, nicotine patch and smoking cessation, and calcium intake and bone mass," Kirsh wrote. A 2004 meta-analysis found that the correlation between video game violence and aggressive behavior was r = .26. To put the effect sizes in perspective, Kirsh notes that they are greater than the link between testosterone levels and aggression, but weaker than the link between having antisocial peers and delinquency. In surveying the research on video games, Kirsh makes the point that there is little research as yet, and that most of it was done in what he calls the "Atari age," when the games were fairly innocuous; almost no one has experimentally tested the effects on children and teens of the new-generation, highly realistic and gory first-person shooter games. [Steven J. Kirsh, Children, Adolescents, and Media Violence: A Critical Look at the Research, Sage Publications, 2006.]
  • In a 2007 summary of research, three scientists asserted that there was "unequivocal evidence that media violence increases the likelihood of aggressive and violent behavior in both immediate and long-term contexts," and noted that the link between television violence and aggression had been proved by studies in both the laboratory and the field, and by both cross-sectional and longitudinal studies. Video games were not as well documented, but in the opinion of the scientists, the preliminary evidence suggested that their effect would be similar. Playing violent video games has been shown to increase physiological arousal. Measurements of skin conductance and heart rate show that people have less of an aversion to images of real violence if they have previously been exposed to violent television or violent video games. Measurements of event-related brain potentials (ERPs) and functional magnetic resonance imaging (fMRI) allow researchers to look with new precision at the magnitude of brain processes that occur at particular times and at the activation of specific regions of the brain. A 2006 study by Bartholow et al., for example, showed that exposure to violent video games reduces aversion to scenes of real violence, as measured by a blip of voltage that typically occurs 300 milliseconds after sight of a gory image. A 2006 study by Murray et al. (see below) showed that violent television scenes activated parts of the brain associated with emotion, memory, and motor activity. Yet another 2006 study, by Weber et al., showed that while players were engaged in violence during a video game, a brain region associated with emotional processing was suppressed, and one associated with cognitive processing was aroused, perhaps in order to reduce empathy and thereby improve game performance. In a 2005 study by Matthews et al., chronic adolescent players of violent video games scored the same as adolescents with disruptive behavior disorders on a test designed to assess a brain region responsible for inhibition and error correction. Attempting to explain the results of the various studies under review, the authors write: "Initial results suggest that, although video-game players are aware that they are engaging in fictitious actions, preconscious neural mechanisms might not differentiate fantasy from reality." [Nicholas L. Carnagey, Craig A. Anderson, and Bruce D. Bartholow, "Media Violence and Social Neuroscience," Current Directions in Psychological Science, 2007.]
  • While a functional magnetic resonance imaging (fMRI) device monitored their brain activity, eight children watched a video montage that included boxing scenes from Rocky IV and part of a National Geographic animal program for children, among other clips. The violent scenes activated many brain regions that the nonviolent scenes did not, mostly in the right hemisphere. These regions have been associated by other researchers with emotion, attention and arousal, detection of threat, episodic memory, and fight or flight response. The authors of the study speculate that "though the child may not be aware of the threat posed by TV violence at a conscious level . . . a more primitive system within his or her brain (amygdala, pulvinar) may not discriminate between real violence and entertainment fictional violence." In the activation of regions associated with long-term memory, the researchers saw a suggestion that the television violence might have long-term effects on the viewer. [John P. Murray et al., "Children's Brain Activations While Viewing Televised Violence Revealed by fMRI," Media Psychology, 2006.]
  • In a 2005 study, 213 video-game novices with an average age of twenty-eight were divided into two groups, and one group spent a month playing an average of 56 hours of a violent multi-player fantasy role-playing video game. Participants completed questionnaires to assess their aggression-related beliefs before and after the test month, and were asked before and after whether they had argued with a friend and whether they had argued with a romantic partner. The data showed no significant correlation between hours of game play and the measures of aggression, once the results were controlled for age, gender, and pre-test aggression scores. The authors note that there might be an effect too small for their study to detect, and that adults might be less sensitive to the exposure than children or adolescents. [Dmitri Williams and Marko Skoric, "Internet Fantasy Violence: A Test of Aggression in an Online Game," Communication Monographs, June 2005. Andrea Lynn, "No Strong Link Seen Between Violent Video Games and Aggression," News Bureau, University of Illinois at Urbana-Champaign, 9 August 2005.]
  • A 2007 book presented three studies of video-game violence's effect on school-age children. In the first study, 161 nine- to twelve-year-olds and 354 college students were asked to play one of several video games—either a nonviolent game, a violent game with a happy and cartoonish presentation, or a violent game with a gory presentation—and then to play a second game, during which they were told they could punish another player with blasts of noise (the blasts were not, in fact, delivered). Those who played violent games, whether cartoonish or gory, were more likely to administer punishments during the second game; playing violent games at home also raised the likelihood of punishing others. Children and college students behaved similarly. In the second study, 189 high school students were given questionnaires designed to assess their media usage and personality. The more often the students reported playing violent video games, the more likely they were to have hostile personalities, to believe that violence was normal, and to behave aggressively, and the less likely they were to feel forgiving toward others. The correlation between game playing and violent behavior held even when the researchers controlled for gender and aggressive beliefs and attitudes. The more time that students spent in front of screens (whether televisions or video games), the lower their grades. In the third study, 430 elementary school children were surveyed twice, at a five-month interval, about their exposure to violent media, beliefs about the world, and whether they had been in fights. Students were asked to rate one another's sociability and aggressiveness, and teachers were asked to comment on these traits and on academic performance. In just five months, children who played more video games darkened in their outlook on the world, and peers and teachers noticed that they became more aggressive and less amiable. The effect was independent of gender and of the children's level of aggression at the first measurement. Screen time impaired the academic performance of these students, too; they only became more aggressive, however, when the content they saw during the screen time was violent. [Craig A. Anderson, Douglas A. Gentile, and Katherine E. Buckley, Violent Video Game Effects on Children and Adolescents: Theory, Research, and Public Policy, Oxford University Press, 2007.]

2. Do video games impair academic performance?

  • In a 2004 survey of 2,032 school-age children, there were statistically significant differences in print and video-game use between students earning As and Bs and those earning Cs and below. On average, A-B students had read for pleasure 46 minutes and played video games for 48 minutes the previous day; C-and-below students had read for pleasure 29 minutes and played video games for 1 hour 9 minutes. Television watching seemed constant between the groups. [Donald F. Roberts, Ulla G. Foehr, and Victoria Rideout, Generation M: Media in the Lives of 8-18 Year-Olds, The Henry J. Kaiser Family Foundation, March 2005, page 47.]
  • A 2007 book presented results of a study in which 189 high school students were given questionnaires designed to assess their media usage and personality. The more time that students spent in front of screens (whether televisions or video games), the lower their grades. In a related and similar study, 430 elementary school children were surveyed twice, at a five-month interval, and screen time impaired the academic performance of these students, too. [Craig A. Anderson, Douglas A. Gentile, and Katherine E. Buckley, Violent Video Game Effects on Children and Adolescents: Theory, Research, and Public Policy, Oxford University Press, 2007.]

UPDATE (27 Feb. 2009): For ease in navigating, here's a list of all the blog posts I wrote to supplement my New Yorker article "Twilight of the Books":

Notebook: "Twilight of the Books" (overview)
Are Americans Reading Less?
Are Americans Spending Less on Reading?
Is Literacy Declining?
Does Television Impair Intellect?
Does Internet Use Compromise Reading Time?
Is Reading Online Worse Than Reading Print?
I also later talked about the article on WNYC's Brian Lehrer Show and on KUER's Radio West.
And, as a bonus round: Does media violence lead to real violence, and do video games impair academic performance?