Those people were a kind of solution: the future of books and copyright, part 2

François Bonvin, 'A Woman Reading, after Pieter Janssens Elinga,' 1846-47, Metropolitan Museum of Art

It’s bracing to spend time with people who know in their hearts that your way of life is going the way of the horse and buggy.

In an earlier post, I described a few legal concepts in vogue at In Re Books, a conference about law and the future of the book that I attended on 26 and 27 October, and I characterized the conference as haunted by the ghost of the late Google Books settlement. In this post, I’d like to relay what the conferencegoers had to say about the future of publishing, including the problem of how to price e-books.

Most of the conferencegoers seemed to be lawyers, law professors, or librarians. One of the exceptions, the author James Gleick, noted that everyone present was united by the love of books—and then added that the love sometimes took the form of a wish to have the books for free. But the lawyers themselves didn’t seem to think of their doomsaying as in any way volitional. Some of them even seemed to look upon the publishing industry with pity; they hoped it would soon be out of its misery.

Consider, for example, the battle being waged between Amazon and traditional publishers over the price of e-books. Most people in publishing see their side as waging a crusade in the name of literature. Their version of the story goes like this: A few years ago, Amazon had managed to establish a near-monopoly on e-books by offering low prices. Amazon in many cases sold e-books to customers for even less than the wholesale price that publishers demanded, losing money for the sake of market share. Publishers were alarmed. If customers came to expect such low prices, and if Amazon’s monopoly remained unbroken, publishers would be forced in time to lower their wholesale prices radically. Editors, designers, publicists, and sales representatives would lose their jobs, and books would no longer be made with the same level of care—if publishers managed to remain in business at all. When Apple debuted the iPad in 2010, publishers saw a chance to rebel. They agreed with Apple to sell e-books on what was called the “agency model”: publishers were to set the retail price, and Apple was to take a percentage, the way it did with the apps sold through its iTunes store. With many titles, the publishers were agreeing to sell e-books to Apple for a wholesale price lower than the one they had been getting from Amazon, but the power to control retail price seemed worth the sacrifice. The publishers gave Amazon a choice: accept the agency model or lose access to books. Amazon complained that the publishers were abusing their “monopoly” over books under copyright, and the retailer briefly tried to coerce publishers by erasing the “buy” buttons from the Amazon pages of the publishers’ print titles. In the end, though, Amazon gave in, and over the next couple of years, Amazon’s market share in e-books fell. Today the Nook and the iPad offer the Kindle stiff competition. In April 2012, however, the Department of Justice accused the publishers and Apple of antitrust violations.
A few publishers settled, for terms that required them to allow Amazon to discount their e-books as before. Others are still fighting the charges. Amazon, meanwhile, has become a publisher itself—of serious books, as well as vanity titles. How, most people in publishing want to know, can the Department of Justice fail to see that Amazon is trying to drive traditional publishers out of business?

The lawyers at the In Re Books conference were able to see that, as it happens. They just didn’t see it the way people in publishing do. They saw, rather, a historical process of Hegelian implacability, and they saw the publishers as desperate characters who had resorted to possibly illegal maneuvers in a futile attempt to prevent it. “You know, the agency model,” said Christopher Sagers, a law professor at Cleveland State University who specializes in antitrust, “we used to just call it price-fixing.” Sagers allowed that a recent Supreme Court ruling, the Leegin case of 2007, was somewhat indulgent toward so-called “vertical” price-fixing, which consists of a series of contracts between a manufacturer and its distributors and retailers, along the vertical axis of the supply chain, that allows a manufacturer to determine retail prices of its goods. (Apple famously prohibits its retailers from discounting its products without permission, for example.) But “horizontal” price-fixing remains illegal, as do certain strains of “vertical” price-fixing, Sagers said, and the Department of Justice thought that the publishers and Apple were guilty along both axes. It was no defense, Sagers pointed out, to say that the publishers were choosing to lose money. The law didn’t care about that. Nor was it a defense to say that publishing is special. Throughout history, Sagers said, companies have responded to antitrust accusations by claiming to be special, and Sagers didn’t think publishing was any more special than, say, the horse-and-buggy-making industry had been. In Sagers’s opinion, publishing is suffering through the advent of a technological change that is going to make distribution cheaper and, through price competition, bring savings to consumers. Creative destruction is in the house, and there is no choice but to trust the market. 
“Someone will figure it out,” said Sagers, “it” being a new economic engine for literature, and he apologized for sounding like Robert Bork by saying so. As for the charge that Amazon was headed for a monopoly, Sagers’s reply was, in essence, Well, maybe, but the answer isn’t to let a cartel set prices.

The legal question at issue is somewhat muddied by the fact that publishers are allowed to set the retail prices of books and even e-books in a number of other countries, where publishing is heralded as special. Germany, France, the Netherlands, Italy, and Spain allow the vertical price-fixing of books, as Nico van Eijk, of the University of Amsterdam, explained at the conference. The United Kingdom, Ireland, Sweden, and Finland, on the other hand, do not. Van Eijk thought he saw a pattern: The warm and emotional countries indulge their literary sector, while the cold and as it were remorseless ones subject it to the free market. The nations that allow “resale price maintenance,” as it’s called, in publishing justify the legal exception on three grounds. They believe that it brings a bookstore to every village, that it makes possible a wide selection of books in those bookstores, and that it enables less-popular books to be subsidized by more-popular ones. In other words, the argument for resale price maintenance rests largely on the contribution that local, independent bookstores make to cultural life. And bookstores do thrive in countries where publishers may set retail prices. The trouble is that the same arguments don’t work as well with e-books, as van Eijk pointed out. E-bookstores are virtually ubiquitous, thanks to widespread internet access, and every e-book available for sale is available in almost every e-bookstore. As for cross-subsidization, van Eijk dismissed it as already doubtful even as a justification for printed books. (In fact, though several people I spoke to at the conference seemed either unaware of it or not to believe in it, the current publishing system does allow for cross-subsidization. Most books of trade nonfiction wouldn’t get written without it.
Publishers advance substantial sums to writers who propose books that sound promising, and publishers can afford the bets because they’re buying a diversified portfolio: if the biography of Henry VIII doesn’t make it big, maybe the cultural history of the Mona Lisa will. If publishers are driven out of business, only heirs and academics are likely to be able to put in the years of research necessary to write a book of history, unless the market comes up with a new funding mechanism.) Most European countries seem skeptical of allowing resale price maintenance for e-books, but “we’ll always have Paris,” van Eijk joked. French law, he explained, not only allows but requires fixed pricing for e-books. Moreover, France insists on extraterritoriality: even non-French booksellers must comply if they want to sell to French customers.
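The portfolio logic behind cross-subsidization can be made concrete with a toy expected-value sketch. All of the figures below are invented for illustration; they are not drawn from any publisher's accounts.

```python
# Toy model of a publisher's diversified advance portfolio.
# Every number here is hypothetical, for illustration only.

advances = [100_000] * 10          # publisher pays ten equal advances
hit_prob = 0.2                     # two books in ten become hits
hit_return = 600_000               # net revenue from a hit
miss_return = 40_000               # net revenue from a miss

expected_revenue = len(advances) * (
    hit_prob * hit_return + (1 - hit_prob) * miss_return
)
total_advances = sum(advances)

# The hits cover the misses: the portfolio is profitable in expectation
# even though 80 percent of titles lose money individually.
print(expected_revenue, total_advances)  # 1520000.0 1000000
```

The point of the sketch is only that diversification makes the individual bets affordable: no single advance has to pay for itself, so long as the occasional hit earns several times its cost.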

Niva Elkin-Koren, of the University of Haifa, predicted a “world of user-generated content,” where the tasks of editing and manufacturing books will be “unbundled,” and “gatekeeping,” which now occurs when Manhattan editors turn down manuscripts, will take place through online reviews after the fact. She seemed to see the “declining role of publishers,” as she put it, as a liberation, but I’m afraid I found her vision bleak. In the future, will we all be reading the slush pile? Jessica Litman, of the University of Michigan, also thought little of publishers, accusing them of angering libraries and gouging authors. As a bellwether, Litman pointed to the example of a genre author whom she likes who now sells her books online. I found myself wondering if Litman was extrapolating from an experience with academic and textbook publishers, some of whom do bully authors and have resorted to extorting the captive markets of university libraries and textbook-buying students. In my experience, trade publishers go to great pains to keep prices low and authors happy.

In the last panel session, John Thompson, of the University of Cambridge and the author of Merchants of Culture, presented a masterful analysis of the economics of publishing in America and Britain. Thompson began by surveying the forces of change in the last couple of decades. In the 1990s, the rise of bookstore chains killed off independent bookstores. The introduction of computerized stocking systems brought greater control over when and where books appeared in stores. Once upon a time, paperbacks were publishing’s bread and butter, but mass-marketing strategies originally devised for paperbacks were applied to hardcovers, and in time hardcovers became the moneymakers. Literary agents grew more powerful. A handful of corporate owners consolidated control.

Thanks to these changes, said Thompson, today there are large publishers and many tiny ones, but very few that are middling in size. That’s because a midsize publisher misses out on the economies of scale available to a large one, and misses out on the barter-circle of favors that indie presses are willing to exchange with one another. Large publishers are preoccupied with “big books,” which Thompson defined as “hoped-for best-sellers,” because their corporate owners demand annual growth of 8 to 10 percent, even though the overall market for books is stagnant. At a large publisher, the only way to keep your job is to pursue big books, however mathematically doomed the pursuit may be in the larger scheme of things. Big-book status depends, in Thompson’s formulation, not so much on fact as on “a web of collective belief”; big books are identified by the “expressed enthusiasm of trusted others.” Certain people—often, literary agents—become brokers in this economy of belief, enabling them to extract higher prices. Thompson called the result “extreme publishing.” Every year, the reasonable sales predictions aren’t good enough, and editors are forced to try to “close the gap,” that is, to come closer to the sales figures that their corporate overlords are demanding—a task for which only big books are big enough even to be plausible. Meanwhile, as bookstores are shuttered, it’s becoming harder and harder to bring new titles to customers’ attention. In hopes of making a big book, publishers pay to feature their books in store windows, where a new book has about six weeks to prove itself. If it shows signs of doing well, publishers have become adept at “pouring fuel on a flame,” as Thompson put it. But they’ve also become ruthless at killing off the weak. About 30 percent of books are returned from bookstores to publishers, and most are pulped.
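Thompson's "gap" compounds quickly, as a toy calculation shows. The growth target comes from his talk; the revenue index and time horizon are my own hypothetical choices.

```python
# How the "gap" compounds in a flat market: hypothetical figures.
base_revenue = 100.0   # index a publisher's current annual revenue at 100
growth_target = 0.09   # corporate owners demand roughly 9% growth per year
years = 5

target = base_revenue
for year in range(1, years + 1):
    target *= 1 + growth_target
    gap = target - base_revenue   # extra revenue needed if the market is flat
    print(f"year {year}: target {target:.1f}, gap {gap:.1f}")
```

After five years the target is about 154 on an index of 100, meaning that roughly a third of the demanded revenue has to come from sales that the stagnant market, by assumption, will not supply; hence the pursuit of big books.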

In the United States, said Thompson, publishers face agents who are able to demand higher advances for their authors. In the United Kingdom, where the Net Book Agreement, which allowed publishers to set retail prices, collapsed in the 1990s, publishers face powerful retailers like Tesco who not only sell at a discount but demand cuts on wholesale prices.

As for e-books, Thompson stressed that the market is changing fast enough to make a fool out of anyone claiming to know what it will do next. He noted that when e-books were introduced, most analysts expected business titles to be the pioneers, but instead genre fiction led the way. Forty to fifty percent of romances, science fiction novels, and thrillers are now sold in digital form. (I thought I saw a hint of an explanation for the divergence in a talk given by Stuart M. Shieber, a professor of computational linguistics at Harvard. After analyzing the pros and cons of print books and e-books—including such factors as resolution, weight per reading unit, capacity for random access, and pride of ownership—Shieber predicted that when display technology has been perfected, “E-book readers will be preferable to books” but “Books will still be preferable to e-books.” If Shieber is right, then perhaps what differentiates the two formats is where a reader’s attachment lies. If your attachment is to the experience of reading rather than to a particular set of titles, you’re more likely to prefer an e-reader. But if your attachment is to particular books, you’ll prefer to read them in print. After all, at the extreme, if all you want to do is re-read a single text, you probably won’t bother with an electronic device.) But even literary fiction is shifting, Thompson noted. Twenty-five percent of the sales of Jonathan Franzen’s Freedom were e-books, and fifty percent of Jeffrey Eugenides’s The Marriage Plot.

Though he stressed the hazards of guessing, Thompson concluded by making a number of short-term predictions. He thought Amazon would continue to grow and bookstore chains to wither. He foresaw more consolidation, as weak publishers fold and impatient corporate owners decide to get out of the publishing business. As bookstores vanish, they will be taking their windows and display tables with them, and it will become harder and harder to introduce new books to readers, a battle that will have to be fought online. Thompson expected that different kinds of books will continue to shift from print to digital formats at different speeds. Price deflation for e-books will be perhaps publishers’ greatest challenge, and publishers will very likely be forced to reduce costs in order to remain profitable—shedding staff and limiting themselves even more rigidly to big books than they do now. Nipping at their heels, all the while, will be an army of small presses and start-ups, many of whom will be trying to come up with new kinds of “disintermediation”—new ways to abridge a book’s journey from writer to reader.

What does it all mean? In looking over these notes, I find myself wondering if copyright is meaningful in the digital world without some power to set retail prices. The rigorous application of free-market logic to issues of copyright sounds slightly off-key to me. It is nowhere written that the law has to defer to market economics, which copyright, by its very nature, defies. No market left to its own devices would come up with copyright. The whole point of it is that society has decided that the written word is special, and has recognized that perfect competition in the literary sphere quickly leads to prices so low that no writer can make a living. (An important subsidiary point is that society demands, in exchange for granting this exceptional economic protection, a temporal limit to a copyright’s term, but we’re not litigating that aspect of the case today.) Amazon’s publicists had a point when they lamented that copyright is a monopoly. In the market for a particular work of literature, it is one, a legal one. It is authorization to sell a work of literature at a higher per-unit price than the market would support if everyone were free to print it. Authorization alone would be meaningless, however. The government also has to prevent a publisher’s competitors from selling the same work at a lower price. In her remarks at the conference, Elkin-Koren predicted that as books turn into e-books, they will move from being commodities to being services, and publishing will merge with retailing. “There is no difference between a bookseller, a publisher, and a library,” she said. But if she’s right, and copyright is to retain any force, shouldn’t the power to set a book’s price at its “first sale” be extended to the price of the license sold to the reader-consumer? The extension might be necessary to preserve the spirit of copyright.
And given the ease with which digital copies can be made and shared, it might also be necessary to retain beyond the “first sale” of an e-book the copyright controls that are exhausted upon the first sale of a printed book. That may sound inelegant, but there’s no reason to think that the best way for law to foster literature is going to be natural-looking. Copyright never has been natural, and it never will be. The challenge is to find the least amount of legal protection adequate to retaining publishing as a viable business.

The future of books and copyright

View of the Interior of the Finishing Room, in Jacob Abbott, 'The Harper's Establishment, or How the Story Books Are Made'

This past weekend, just before the hurricane, I attended In Re Books, a conference about law and the future of the book convened by James Grimmelmann at the New York Law School. Playing the role of Luddite intruder among the futurologists, I gave a talk about the hazard that digitization may pose to research and preservation. Though there were a few librarians, leaders of nonprofits, and even writers present, most of my fellow conference attendees were lawyers who specialize in copyright, and I discovered that copyright lawyers see the world rather differently than do the writer-editor types with whom I usually rub shoulders. They don’t expect publishing as I know it to be around much longer, for one thing. I thought I’d try to write up my impressions of the time I spent in their company. Please keep in mind that I’m not a lawyer myself. I’m just a visitor who went to the fair.

A specter was haunting the conference: the ghost of the settlement that Google Books tried to make with the Authors Guild several years ago. That settlement, slain by Judge Denny Chin in 2011, had attempted to obtain digital rights to what are known as orphan works, books that are protected by copyright even though the author or publisher who holds the copyright can no longer be found. The settlement had proposed to set up a collective licensing system that would charge for digital access to all books under copyright, parented and orphaned. Proceeds from orphan works, it was suggested, might be shared with findable authors, if no actual rightsholder could be found and if anything was left over after the rights management organization was done paying for itself. The proposal was far from perfect. Why should Google get to sell orphan works and nobody else? Why should the profits from orphan works go to people who didn’t write them? It turns out that the death of the agreement is not much lamented by the copyright lawyers. When Minda Zetlin, president of the American Society of Journalists and Authors, asked, “Is there anyone better to represent dead and unfindable authors than living and findable ones?” the retort from Pamela Samuelson, a copyright law professor at the University of California at Berkeley, was sharp: “I’m a better representative of an author like me,” Samuelson said, her implication being that an academic author publishes to further knowledge and build a reputation, not to make money. Roy Kaufman, who works at the Copyright Clearance Center, a collective licensing agency founded in 1978 in response to the disruptive technology known as the photocopier, was at pains to distinguish his employer’s system from the one advanced by Google Books and the Authors Guild. The Copyright Clearance Center is opt-in and nonexclusive, he assured the audience. His message was studiously non-threatening: mass digitization could involve rightsholders.
Maybe it could take the form of collective licenses arranged between social-media networks and publishers. Facebook, for example, could pay the New York Times for articles and photos that its users posted.

Kaufman’s support for collective licensing, however cautious, was atypical. Most at the conference were against it. Samuelson thought it inadvisable in general, as did Matthew Sag, of Loyola University Chicago, who justified his dislike by pointing to the failures and subsequent reboots of a compulsory licensing system recently set up in the United States for the webcasting of music.

What, if anything, will take the ghost’s place? At the conference, a leading contender was the idea that fair use might solve the orphan works problem—an idea recently advanced by Jennifer Urban of the University of California at Berkeley. Fair use, as I wrote in a review-essay for The Nation earlier this year, is an exception to copyright written into American law in 1976. It’s because of fair use that a reviewer doesn’t need to ask permission before he quotes from a book, and it’s because of fair use that an Obama campaign commercial can quote a Romney speech, or vice versa, without paying for it. In the last few years, courts have been more and more generous in how they define fair use, perhaps because Congress seems so unlikely to help sort out the tangles in copyright. In a recent case between the Authors Guild and a digital-book repository called HathiTrust, for example, a court found that three of the four things that HathiTrust wanted to do with digital texts were fair use: data-mining, indexing, and providing access to the blind. America’s 1976 copyright law specifies four factors to consider in determining fair use—the nature and purpose of the new use, the nature and purpose of the original work, the amount taken, and the impact on the original creator’s income—but in the last couple of decades, judges have focused on whether a new use is “transformative” of the old content it borrows from. Whatever purpose Thomas Pynchon had in mind when he wrote Gravity’s Rainbow, for example, he probably didn’t imagine computerized search of his novel along with a myriad of others in order to find patterns of word usage. That’s a completely new use, a transformation of the purpose of his words unlikely to interfere with the money he expected to make from his novel, so the judge in the HathiTrust case found it fair.

Sag and Samuelson favored Urban’s idea, which was also mentioned by Doron Weber of the Alfred P. Sloan Foundation, which funds the Digital Public Library of America. Since I hadn’t read Urban’s paper, I asked Sag what kind of transformation lay behind her deployment of fair use. There wasn’t any, he explained, to my surprise, and now that I look at the paper, I see what he means. Urban thinks libraries and universities should be able to provide digital facsimiles for their patrons to read—exactly the same use for which the books were originally published. She also frankly admits to wanting the right to reproduce entire works, not just samples or snippets of them. But she argues that such use would be fair nonetheless, based on the four factors conventional in fair-use analysis. She maintains that libraries and universities are nonprofit institutions, who would be offering access to the texts as a noncommercial service for such public-spirited purposes as research and preservation. (For this part of her argument to hold water, would a university library need to open itself to the public in a general way? Right now the services of a university library, however worthy, are for the most part bestowed only on its own students and faculty, and their character is not purely altruistic.) And she argues that the orphanhood of an orphaned work is more important than previous analysts have seen: “Orphan works,” she writes,

represent a clear market failure: there is no realistic possibility of completing a rights clearance transaction, no matter how high the costs of that transaction, because one party to the transaction is missing.

Therefore market harm, the fourth factor of fair-use analysis, is nugatory, in Urban’s opinion. The trouble with her argument here, I think, is that it’s impossible to know whether a so-called orphan work is really an orphan or merely a work whose parents haven’t shown up yet. If the parents do exist, the market harm to them is real, and it would be as wrong for a court to give the value of their work to Urban’s university library as to give it to Google or a third-party author. Urban seems to be transferring the copyright rather than carving out an exception to it, and I’m afraid that only Congress, in its capacity as the sovereign power of the United States, has the authority to dispose of someone else’s copyright, in an act of eminent domain. Without any claim of a transformation, it seems unlikely to me that Urban will convince a court to define fair use so broadly that it includes reproducing whole works for much the same purpose that they were originally published. But this is just my opinion. The copyright lawyers seem excited by her idea, and as yet no one knows how far it will go. It’s up to the courts. As Jule Sigall, of Microsoft, noted in his presentation, the orphan-works problem has passed through the Age of Legislation (2005-2008) and the Age of Class Action (2008-2011), and we are now living in the Age of Litigation.

The other big new idea at the conference was that the first-sale doctrine might be extended to e-books. That sentence will sound like gibberish to the uninitiated, so let me back up and explain. The first-sale doctrine is a legal concept that limits the control that copyright affords. Specifically, it limits copyright control to the period before an item under copyright is first sold. Once you buy an ink-on-paper book, for example, you’re free to re-sell the book on eBay at a fraction of the cost. Or give it to your boyfriend. Or take an X-Acto blade to it and confuse people by calling the result art. You don’t have the right to sell new copies of the book, but you’re free to do almost anything else you like with the specific copy of the book that you bought. Without the first-sale doctrine, used bookstores would be in constant peril of lawsuits.

Two speakers at the conference told the story of Bobbs-Merrill v. Straus, the 1908 case that established the first-sale doctrine. On the copyright page of the novel The Castaway, the publisher Bobbs-Merrill set the retail price at one dollar and threatened to sue discounters for breach of copyright. Macy’s sold the book for eighty-nine cents anyway, triggering a lawsuit, and the court ruled that copyright afforded Bobbs-Merrill control over the book’s price only up to the moment when Bobbs-Merrill, as a wholesaler, sold copies to Macy’s, which then became free to set whatever retail price it wanted. Ariel Katz, of the University of Toronto, noted that the story is usually told as if the case involved an attempt at what’s known as “vertical” price-fixing—that is, an attempt by a wholesaler to fix the prices charged by independent retailers further down the supply chain. But Katz maintains that it was actually a story of “horizontal” price-fixing—that is, an attempt at collusion in price-fixing by companies that are supposed to be in competition with one another, wholesalers in collusion with wholesalers, and retailers with retailers. The Straus brothers who ran the Macy’s department store were “retail innovators,” Katz explained, who sold a wide variety of goods, including books, at steep discounts, thereby angering publishers and traditional booksellers. The members of the American Publishers Association publicly swore to refuse to supply retailers who discounted the retail price of books, and the American Booksellers Association publicly swore to boycott any publishers who didn’t toe the American Publishers Association’s line. It was the Straus brothers who first went to court, accusing the publishers and booksellers of antitrust violations, but the outcome of this first case was ambiguous: the court ruled that publishers could only set the prices of books that were under copyright. 
It wasn’t until the 1908 case that the court limited price-setting even of copyrighted books to the period before their first sale.

(As Katz pointed out, it isn’t obvious why publishers and booksellers should have been willing to collude in fixing prices, and he proposed an economic explanation that I wasn’t quite able to follow. He suggested that the price-fixing was an attempt to solve a challenge first discovered by Ronald Coase: if you sell a durable good and you’re a monopolist, you soon find that your monopoly isn’t as profitable to you as you’d like it to be, because you’re in competition with yourself—that is, you’re in competition with all the durable goods you’ve already sold, which suppress demand. The only way to keep prices from falling is to convince consumers that you’ll never let them fall. Katz argues that the limit to booksellers’ shelf space helped publishers make credible their promise never to lower prices, and that in the digital world, where shelf space is unlimited, no similar promise will be as credible. He ran out of time before explaining in detail how this mechanism would work, and as I say, I didn’t quite follow. I also wasn’t quite certain that books qualify as durable goods. Most people, once they’ve read a book, prefer to read a new one instead of re-reading the one they just finished, a fact that suggests that books are more like loaves of bread than refrigerators. But I may be missing something.)
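Katz's Coasean point can at least be caricatured in code. The sketch below is my own invention, not his model: a myopic durable-goods monopolist charges, in each period, the highest valuation among buyers who don't yet own the good, and the buyer values are made up.

```python
# Toy illustration of the durable-goods problem (hypothetical numbers):
# once high-value buyers own the good, the only way to sell more copies
# is to cut the price, so the seller competes with its own past sales.

buyer_values = [10, 8, 6, 4]   # willingness to pay of the potential buyers
prices_charged = []

remaining = sorted(buyer_values, reverse=True)
while remaining:
    # Each period the myopic seller charges the highest remaining valuation.
    price = remaining[0]
    prices_charged.append(price)
    remaining = [v for v in remaining if v < price]  # that buyer exits

print(prices_charged)  # price ratchets down: [10, 8, 6, 4]
```

Coase's insight, as Katz invoked it, is that buyers who foresee this ratchet will simply wait, which is why the monopolist needs some credible device, such as scarce shelf space, for promising that prices will never fall.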

Aaron Perzanowski, of Wayne State University, framed the story of Bobbs-Merrill v. Straus in the context of a common-law tradition of rights exhaustion—the word exhaustion here having the sense of a thing coming to its natural end. In Perzanowski’s opinion, the right to control price is not the only aspect of copyright that expires when an item under copyright is sold. The owner of a work has purchased the use and enjoyment of it, Perzanowski argued, including perhaps the rights to reproduce the work and to make derivative works. Perzanowski made explicit a further leap that remained mostly implicit in Katz’s talk: Shouldn’t the first-sale doctrine apply to e-books, too? As a contractual matter, e-books are rarely sold, in order to prevent exactly this eventuality. In the fine print, it transpires that what distributors purchase from publishers, and what readers purchase from distributors, are mere licenses. But if courts were to recognize readers of e-books as owners, the courts could grant readers the right to re-sell and a limited right to reproduce what they had purchased. Jonathan Band, of Policy Bandwidth, in his assessment of recent legal victories won by university libraries on the strength of fair-use arguments, noted that he saw the first-sale doctrine as likely to be important in future disputes over digital rights. Libraries, he said, felt that they had already purchased the books in their collection and ought to be able to convey them digitally to their patrons.

Extending the first-sale doctrine to e-books might make libraries happy, but it would horrify publishers. Right now, only two of the six largest American publishers allow libraries to lend all of their e-books, and one of those two sells licenses that expire after twenty-six check-outs. Librarians sometimes become quite indignant over the limitations and refusals. “Are publishers ethically justified in not selling to libraries?” one asked at the conference. A recent Berkman Center report, E-Books in Libraries, offered some insight into publishers’ reluctance:

Many publishers believe that the online medium does not offer the same market segmentation between book consumers (i.e., people who purchase books from a retailer) and library patrons (i.e., people who check out books from a public library) that the physical medium affords.

When was the last time you checked out a printed book from the library? My own impression is that gainfully employed adults rather rarely do. (At least for pleasure reading. Research is a different beast.) Maybe they prefer to buy their own books, a convenience that ready spending money lets them afford. Or maybe it’s to signal their economic fitness to romantic partners, or to broadcast their social status more generally. But whatever the reason, the fact is that publishers don’t sacrifice many potential sales when they sell printed books to libraries, because library patrons by and large aren’t the sort who purchase copies of books for themselves.

The case seems to be different with e-books, though, especially if patrons are able to check them out from home. E-book consumers signal their economic status by reading off of an I-pad XII instead of a Kindle Écru; the particular e-book that they’re reading is invisible to the person on the other side of the subway car, so it might as well be a free one from the library. That means that e-book sales to libraries cannibalize sales to individual consumers.

Publishers have tried charging libraries higher prices for e-books. They’ve tried introducing technologically unnecessary “friction,” such as a ban on simultaneous loans of a title, or a requirement that library patrons come in person to the library to load their reading devices. The friction frustrates library patrons and enrages librarians, and even so, it hasn’t been substantial enough to reassure the publishers who are abstaining from the library market altogether. If the future of reading is digital, the market-segmentation problem raises a serious question about the mission of libraries.
In his remarks at the conference, the writer James Gleick, a member of the Authors Guild who helped to negotiate its late settlement with Google Books, said that he doubted that every lending library needed to be universal and free, and that he wished the Digital Public Library of America, which is still in its planning stages, were trying to build into its structure a way for borrowers to pay for texts under copyright. The challenge of bringing e-books into public libraries turns out to be inextricable from the larger problem of how authors will be paid in the digital age.

I’ll try to report what the lawyers think of that larger problem in a later post.

UPDATE: Part two here.

Walking the plank one last time

A rough graph of how copyright and piracy affect supply and demand curves

Over at Slate, I’ve written a response to Matt Yglesias’s reply to my criticism of his ideas about piracy and copyright.

In the last paragraph of my new post, I qualify my assessment of piracy’s impact on copyright by wondering “if I’m drawing the graphs correctly.” Should anyone want to inspect those graphs, here are a couple! As I’ve said repeatedly, I’m no economist, so they could be riddled with errors. I didn’t draw the supply curve as a straight, upward-sloping line because I’ve always understood that in the book-publishing world, publishers are willing to sell books cheaper if they can sell more of them, and editors spend much time and energy trying to guess whether demand will be sufficient to justify a low price, or insufficient, requiring them to charge a high one. This may be an elementary error for all I know; if it is, please accept my apologies and straighten out my supply curve. If I’m right about the shape, though, it means that the surplus that a producer can rely on, even if he doesn’t have copyright protection, is just a tiny horizontal slice, lying like a pancreas under a liver, hard to see unless you click on the graph and view it full size.

I drew the demand curve with a hump in it because it’s my impression that the audience for a given work of art has a natural size, and its members won’t be deterred by a slight increase in price or much encouraged by a slight decrease. I could be wrong there, too, of course.

The inset that I drew in the upper right corner, by the way, is intended to show how unimpeded piracy apportions the economic value of a work of art. As I write in my latest Slate piece, unimpeded piracy “cedes almost the whole triangle under the demand curve to consumers—transferring just a sliver along the bottom to the pirates themselves and leaving virtually nothing for legitimate publishers.”
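For anyone who would rather check the arithmetic than squint at my pencil marks, the same shapes can be sketched numerically. Everything below is invented for illustration: the curve formulas and every number in them are assumptions, not data. One extra assumption beyond the drawing: I give the supply curve a slight upturn at very high volume so that the equilibrium is cleanly defined; the downward slope over most of its range is the feature described above.

```python
import numpy as np

# Invented, illustrative curves -- not data.
q = np.linspace(0.01, 100.0, 10_000)  # quantity axis

def demand_price(q):
    # Humped demand: a core audience (around 60 copies here) that is
    # fairly insensitive to moderate prices, then a steep drop-off.
    return 20.0 / (1.0 + np.exp((q - 60.0) / 8.0)) + 2.0

def supply_price(q):
    # Minimum acceptable price per copy: falls with volume over most
    # of its range, with a slight upturn at very high volume.
    return 3.0 + 12.0 / (1.0 + 0.2 * q) + 0.05 * q

# Equilibrium: the quantity at which the two curves cross.
i = int(np.argmin(np.abs(demand_price(q) - supply_price(q))))
q_star, p_star = q[i], demand_price(q[i])

# Surpluses over the quantity actually sold. The producer surplus is
# only the band where the market price sits above the supply curve --
# the thin slice in the graph -- while the consumers' triangle is
# everything between the demand curve and the market price.
dq = q[1] - q[0]
sold = q[: i + 1]
consumer_surplus = np.sum(np.clip(demand_price(sold) - p_star, 0, None)) * dq
producer_surplus = np.sum(np.clip(p_star - supply_price(sold), 0, None)) * dq
print(f"equilibrium: quantity ~ {q_star:.0f}, price ~ {p_star:.2f}")
print(f"consumer surplus ~ {consumer_surplus:.0f}, "
      f"producer surplus ~ {producer_surplus:.0f}")
```

With these made-up numbers, the producer’s reliable surplus comes out as a small fraction of the consumers’ share, which is the qualitative point of the graph; change the formulas and the proportions change with them.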

I wondered about that claim after filing my article, and found myself doodling another graph yesterday afternoon to speculate more methodically about what happens when pirated work competes with copyrighted work. It seems to me that what you need to do is see where the demand curve meets the total supply curve, which is the sum of the legitimate supply curve and the pirate supply curve. Those curves have to be added along the axis of quantity, not price, so if nothing is impeding piracy, don’t bother going any further—the little inset that I drew in the graph above is fine.

If piracy is “taxed,” however, by social disapproval, legal jeopardy, or some other inconvenience, the pirate supply curve gets shifted upward along the price axis, and when you add together a taxed pirate supply curve and a legitimate supply curve, you get something that looks a little like a sideways tuning-fork prong, in darkish pencil in the graph below. A tax on pirates makes it possible for legitimate publishers to stay in the marketplace. If the tax is high enough to raise the effective price of a pirated work above the copyrighted price, the legitimate publishers lose nothing, comparatively speaking. If the effective price doesn’t rise that far but does rise above the equilibrium price that would obtain in the absence of copyright and in the absence of piracy (a somewhat notional distinction, IMHO), producers can’t get as large a surplus as they would under copyright, but they can get something. If the tax doesn’t raise the effective price of a pirated work above the notional equilibrium, however, it looks as if producers get no surplus at all.
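Yesterday’s doodle can also be redone in arithmetic. Again, nothing below is real: the curve formulas, the per-copy numbers, and the three “tax” levels are all invented for illustration. The sketch expresses each supply curve as a quantity offered at each price, so that legitimate and pirate supply can be summed along the quantity axis, and then finds where the total meets demand.

```python
import numpy as np

def demand_qty(p):
    # Humped demand, mirrored: a core audience of about 80 readers
    # who keep buying at moderate prices, tailing off as price rises.
    return 80.0 / (1.0 + np.exp((np.asarray(p, float) - 8.0) / 2.0))

def legit_qty(p):
    # Legitimate supply: nothing below a per-copy floor of 4,
    # then rising with price.
    p = np.asarray(p, float)
    return np.where(p > 4.0, 15.0 * (p - 4.0), 0.0)

def pirate_qty(p, tax):
    # Pirate supply: copies are nearly free to make, so pirates flood
    # the market at any price above the "tax" (stigma, legal risk,
    # inconvenience) that raises their effective price.
    p = np.asarray(p, float)
    return np.where(p > tax, 500.0 * (p - tax), 0.0)

def equilibrium(tax):
    # Sum the two supplies along the quantity axis and find the price
    # at which total supply meets demand; report the legitimate
    # publishers' share of the copies sold there.
    prices = np.linspace(0.01, 20.0, 20_000)
    total = legit_qty(prices) + pirate_qty(prices, tax)
    i = int(np.argmin(np.abs(total - demand_qty(prices))))
    p_star = prices[i]
    share = float(legit_qty(p_star)) / max(float(total[i]), 1e-9)
    return p_star, share

for tax in (0.0, 5.0, 12.0):
    p_star, share = equilibrium(tax)
    print(f"tax {tax:4.1f}: price ~ {p_star:5.2f}, legit share ~ {share:.0%}")
```

With no tax, the price collapses toward zero and the legitimate share with it; with a middling tax, publishers sell some copies at a depressed price; and with a tax above the copyrighted price, the pirates drop out of the market entirely, which is the three-case story in the paragraph above, under these made-up numbers.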

Advisory: These graphs should be accorded no authority other than as samples of what happens when a humanities-type person tries to puzzle out an economics problem.

A rough graph of copyrighted work competing against taxed pirated work