Do Video Games Dream of Electric Speech?

Tim Wu had an interesting op-ed column in Wednesday’s New York Times: Free Speech for Computers? Wu’s op-ed is in part a response to a paper co-authored by Eugene Volokh, entitled “First Amendment Protection for Search Engine Search Results.” (See also Volokh’s response; criticism by Tim Lee and Julian Sanchez.) Volokh and his co-author, Donald Falk of Mayer Brown, argue that search results, for example those produced by Google (which commissioned the paper), should be treated as speech worthy of First Amendment protection. (Hail, Search King!) Wu argues that this argument threatens to “elevate our machines above ourselves” by “giv[ing] computers . . . rights intended for humans.” The purpose of the First Amendment, Wu writes, is “to protect actual humans against the evil of state censorship.” But computers don’t need that protection: “Socrates was a man who died for his views; computer programs are utilitarian instruments meant to serve us.” Wu concludes: “The line can be easily drawn: as a general rule, nonhuman or automated choices should not be granted the full protection of the First Amendment, and often should not be considered ‘speech’ at all.”

This debate intrigues me, not so much for how it applies to Google (although that is interesting too), but for how it applies to video games. Video games are, at the moment, a rather constrained format for expression; every bit of freedom accorded to the player to act within the game threatens the game maker’s ability to tell a coherent story that flows naturally from the player’s actions. Game designers use all sorts of tricks to have their cake and eat it too, maintaining the sensation of player agency while simultaneously imposing a predetermined narrative outcome on the results. But most of the tricks require constraints: doors the player can’t open, walls that can’t be climbed, enemies who must be killed, non-player characters who are too busy to talk. Even a simple game like Tic-Tac-Toe, with at most nine moves in sequence by two players, has 255,168 possible paths through the game. A video game designer cannot possibly account in advance for everything a player might choose to do; within the space of a few choices, the decision tree would become impossibly huge. (Once in a while you can circumvent a barrier the game designer didn’t intend; the effect is that of pulling back the curtain on the Wizard of Oz, or more precisely, of suddenly breaking through the fourth wall from the inside.)
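That Tic-Tac-Toe figure can be checked by brute force. Here is a short script (my own, for the curious) that counts every legal game, treating play as over once a side completes a line or the board fills:

```python
# Count every distinct Tic-Tac-Toe game, stopping each line of play as soon
# as one side completes a row, column, or diagonal, or the board is full.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if that player has completed a line, else None."""
    for a, b, c in WIN_LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def count_games(board, player):
    """Count all distinct move sequences from this position to game over."""
    if winner(board) is not None:
        return 1
    moves = [i for i in range(9) if board[i] is None]
    if not moves:
        return 1  # draw: board full, no winner
    total = 0
    for m in moves:
        board[m] = player
        total += count_games(board, "O" if player == "X" else "X")
        board[m] = None
    return total

print(count_games([None] * 9, "X"))  # 255168
```

(Ignoring early wins, the count would be 9! = 362,880; games cut short by a three-in-a-row bring it down to 255,168.)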

One of the more significant constraints manifests itself in the genres that games come in: most computer games are set in environments that, for one reason or another, are free of other people. War zones, alien planets, zombie infestations, war zones on alien planets filled with zombies, etc. This preference for depopulated settings means that several film and television genres are left unexplored by video games: romance, obviously, but also such action staples as detective stories and Westerns. (There are notable recent exceptions, of course: Red Dead Redemption, L.A. Noire, and Heavy Rain; but what’s most notable about them is that they are clearly exceptions.) One reason is that interacting with non-player characters is very hard to either predict or constrain naturally. It’s particularly hard if the interaction includes conversation; not only is there no way for players in most games to converse realistically with other characters, but even if there were, the unlimited number of things a player could say would make it nearly impossible to model NPC conversation, given that everything an NPC says has to have been recorded in advance by a voice actor somewhere. The few games that allow conversation either force you to choose from a finite list of things to say, or have NPCs frequently respond with brush-offs: “I’m too busy to answer your question right now.” That’s why, in many games, the vast majority of the living things you encounter are enemies, and the few non-enemies you encounter have rigidly defined roles (squad mate, guard) or are placed beyond your reach.
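The “finite list” approach is, in essence, a hand-authored dialogue tree: every player choice and every NPC reply exists in the game’s data before the player ever speaks, and anything unanticipated falls through to the brush-off. A toy sketch (all lines and names here are my own illustration, not from any particular game):

```python
# A hand-authored dialogue tree. Every branch the player can take was
# written in advance; unauthored input falls through to a brush-off node.
dialogue = {
    "start": {
        "npc": "What do you want, stranger?",
        "choices": {
            "Ask about the sheriff": "sheriff",
            "Ask about the mine": "mine",
        },
    },
    "sheriff": {"npc": "Rode out yesterday. Hasn't come back.", "choices": {}},
    "mine": {"npc": "Closed since the cave-in. Stay away.", "choices": {}},
    "brushoff": {"npc": "I'm too busy to answer your question right now.",
                 "choices": {}},
}

def respond(node_key, player_line):
    """Follow the tree if the line was authored; otherwise brush the player off."""
    next_key = dialogue[node_key]["choices"].get(player_line, "brushoff")
    return next_key, dialogue[next_key]["npc"]

print(respond("start", "Ask about the sheriff")[1])
print(respond("start", "What is the meaning of life?")[1])
```

The structure makes the voice-acting constraint concrete: the NPC can only ever say one of the four strings above, so every recorded line maps to exactly one node.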

One way around these constraints is to design a game that does not impose a storyline at all, except perhaps at a very high level of generality. Mission-oriented campaign games, such as those found in most MMORPGs as well as single-player wargames, lure the player into *imagining* narrative details connecting the various missions together, such as a character’s ultimate sacrifice in achieving some objective. The venerable X-COM was famous for this sort of thing, due in no small part to the simple mechanic of letting the player choose names for each of his or her soldiers. Or the game might require no internal story at all, as with sports games, multiplayer combat games like Counter-Strike, or puzzle games like Tetris.

There’s another possibility, however, and this ties back (finally!) to the Wu-Volokh debate. In the future it may be feasible for video games to automatically generate realistic responses to player actions, removing the need for game designers to specify everything up front. Want to climb over that wall and see what’s on the other side? The game will let you go there; it just won’t help you achieve whatever task you currently have to complete. Once computer technology is able to realistically mimic human speech, this automatic generation of responses might extend to conversations as well, removing the problem of player muteness. But here’s an interesting question: in portions of the game where the maps and NPC behaviors are all generated automatically on the fly (that is, not specified in advance anywhere in the game code), does the game developer own those sequences? Wu’s article is concerned with the First Amendment implications of computer speech, but I’m wondering about the copyright implications: are automatically generated sequences the game developers’ expression for purposes of copyright law?
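A sketch may make “generated on the fly” concrete: the shipped code contains only rules and a seeded random number generator, yet the same seed always reproduces the same map, so the particular map exists nowhere in the code itself. Everything here (names, tile symbols, probabilities) is my own illustration:

```python
import random

def generate_map(seed, width=8, height=6):
    """Produce a small tile map from nothing but rules and a seed.

    '.' is open ground, '#' is an impassable wall, '@' is the player start.
    The map is never stored anywhere; it is re-derived from the seed on demand.
    """
    rng = random.Random(seed)  # private generator, so results are reproducible
    tiles = [["." if rng.random() < 0.7 else "#" for _ in range(width)]
             for _ in range(height)]
    tiles[0][0] = "@"
    return ["".join(row) for row in tiles]

for row in generate_map(seed=42):
    print(row)
```

The copyright puzzle in the text maps onto this directly: the developer clearly wrote `generate_map`, but the specific arrangement of walls a given player’s seed produces was never fixed by anyone in advance.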

This is not exactly a new question, nor is it limited to video games. The BBC ran a story earlier this week discussing a computer program that is being trained to compose music, based on human feedback on its (at first) randomly generated efforts. Imagine that in the end it comes up with a passable symphony. (I’m skeptical of the somewhat lurid question posed by the headline: “Is This the End of the Composer?”) Who, if anyone, owns the musical composition? (For more on random music generators, see the discussion in Alan Durham, The Random Muse: Authorship and Indeterminacy, 44 Wm. & Mary L. Rev. 569 (2002); and also Jake Linford’s post on Prawfsblawg last year.)
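The loop the story describes resembles an interactive genetic algorithm: random melodies are generated, rated, and the best-rated survive to seed the next generation. A minimal sketch, in which a scoring function stands in for the human listeners; every name, parameter, and the scoring rule are my illustrative assumptions, not the actual program’s design:

```python
import random

def random_melody(length=16):
    """A melody is just a list of chromatic pitch classes (0-11)."""
    return [random.randrange(12) for _ in range(length)]

def mutate(melody, rate=0.1):
    """Copy a melody, randomly reassigning a few notes."""
    return [random.randrange(12) if random.random() < rate else n
            for n in melody]

def rating(melody):
    # Stand-in "listener": prefers small steps between consecutive notes.
    # In the real program, this judgment comes from human feedback.
    return -sum(abs(a - b) for a, b in zip(melody, melody[1:]))

population = [random_melody() for _ in range(20)]
for generation in range(50):
    population.sort(key=rating, reverse=True)
    survivors = population[:5]                     # keep the best-rated
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(15)]  # breed the rest

population.sort(key=rating, reverse=True)
print(rating(population[0]))
```

The authorship question tracks the code: the listeners supply only up/down judgments, the programmer supplies only the loop, and the melody that emerges was composed, in any ordinary sense, by neither.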

This sort of question received a flurry of attention in the 1980s, perhaps in the expectation that it would soon be a hot issue, and sporadic attention since. But the anticipated conflicts have not yet arisen to any significant degree; there just hasn’t been much controversy over who gets to use randomly generated works, probably because such works have not been terribly useful yet. But before attention faded, there was a fairly vigorous discussion, with the best and most thorough analysis in an early article by Pam Samuelson, Allocating Ownership Rights in Computer-Generated Works, 47 U. Pitt. L. Rev. 1185 (1986). Samuelson’s article was in part a response to the 1978 CONTU Commission report, which declared that “[t]he obvious answer is that the author is one who employs the computer.” The CONTU Commission itself had been given its charge to look into the issue as a result of questions raised in the 1960s, when computers were massive kludges of tape drives and vacuum tubes that occupied entire rooms. For example, the 1965 report of the Register of Copyrights noted:

The crucial question appears to be whether the “work” is basically one of human authorship, with the computer merely being an assisting instrument, or whether the traditional element of authorship in the work (literary, artistic or musical expression or elements of selection, arrangements, etc.) were actually conceived and executed not by man but by a machine.

The CONTU Commission, in finding the answer “obvious,” did not appear to spend much time considering the scenario where the computer program spins out, with minimal human direction, a piece of music or writing that appears to be highly creative. In part, that’s because the Commission correctly concluded that the ability of computers to engage in such creation was fairly “speculative” in 1978. But Samuelson in her article undertook a much more detailed analysis, and ultimately wound up in much the same place: she argues that the author of a computer-generated work should be considered to be the user of the program, not the developer, and not no one, even if the user inputs minimal creative material into the program.

I partially agree; the developer gets the rights to the program, but not to what the program unpredictably produces. But I’m less certain about whether the user can claim a copyright in all instances either. The outcome there seems to me to depend on what, exactly, the user and the program are doing. If the user is supplying the program with inputs in order to generate the outcome (using the program as a tool, in other words), then it seems quite right that the user would solely own any copyright that results, just as a user of a word-processing program owns the documents he or she writes with it. That’s true, I think, even if the tool gives some assistance in crafting the final product, suggesting grammatical improvements or (in the future) automatically editing text to improve flow. Alternatively, if the user is not inputting anything, or at least not inputting anything expressive, then it would seem odd to give the user any ownership over the results. But that doesn’t mean the application programmer should have it either. A set of rules that produces outputs in response to inputs is, I’ve argued elsewhere, the (better) definition of a “system” for copyright purposes, meaning that automated music generators, or video game map or dialog generators, are systems over which copyright does not extend. And, although I don’t know that I’ve seen a case on this exact point, it follows that the creator of the system should not be able to claim copyright in the results of applying the system either. Otherwise Baker v. Selden makes little sense: Baker gets off the hook for (allegedly) copying Selden’s system, but he’s back on the hook (at least under later doctrine) for contributory infringement for all the infringing uses made by users of Selden’s bookkeeping system. Both Baker and Section 102(b) (“In no case does copyright protection . . . extend . . .”) suggest that there is no liability at all.

But that still leaves the user who operates a program but does not “input anything expressive,” in my words. Why not grant copyright ownership to the user? Samuelson argues (or did, 25 years ago) that there are several pragmatic reasons counseling in favor of granting protection to the user, even where the user’s contribution is minimal. Such material is likely to be claimed by someone; it may be difficult to distinguish it from material in which there was sufficient creative input; and copyright protection will incentivize distribution, if not creation, which is still important. But Samuelson’s article was written pre-Feist, and post-Feist there is, by hypothesis, no minimally creative hook on which the user can hang the hat of copyright ownership. A user who generates an additional area in a video game just by exploring there, or generates a song by pressing a button, seems as removed from the expression that results as the Second Circuit found sports players to be from the plays that result from their actions in NBA v. Motorola.

That still leaves the practical complications Samuelson identified twenty-five years ago. In particular, once computers can write decent poems or essays or songs, and a person submits a poem or essay or song for registration, how can we be confident that it is actually a work of authorship? But it seems that in such a world, where teachers will not be able to be certain who authored homework assignments and life might be accompanied by a persistent automatic soundtrack, copyright ownership will be only one concern among many.

Cross-posted at Madisonian.net.

[Related post: Speech by Proxy.]

This Post Has 3 Comments

  1. Tom Kamenick

    I find this really interesting for several reasons. One, I used to work in the video game industry and have a few friends who still do.

    Second, it reminds me of a Roald Dahl short story about a failed writer who invents an automatic writing machine that can churn out novels in a few hours with the “author” operating the machine through all kinds of pumps, levers, switches, and other doohickeys that change different variables in the story.

    Third, because as a senior in college (2001 or so), as a final project in a modern music course (music education degree), I worked with a computer programming major to create a program that would randomly generate a piece of serial music.

    Serialism was an early-twentieth century musical form which worked from a “tone row” consisting of the 12 chromatic tones (C, C#, D, D#, etc.) in random order. That tone row would then be used to generate an additional 11 tone rows according to a specific formula, and the piece of music would then be composed (usually with the composer choosing instrumentation, rhythms, dynamics, length, etc.) with those pitches.

    I took that concept to the extreme and used the generated tone rows to control every aspect of the composition. The random sets determined the instrument, every note’s pitch and octave, every note’s duration, what dynamic level would be used next and how long each lasted, and probably a few other things I’ve forgotten.
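    The row derivation Tom describes can be sketched in a few lines. I’ve assumed simple transposition as the “specific formula” (real serial practice also derives rows by inversion and retrograde):

```python
import random

NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

# The prime tone row: the 12 chromatic pitch classes in random order.
prime = random.sample(range(12), 12)

# Derive the prime plus its 11 transpositions: shift every pitch class
# up by n semitones, wrapping around the octave (mod 12).
rows = [[(p + n) % 12 for p in prime] for n in range(12)]

for row in rows:
    print(" ".join(NOTES[p] for p in row))
```

    Each derived row is still a permutation of all 12 tones, which is the property the technique depends on; a full serialist matrix would add the inverted and reversed forms as well.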

    I’m really interested to read this article now, especially since it was being written right around the same time I was working on this project. Thanks for the article!

  2. Gordon Hylton

    I have always felt that Ridley Scott’s Blade Runner was morally objectionable because of its “androids are human too” thesis. Put me down for no First Amendment or IP rights for computers.

  3. Bruce E. Boyden

    Thanks, Tom and Gordon, for your comments. Gordon, that’s an interesting take on Blade Runner. I’ve always seen the movie as a bit ambiguous on what the status of the androids is. At least the non-Rachael ones are not fully human. Leon fails the empathy test at the beginning of the movie, and is irrationally attached to his “precious photos,” risking capture to go retrieve them. Rachael fails the test eventually too, although it takes longer. The suggestion is that the interior mental life of these characters is somehow incomplete; but without access to their thoughts it’s impossible for them or us to say exactly what the difference is.
