ABA Journal Story on YouTube and Copyright Infringement

I’m quoted in a story in this month’s ABA Journal on the interaction between copyright law and websites that host user-generated content, such as YouTube. YouTube is defending itself right now in a lawsuit pending in the Southern District of New York brought by the media company Viacom. I analyzed the Viacom complaint in a post over on Prawfsblawg shortly after it was filed—all the way back in March 2007. (I recently checked the docket, and the case appears to still be in discovery. The gears of litigation turn slowly.)

The quote in the article is from a later post looking at the implications of a recent Northern District of California decision—Io v. Veoh—for the YouTube case. While most observers saw the defense win in Io as good news for YouTube, I saw elements of the court’s reasoning that could pose problems for the site. That’s because I’ve long argued that the key good fact for Viacom, and possibly the main reason it sued, was not simply the wide availability of Viacom content on YouTube, but this allegation, from Paragraph 7 of the complaint:

Moreover, YouTube has deliberately withheld the application of available copyright protection measures in order to coerce rights holders to grant it licenses on favorable terms. YouTube’s chief executive and cofounder Chad Hurley was quoted in the New York Times on February 3, 2007, as saying that YouTube has agreed to use filtering technology “to identify and possibly remove copyrighted material,” but only after YouTube obtains a license from the copyright owner…. Those who refuse to be coerced are subjected to continuing infringement.

Viacom’s best argument, in my view, is that an ISP cannot offer filtering technology to only some copyright owners, withholding it from those that refuse to grant licenses, and retain its Section 512(c) immunity. There are two questions to be resolved here.

First, there’s a provision in Section 512 that would seem to explicitly excuse ISPs from liability for failing to use technological tools to police their systems. Section 512(m) provides that “Nothing in this section shall be construed to condition the applicability of subsections (a) through (d) on … a service provider monitoring its service or affirmatively seeking facts indicating infringing activity….” Section 512(m) is captioned, “Protection of Privacy,” which would seem to indicate that the concern was to eliminate an obligation on the part of ISPs to monitor individual usage. But if YouTube’s filtering technology works by, say, blocking all uploads that match a certain fingerprint, that would seem not to be “monitoring” within the scope of Section 512(m).
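To make that concrete, here is a minimal sketch, in Python, of what such blanket filtering might look like. Everything in it is hypothetical: a plain content hash stands in for the perceptual fingerprints real systems use, and all names are invented. The point is only that the check operates on the content itself, not on any individual user’s activity:

```python
# Hypothetical sketch of fingerprint-based upload filtering. A plain
# SHA-256 content hash stands in for a real perceptual fingerprint.
import hashlib

# Reference fingerprints supplied by rights holders (assumed input).
REFERENCE_FINGERPRINTS = {
    hashlib.sha256(b"registered work #1").hexdigest(),
    hashlib.sha256(b"registered work #2").hexdigest(),
}

def fingerprint(upload: bytes) -> str:
    """Stand-in for a perceptual fingerprint of an uploaded file."""
    return hashlib.sha256(upload).hexdigest()

def accept_upload(upload: bytes) -> bool:
    """Reject any upload that matches a registered work.

    The decision turns only on the content itself, not on who
    uploaded it or what that user has done before -- the sense in
    which blanket filtering arguably is not "monitoring" within
    the scope of Section 512(m).
    """
    return fingerprint(upload) not in REFERENCE_FINGERPRINTS
```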

Even if filtering is “monitoring,” Viacom has another response. Its suit seeks to condition YouTube’s ability to take advantage of Section 512(c) not on YouTube’s monitoring or affirmatively seeking facts indicating infringement, but rather on its offering “monitoring” tools to some copyright owners but not others. Section 512(m) doesn’t bar liability for such behavior. For instance, while Section 512(m) says there’s no duty to affirmatively seek facts indicating infringement, it’s clear that if an ISP does affirmatively seek such facts, and then ignores them, it will lose the protection of the Section 512(c) safe harbor. Similarly, while there’s no obligation to begin monitoring, if the ISP does monitor, but intentionally withholds that monitoring from demonstrably infringing content, Section 512(m) shouldn’t stand in the way of an infringement claim.
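Continuing the hypothetical sketch above (again, all names invented), the conduct Paragraph 7 alleges amounts to adding one gate in front of that filter, so that it is consulted only for owners who have granted a license:

```python
# Hypothetical sketch of the conduct alleged in Paragraph 7: the same
# filter, consulted only for rights holders who have granted a license.
LICENSED_PARTNERS = {"OwnerA"}  # owners who agreed to the ISP's terms

# Each registered fingerprint maps to the owner who supplied it.
FINGERPRINT_OWNER = {
    "fp-of-owner-a-work": "OwnerA",
    "fp-of-viacom-work": "Viacom",  # no license on file
}

def accept_upload(fp: str) -> bool:
    """Apply the available filter only where the owner has licensed."""
    owner = FINGERPRINT_OWNER.get(fp)
    if owner is None:
        return True  # unrecognized content passes through
    # Matching content is blocked for licensed partners; for everyone
    # else the available filter is withheld and the upload goes up.
    return owner not in LICENSED_PARTNERS
```

On that picture, the ISP plainly can identify and block the content; it simply declines to do so for owners who refuse its terms, and that choice is what I argue Section 512(m) does not protect.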

Even if Viacom succeeds in defeating the application of Section 512(m), that only eliminates a potential block to its liability claim; it doesn’t by itself show that YouTube can’t claim Section 512(c) immunity on its own terms. That’s the second of the two major issues the YouTube case raises: does discriminatory policing of the system eliminate an ISP’s immunity under Section 512(c)? I think it may, for two reasons. First, as suggested in Io v. Veoh (and my post on that decision), implementing a filtering system may demonstrate a “right and ability to control [the infringing] activity,” which is half of the showing necessary to prove vicarious liability for infringement. Vicarious liability is excepted from the Section 512(c) safe harbor. Second, implementing a filtering system but applying it to only some cases of repeated infringement would seem to constitute a failure to “act[ ] expeditiously to remove, or disable access to, the [infringing] material” after becoming “aware of facts or circumstances from which infringing activity is apparent,” namely, the same facts and circumstances that lead YouTube to filter other owners’ content.

This discussion has been pretty technical, but if Paragraph 7 of the complaint above is true, I think it makes sense to hold YouTube and other ISPs liable. Section 512(c) and the other safe harbors were premised on the immense burden that would be imposed on ISPs if they were forced to manually review every single upload or file in transit for potential infringement. The reason intermediaries bore such a burden in the past was that automated publication was not feasible; human review was necessary for all sorts of reasons other than screening for infringement.

But now the technology seems to be shifting again. To the extent that filtering technologies become feasible (and are actually deployed), that indicates that we now have automated review that can keep up with automated publishing. Now it is the copyright owner that faces a comparatively heavy burden of tracking down each individual case of infringement, while the ISPs have tools that would easily block it in transit. If that’s the case, then it seems correct to say that the ISPs cannot hold back those tools from some copyright owners but not others—and it seems to me that Section 512 actually says that.
