Commentators and historians have long recorded the regrettable moments of high-profile individuals. Until recently, however, no such record existed for the everyday person. Two attributes of Internet data made it possible: its permanence and its accessibility. In contrast with the limited distribution and narrow scope of previous records, the record of the Internet is all-encompassing and eternal; web pages are rarely deleted, and even those that are deleted are often preserved by caching services like Google Cache and the Internet Archive. At the same time, search engines index data across the web, making that data easily accessible. In using social media, commenting on online articles, or writing in discussion boards, the ordinary person leaves in the eternal record of the Internet a trail of photos and writings that he or she sometimes regrets. Meanwhile, news organizations have moved online, and their reports of the embarrassments and criminal activities of some are only a quick Google search away. With these developments, we have become accustomed to Googling ourselves for embarrassing posts or tweets. When someone else posted the content, however (a news article, a copy of one’s content shared by others, or a cached web page, for example), it is often difficult to remove. The result is a loss of control over one’s information, pictures, and other data that can carry adverse social and professional consequences for, potentially, the rest of one’s life. See, for example, the case of Lindsey Stone.
The European solution to this problem is the so-called right to be forgotten. As currently implemented, the right allows a data subject to request that a search engine remove particular search results when the search terms include his or her name. The search engine is obligated to remove the results if the linked data is “no longer necessary” or “inadequate, irrelevant or no longer relevant.”
The right to be forgotten finds its statutory authority in the Data Protection Directive (Directive 95/46/EC), adopted by the European Parliament in 1995. The Directive established a privacy framework in the European Union, requiring data “controllers” to “respect” the privacy rights of data subjects. It also granted data subjects the right to restrict data “processing” when he or she has “justified” and “legitimate grounds.” The Directive had little bearing on search engine activity until May 2014, when the European Court of Justice ruled against Google in the case of Mario Costeja González. Costeja González had requested the removal of a search result that linked to an auction notice for his repossessed house. The Court of Justice ruled for him, holding that Google was a data controller subject to the Directive and that generating search results involved data “processing” subject to the restriction requests of data subjects. The court then required Google to accept search result removal requests and to grant them unless Google is “justified” in disseminating the data.
Since the 2014 decision, Google has received over 220,000 requests and granted roughly 40% of them. The rest have been rejected or referred to an internal appeals process. In deciding whether it is “justified” in disseminating the data, Google considers factors including whether the data subject is a public figure, whether the data was published by the data subject or by another, whether the data is about criminal charges, and the recency of the data. If a data subject disagrees with Google’s decision, he or she has the option of appealing the decision to European privacy regulators.
The attraction of the right to be forgotten is this: just as human beings forget another person’s embarrassing actions or writings over time and allow that person to move on with his or her life, so too should the Internet forget old or irrelevant information. Given continental Europe’s strong tradition of privacy rights, it was natural for the European Parliament to create broad privacy rights in the Directive and for the Court of Justice to extend those rights to search engine results.
However, the right to be forgotten has downsides. For example, although Google appears to have handled the volume of requests well, smaller companies might find such requests difficult to process. Furthermore, by asking search engines to decide whether disseminating content is “justified,” the right places them in a quasi-public role, even though data subjects can appeal a rejection to regulatory entities. Perhaps most importantly, the right restricts the speech of search engines and content producers by requiring them to remove links to even true information merely because the data subject is uncomfortable with others knowing it. This, of course, also diminishes the public’s access to information. Therefore, though the right to be forgotten may pass muster in a continental tradition where privacy trumps free speech, it falls flat in an Anglo-Saxon tradition that holds free speech paramount.
The European Union is pushing to expand the right to be forgotten. Currently, Google removes search results only from its European domains (e.g., google.fr for France), to which European residents are directed by default. Because searching a non-European domain like google.com easily circumvents the removals, there are calls to extend them to all of Google’s domains. Although this would thwart circumvention, applying removals globally would affect Google searches conducted inside other jurisdictions, including the United States. The step can therefore be expected to meet strong resistance from the United States, setting up a potential conflict between American and European courts.
In addition to increasing the geographic scope of the right to be forgotten, the European Union is also considering expanding its subject matter scope. The right is currently implemented only against search results. However, the Data Protection Regulation proposed in 2012 would require the erasure of data that is no longer necessary or for which the data subject has withdrawn consent. The Regulation targets content itself, rather than links to the content: instead of merely requiring Google to remove the link to an embarrassing news article, the data subject could force the news organization to delete the article itself. The Regulation would place the burden of proving the data’s necessity on the data controller and would require the controller to inform third parties of the erasure. To enforce these policies, the Regulation would fine a nonconforming data controller up to 5% of its global revenues.
These are worrying developments for any company on the Internet. Some, like Wikipedia, are fighting back. We can expect more to join the fray as users continue to submit link removal requests.
On Tuesday, March 17, 2015, Simpson Thacher gave a lunch talk sponsored by the Berkeley Center for Law and Technology on the so-called right to be forgotten. This blog post summarizes themes from that talk.