
NON-CONSENSUAL PORN AND THE RESPONSIBILITIES OF ONLINE INTERMEDIARIES

NICOLAS SUZOR,[*] BRYONY SEIGNIOR[†] AND JENNIFER SINGLETON[‡]

This article considers the legal options for the victims of non-consensual distribution of sexually explicit media — sometimes known as ‘revenge porn’. The Australian Law Reform Commission has called for Australia to introduce a new tort for serious invasions of privacy, and the Senate Legal and Constitutional Affairs References Committee has recently reinforced the need for stronger penalties. A private member’s Bill was introduced in the last federal Parliament, but has since lapsed. Each of these proposals focuses primarily on the wrongful acts of the perpetrator. As a deterrent and a strong signal of social opprobrium, they may be partially effective. They do not, however, consider in detail how victims may be able to seek some relief once material has already been posted online. In this article, we consider explicitly what role internet intermediaries should play in responding to abuse online. The challenge in developing effective policy is not only to provide a remedy against the primary wrongdoer, but to impose some obligations on the platforms that host or enable access to harmful material. This is a difficult and complex issue, but only by engaging with these processes are we likely to develop regulatory regimes that are reasonably effective.

I INTRODUCTION: THE SCOPE OF THE PROBLEM

This article considers how to regulate the non-consensual distribution of intimate images and videos. This problem, although not new, has been magnified in recent times by a series of social changes brought by new technologies — ubiquitous digital cameras, the ease of distributing content online and the commonplace use of cloud services to store personal information.[1] We consider how the law can and ought to change to deal with new threats posed by the use of new technologies.

The contexts in which people may lose control over their intimate images are many and varied. In some cases, it might be that a person who received or recorded an image with the consent of the subject later breaches that trust by sharing it more broadly. The term ‘revenge porn’ was popularised from the time the website ‘Is Anyone Up?’ was founded in late 2010.[2] The site featured thousands of sexually explicit images of people, predominantly women, accompanied by identifying information and belligerent comments.[3] These images were usually submitted to the site by an ex-lover without the consent of the person depicted, as a form of ‘revenge’ for a break-up. At the height of its popularity, Is Anyone Up? received 30 million page views per month.[4] The site has since been shut down,[5] but there are still many outlets where explicit images are shared without the permission of the person depicted.

In other cases, the attackers may be wholly unknown to the victim. In a series of attacks in 2014, attackers broke into the Apple iCloud accounts of hundreds of people, including several dozen celebrities, and released hundreds of intimate images to the public at large (known as ‘the Fappening’).[6] Shortly after the iCloud celebrity attacks, another cloud-based leak saw attackers release over 100 000 private images sent through the Snapchat app.[7] The second attack, dubbed ‘the Snappening’, targeted a third-party website (snapsaved.com) that allowed users to surreptitiously save images sent through the Snapchat app.[8]

The term ‘revenge porn’ is potentially misleading in dangerous ways.[9] The term is too narrow to cover the circumstances in which people suffer harm outside of an intimate relationship. For example, ‘the Snappening’ was not ‘revenge’ in any sense of the word, as the attackers were not even known to the victims. Perhaps of more concern, the term encourages victim blaming by presupposing some wrongdoing on the victim’s part — that the abusive act of releasing intimate images is somehow done in ‘revenge’ for some perceived wrong.[10] Alternative terms to ‘revenge porn’ include ‘non-consensual distribution of private sexual material’ or ‘non-consensual sharing of intimate images’. More broadly, the phenomenon is one facet of the larger problem of ‘technologically facilitated sexual violence’.[11]

For victims, the non-consensual posting and sharing of sexually explicit imagery is a devastating problem. Victims are often subject to harassment and abuse, particularly if they are female.[12] As images are posted, the primary invasion of privacy often cascades into severe ongoing shaming of women’s bodies and sexuality by those who comment upon and spread the images across networks.[13] Victims report serious harms that stem from both public slurs and a sense of powerlessness to stop the spread of either the images or the comments.[14] Where victims are easily identifiable (and the images show up in search results for their names), victims have also reported significant abuse online and offline, the loss of professional and educational opportunities and exposure to stalking, as well as increased risk of harm and violence when they speak out.[15] This can cause a crisis of identity for victims, as they lose the ability to control how they are presented to the world.[16]

More generally, the problem of non-consensual sharing of intimate images occurs within a broader context of sexual and domestic violence. Explicit images are increasingly used by sexual partners ‘as a tool to threaten, harass and/or control both current and former partners.’[17] Abusers deploy the threat of releasing images, either broadly or directly to the victim’s employers,[18] or immediate or extended family in order to intimidate their victims — including, in some cases, to prevent the victim from bringing formal complaints of domestic violence.[19] When images are circulated by partners on social networks and through sites that are made up of people the victim knows, the material not only has the capacity to quickly reach a large audience, but the harms are also direct and more personalised.[20] The reputational effects, in these cases, are thus not the more abstract fear that images are indexed and searchable on the wider internet, but the direct and certain knowledge that they are accessible to acquaintances of the victim.[21] The stigma and shame that is frequently attached to women’s bodies and sexuality[22] can cause immense harm for victims.[23] Tragically, there have been cases reported in the United States (‘US’) and Canada where young women have committed suicide when their images were disseminated without their consent.[24]

A Civil Remedies and Criminal Offences

Much of the focus of legislators in responding to non-consensual sharing of intimate images has been centred on the introduction of new criminal and civil remedies that target the primary abusive act. Currently, in Australia, civil law provides few remedies for victims. There is no general individual right to privacy in Australia.[25] The Australian Law Reform Commission’s (‘ALRC’) successive recommendations for the introduction of a statutory tort of serious invasion of privacy[26] have been unanswered for the past nine years.[27] Victims might have a remedy under stalking offences, but generally only where the conduct is repeated and causes fear or alarm.[28] Defamation law is unlikely to provide an avenue for compensation for the reputational harm that results from the publication of imagery now that truth is an unqualified defence in all states.[29] An action in copyright requires the victim to be the author of the photo[30] — which means victims only have a remedy where their selfies are leaked and not when the photo is taken by a current or ex-partner.

Victims will often have a cause of action under the equitable doctrine of breach of confidence,[31] at least against threatened disclosure and at the stage where the imagery is first posted.[32] This will likely apply both in cases where images are captured within the context of an intimate relationship and where images are taken without consent from hacked accounts or devices.[33] Once an image has been made public to a sufficiently wide audience, however, it will likely lose its confidential character — which means that victims have no real recourse against secondary distributors.

There are existing offences at both state and federal levels that are sufficiently broad to criminalise the non-consensual distribution of intimate imagery. Section 474.17 of the Criminal Code Act 1995 (Cth) prohibits the use of communication networks to menace, harass or cause offence to another.[34] This is an extremely broadly-worded provision[35] that carries a maximum sentence of three years’ imprisonment. The offence is certainly broad enough to cover the non-consensual distribution of intimate images online, but, according to the Australian Federal Police, it has not been widely used in relation to the non-consensual sharing of intimate images to date,[36] although there are some unreported judgments,[37] and charges under the provision are more likely to be dealt with summarily.[38]

Each state and territory in Australia has an offence prohibiting the distribution or publication of obscene or indecent material.[39] These offences can be used to address ‘revenge porn’,[40] although their suitability in this context has been questioned. The offences are a somewhat awkward fit for the harm in question, and there is a good argument that the language of ‘indecency’ or ‘obscenity’ has the potential to further entrench victim blaming in contemporary culture.[41] In addition to these general offences, there have been moves to introduce new criminal offences specifically targeting those who distribute intimate images, such as in South Australia and Victoria. The new s 26C of the Summary Offences Act 1953 (SA), introduced in 2013, creates an offence for distributing an ‘invasive image’ of another person, knowing or having reason to believe that the other person does not consent to the distribution of the image.[42] In 2014, Victoria created offences of maliciously distributing, or threatening to distribute, intimate images without consent.[43] Western Australia has proposed to introduce a new power to prevent distribution of intimate material as part of a domestic violence order, backed by a criminal penalty of up to two years’ imprisonment.[44] The New South Wales Attorney-General, Gabrielle Upton, announced in September 2016 that the State would introduce a new criminal provision.[45] A number of international jurisdictions have also made moves to criminalise both the threat and the actual distribution of intimate images without consent from the subject.[46]

At a federal level, a recent report of the Commonwealth Senate Legal and Constitutional Affairs References Committee (‘Senate LACARC’) recommended that Commonwealth, state, and territory governments create a series of new criminal offences targeting the non-consensual recording or sharing of intimate images, and threats to share those images.[47] In the last Parliament, opposition MPs Tim Watts and Terri Butler proposed a private member’s Bill[48] that would have introduced a new offence of using a carriage service for publishing or distributing private sexual material.[49] The Bill lapsed at the dissolution of Parliament in April 2016.

It is possible that new criminal offences will help deter people from distributing intimate images without consent.[50] They would likely at least send a strong message that doing so is widely disapproved of in Australian society.[51] If law enforcement officers and prosecutors are appropriately directed, trained and resourced, it may also shift the burdens of enforcement away from victims who in the past have reported experiencing ‘trenchant disinterest and unresponsiveness’[52] when cases were reported to authorities.[53] There are anecdotal signs that this is changing, and police are now more regularly charging defendants for distributing non-consensual pornography under both the Commonwealth and relevant state offences. Unfortunately, no rigorous research exists to date as to the extent or outcomes of prosecutions under these laws, and there are few reported cases. More empirical work to understand how these offences are being used and to evaluate their effectiveness would be very helpful.

There are, at any rate, important limits to the extent to which actions directly aimed at the perpetrators can be effective. Both criminal prosecutions and civil actions are costly, time-consuming, not readily accessible and may exacerbate the pain already experienced by victims due to drawn out processes and unwanted publicity.[54] The remedies available for civil litigants may not adequately compensate victims for the harm they have suffered, particularly where perpetrators do not have sufficient financial resources.[55] For victims who have suffered domestic violence, it is often exceedingly difficult to seek redress through the justice system.[56] In cases where images are obtained by hacking into the computer accounts of victims, by anonymous perpetrators who may or may not be in Australia, Australian law may not be effective at all. Finally, once images have been publicly posted online, actions against individuals are not likely to be effective at preventing their ongoing distribution.

B Addressing Distribution Online

One of the core limitations of existing law is that there are few remedies once intimate images have already been posted online. In this article, we focus on the issues that specifically lie around the enrolment of online intermediaries in the regulation of the non-consensual dissemination of intimate images. As in many other areas of internet regulation, removing images hosted publicly online is extremely difficult. Even if a cause of action exists against the person who is initially responsible for leaking the imagery, the harm caused by leaked images is intensified and repeated by their continual distribution by third parties.[57]

The Senate LACARC, led by Senator Glenn Lazarus, canvassed a number of different approaches to target online service providers.[58] Options ranged from a liability scheme that imposes notice and takedown requirements, to opening a cooperative, formal channel of communication with telecommunications firms to ‘regularly engage on issues related to non-consensual sharing of intimate images’.[59] The report, however, did not provide clear guidance about how these types of regulation might operate and representatives of the telecommunications industry did not strongly engage with the inquiry on these issues.[60] In this article, we consider the range of different obligations that may apply to online intermediaries, with a view to identifying how effective regulation may be developed.

II REQUIRING INTERMEDIARIES TO DO MORE: REVIEWING THE OPTIONS

If victims of leaked intimate media are to have an effective remedy, the intermediaries who host, index and make available content are the most effective points of control. These telecommunications providers, search engines, social media platforms and network hosts are the focal points of the internet and present the most efficient means of policing content available.[61] While it is generally not possible to completely eradicate material that has been posted online,[62] it is possible to mitigate the harm by reducing its visibility. Effectively, this means ensuring that intimate images do not prominently feature in search results for a victim’s name, ensuring that the material is not spread within the most popular or relevant social networks and, as far as possible, attempting to regulate the most influential sites that host and distribute content online.

Historically, online intermediaries have presented themselves as neutral platforms for communications and have largely been shielded from legal responsibility regarding content shared via their platforms.[63] Particularly in the US, where most major western social networks and search engines are based, they enjoy almost complete immunity from liability for content posted by their users.[64]

Victims are disempowered through liberal narratives of personal responsibility and conservative values around sexuality, which all too often work together to reinforce a significant degree of victim blaming. The practice of taking intimate images itself is often constructed as either naivety[65] or an implicit individual moral lapse on the part of the subject, and policy responses to panics over sexting, particularly aimed at young people, have been focused almost exclusively on abstinence.[66] The problem is further compounded by a deep gender imbalance that blames victims: women who take intimate photos, by themselves or with their sexual partners, are derided as promiscuous,[67] while men are more often able to escape moral condemnation and may even be celebrated for their sexual prowess.[68] Traditional liberal conceptions of privacy play strongly into this theme of individual responsibility — the common refrain is that if people do not want intimate images circulated, they should not take them at all, should not store them on insecure personal computers or on cloud storage providers, and should certainly not send them to others.[69]

This victim blaming narrative is exceedingly harmful. It results in a common view that ‘women who take such risks are personally responsible for any harms that subsequently befall them.’[70] This view severely understates the extent to which the harms presented by the non-consensual leaking of intimate images are rooted in gender inequality. Victims of this type of abuse are disproportionately women and girls;[71] when men are targets, they are rarely abused and humiliated in the way that women often are.[72] The victim blaming narrative shifts responsibility to the subject of the image, rather than to those who distribute it without consent. Not only does this view disregard the reality of strong societal pressures on women to share intimate images with their partners,[73] and the benefits of healthy, consensual sexting,[74] but it also ‘overlooks the spontaneous and embodied dimensions of sexual practice that do not lend themselves to calculative rationality’.[75]

There are encouraging signs that this typical response is changing. Indeed, one of the most important consequences of the 2014 hacking of celebrity nude photos stored privately in Apple iCloud[76] was the way that actress Jennifer Lawrence’s outrage reverberated through mainstream media and the public consciousness. Speaking of the attack that compromised her iCloud account, Lawrence told Vanity Fair: ‘It is not a scandal. It is a sex crime. It is a sexual violation. It’s disgusting. The law needs to be changed, and we need to change.’[77]

Albury, Hasinoff and Senft argue that the iCloud leak triggered a shift in the debate, where ‘[t]he practice of taking naked pictures was discussed not as a shameful and implicitly individualized moral lapse, but in the broader social and political context of digital privacy and security.’[78] This shift introduces a new debate about the ethical responsibilities of cloud service providers who store user images, as well as the search engines and the media platforms that make content accessible.[79]

Mapping the bounds of the responsibility of platforms is a complex and deeply contested task. There are good policy reasons to insulate intermediaries from liability for the actions of their users and to ensure that the regulatory burden they face is not excessive. The Manila Principles on Intermediary Liability (‘Manila Principles’) released in 2015 set out a strong argument for ensuring that any regulation strikes a proportionate balance between enforcing law online, protecting individual rights of freedom of expression and promoting investment and innovation in telecommunications technologies.[80] In substantive terms, these principles reflect three core concerns around imposing duties on intermediaries: private entities are not well suited to determining whether content is unlawful or not;[81] the costs of actively monitoring the flow of internet traffic on a massive scale are prohibitive; and liability should generally attach to wrongdoing if it is to provide an appropriate standard to guide behaviour.[82] This is a point clearly restated by the United Nations (‘UN’) Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, who asserts that ‘intermediaries should never be held liable for content of which they are not the authors.’[83]

It is crucial, when evaluating potential policy responses, to avoid adopting overly simplistic models of regulating intermediaries. This is something that happens relatively regularly in policy debates for the regulation of internet content. Policymakers often point to the fact that major intermediaries have implemented successful and speedy schemes to address some forms of criminal content on their networks. The most powerful example is that of child sexual abuse material,[84] and this example is sometimes used to suggest that online intermediaries could easily extend their content regulation practices into other domains, such as more actively monitoring the distribution of intimate images.[85] This, we suggest, is a serious mistake in analogy. The vast majority of sexually explicit imagery distributed on the internet is likely to be wholly legal. By contrast, the specific category of child sexual abuse material is one of the only categories of content where both possession and distribution are criminalised almost everywhere.[86] For this category, there can be no acceptable countervailing interests in removing access to the material, and the risk of over-blocking ‘false positives’ is likely to be exceedingly low. Even then, there are operational difficulties, jurisdictional differences, and definitional grey areas in assessing the legality of content, including the apparent age of subjects nearing the age of consent and the fictional depiction of sexualised children. Calls for online intermediaries to adopt similar processes to deal with the non-consensual dissemination of intimate images, while well-intentioned, are likely to be overly simplistic.[87] Unlike sexual material that clearly involves children, it is not possible to identify whether explicit images of adults were posted with consent from a cursory examination. The cost of actively monitoring content is likely to be prohibitive in many cases, and any notice-based scheme needs to be carefully crafted to ensure that the obligations it creates are workable.[88]

This is not to say that intermediaries should have no moral responsibility or legal obligation to deal with the non-consensual distribution of intimate imagery on their networks. To date, victims who have sought to have content or links to content removed by online intermediaries have generally encountered serious difficulties.[89] This is slowly changing; many of the major search engines and social networks have developed voluntary schemes to respond to complaints specifically in this area, which we turn to in Part III.

Before then, however, we consider the legal options available to Australian jurisdictions to require intermediaries to take action. In this context, Australia’s relative power is very much constrained; unless the site is hosted in Australia or operated by Australians, jurisdictional problems make any form of legal enforcement extremely difficult. In Part II, we consider each of the major options that have been proposed for Australia to pursue enrolment of online service providers in this regulatory project.

There are a number of different options that may be available to change the law to require intermediaries to assist in removing non-consensually posted intimate images or links to images from their networks. Any attempt to require intermediaries to remove content or links to content posted by third parties must be evaluated on both effectiveness and cost. By effectiveness, we mean primarily the alacrity with which content is removed and the effect that removal has on the accessibility of the content. By costs, we mean not just financial overheads (although these are important), but also the social costs to freedom of expression. In many cases, these are zero-sum trade-offs: the faster and more effective the system, the more likely it is to catch content that users have legitimate rights in posting and accessing (‘false positives’). Conversely, greater levels of due process almost inevitably mean slower, costlier and less effective processes.

This is a classic problem of proportionality balancing.[90] Whether or not content should be removed from a platform involves the reconciliation of two fundamental human rights: freedom of expression[91] and the right to privacy.[92] Both the Universal Declaration of Human Rights and the International Covenant on Civil and Political Rights provide that everyone has the right to freedom of opinion and expression.[93] This includes the right to express an opinion through speech, writing, art or any other form and the right to seek out and receive the expressions of others, and it protects expression that ‘may be regarded as deeply offensive’.[94] However, in certain circumstances, these rights can be legitimately limited, provided three conditions are met. First, the restriction must be provided for by law.[95] Second, it must be necessary for the ‘respect of the rights or reputations of others’, ‘national security’, ‘public order’ or ‘public health or morals’.[96] Third, it must meet the ‘strict tests of necessity and proportionality.’[97] Necessity requires that the objective could not be met in a way that does not restrict the right to freedom of expression.[98] Proportionality requires that restrictions are ‘appropriate to achieve their protective function’ and ‘proportionate to the interest to be protected.’[99] Here, as the dissemination of intimate images without consent likely constitutes a breach of the right to privacy for victims, an appropriate limitation can be said to be necessary for the ‘respect of the rights and reputations of others’.[100] There are few countervailing speech interests — there is no general public interest in sharing or accessing intimate media of people taken or distributed without their consent, even for public figures.[101] However, whether any specific measure taken to protect these rights meets the test of proportionality requires further evaluation.

Poorly designed regulatory schemes raise the serious risk that those intermediaries that are unable to avoid potential liability altogether will instead adopt risk-averse policies that err on the side of caution, resulting in a significant chilling effect on legitimate speech.[102] Below, we evaluate three recent developments: the European Union’s ‘Right to Be Forgotten’; the ALRC’s proposal for a tort of serious invasion of privacy; and a suggested tweak to copyright law that would allow victims to seek redress under the well-established notice and takedown scheme in the US’ Digital Millennium Copyright Act, 17 USC (1998) (‘DMCA’).[103]

A The ‘Right to Be Forgotten’ and the Regulation of Search Engines

In 2014, the European Court of Justice (‘ECJ’) ruled that individuals could request major internet search engines to remove links to personal information that is inaccurate, inadequate, irrelevant or excessive.[104] Dubbed the ‘Right to Be Forgotten’ (‘RTBF’), the ECJ’s ruling reflects a stark ideological divide across the Atlantic over the appropriate balance between privacy and speech interests,[105] as well as amongst those who worry about the costs to telecommunications companies and flow-on effects for investment in innovation.[106]

The RTBF obligation could extend to links to intimate images posted without consent.[107] Targeting search engines provides some redress for victims whose images are posted on sites that are unresponsive to takedown requests. It does not remove the content from the open web, but it can significantly limit the harm that flows from the appearance of intimate images in search results.[108]

The costs of the RTBF are, however, quite substantial. The ECJ decision requires search providers to balance rights of privacy with freedom of expression.[109] Major search engines Google and Bing have implemented a web form to allow people to request removal of a particular URL from their index.[110] These URLs are manually reviewed, and so far Google has approved 43 per cent of requests.[111] Even if these costs are reasonable for a giant search engine like Google, they may operate to keep smaller entrants out of the market.

Successfully removing links to non-consensually uploaded explicit media under the RTBF is likely to go some way to restoring a victim’s right to privacy. A great deal of the harm suffered by victims lies in the knowledge that their intimate images are easily discoverable by their personal and professional contacts who enter their name in a search query.[112] This is a classic case of context collapse: victims lose control over how they present themselves to different audiences.[113] The RTBF, in effecting the removal of the links, stops one aspect of the ongoing violation. However, it is difficult to say with conviction whether the RTBF meets the test of proportionality. A complete evaluation would require more rigorous empirical analysis of how decisions are made by search engines,[114] with a focus on identifying (with some precision) the ultimate costs to freedom of speech. Currently, decisions by intermediaries are made in a very opaque manner;[115] it is accordingly exceedingly difficult to be sure how well private decision-makers are able to balance competing concerns and evaluate factual questions.[116] Given the risk of limiting access to legitimate speech, this system may not be the most appropriate means to protect an individual’s right to privacy.

The question of whether the RTBF is a proportionate measure is deeply contested. Clearly, the ECJ found the measure to be justified, explaining that the fundamental rights under arts 7 and 8 of the Charter of Fundamental Rights of the European Union ‘override, as a rule, not only the economic interest of the operator of the search engine but also the interest of the general public in finding that information upon a search relating to the data subject’s name.’[117] Google resisted the move initially, but has since cooperated by adding a simple request form for users.[118] It publishes regular statistics showing the number of requests it receives — over 700 000 requests complaining of nearly 2 million URLs to date.[119] Google only recently agreed to implement its result filtering on all of its search domains for European users — previously, RTBF requests were only blocked on localised search portals, not on google.com itself.[120]

Academic opinion differs too — often along a familiar US/European Union divide.[121] For those who care primarily about freedom of speech, the ECJ’s decision appears to be a direct threat to free speech and an unjustifiable imposition on a private actor to adjudicate on complex matters of law and fact.[122] To those who care deeply about substantive protections for privacy, the decision is largely regarded as a positive intervention by the European Union.[123] Much of the conflict centres around the difficulty that private organisations have in evaluating whether a particular piece of information is ‘relevant’ and the concomitant risk that information which rightfully belongs in the public domain is able to be purged on request.[124]

The main questions, then, are around the burden on intermediaries and the risk of false positives in cases where the complainant is either not in fact the person depicted or has previously consented to the widespread distribution of the images. These costs of compliance, while not insignificant, are probably relatively small as long as the volume of requests remains low. The major search engines are already bearing these costs, although it is possible that smaller entities may be disproportionately disadvantaged. The risks of false positives or fraudulent requests are also likely to be relatively small, although there is no available data on this. On the whole, a strong argument can be made for an RTBF-style obligation that addresses the non-consensual sharing of intimate images. In order to ensure that decisions are made transparently and according to acceptable criteria, however, any such obligation would be strengthened by a legitimate court process,[125] or at least a streamlined administrative process that provides the potential for direct judicial oversight. As we will see in Part III, however, given that most major search engines are already responding to requests to remove unwanted information, introducing formal legislation is likely to be of little utility and may even be counterproductive.

B Civil and Criminal Liability for Online Intermediaries

Apart from search engines, the other major set of online intermediaries that play a key role in disseminating intimate images are content hosts. These are the discussion forums, social media platforms and image and video sites that store content uploaded by users and make it accessible online. When attackers leaked celebrity photos taken from iCloud, or posted the archive of Snapchat photos obtained by compromising snapsaved.com, large discussion board sites like Reddit were the primary places for people to post and seek links to the images.[126] Other platforms — like the notorious Is Anyone Up?, where the term ‘revenge porn’ became popularised — specifically solicited submissions of intimate images without the consent of the subjects.[127]

The straightforward legislative response to deal with content posted on the internet in cases where it is too difficult or too late to target the individual responsible for posting it is to seek to shift liability to the hosts of the content. These are often the organisations that are the ‘least cost avoiders’ — they are the most efficient points of control to limit the further dissemination of content.[128] In recommending the introduction of an Australian tort of serious invasion of privacy,[129] the ALRC suggested that intermediaries may also be liable under the tort.[130]

The operative test of the ALRC’s proposed privacy tort is that a person intentionally or recklessly commits a serious invasion of the privacy of another by, inter alia, disseminating their private information.[131] Clearly, distributing private nude or sexual images of another person, without their consent, would constitute a serious invasion of privacy. It would also be effective against Australian online service providers who actively solicit others to leak images. How exactly the ALRC’s proposed tort would apply to general-purpose online intermediaries (ie, search engines, content hosts and social networks), however, is as yet unclear.

Normally, the operator of a general-purpose intermediary would not have the requisite mental element (intention or recklessness) to satisfy the tort.[132] Importantly, the ALRC notes that a court may find an intermediary to have ‘intended an invasion of privacy, or been reckless, if they know that their service has been used to invade someone’s privacy, and they are reasonably able to stop the invasion of privacy, but they choose not to do so.’[133]

Defamation law provides perhaps the closest analogy to intermediary liability under a potential new tort of serious invasion of privacy. Under defamation, an intermediary is liable where they ‘publish’ defamatory imputations[134] — which may include, in the case of search engines and content hosts, continuing to make content posted by third parties available once it has been brought to their attention.[135] In its report, the ALRC implied that ‘publish’ may be a broader verb than ‘invade’, but also cited defamation authority for the proposition that knowledge might condition liability.[136]

The ALRC did not recommend the introduction of a ‘safe harbour’ scheme,[137] with the implication that an intermediary could be culpable ‘if they know that their service has been used to invade someone’s privacy, and they are reasonably able to stop the invasion of privacy, but they choose not to do so.’[138] Conceivably, then, the operation of the proposed tort would provide a victim within Australia with a legal mechanism to require online intermediaries to remove content and potentially even links to content posted by third parties. How far such an obligation may stretch is a matter of some debate. An ongoing case in Northern Ireland is proceeding to trial on the claim that Facebook ought to have acted expeditiously in removing images of a naked 14-year-old girl that were reposted multiple times[139] — what copyright owners have controversially been seeking as a ‘notice and staydown’ obligation.[140]

In practice, however, the extent of intermediary liability under a new privacy tort may be too ill-defined to be an appropriate or proportionate response. It may encourage Australian intermediaries to comply quickly, but it is also likely to add to a concerning series of laws that introduce uncertainty in online hosting. First, the imposition of liability at this level will fall short of covering the major foreign intermediaries that host potentially harmful content. Many of the biggest content hosts are based in the US, where they benefit from ‘safe harbour’ legislation that provides immunity from almost all potential legal consequences that flow from the actions of their users, even for content that they encourage people to post and refuse to remove.[141] This is not the case in Australia,[142] and there is a risk that ill-defined limitations on liability may encourage Australian service providers to seek more favourable rules in other jurisdictions and discourage investment in Australian internet services. In terms of protections for speech, there is also a concern that conditioning liability upon knowledge creates a strong incentive for intermediaries to remove content upon receiving an allegation, regardless of its veracity. The recent cases extending liability in defamation law in Australia to search engine intermediaries are deeply controversial,[143] and there are serious concerns about the spillover effects on speech interests. Intermediaries have few incentives to seek to validate that allegations are correctly made and are therefore likely to systematically over-block legitimate speech. It is for this reason that guidelines like the Manila Principles strongly recommend that intermediaries be shielded from general liability, and that if an obligation to remove speech is to be imposed, it be done through a notice and takedown scheme that adequately protects due process and speech rights (discussed below).[144]

If a civil liability scheme for intermediaries is unlikely to be justifiable, a criminal offence is even more problematic. As discussed earlier, in September 2015, federal opposition MPs Terri Butler and Tim Watts introduced the Criminal Code Amendment (Private Sexual Material) Bill 2015 (Cth) into the House of Representatives. The Bill, now lapsed, would have created new offences for distributing, threatening to distribute and preparing to distribute private sexual material over the internet or telephone network without the consent of the person depicted. As drafted, the Bill presented worrying uncertainty for online intermediaries.

Culpability under the proposed offence arises when a ‘person transmits, makes available, publishes, distributes, advertises or promotes material’.[145] The required mental element is intention to distribute, which, within the broad language of the offence, would presumably be able to be proved beyond reasonable doubt in proceedings against an operator like Hunter Moore (the operator of Is Anyone Up?). Worryingly, however, it is not clear that a general-purpose intermediary would be immune from criminal responsibility under this draft. The words ‘transmit’, ‘make available’, ‘publish’ and ‘distribute’ are used but not defined within the Bill. Without more careful drafting, these terms could conceivably be sufficiently broad to catch content hosts, social networks and even search engines that host and link to private sexual material posted by third parties. While the operators of a general purpose intermediary would generally lack the requisite intent to be convicted under this proposed offence, it is not clear whether a court may impute intention in circumstances where the intermediary becomes aware of private sexual content (or links to private sexual content) on their network and fails to take action to remove it within a reasonable period.

Without greater clarification, civil or criminal liability for hosting or distributing content uploaded by a third party is likely to be insufficiently tailored to pass a proportionality analysis. In most cases, rather than holding intermediaries directly liable for the actions of others that they fail to redress, it is more appropriate to develop clear procedures for illicit content to be removed — such as the notice and takedown procedures developed under copyright law.

C A Copyright-Based Notice and Takedown Scheme

The notice and takedown procedures for copyright, originally developed as part of the US’ DMCA and later exported around the world,[146] are designed specifically to provide certainty to service providers about their potential liability for content posted or disseminated by others. Online service providers are granted immunity from suit in exchange for following a detailed procedure for the removal of content (or links to content) that is allegedly infringing copyright.[147] While the system has certainly been the subject of abuse,[148] and is showing signs of strain under a flood of incoming notices,[149] it is largely effective at providing the certainty that intermediaries require when removing content.

The success, in pragmatic terms, of copyright notice and takedown has led some commentators, including Professor Derek Bambauer, to suggest that it might provide an effective mechanism for victims of ‘revenge porn’ to curtail the spread of their images.[150] Under current law, victims are typically not able to use copyright remedies because they are often not the owner of copyright in images or videos recorded by another person (selfies, of course, are the exception here).[151] Bambauer suggests a tweak to copyright law that would grant the subjects of ‘intimate media’[152] a copyright interest, enabling them to sue for damages and, most importantly, make use of the notice and takedown procedures available to copyright owners.[153]

Such a tweak to Australian law might enable Australians to impose legal obligations on intermediaries normally beyond the effective reach of domestic laws. It appears that this can be achieved without any change to US law. Historically, conflict of laws issues under the Berne Convention for the Protection of Literary and Artistic Works’ principle of national treatment[154] have often been resolved in a manner that subjects all issues of copyright in an international dispute to the jurisdiction of the protecting country.[155] In 1998, however, the US Second Circuit Court of Appeals in Itar-Tass Russian News Agency v Russian Kurier Inc reasoned that the scope of national treatment only applies to substantive copyright protection and does not extend to ownership.[156] The current view, accepted in later decisions in other circuits, appears to be that ownership should be properly determined by the originating country.[157] What this means, effectively, is that if Australian law were to grant subjects of intimate media an ownership interest, they would likely have title to sue, even though US law would not normally recognise such an interest. This would in turn allow Australian victims to utilise the DMCA takedown mechanism — a particularly useful remedy against large US-based content hosts and search engines.

Ultimately, Bambauer’s suggestion involves a relatively large change to copyright principles for a small subset of copyright works. It would require the copyright owners of media that is sexually explicit or shows nudity to obtain the consent of subjects before distributing or displaying the media. In practice, extending joint authorship rights to the subjects of intimate media would allow victims to utilise the takedown mechanisms of the DMCA. The primary concern with Bambauer’s suggestion, at least from the perspective of common law countries, is that it repudiates the general principle that subjects are not entitled to copyright protection, which is crucial to the freedom of photographers and filmmakers to be able to capture social life.[158] Extending authorship to subjects of media may result in an impractical increase in the number of people with interests in copyright works. This may lead to a ‘tragedy of the anticommons’ where the excessive number of rights to exclude undermines the efficiency of the system.[159] Transferring and managing rights could become increasingly complex in a large creative work involving many authors, and there is no clear mechanism to resolve conflicting claims by joint authors.[160] Copyright law already poses significant problems for distributors and re-users of film footage who need to trace long chains of contractual agreements to identify and obtain licences from all potential copyright owners,[161] and policymakers should be wary of exacerbating this problem.

These concerns may be alleviated in part through careful drafting. An integral part of Bambauer’s proposal involves only extending joint authorship to subjects of media that are portrayed in situations of graphic nudity.[162] This much more limited grant of interests would mean that the number of situations in which contractual consent from subjects is required would be relatively few. The adult film industry is the main area in which Bambauer’s proposal would require significant changes in practice.[163] Some commentators suggest that requiring explicit consent in these cases, however, may actually be an important and welcome change, given that the abuse and exploitation faced by performers in the adult film industry have been well documented.[164] Extending authorship rights to performers may help to curb this problem. While such a system would still require careful balance to avoid the pitfalls described above, such an imposition can be more readily justified where it is necessary to avoid the exploitation of vulnerable individuals.[165]

We suggest that granting a carefully tailored copyright interest to subjects of intimate imagery would likely pass a proportionality analysis. The costs to legitimate speech — particularly the transaction costs of obtaining consent when photographing or filming people who are naked or engaged in sexual activity — are real but relatively limited. For victims, on the other hand, the ability to access a real, working mechanism to control the dissemination of representations of their bodies is a tangible and significant benefit to their privacy. At least to the extent that notice and takedown provisions are proportionate when exercised by copyright owners in commercial contexts, they are also likely to be proportionate in addressing clear invasions of privacy.

Perhaps the strongest objection to Bambauer’s proposal is that it muddies our conception of what copyright law is for. The privacy and autonomy interests it seeks to protect are orthogonal to the goals of copyright law of promoting the production and dissemination of expression.[166] In Garcia v Google Inc, the Court highlighted the impropriety of using copyright law to control distribution for non-copyright purposes: ‘[Garcia’s] harms are untethered from — and incompatible with — copyright and copyright’s function as the engine of expression.’[167] Similarly, Bambauer’s proposal involves a significant mismatch between the claim of authorship sought and what the victims are seeking to remedy.[168] We suggest that legislators should be very wary of contributing to the use of copyright takedowns to achieve non-copyright ends: we know already that a large proportion of takedown requests are designed to stifle speech or competition.[169] There is a serious risk that further extending the basis upon which copyright takedowns can be used may lead to further erosion of the identifiable core goal of copyright protection, and this should generally be resisted if possible.[170]

Ultimately, Bambauer’s proposal is, in technical terms, a ‘hack’, or perhaps a ‘kludge’: it is a relatively inelegant solution. By piggybacking on the widely implemented DMCA takedown process and appropriating the strong remedies of copyright damages, it may well be the most effective way to provide a real avenue for victims to convince the bulk of online intermediaries to assist in reducing the spread of intimate media. Certainly, the proposal would render an already complex body of law even more complex. In practical terms, however, it must also be noted that copyright law has already been twisted and pulled in different ways to address the interests of different groups: the rights of performers, broadcasters and producers are all also ‘kludges’ built on top of copyright over the last century. Copyright law has not developed as a pure utilitarian bargain — it is the result of a continual series of negotiations and compromises between established and emerging interests.[171] Whether Bambauer’s proposal, if implemented, would actually make routine copyright practice more difficult is an open question, but there is good reason to think that this horse has long ago bolted.

We proceed with some ambivalence as to whether a copyright-based notice and takedown solution is likely to be desirable. In pragmatic terms, creating an ownership interest for subjects of intimate media would empower victims with a real, effective remedy. It may also go some way to alleviating some of the victim blaming that stems from the perceived ‘naivety’ of women who agree to share intimate images without any mechanisms to control how those images are subsequently used. As a concrete tool for empowering women to take control of how representations of their bodies are distributed, a notice and takedown mechanism of some kind is likely desirable. It is certainly likely to pass a proportionality analysis. But it is always likely to be a second best solution; a specific legislative regime that is targeted at the abusive infringement of privacy fits better than shoehorning these interests into copyright. Given that an effective regime that binds US intermediaries is unlikely to be introduced in the near future, however, the proposal has considerable merit. Below, we turn to the question of whether a voluntary takedown system is sufficient. If it is not, a tightly limited move to grant an ownership interest to subjects of intimate media may well represent a simple and proportionate mechanism to graft a right of privacy onto existing and established copyright structures.

D An eSafety Commissioner

We are in a period of strong demand for greater regulatory control over internet content, and public administrative agencies are starting to experiment with different regulatory models. Both the Victorian Law Reform Commission (‘VLRC’) and the Commonwealth Senate LACARC inquiries recommended a greater role for regulators in dealing with the non-consensual dissemination of intimate imagery. The VLRC recommended following New Zealand in creating a ‘Communications Tribunal’ which could deal with complaints, in the hope that its authority would be recognised as legitimate by foreign intermediaries.[172] The Senate LACARC report similarly recommended that the Commonwealth empower an agency to issue takedown notices, and that formal mechanisms be created enabling a dialogue between intermediaries and the government.[173]

These regulatory approaches present significant benefits over intermediary liability schemes on both pragmatic and proportionality grounds. Pragmatically, co-regulatory approaches developed in consultation with industry are less likely to expose intermediaries to unmanageable legal risks.[174] They are therefore potentially more likely to help secure compliance from foreign firms and less likely to inhibit investment in digital services. From a speech perspective, the availability of a dedicated body with specialist knowledge that can investigate complaints and liaise directly with intermediaries could significantly improve the quality of takedown requests — reducing the risks of deliberate or inadvertent abuse of notice and takedown systems.[175] While an administrative body does not have the legitimacy of a court in determining that content is unlawful, it may strike an appropriate balance between due process and the need for fast, cheap solutions. Formally, as long as decisions are enforceable and reviewable under the supervision of an appropriate court, the risks are minimal.[176]

Substantively, it is conceivable that a well-resourced independent administrative body could come to decisions about whether content should be removed in a way that is more legitimate than, as in the RTBF case, requiring private platforms to weigh public interest concerns. Reasonable opinions differ here — from a strict liberal freedom of speech perspective, private platforms are fully entitled to make decisions on content and the interference of the state, particularly the executive, must be viewed with deep suspicion. But if the co-regulatory approaches explored here are implemented with legitimate due process mechanisms and transparent government practices, these measures are likely to meet the test of proportionality.

The key difficulty, of course, lies in designing a system that is both affordable and effective. Administrative processes are typically too expensive and too slow to effectively address non-consensual pornography — where the need to remove content promptly, before it spreads uncontrollably, is absolutely vital. A tribunal system that processes and adjudicates complaints is unlikely to be workable at any real scale, and there is little political will to fund regulators to the extent that they would need to routinely enforce the law.

Short of a legal requirement to compel online intermediaries to remove content or links to content, regulators are beginning to experiment with new co-regulatory approaches. Given the jurisdictional limits of regulating the actions of foreign telecommunications firms, there is increasing interest in ‘adaptive regulation’, where regulators ‘tinker’ with ‘inputs, connectivity, incentives, and feedback’[177] to encourage firms to act in ways that further the public good.

The recent Australian development of the Office of the Children’s eSafety Commissioner (‘OCeSC’) is a prime example of a regulator’s keen understanding of the limits of legal authority. The OCeSC is tasked with administering a complaints system for online bullying material targeted at Australian children.[178] While the OCeSC has legal enforcement powers, these are only realistically effective against Australian intermediaries, not the major foreign social media platforms. Core to the OCeSC’s operation is a voluntary scheme designed to encourage social media platforms to cooperate by developing procedures for the rapid removal of bullying content.[179] The scheme’s viability relies, to a large extent, on the ability of the OCeSC to engage in direct communication and in public messaging[180] to encourage platforms to participate as good corporate citizens. The scheme was only recently introduced — its effectiveness to this end remains to be seen, although early indications are promising. The Federal government has recently announced that the Office would be renamed the ‘Office of the eSafety Commissioner’, reflecting a broader remit to address harmful online content.[181] News reports suggest that the Office has received over 350 complaints about the non-consensual distribution of intimate images in 2016–17, and it appears to have had at least some success in communicating concerns about these complaints with the major social media platforms.[182]

New Zealand is currently experimenting with a similar approach. It has enacted the Harmful Digital Communications Act 2015 (NZ), which is designed to ‘deter, prevent, and mitigate harm caused to individuals by digital communications’ and ‘provide victims of harmful digital communications with a quick and efficient means of redress.’[183] The Act encompasses a broad range of harmful speech, and would certainly extend to cover the non-consensual distribution of intimate media. The Act creates an agency empowered to receive, investigate and resolve complaints.[184] It also provides for the making of orders in relation to harmful material, including orders that material be taken down by a defendant or an online content host.[185] Again, while this New Zealand agency has some enforcement powers, it is expected to operate against foreign platforms through a ‘soft law’ regulatory approach that engages intermediaries in the ‘development of consistent, transparent, and accessible policies and protocols’.[186]

It is too early to tell how effective these administrative approaches are likely to be, and more empirical research is needed to identify their limits. But we are cautiously optimistic that these regulatory experiments in influencing the operators of private networks could efficiently drive the development of better self-regulatory practices without the high costs of administrative tribunals or judicial processes. We think that an appropriate regulatory body, working in consultation with online intermediaries and public interest groups, could present an effective and efficient means to regulate the non-consensual dissemination of intimate images, at least on respectable and cooperative networks. Of course, by transferring the costs of enforcement to private actors, these systems risk giving up some of the legitimacy of public mechanisms of enforcement — and so we expect that careful attention will have to be paid in the future to developing new systems of review of private decisions. On the whole, however, while these consultative models of governing internet content are still in their infancy, we think that they represent an important innovation in regulatory practice with real potential to address harmful content in a legitimate way.

E Liability versus Responsibility: Balancing the Right to Freedom of Expression and Privacy

Laws can help to enrol intermediaries in combating abuse, but successfully dealing with harmful internet content ultimately requires the cooperation of private actors. The fact that US law provides such strong shelter from legal liability means that online intermediaries faced with legal obligations they deem to be too onerous can, at least to some extent, escape the reach of other national legal systems. In order to tackle the problem effectively, strong social pressure on these providers is likely to be at least as important as new legal obligations, if not more so.

There are encouraging signs that many online intermediaries are beginning to take the problem of abuse on their networks much more seriously. Over the last few years, civil society groups have made considerable strides in convincing major platforms to be more responsive to harmful material on their networks, and these efforts have been particularly successful in the context of the non-consensual dissemination of intimate imagery. Groups like the Association for Progressive Communications, for example, have released influential research reports that highlight the failings of major intermediaries in adequately addressing violence against women online.[187] This work examined the policies of the major intermediaries Facebook, YouTube and Twitter and found that all three lack transparency around reporting and redress processes and have no public commitment to human rights standards, other than the encouragement of free speech.[188] When confronted with challenging questions, these platforms have ‘erred on the side of unrestrained expression, often to women’s detriment’.[189] The intermediaries studied at the time demonstrated a ‘reluctance to engage directly with technology-related violence against women, until it becomes a public relations issue’,[190] which, according to the report, suggested:

a lack of appreciation of the seriousness of violence against women online, and a lack of recognition of the responsibility of the intermediary to take measures to mitigate the frequency and seriousness of instances of violence, and to provide redress.[191]

This work, echoed and amplified by mainstream and social media attention, and supported by other civil society groups and advocates, has been relatively successful in bringing social pressure to bear on the massive online platforms. In 2013, Facebook altered its policy towards hate speech online, promising to do more in response to the demands of activist groups including Women, Action and the Media (‘WAM!’) and the Everyday Sexism Project.[192] In 2014, Reddit responded positively to the outrage and public pressure that followed its role as a central hub in the celebrity iCloud leak scandal, and subsequently committed to removing links to any non-consensually posted content reported to it.[193] Google and Microsoft later moved to adopt relatively simple systems to respond to complaints from victims who seek to remove links to intimate images from search results.[194] Twitter, too, has vowed to do more to combat abuse against women on its network, and has recently announced the development of a ‘Trust and Safety Council’ to work more closely with advocacy organisations in responding to abuse.[195]

The Terms of Service (‘ToS’) of Facebook,[196] Twitter[197] and Reddit,[198] and Google’s revenge porn removal policy,[199] now all expressly disallow the non-consensual posting of intimate imagery on their sites. Each of these intermediaries relies on a system of user-based reporting for enforcement. Facebook allows users to notify its operators of content that violates its ToS by clicking a drop-down menu on the content. Twitter has also created a detailed form that users can fill out to report violations,[200] whilst Reddit directs its users to make contact via email.[201] Google,[202] Bing[203] and Yahoo![204] have provided webforms for requests for removal of non-consensually uploaded nude or sexually explicit material (separately from the RTBF request webform). Both Google and Bing require the user to indicate whether they have ever consented to the distribution of the material; in the case of Google, a positive response here will result in the user being redirected away from the voluntary process to the more onerous legal process.[205]

The willingness of these major platforms to address the non-consensual distribution of intimate imagery is encouraging. These policy changes reflect growing social pressure for online intermediaries to do more to combat abuse. Part of this is market-led: perhaps through demand by users and the threat of a mass exodus of women and minority groups from some social media platforms.[206] There is also an increasing recognition that the non-consensual dissemination of intimate images, as a form of gendered hate speech that harms and silences women, is a human rights issue.[207] Indeed, the UN has asserted that businesses have a duty to act to limit the use of their products or networks in ways that are abusive.[208]

One of the core challenges for intermediaries seeking to protect human rights on their platforms is the difficulty of developing processes that are both sufficiently responsive to abuse and sufficiently protective of freedom of expression. The voluntary removal processes described above are characteristically opaque — the criteria against which decisions are made are not published and there are limited (if any) avenues for appeal.[209] As the former UN Special Rapporteur on Freedom of Expression, Frank La Rue, points out, the ‘lack of transparency in the intermediaries’ decision-making process ... often obscures discriminatory practices or political pressure affecting the companies’ decisions.’[210] The escalating costs associated with providing a simultaneously timely and careful decision discourage intermediaries from being transparent about this process.

It is possible to reconcile the demands for the protection of speech embodied in documents like the Manila Principles with the calls for greater protection of privacy interests. One example is the work of the Dynamic Coalition on Platform Responsibility (‘DCPR’), a component of the UN Internet Governance Forum, which is drafting a framework for contractual ToS agreements that seeks to address privacy, freedom of expression and due process concerns.[211] The DCPR attempts to provide a model of best practice for the way that online intermediaries deploy adequate ‘due diligence’ standards when evaluating complaints about content on their networks. The draft recommendations suggest that platforms should provide mechanisms for users to report content that breaches their privacy, but require that decisions are made in a way that is clear, predictable and impartial, and that users are provided with the reasons upon which decisions are made and an opportunity to be heard in response.[212] These principles are necessarily still aspirational. While they are not binding in any way, there is some hope that the multi-stakeholder process through which they and other similar instruments are being developed will encourage intermediaries to do more to ensure that they govern their networks in ways that are legitimate and fair.

There is still a long way to go in encouraging private online intermediaries to protect people from abuse on their platforms. While some of the major platforms have agreed to do more, many other providers have not. It also remains to be seen how effective the major social network services’ promises to respond more quickly to abuse will prove in practice. For those who wish to see these intermediaries address human rights violations on their platforms, maintaining social pressure is critical. While at least some private actors can be counted on to respond to market demand for safer online spaces, gender-based harassment is still pervasive and widely accepted online.[213] It is accordingly too early to say that legal regulation is not, or will not be, required in the immediate future.

F Incubating a Culture of Consent

Most of the options canvassed in this article deal with different methods of responding to complaints about content. But the core problem with both legal notice and takedown, and ToS-based complaints procedures is that they are necessarily reactive policies: they do not actively stop the non-consensual distribution of intimate images from occurring in the first place. Necessarily, a policy that relies on victims to complain ‘only addresses the symptom of the problem, not its cause.’[214] In general terms, complaints-based systems are a global default; few intermediaries are interested in expending the resources required to proactively monitor content submitted by users. The costs of doing so are often likely to be so prohibitive that any legal requirement to monitor is likely to impermissibly interfere with core freedom of expression rights.[215]

It is also necessary to recognise the critical importance of consent in this context. While it is imperative that intermediaries respond to the non-consensual distribution of intimate images, it is also important that images of women’s bodies uploaded with consent are not arbitrarily censored. In recent times, social media platforms have come under great criticism for the ways in which they remove consensually uploaded images of women’s bodies. Facebook, for example, has controversially removed images of breastfeeding mothers,[216] Indigenous women[217] and even overweight women,[218] in ways that suggest its definitions of acceptable content reflect and entrench Western heteronormative views. These moves are indicative of prevailing oppressive, disempowering and body shaming attitudes that women face — the same attitudes that lead to the relentless shaming of victims when their intimate images are circulated by perpetrators. Addressing the abuse of women online will require more than responding to individual instances of abuse; a cultural change is required, and part of this change likely requires some work by platforms to ensure that ordinary women are not prevented from sharing their images and expression in ways that continue to reinforce toxic feminine ideals.

One of the most promising examples of major platforms working to protect women from abuse comes from Reddit’s ‘Gonewild’ community. Reddit Gonewild is a subreddit (a community within Reddit with its own rules and moderators) that takes a much more proactive approach to ensuring material is posted with consent.[219] Reddit Gonewild exists specifically for the purpose of sharing intimate imagery for enjoyment in an environment which facilitates anonymity.[220] Reddit Gonewild’s rules require that all content be uploaded with the consent of the subject.[221] To ensure this, the moderators of the site require users to undergo a process of ‘verification’. Before being permitted to post content, users must demonstrate their consent by posting images that clearly identify the poster’s body, along with a handwritten sign.[222] Once a user has been verified, they are identified by a distinctive icon on their user profile. Moderators may require verification if they are not convinced that the person posting the images is in fact the subject. Users who post images without consent are banned, and moderators may delete posted content if verification is not completed sufficiently quickly.[223]

The Gonewild story is interesting and insightful as a direct, community driven response to the problems faced by women who lose control over their images. By prioritising control and consent, Gonewild squarely places the responsibility for preventing abuse on the posters and moderators of content. Rather than require victims to go through the difficult process of seeking to have content removed, Gonewild is able to flip the burden of proof. In this way, Gonewild seeks to create an empowering space that enables women to maintain a greater degree of control over the way their images are viewed, without creating a stigma that delegitimises their sexuality and agency in choosing to share intimate images.[224] In the face of frequent victim blaming, where women are admonished for taking or sharing intimate images in the first place, Gonewild’s efforts to help women regain control stand out as a singularly empowering move.[225]

It is at least plausible that similar approaches could be adapted to combat the non-consensual distribution of sexually explicit imagery in wider contexts. Beyond noting the stated aims and explicit policies of the subreddit, we are unable to assess the effectiveness of Gonewild’s processes in practice. More work is likely required to better understand this and other private regulatory innovations to combat abuse. But it is encouraging that some fora are willing to experiment with options that directly prioritise consent as a core precondition for the sharing of intimate imagery. Provided they are implemented in a way that is sufficiently transparent, accurate and not overly onerous, these procedures are likely consistent with the principles of proportionality. More than complaints-based systems, these types of processes are likely to be able to positively change cultural norms, at least within relatively close-knit communities. Policies that reinforce the importance of consent are precisely the types of regulatory experiments that should be encouraged in the immediate future.

III CONCLUSION

In this article, we have canvassed a number of different regulatory options to help empower people to better control how representations of their bodies are distributed online. In terms of a regulatory approach to address the non-consensual dissemination of intimate imagery, we think that some of the current experiments in co-regulatory schemes are worth pursuing further. It will be important to ensure that administrative agencies are well-resourced and that sufficient effort is made to develop productive dialogues with both civil society groups and telecommunications providers to find workable but legitimate mechanisms to respond to complaints. These systems should also be used to encourage providers to do more on their own initiative to respond to harms perpetrated through their networks — although, again, in a legitimate way. We do not think that further legal sanctions are generally justifiable at this stage, although some may be needed to deal with more recalcitrant intermediaries.

More broadly, however, we note that there is still a long way to go to develop means to address abuse online. The extent to which women and minority groups are disempowered and silenced through pervasive abuse in online networks is becoming a critical problem for internet governance more generally. The work to reconcile the historical commitment of telecommunications providers and many civil society organisations to strong conceptions of freedom of speech, on the one hand, with the increasing recognition that the proliferation of abuse must be addressed, on the other, is just beginning. Some civil society initiatives have started this work by articulating best practice standards for balancing speech and privacy interests in a legitimate way. This work should be encouraged — and must be extended to develop broader action within both telecommunications businesses and governments. Likewise, some telecommunications businesses, working with and led by civil society organisations, have sought to respond in positive ways to abuse perpetrated on and through their networks. This work too is only beginning — but we are optimistic, given the increasing visibility of controversies around abuse and how much has been achieved in recent years.


[*] LLB, BInfoTech, LLM, PhD (QUT); Associate Professor, School of Law, Queensland University of Technology. Associate Professor Suzor is the recipient of an Australian Research Council DECRA Fellowship (Project No DE160101542). Many thanks to Tess Van Geelen for excellent research assistance. We are also grateful to the anonymous reviewers and the editors who provided helpful comments on this article.

[†] LLB (Hons) (QUT).

[‡] LLB (Hons) (QUT).

[1] Michael Salter and Thomas Crofts, ‘Responding to Revenge Porn: Challenges to Online Legal Impunity’ in Lynn Comella and Shira Tarrant (eds), New Views on Pornography: Sexuality, Politics, and the Law (Praeger, 2015) 233, 234–5.

[2] Ibid 239.

[3] Ibid 240.

[4] Derek E Bambauer, ‘Exposed’ (2014) 98 Minnesota Law Review 2025, 2027, citing Memphis Barker, ‘“Revenge Porn” Is No Longer a Niche Activity which Victimises Only Celebrities — The Law Must Intervene’, The Independent (online), 19 May 2013 <http://www.independent.co.uk/voices/comment/revenge-porn-is-no-longer-a-niche-activity-which-victimises-only-celebrities-the-law-must-intervene-8622574.html>.

[5] Salter and Crofts, above n 1, 239.

[6] Thomas Fox-Brewster, ‘Stealing Nude Pics from iCloud Requires Zero Hacking Skills: Just Some YouTube Guides’, Forbes (online), 16 March 2016 <https://www.forbes.com/sites/thomasbrewster/2016/03/16/icloud-hacking-jennifer-lawrence-fappening-apple-nude-photo-leaks/#20318bab75b3>.

[7] Rashid Razag, ‘Snapchat Hackers Release 100,000 Videos and Photographs Including Explicit Images of Children’, Evening Standard (online), 13 October 2014 <www.standard.co.uk/news/techandgadgets/snapchat-hackers-release-100000-videos-and-photographs-including-explicit-images-of-children-9790598.html>.

[8] Ibid. Unless saved by the user, Snapchat images typically expire between 1–10 seconds after they have been received.

[9] See Senate Legal and Constitutional Affairs References Committee, Parliament of Australia, Phenomenon Colloquially Referred to as ‘Revenge Porn’ (2016) 15 [2.3] (‘Revenge Porn Senate Report’).

[10] Ibid 15–16 [2.4]–[2.5].

[11] Revenge Porn Senate Report, above n 9, 16 [2.8], quoting Project Respect, Submission No 21 to Senate Legal and Constitutional Affairs References Committee, Inquiry into Phenomenon Colloquially Referred to as ‘Revenge Porn’, 14 January 2016, 4.

[12] See, eg, Salter and Crofts, above n 1, 238–9; Danielle Keats Citron and Mary Anne Franks, ‘Criminalizing Revenge Porn’ (2014) 49 Wake Forest Law Review 345, 353.

[13] See Ganaele Langlois and Andrea Slane, ‘Economies of Reputation: The Case of Revenge Porn’ (2017) 14 Communication and Critical/Cultural Studies 120, 126–7.

[14] For more on the consequences of online abuse, see Nancy S Kim, ‘Web Site Proprietorship and Online Harassment’ [2009] Utah Law Review 993, 1010.

[15] See Citron and Franks, above n 12, 347–52; Paul J Larkin Jr, ‘Revenge Porn, State Law, and Free Speech’ (2014) 48 Loyola of Los Angeles Law Review 57, 65.

[16] Langlois and Slane, above n 13.

[17] Nicola Henry and Anastasia Powell, ‘Beyond the “Sext”: Technology-Facilitated Sexual Violence and Harassment against Adult Women’ (2015) 48 Australian and New Zealand Journal of Criminology 104, 113.

[18] Citron and Franks, above n 12, 352.

[19] Henry and Powell, above n 17, 113.

[20] Law Reform Committee, Parliament of Victoria, Inquiry into Sexting (2013) 191–2 (‘Inquiry into Sexting’).

[21] Citron and Franks, above n 15, 350.

[22] Victims are predominantly, but not exclusively, women: see below nn 67–72 and accompanying text.

[23] Inquiry into Sexting, above n 20, 43–4.

[24] Ibid 43.

[25] See especially Victoria Park Racing & Recreation Grounds Co Ltd v Taylor [1937] HCA 45; (1937) 58 CLR 479; Australian Broadcasting Corporation v Lenah Game Meats Pty Ltd (2001) 208 CLR 199. Cf Grosse v Purvis (2003) Aust Torts Reports 81–706.

[26] Australian Law Reform Commission, For Your Information: Australian Privacy Law and Practice, Report No 108 (2008) vol 1, 88–90; Australian Law Reform Commission, Serious Invasions of Privacy in the Digital Era, Report No 123 (2014) 59–62 [4.1]–[4.14] (‘ALRC Serious Invasions of Privacy Report’).

[27] Normann Witzleb, ‘It’s Time for Privacy Invasion to Be a Legal Wrong’, The Conversation (online), 4 September 2014 <https://theconversation.com/its-time-for-privacy-invasion-to-be-a-legal-wrong-31288>.

[28] Crimes Act 1900 (ACT) s 35; Crimes (Domestic and Personal Violence) Act 2007 (NSW) s 13; Criminal Code Act 1983 (NT) sch 1 s 189; Criminal Law Consolidation Act 1935 (SA) s 19AA; Criminal Code Act 1924 (Tas) s 192; Crimes Act 1958 (Vic) s 21A.

[29] Civil Law (Wrongs) Act 2002 (ACT) s 136; Defamation Act 2005 (NSW) s 26; Defamation Act 2006 (NT) s 23; Defamation Act 2005 (Qld) s 26; Defamation Act 2005 (SA) s 24; Defamation Act 2005 (Tas) s 26; Defamation Act 2005 (Vic) s 26; Defamation Act 2005 (WA) s 26.

[30] Copyright Act 1968 (Cth) s 35(2).

[31] Corrs Pavey Whiting & Byrne v Collector of Customs (Vic) [1987] FCA 266; (1987) 14 FCR 434, 443 (Gummow J); Smith Kline & French Laboratories (Aust) Ltd v Secretary, Department of Community Services and Health [1989] FCA 384; (1990) 22 FCR 73, 86–7 (Gummow J) (‘Smith Kline’).

[32] See Giller v Procopets [2008] VSCA 236; (2008) 24 VR 1, where damages were awarded to a woman for breach of confidence after her former partner distributed their sexually explicit videotapes with the intention of humiliating and embarrassing her; Wilson v Ferguson [2015] WASC 15 (16 January 2015) where the plaintiff was awarded compensation and an injunction prohibiting her former partner from further publishing explicit images and videos that he had posted to his Facebook profile.

[33] The obligation arises where the information is imparted on the understanding that it is to be treated as confidential or where the receiver ought to have realised that in all the circumstances the information was to be treated as confidential: Smith Kline [1989] FCA 384; (1990) 22 FCR 73, 86–7; Coulthard v South Australia [1995] SASC 4927; (1995) 63 SASR 531, 545–7 (Debelle J).

[34] Note that criminal liability does not extend to an internet service provider or content host in their normal operations: Criminal Code Act 1995 (Cth) s 473.5.

[35] Alan Davidson, Social Media and Electronic Commerce Law (Cambridge University Press, 2nd ed, 2016) 354.

[36] Revenge Porn Senate Report, above n 9, 28 [3.8].

[37] See, eg, R v McDonald (2013) A Crim R 185; DPP (Cth) v Haynes [2017] VSCA 79 (6 April 2017).

[38] See, eg, Dever v Commissioner of Police [2017] QDC 65 (22 March 2017) (involving an appeal against conviction in the Caloundra Magistrate’s Court).

[39] Crimes Act 1900 (NSW) s 578C; Criminal Code Act 1983 (NT) sch 1 ss 125A, 125C; Criminal Code Act 1899 (Qld) s 228; Summary Offences Act 1953 (SA) s 33; Criminal Code Act 1924 (Tas) s 138; Criminal Code Act Compilation Act 1913 (WA) s 204A (which prohibits showing offensive material to a child under 16).

[40] For example, the New South Wales provision was used in response to a 2011 revenge pornography case, Police v Usmanov [2011] NSWLC 40 (9 November 2011), where the defendant pleaded guilty when charged with posting six intimate photos of his former partner to his Facebook page without her consent.

[41] Legislative Council Standing Committee on Law and Justice, Parliament of New South Wales, Remedies for the Serious Invasion of Privacy in New South Wales (2016) 34–5 [3.32]–[3.33].

[42] A maximum penalty of $10 000 or two years’ imprisonment applies (unless the image is of a person under 17, in which case the penalty is doubled). An ‘invasive image’ is defined to mean a moving or still image of a person ‘(a) engaged in a private act; or (b) in a state of undress such that (i) in the case of a female — the bare breasts are visible; or (ii) the person’s bare genital or anal region is visible’, but does not include an image of a person who is in a public place: Summary Offences Act 1953 (SA) ss 26A(2)–(3).

[43] Summary Offences Act 1966 (Vic) ss 41DA, 41DB. ‘Intimate image’ is defined as ‘a moving or still image that depicts (a) a person engaged in sexual activity ... (b) a person in a manner or context that is sexual; or (c) the genital or anal region of a person or, in the case of a female, the breasts’: at s 40 (definition of ‘intimate image’).

[44] Restraining Orders and Related Legislation Amendment (Family Violence) Bill 2016 (WA) s 14.

[45] NSW Government, The Sharing of Intimate Images without Consent – ‘Revenge Porn’, Discussion Paper (2016) <http://www.justice.nsw.gov.au/justicepolicy/Documents/ discussion-paper-sharing-intimate-images-14092016.pdf> .

[46] See especially Legislative Council Standing Committee on Law and Justice, above n 41, 53 [3.117]–[3.119] which notes that the Philippines, Israel, Japan, Canada, New Zealand, the United Kingdom and 26 states in the US have introduced or are drafting new laws specifically to address revenge porn.

[47] Revenge Porn Senate Report, above n 9, vii [5.18]–[5.19].

[48] Criminal Code Amendment (Private Sexual Material) Bill 2015 (Cth). The Bill would have defined the meaning of private sexual material to cover media depicting people engaged in a ‘sexual pose or sexual activity’, a person depicted in a sexual context and media depicting a sexual organ, anal region, or breast of a female, intersex or transgender person or someone who identifies as a female: at s 474.24D. It would have imposed a maximum penalty of imprisonment up to three years for distribution of private sexual material and an aggravated offence of imprisonment up to five years where there is a purpose of commercial gain or benefit: at s 474.24G.

[49] Ibid s 474.24E.

[50] Salter and Crofts, above n 1, 245–7.

[51] Inquiry into Sexting, above n 20, 150.

[52] Salter and Crofts, above n 1, 239.

[53] There are few reported cases against people who deliberately and maliciously upload, or threaten to upload, sexually explicit imagery: see Michael Salter, Thomas Crofts and Murray Lee, ‘Beyond Criminalisation and Responsibilisation: Sexting, Gender and Young People’ (2012) 24 Current Issues in Criminal Justice 301, 311.

[54] See Citron and Franks, above n 12, 358.

[55] Kim, above n 14, 1008.

[56] Domestic Violence Legal Service and the North Australian Aboriginal Justice Agency, Submission No 120 to Australian Law Reform Commission, Inquiry into Serious Invasions of Privacy in the Digital Era, May 2014, 7.

[57] Bambauer, above n 4, 2029.

[58] Revenge Porn Senate Report, above n 9, 53–4 [5.24]–[5.30].

[59] Ibid 53 [5.26].

[60] Note, however, that some of the major international online service providers did make submissions to the inquiry: see Digital Industry Group, Submission No 14 to Senate Legal and Constitutional Affairs References Committee, Inquiry into Phenomenon Colloquially Referred to as ‘Revenge Porn’, 14 January 2016.

[61] See generally Jack Goldsmith and Tim Wu, Who Controls the Internet? Illusions of a Borderless World (Oxford University Press, 2006) 70.

[62] Citron and Franks, above n 12, 360.

[63] Tarleton Gillespie, ‘The Politics of “Platforms”’ (2010) 12(3) New Media and Society 347, 356.

[64] See, eg, Communications Decency Act, 47 USC § 230(c) (1996) (‘CDA’). Notably, this does not extend to intellectual property, where online service providers have a more limited safe harbour that includes obligations to remove infringing content: Digital Millennium Copyright Act, 17 USC § 512 (1998) (‘DMCA’).

[65] Henry and Powell, above n 17, 105.

[66] Kath Albury, Amy Adele Hasinoff and Theresa Senft, ‘From Media Abstinence to Media Production: Sexting, Young People and Education’ in Louisa Allen and Mary Lou Rasmussen (eds), The Palgrave Handbook of Sexuality Education (Palgrave McMillan, 2016) 527.

[67] Salter and Crofts, above n 1, 233–4.

[68] Citron and Franks, above n 12, 353 note that ‘women would be seen as immoral sluts for engaging in sexual activity, whereas men’s sexual activity is generally a point of pride.’

[69] Laurel O’Connor, ‘Celebrity Nude Photo Leak: Just One More Reminder that Privacy Does Not Exist and Legally, There’s Not Much We Can Do about It’ on Golden Gate University Law Review Blog (21 October 2014) <http://digitalcommons.law.ggu.edu/cgi/viewcontent.cgi?article=1030&context=ggu_law_review_blog>.

[70] Salter and Crofts, above n 1, 236.

[71] Danielle Keats Citron, Hate Crimes in Cyberspace (Harvard University Press, 2014) 13.

[72] The phenomenon we are dealing with here is inherently gendered. Certainly, men can and are the victims of these types of attacks. But men are much less frequently targeted and prevalent heteronormative views mean that cisgendered heterosexual men are much less likely to suffer serious shaming as a result: see Citron and Franks, above n 12, 353–4.

[73] Henry and Powell, above n 17, 107.

[74] Bambauer, above n 4, 2033.

[75] Salter and Crofts, above n 1, 236.

[76] Fox-Brewster, above n 6.

[77] Sam Kashner, ‘Both Huntress and Prey’, Vanity Fair (online), November 2014 <http://www.vanityfair.com/hollywood/2014/10/jennifer-lawrence-photo-hacking-privacy> .

[78] Albury, Hasinoff and Senft, above n 66, 539.

[79] See generally Farhad Manjoo, ‘Taking a Naked Selfie? Your Phone Should Step in to Protect You’ on New York Times, Bits (5 September 2014) <http://bits.blogs.nytimes.com/2014/09/05/taking-a-naked-selfie-your-phone-should-step-in-to-protect-you/>; Woodrow Hartzog and Evan Selinger, ‘Why Smart Phones Should Help Us Avoid Selfie Sabotage’, Forbes (online), 10 September 2014 <http://www.forbes.com/sites/privacynotice/2014/09/10/why-smart-phones-should-help-us-avoid-selfie-sabotage/>.

[80] Electronic Frontier Foundation et al, Manila Principles on Intermediary Liability (2015) <https://www.eff.org/files/2015/10/31/manila_principles_1.0.pdf> (‘Manila Principles’).

[81] Frank La Rue, Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, 17th sess, Agenda Item 3, UN Doc A/HRC/17/27 (16 May 2011) 12 [42] (‘2011 Report of the Special Rapporteur’). See also Nicolas Suzor and Brian Fitzgerald, ‘The Legitimacy of Graduated Response Schemes in Copyright Law’ [2011] UNSWLawJl 1; (2011) 34 University of New South Wales Law Journal 1, 25–7.

[82] See Jane Stapleton, ‘Duty of Care: Peripheral Parties and Alternative Opportunities for Deterrence’ (1995) 111 Law Quarterly Review 301, 317. If the objects of the law are to change the behaviour of intermediaries, it may be better to develop specific standards and penalties that apply to intermediaries themselves, rather than holding them responsible for the wrongdoing of others that they did not control: Kylie Pappalardo, ‘Duty and Control in Intermediary Copyright Liability: An Australian Perspective’ (2014) 4 IP Theory 9, 11.

[83] Frank La Rue, Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, 67th sess, Agenda Item 70(b), UN Doc A/67/357 (7 September 2012) 23 [87].

[84] Criminal Code Act 1995 (Cth) s 474.25 creates a criminal offence where an internet service provider does not report child abuse material to the Australian Federal Police.

[85] See, eg, Select Committee on Communications, Social Media and Criminal Offences Inquiry, House of Lords Paper No 37, Session 2014–15 (2014) [84].

[86] See generally Crimes Act 1900 (NSW) s 91H; Criminal Code Act 1899 (Qld) ss 228C–228D; Criminal Law Consolidation Act 1935 (SA) ss 63–63A; Protection of Children Act 1978 (UK) c 37, s 1.

[87] On the difficulties of developing acceptable lists of prohibited content to be removed or blocked by internet intermediaries, see generally TJ McIntyre, ‘Child Abuse Images and Cleanfeeds: Assessing Internet Blocking Systems’ in Ian Brown (ed), Research Handbook on Governance of the Internet (Edward Elgar, 2012) 277, 291–5; Karel Demeyer, Eva Lievens and Jos Dumortier, ‘Blocking and Removing Illegal Child Sexual Content: Analysis from a Technical and Legal Perspective’ (2012) 4(4) Policy and Internet 1.

[88] See, eg, Manila Principles, above n 80, 2 warning against imposing obligations on online service providers to proactively monitor content or to hold them legally responsible for content posted by others without carefully designed protections.

[89] See Sarah Bloom, ‘No Vengeance for “Revenge Porn” Victims: Unraveling Why this Latest Female-Centric, Intimate Partner Offense is Still Legal and Why We Should Criminalize It’ (2014) 42 Fordham Urban Law Journal 233, 252–5.

[90] See especially Grant Huscroft, Bradley W Miller and Grégoire Webber (eds), Proportionality and the Rule of Law: Rights, Justification, Reasoning (Cambridge University Press, 2014).

[91] International Covenant on Civil and Political Rights, opened for signature 16 December 1966, 999 UNTS 171 (entered into force 23 March 1976) art 19 (‘ICCPR’).

[92] Ibid art 17; Universal Declaration of Human Rights, GA Res 217A (III), UN GAOR, 3rd sess, 183rd plen mtg, UN Doc A/810 (10 December 1948) art 19 (‘UDHR’).

[93] ICCPR art 19; UDHR, UN Doc A/810, art 19.

[94] Human Rights Committee, General Comment No 34: Article 19 Freedoms of Opinion and Expression, 102nd sess, UN Doc CCPR/C/GC/34 (12 September 2011) 3 [11] (‘General Comment No 34’).

[95] ICCPR art 19(3).

[96] Ibid.

[97] General Comment No 34, UN Doc CCPR/C/GC/34, 6 [22].

[98] Ibid 8 [34], quoting Human Rights Committee, General Comment No 27: Freedom of Movement (Article 12), 67th sess, 1783rd mtg, UN Doc CCPR/C/21/Rev.1/Add.9 (1 November 1999) 4 [14].

[99] Human Rights Committee, General Comment No 27: Freedom of Movement (Article 12), 67th sess, 1783rd mtg, UN Doc CCPR/C/21/Rev.1/Add.9 (1 November 1999) 4 [14].

[100] ICCPR art 19(3)(a). ‘“Rights” includes human rights as recognized in the Covenant and more generally in international human rights law’: General Comment No 34, UN Doc CCPR/C/GC/34, 7 [28].

[101] Cf Barker, above n 4.

[102] 2011 Report of the Special Rapporteur, UN Doc A/HRC/17/27, 12 [40].

[103] Digital Millennium Copyright Act, 17 USC § 512 (1998).

[104] Google Spain SL v Agencia Española de Protección de Datos (C-131/12) [2014] ECJ 317 (‘Google Spain’).

[105] Jef Ausloos, ‘The “Right to Be Forgotten” — Worth Remembering?’ (2012) 28 Computer Law and Security Review 143, 144.

[106] Companies may be deterred from investing in countries where they risk unfavourable regulation: Steven C Bennett, ‘The “Right to Be Forgotten”: Reconciling EU and US Perspectives’ (2012) 30 Berkeley Journal of International Law 161, 187.

[107] Aidan Forde, ‘Implications of the Right to Be Forgotten’ (2015) 18 Tulane Journal of Technology and Intellectual Property 83, 119; Lawrence Siry, ‘Forget Me, Forget Me Not: Reconciling Two Different Paradigms of the Right to Be Forgotten’ (2015) 103 Kentucky Law Journal 311, 336.

[108] See generally Viktor Mayer-Schönberger, Delete: The Virtue of Forgetting in the Digital Age (Princeton University Press, 2009); Langlois and Slane, above n 13, 11–13.

[109] Michael Douglas, ‘Questioning the Right to Be Forgotten’ (2015) 40 Alternative Law Journal 109.

[110] See Google, Request Removal of Content Indexed on Google Search Based on Data Protection Laws in Europe (2017) <https://support.google.com/legal/contact/lr_eudpa?product=websearch>; Bing, Request to Block Bing Search Results in Europe (2017) <https://www.bing.com/webmaster/tools/eu-privacy-request>.

[111] Google, European Privacy Requests for Search Removals (20 March 2017) <https://www.google.com/transparencyreport/removals/europeprivacy/?hl=en> (‘European Google Removal Requests’).

[112] Citron and Franks, above n 12, 350.

[113] Alice E Marwick and Danah Boyd, ‘I Tweet Honestly, I Tweet Passionately: Twitter Users, Context Collapse, and the Imagined Audience’ (2011) 13 New Media and Society 114, 122.

[114] See especially Patricia Sánchez Abril and Jacqueline D Lipton, ‘Right to Be Forgotten: Who Decides What the World Forgets?’ (2014) 103 Kentucky Law Journal 363, 380.

[115] See, eg, Jeffrey Rosen, ‘The Delete Squad’, New Republic (online) (29 April 2013) <https://newrepublic.com/article/113045/free-speech-internet-silicon-valley-making-rules>.

[116] Abril and Lipton, above n 114, 382 raise concerns when private bodies are responsible for determining:

(a) whether truthful information should be treated differently to false information; (b) how to classify information as “old” versus “new” and at what point does the “staleness” of information require its removal on request by a data subject; (c) the relevance of the original source of the publication to a removal request.

[117] Google Spain (C-131/12) [2014] ECJ 317, 337 [97]; Charter of Fundamental Rights of the European Union (2000) OJ C 364/01 arts 7–8.

[118] Google has, however, continued to challenge the request of the French government to implement a successful RTBF removal request globally — that is, rather than only blocking a search result from the region in which the request was made, it would no longer be shown anywhere in the world. Google argues that it is impractical to expect domestic laws to apply in other countries, and that complying with such a request would lead to a ‘race to the bottom’ for internet control: Peter Fleischer, ‘Implementing a European, Not Global, Right to Be Forgotten’ on Google, Google Europe Blog (30 July 2015) <https://europe.googleblog.com/2015/07/implementing-european-not-global-right.html>.

[119] European Google Removal Requests, above n 111.

[120] Samuel Gibbs, ‘Google to Extend “Right to Be Forgotten” to All its Domains Accessed in EU’, The Guardian (online), 11 February 2016 <https://www.theguardian.com/technology/2016/feb/11/google-extend-right-to-be-forgotten-googlecom>.

[121] Paul Bernal, ‘The EU, the US and Right to Be Forgotten’ in Serge Gutwirth, Ronald Leenes and Paul De Hert (eds), Reloading Data Protection: Multidisciplinary Insights and Contemporary Challenges (Springer, 2014) 61.

[122] See, eg, Jeffrey Rosen, ‘The Right to Be Forgotten’ (2012) 64 Stanford Law Review Online 88.

[123] See, eg, Forde, above n 107, 130.

[124] Abril and Lipton, above n 114, 380.

[125] See Meg Leta Ambrose, ‘Speaking of Forgetting: Analysis of Possible Non-EU Responses to the Right to Be Forgotten and Speech Exception’ (2014) 38 Telecommunications Policy 800.

[126] See, eg, Zachery Davies Boren, ‘Jennifer Lawrence Nude Pictures Leaked: Reddit Removes “The Fappening” Board Dedicated to Sharing Naked Pictures of Celebrities’, The Independent (online), 7 September 2014 <http://www.independent.co.uk/life-style/gadgets-and-tech/news/jennifer-lawrence-naked-photos-leak-reddit-s-fappening-is-no-longer-happening-9716800.html> .

[127] See Abby Rogers, ‘The Man Behind a Website That Let You Post Nude Pics of Your Ex is Back with a Brand New Venture’, Business Insider (online), 29 November 2012 <https://www.businessinsider.com.au/isanyoneupcom-returns-with-huntermooretv-2012-11>.

[128] See, eg, Douglas Lichtman and William Landes, ‘Indirect Liability for Copyright Infringement: An Economic Perspective’ (2003) 16 Harvard Journal of Law and Technology 395. Cf Julie E Cohen, ‘Configuring the Networked Citizen’ in Austin Sarat, Lawrence Douglas and Martha Merrill Umphrey (eds), Imagining New Legalities: Privacy and Its Possibilities in the 21st Century (Stanford University Press, 2012) 129, 136–8.

[129] ALRC Serious Invasions of Privacy Report, above n 26, 310–16 [16.4]–[16.28].

[130] Ibid 208 [11.103].

[131] Ibid ch 7.

[132] Ibid 207–8 [11.100]–[11.102].

[133] Ibid 208 [11.103].

[134] Byrne v Deane [1937] 1 KB 818.

[135] Trkulja v Google Inc [No 5] [2012] VSC 533 (12 November 2012); Trkulja v Yahoo! Inc LLC [2012] VSC 88 (15 March 2012); Duffy v Google Inc [2015] SASC 170; (2015) 125 SASR 437; Google Inc v Trkulja [2016] VSCA 333 (20 December 2016).

[136] ALRC Serious Invasions of Privacy Report, above n 26, 208 [11.101]–[11.103], citing Byrne v Deane [1937] 1 KB 818.

[137] ALRC Serious Invasions of Privacy Report, above n 26, 207–11 [11.100]–[11.118].

[138] Ibid 208 [11.103].

[139] AY v Facebook (Ireland) Ltd [2016] NIQB 76 (9 September 2016) (Northern Ireland). Stephens J rejected Facebook’s motion to strike out claims under EU and Northern Irish data protection laws.

[140] See Jennifer M Urban, Joe Karaganis and Brianna L Schofield, ‘Notice and Takedown in Everyday Practice’ (Research Report, Berkeley Law and the American Assembly, Columbia University, 2016) 60 <http://papers.ssrn.com/abstract=2755628> .

[141] CDA, § 230(c) (1996); DMCA, § 512 (1998); Bambauer, above n 4, 2029–30.

[142] The Australian analogues are much weaker: Broadcasting Services Act 1992 (Cth) sch 5 cl 91(1)(a) only provides immunity up until the point the service provider is put on notice, and the copyright safe harbours in the Copyright Act 1968 (Cth) s 116AG apply only to the limited subset of online service providers that are also internet service providers. See also Rebecca Giblin, ‘The Uncertainties, Baby: Hidden Perils of Australia’s Authorisation Law’ (2009) 20 Australian Intellectual Property Journal 148; David Lindsay, ‘Internet Intermediary Liability: A Comparative Analysis in the Context of the Digital Agenda Reforms’ (2006) 24 Copyright Reporter 70; Peter Leonard, ‘Safe Harbors in Choppy Waters — Building a Sensible Approach to Liability of Internet Intermediaries in Australia’ (2011) 3 Journal of International Media and Entertainment Law 221; Timothy Webb and Jessica Cowell, ‘Expanding the Safe Harbour Scheme — The Existing Law, the Government Consultation Paper and the Industry Response’ (2012) 88 Intellectual Property Forum: Journal of the Intellectual and Industrial Property Society of Australia and New Zealand 35.

[143] See above n 135 and accompanying text; Kim Gould, ‘Hyperlinking and Defamatory Publication: A Question of “Trying to Fit a Square Archaic Peg into the Hexagonal Hole of Modernity”?’ (2012) 36(2) Australian Bar Review 137; Leanne O’Donnell and Matthew Gardner, ‘Defamation: Are Search Engines Publishers?’ (2012) 37 Alternative Law Journal 194.

[144] Manila Principles, above n 80.

[145] Criminal Code Amendment (Private Sexual Material) Bill 2015 (Cth) s 474.24E(1)(a).

[146] See, eg, Copyright Regulations 1969 (Cth) div 3A.4; Copyright Act 1994 (NZ) s 92C; Electronic Communications and Transactions Act 2002 (South Africa) s 77.

[147] DMCA § 512(c)(3)(A).

[148] See Jennifer M Urban and Laura Quilter, ‘Efficient Process or “Chilling Effects”? Takedown Notices under Section 512 of the Digital Millennium Copyright Act’ (2006) 22 Santa Clara Computer and High Technology Law Journal 621.

[149] Bruce Boyden, ‘The Failure of the DMCA Notice and Takedown System: A Twentieth Century Solution to a Twenty-First Century Problem’ (Policy Brief, Centre for the Protection of Intellectual Property, 5 December 2013) <http://cpip.gmu.edu/2013/12/05/the-failure-of-the-dmca-notice-and-takedown-system-2/> .

[150] Bambauer, above n 4.

[151] See Allison Tungate, ‘Bare Necessities: The Argument for a “Revenge Porn” Exception in Section 230 Immunity’ (2014) 23(2) Information and Communications Technology Law 172, 179.

[152] Bambauer, above n 4, 2057–8 suggests a definition of ‘intimate media’ that means ‘photographs and audiovisual works with four additional characteristics’, being (a) ‘one or more living humans’; (b) the plaintiff is one of those persons; (c) the plaintiff can ‘reasonably be identified’ either by the material alone, or in combination with any identifying information also presented; and (d) the media includes ‘intimate information’, meaning ‘sexually explicit conduct involving the plaintiff, or the plaintiff’s genitals, pubic area, or (if female) exposed nipple or areola’ (citations omitted).

[153] Ibid 2065–7.

[154] Berne Convention for the Protection of Artistic and Literary Works, opened for signature 14 July 1967, 828 UNTS 222 (entered into force 29 January 1970) art 5 (‘Berne Convention’).

[155] See Anna Tydniouk, ‘From Itar-Tass to Films by Jove: The Conflict of Laws Revolution in International Copyright’ (2004) 29 Brooklyn Journal of International Law 897, 906, citing Edward J Ellis, ‘National Treatment under the Berne Convention and the Doctrine of Forum Non Conveniens’ (1996) 36 IDEA — Journal of Law and Technology 327, 331.

[156] [1998] USCA2 370; 153 F 3d 82, 89–91 (2nd Cir, 1998). After establishing that US legislation provided no other mechanism for resolving choice of law issues in regards to ownership, the Court drew on the principles of private international law to find that the application of the principle of national treatment did not cover both substantive rights and ownership.

[157] See Lahiri v Universal Music & Video Distribution Inc, 513 F Supp 2d 1172, 1178 (Wright J) (WD Cal, 2007); Edmark Industries v South Asia International (HK), 89 F Supp 2d 840, 843 (Cobb J) (ED Tex, 2000); Goldie Gabriel, ‘International Distributions: Divergence of Co-Ownership Laws’ (2007) 9 Vanderbilt Journal of Entertainment and Technology Law 519, 524–5; Mireille M M van Eechoud, Choice of Law in Copyright and Related Rights: Alternatives to the Lex Protectionis (Kluwer Law International, 2003) 108–9 discusses the unsuitable application of art 5(2) of the Berne Convention as a choice of law rule and further states that if one must apply it as such a rule, then it does not apply to all issues of copyright, specifically laws of ownership. van Eechoud concludes that the country of origin rule would most likely apply to issues of ownership: at 121–4.

[158] See generally Rebecca Tushnet, ‘Performance Anxiety: Copyright Embodied and Disembodied’ (2013) 60 Journal of the Copyright Society of the USA 209.

[159] Michael A Heller, ‘The Tragedy of the Anticommons: Property in the Transition from Marx to Markets’ (1998) 111 Harvard Law Review 621. See also Rebecca Tushnet, ‘How Many Wrongs Make a Copyright?’ (2014) 98 Minnesota Law Review 2346.

[160] Garcia v Google Inc, 786 F 3d 733, 736–7, 742 (McKeown J) (9th Cir, 2015) dealt with this issue, after an actress (Garcia) agreed to a role in the amateur production ‘Desert Warrior’ without the knowledge that it would later become ‘The Innocence of Muslims’, and her two lines dubbed over with insults to the Prophet Mohammed. After receiving death threats, Garcia made a claim in copyright, requesting Google remove all links to the film. The claim was granted at the first instance, but later rejected on appeal, on the basis that such a ruling would open the door to every ‘casting director, costumer, hairstylist, and “best boy”’ becoming entitled to an authorship interest in film productions, undermining the efficiency of the copyright system: Garcia v Google Inc (9th Circ, No 12-57302, 18 May, 2015) slip op 19–20 (citations omitted).

[161] Terry Flew, Nicolas Suzor and Bonnie Rui Liu, ‘Copyrights and Copyfights: Copyright Law and the Digital Economy’ (2013) 1 International Journal of Technology Policy and Law 297, 307–8.

[162] Bambauer, above n 4, 2058.

[163] Rebecca Tushnet, ‘How Many Wrongs Make a Copyright?’ (2014) 98 Minnesota Law Review 2346, 2353.

[164] See, eg, Ann Bartow, ‘Pornography, Coercion, and Copyright Law 2.0’ (2008) 10 Vanderbilt Journal of Entertainment and Technology Law 799, 817–24.

[165] Ibid 802.

[166] Jenna K Stokes, ‘The Indecent Internet: Resisting Unwarranted Internet Exceptionalism in Combating Revenge Porn’ (2014) 29 Berkeley Technology Law Journal 929, 937–8.

[167] 786 F 3d 733, 745 (McKeown J) (9th Cir, 2015).

[168] Citron and Franks, above n 12, 360.

[169] Urban and Quilter, above n 148.

[170] Tushnet, ‘Performance Anxiety’, above n 158, 238.

[171] Nicolas Suzor, ‘Access, Progress, and Fairness: Rethinking Exclusivity in Copyright’ (2013) 15 Vanderbilt Journal of Entertainment and Technology Law 297; Jessica Litman, Digital Copyright (Prometheus Books, 2006).

[172] Inquiry into Sexting, above n 20, 198–202.

[173] Revenge Porn Senate Report, above n 9, 53 [5.25]–[5.26].

[174] See Christopher T Marsden, Internet Co-Regulation: European Law, Regulatory Governance and Legitimacy in Cyberspace (Cambridge University Press, 2011).

[175] See Marc J Randazza, ‘Lenz v Universal: A Call to Reform Section 512(f) of the DMCA and to Strengthen Fair Use’ (2016) 18 Vanderbilt Journal of Entertainment and Technology Law 743.

[176] For further consideration of administrative enforcement in copyright, see, eg, Suzor and Fitzgerald, above n 81.

[177] Richard S Whitt, ‘Adaptive Policymaking: Evolving and Applying Emergent Solutions for US Communications Policy’ (2008) 61 Federal Communications Law Journal 483, 487.

[178] Enhancing Online Safety for Children Act 2015 (Cth) s 18.

[179] Ibid pt 4.

[180] See, eg, ibid pt 4 div 4 which enables the Commissioner to publish a notice of non-compliance.

[181] Enhancing Online Safety for Children Amendment Bill 2017 (Cth).

[182] Penny Timms, ‘Facebook Announces Crackdown on Revenge Porn’ ABC News (online), 6 April 2017 <http://www.abc.net.au/news/2017-04-06/facebook-announces-crackdown-on-revenge-porn/8421826> .

[183] Harmful Digital Communications Act 2015 (NZ) s 3.

[184] Ibid ss 7–8.

[185] Ibid s 19.

[186] Law Commission (NZ), Harmful Digital Communications: The Adequacy of the Current Sanctions and Remedies, Report No MB3 (2012) 133 [5.128].

[187] See, eg, Carly Nyst, End Violence: Internet Intermediaries and Violence against Women Online (August 2014) Association for Progressive Communications <https://www.apc.org/en/node/19697>.

[188] See, eg, Carly Nyst, ‘End Violence: Internet Intermediaries and Violence against Women Online’ (Executive Summary and Findings, Association for Progressive Communications, July 2014) 3.

[189] Ibid 5.

[190] Ibid.

[191] Ibid 3.

[192] Marne Levine, Controversial, Harmful and Hateful Speech on Facebook (29 May 2013) Facebook <https://www.facebook.com/notes/facebook-safety/controversial-harmful-and-hateful-speech-on-facebook/574430655911054>.

[193] Emily van der Nagel and James Meese, ‘Reddit Tackles “Revenge Porn” and Celebrity Nudes’, The Conversation (online), 27 February 2015 <http://theconversation.com/reddit-tackles-revenge-porn-and-celebrity-nudes-38112> .

[194] Google, Removal Policies (2017) <https://support.google.com/websearch/answer/2744324>; Microsoft, Content Removal Requests Report (2016) <https://www.microsoft.com/about/csr/transparencyhub/crrr/> (2015).

[195] Patricia Cartes, ‘Announcing the Twitter Trust & Safety Council’ on Twitter, The Official Twitter Blog (9 February 2016) <https://blog.twitter.com/2016/announcing-the-twitter-trust-safety-council>.

[196] Facebook, Community Standards (2017) <https://www.facebook.com/communitystandards>.

[197] Twitter, The Twitter Rules (2017) <https://support.twitter.com/articles/18311>.

[198] Reddit, Reddit Content Policy (2017) <https://www.reddit.com/help/contentpolicy/>.

[199] Google, Remove Revenge Porn from Google (2017) <https://support.google.com/websearch/answer/6302812?hl=en>.

[200] Twitter, I’m Reporting Exposed Private Information (2017) Twitter Help Center <https://support.twitter.com/forms/private_information>.

[201] Reddit, What Is Involuntary Pornography and Glorification of Sexual Violence? Reddit Help <http://reddit.zendesk.com/hc/en-us/articles/205704725-What-is-involuntary-pornography> .

[202] Google, Remove ‘Revenge Porn’ From Google (2017) <https://support.google.com/websearch/answer/6302812?hl=en>.

[203] Microsoft, Report Content to Microsoft (2017) <https://support.microsoft.com/en-us/getsupport?oaspworkflow=start_1.0.0.0&wfname=capsub&productkey=RevengePorn&ccsid=635732383626875131>.

[204] Yahoo!, Get Help if Someone Posts Intimate Content of You without Your Permission, Yahoo! Help <https://help.yahoo.com/kb/search-for-desktop/posts-intimate-content-permission-sln26123.html>.

[205] Neither search engine provides clarification as to what constitutes ‘consenting to distribution’. It is a broad concept and we can speculate that this may reflect a risk-assessment on Google’s behalf: either victims who consent to some distribution of images are not the ‘real’ victims Google is seeking to help, or otherwise Google is unwilling to rely solely on the victim’s assertion that the scope of consent granted excludes distribution on that particular site.

[206] See, for example, reports that Disney and other investors have chosen not to pursue opportunities to buy Twitter because Twitter has not been able to effectively deal with abuse on its network: Mathew Ingram, ‘Here’s Why Disney and Salesforce Dropped Their Bids for Twitter’ Fortune, 18 October 2016 <http://fortune.com/2016/10/18/twitter-disney-salesforce/> .

[207] See, eg, United Nations Broadband Commission for Digital Development Working Group on Broadband and Gender, ‘Cyber Violence against Women and Girls: A World-Wide Wake-Up Call’ (Report, 2015) <http://www2.unwomen.org/~/media/headquarters/attachments/sections/library/publications/2015/cyber_violence_gender%20report.pdf?v=1&d=20150924T154259>.

[208] United Nations Office of the High Commissioner for Human Rights, Guiding Principles on Business and Human Rights: Implementing the United Nations ‘Protect, Respect and Remedy’ Framework, UN Doc HR/PUB/11/04 (2011) 13.

[209] See generally Nicolas Suzor, ‘The Role of the Rule of Law in Virtual Communities’ (2010) 25 Berkeley Technology Law Journal 1817; Greg Lastowka, Virtual Justice: The New Laws of Online Worlds (Yale University Press, 2010).

[210] 2011 Report of the Special Rapporteur, UN Doc A/HRC/17/27, 12.

[211] Internet Governance Forum, Dynamic Coalition on Platform Responsibility (DC PR) <http://www.intgovforum.org/cms/2008-igf-hyderabad/event-reports/74-dynamic-coalitions/1625-dynamic-coalition-on-platform-responsibility-dc-pr#action-plan> .

[212] Ibid.

[213] Ann Bartow, ‘Internet Defamation as Profit Center: The Monetization of Online Harassment’ (2009) 32 Harvard Journal of Law and Gender 384, 428; Citron and Franks, above n 12, 353; Salter and Crofts, above n 1, 238.

[214] Van der Nagel and Meese, above n 193.

[215] 2011 Report of the Special Rapporteur, UN Doc A/HRC/17/27, 12–13 [40]–[43].

[216] Georgie Keate, ‘Facebook Removes “Offensive” Photo of Breastfeeding Mother’, The Times (London), 30 October 2014, 29.

[217] Leigh Alexander, ‘Facebook’s Censorship of Aboriginal Bodies Raises Troubling Ideas of “Decency’’’, The Guardian (online), 24 March 2016 <https://www.theguardian.com/technology/2016/mar/23/facebook-censorship-topless-aboriginal-women>.

[218] Sam Levin, ‘Too Fat for Facebook: Photo Banned for Depicting Body in “Undesirable Manner”’, The Guardian (online) 24 May 2016 <https://www.theguardian.com/technology/2016/may/23/facebook-bans-photo-plus-sized-model-tess-holliday-ad-guidelines>.

[219] Emily van der Nagel, ‘Faceless Bodies: Negotiating Technological and Cultural Codes on Reddit Gonewild’ (2013) 10(2) Scan: Journal of Media Arts Culture <http://apple-xs-1.scmp.mq.edu.au/scn/journal/vol10number2/Emily-van-der-Nagel.html> .

[220] Ibid.

[221] Gonewild, The Gonewild FAQ (2017) Reddit <https://www.reddit.com/r/gonewild/wiki/faq>.

[222] Van der Nagel and Meese, above n 193.

[223] Gonewild, above n 221.

[224] Emily van der Nagel and Jordan Frith, ‘Anonymity, Pseudonymity, and the Agency of Online Identity: Examining the Social Practices of r/Gonewild’ (2015) 20(3) First Monday <http://firstmonday.org/ojs/index.php/fm/article/view/5615> .

[225] It must be noted that, while the governance mechanism that prioritises consent is useful, Gonewild itself reproduces homogenous body standards, prioritising the bodies of women who are ‘white, young and slender’: Jenny Kennedy, James Meese and Emily van der Nagel, ‘Regulation and Social Practice Online’ (2016) 30 Continuum: Journal of Media and Cultural Studies 146, 152.

