Steven Price

Guide to NZ Media Law


Harmful Digital Communications Bill submission

By Steven | February 11, 2014

Introduction

I am a barrister specialising in media law and a lecturer in media and privacy law at Victoria University of Wellington’s law school. I am also a blogger and occasional journalist. I am the author of a textbook for journalists called “Media Minefield”. I have dealt with, studied, and commented on many cases involving harmful digital communications.

I support this Bill, and particularly the provisions creating a new complaints regime. In the first part of this submission, I explain why. In the second part, I suggest some changes. In the third part, I make some concluding comments.

1. Why I support the Bill

There is plenty of evidence that digital harassment is a problem, and a unique one. I need only refer to the Law Commission’s research about the extent of New Zealanders’ experience of harmful digital communications, particularly those involving young people. It is sometimes very serious indeed, and in NZ and overseas has been a factor in suicides. I agree that the nature of such bullying – it can be easy to start, instantly and widely accessible, searchable, and difficult to remove – means it presents challenges we haven’t seen before.

Existing remedies are inadequate. It can be argued that our existing laws are sufficient to tackle the various problems. The law of defamation, privacy and breach of confidence, the powers of the Privacy Commissioner, the Harassment Act, other laws governing threats and incitement, cover much of the territory of this Bill. That is particularly so if the amendments in the Bill to the Privacy Act, the Human Rights Act, the Crimes Act and the Harassment Act (which I also support) are passed. But that does not solve the problem. The remedies under these laws are not always accessible, sufficient or available. What is needed is a quick and cheap method of obtaining an injunction or take-down order when truly damaging material is posted online. These laws don’t provide it. In some cases (such as in criminal cases) there is no take-down power. Nor can the Privacy Commissioner issue take-down notices. In some cases (such as the torts of defamation and privacy and the Harassment Act) the victim needs thousands of dollars to take a civil action. In any event, the case law sets extremely high thresholds before injunctions can be granted. The law and procedures are arcane. Costs spiral. Delays are rife.

The Bill addresses the remedies gap. The principles at the heart of the Bill are designed to reflect the current law. I think they largely do so, when read in conjunction with the limiting factors in clause 17. What is significantly new are the remedies. These offer informal resolution by the Agency, and a new court process for obtaining orders for take-down, cease-and-desist, and unmasking anonymous posters.

The Agency offers crucial support for complainants. Complainants are often faced with digital material harming them now, but have little in the way of resources or knowledge of the law and its processes. The Agency would provide information and advice. Someone to try to reach resolution on their behalf – get the bullying stopped and/or the harmful material removed. Someone who’s got an existing relationship with Facebook and Twitter, has experience with the issues, and is able to negotiate “in the shadow of the law” – ie “if you don’t remove the post, this person can apply to court for an order taking it down.” Particularly for young people, this may be an enormous benefit, even if it doesn’t work all of the time. Netsafe is performing some of this function now, but those posting or hosting harmful material – including Facebook and other sites – will be more likely to take notice of an official government agency than a lobby group.

The Bill contains protections to ensure the powers are not misused. I think there are sufficient protections in the Act to stop any unjustified uses of the law to attack legitimate speech. The coercive powers in the complaints regime are limited. The biggest gun is the take-down order. That cannot be used unless the complainant has first tried to resolve the complaint through the Agency. The breach of the principles must be serious or repeated. The harm must be “serious emotional distress”. No order can be made unless it is demonstrably justified under the Bill of Rights Act. It must be made by a judge, who must apply the principles of natural justice. And the judge will have to consider a range of sensible contextual factors under cl 17(4), including whether the communication was true, whether it was in the public interest, the conduct of the parties, and the vulnerability of the victim. There is a right of appeal.

2. Some suggested changes

Purpose

Clause 3: Deterrence and prevention of harm should be added in the purpose section.

Interpretation

Consideration should be given to including a definition of “victim”, to constrain the operation of the complaints regime to those who are deliberately targeted by the harmful communication, rather than anyone who may feel offended and seriously distressed by it. This means that, to be a victim, the communication would have to be sent to them or be about them personally. This has been done in clause 19(4), in relation to the offence, but not in relation to the complaints regime.

The Agency

It seems strange to me that the Bill doesn’t settle what the Agency will be. Will it be Netsafe? A government department? That shouldn’t be left open.

The Agency should have to consider the clause 17(4) contextual factors.

Clauses 11(1) and 12(2)(a): It should be made clear that the Agency has the power to approve the rapid referral of a complaint to the court if the circumstances require it.

The Agency should be empowered to act on behalf of complainants before the court when it deems this appropriate and the complainant requests it, as it can do under clause 20(4) when dealing with online content hosts.

The Agency should have specific power to refer the complainant to the police or BSA, Press Council, etc.

The Agency should be tasked with gathering statistics about cyber-bullying.

Clause 8(1)(a): as presently worded, this allows the Agency to consider complaints about any sort of serious emotional distress caused by digital communications, whether or not they breach the principles. Is this deliberate?

Clause 8(2): this allows the Agency to seek and receive information it considers necessary to resolve complaints, etc. Is it necessary to spell this out? Is it intended that this creates an obligation on others to comply with requests for information? It is not clear.

The threshold for court intervention

Clauses 10(1)(a) and 11(2): Threats to cause serious harm should also be a ground for an injunction. Why should complainants have to wait until the harm has been done if someone is (eg) threatening to post a naked photo of them?

Clause 10(2): the Coroner has power to seek a takedown order (etc), but only if the Coroners Act is contravened. Should this include the (newly amended in the Bill) provisions of the Crimes Act dealing with suicide?

The content of the principles

These should make it clear that impersonation of someone (eg fake Facebook page) is covered. This is not clearly a “false allegation” or “grossly offensive” or part of a pattern of harassing conduct.

The limiting factors in clause 17(4)

Consideration should be given to including some provision addressing the role of humour. This is somewhat vexed. Humour can be nasty and harmful, especially to young people. It can also leaven a communication that might otherwise be harmful, or encapsulate a satirical point of importance.

“The extent to which the communication is a legitimate news story” should be a factor in clause 17(4). There may well be a difference between a blogger publishing a communication that says the world is a better place for the death of a “feral” West Coast man in a car accident (which may be grossly offensive, justifying a remedy) and a news story that reports on that blog post (which may well not be). That difference is probably captured in the other limiting factors, but it may be as well to add this as a factor.

Alternatively, the mainstream media should be excluded from the complaints regime in cases where there is an established complaints body with power to issue take-down orders, and rapidly if necessary, such as the Online Media Standards Authority. (This may incentivise the Press Council to acquire this power).

Consideration should be given to including some provision dealing with opinions. They are generally regarded as being more important to protect because they often contribute to a debate, and less harmful because the audience know they can disagree with them. This can be overstated – opinions expressed by influential people can be very harmful, and opinions do not always contribute to useful discussions. But “the extent to which the communication is recognisable as the expression of opinion” may be worth including in the clause 17(4) list.

Clause 17(5) seems partially to duplicate clause 6(2).

Procedural matters

Cases involving child and youth bullying should go to the Youth Court, not the ordinary District Court.

When the Bill talks about “content hosts”, it should clarify whether it means site hosts or page administrators or both (clauses 17, 20).

Orders against third parties should only be made when those parties have a right to be advised of the application in advance and a right to respond, except in exceptional circumstances.

Consideration should be given to what happens when two complainants complain about each other’s communications.

Clause 15(1) touches on this, but it should be made explicit that both parties have a right of appeal.

Remedies

Clause 17(1): There should be an additional remedy requiring deletion of material, and a remedy preventing publication of something that hasn’t yet been published.

Clause 17(2)(b): The power to unmask someone who’s anonymous should include a power to unmask someone who’s using a pseudonym.

Clause 17(2)(b): It should be made explicit that the court should consider the impact on free expression, both in the individual case, and on the flow of communications by anonymous people generally (which can be socially valuable) before ordering the release of the identity of an anonymous author. (It should also be made clearer to whom the identity is to be released. To the complainant? The judge? To the world?)

Clause 17(1)(e): Consideration should be given to elaborating on the right of reply. What if the proposed reply itself arguably breaches the principles? Are there limits on what topics can be raised or whether strong personal criticism can be included? How are disputes about the wording to be determined?

Clause 17(1)(f): I am not sure that an apology is a justified remedy. An ordered apology is not sincere. It doesn’t seem to me to add much to the rest of the remedies.

Clause 17(3)(a): Might this apply to the world at large? There seem to be arguments each way. It is certainly easy to imagine blog posts encouraging all readers to engage in harmful communications to someone. But the more natural reading of this clause is that it only applies to particular listed people. There seems to be no requirement that there be any evidence that those people are likely to engage in that conduct. In any event, the sorts of orders that can be made against a primary defendant (take-down, correction, right of reply, unmasking anonymity, etc) do not seem to lend themselves to third parties, and certainly not if they have not yet acted on the encouragement.

It should be made clear that the court can give orders even in relation to organisations outside NZ’s jurisdiction, and even when there is no identification of the wrongdoer. (Because such orders might be complied with by responsible websites).

3. Concluding comments

There should be provision for training of judges in the principles of the Bill and their interface with the Bill of Rights Act. This was organised in the UK before the Human Rights Act came into force, and meant that judges were equipped to deal with the new framework. I am concerned that District Court judges are not well equipped to deal with cases that raise complicated questions engaging free expression rights.

Most of my submissions have been directed to the complaints regime. I have also supported the tweaks to the Human Rights Act, Privacy Act, Harassment Act and Crimes Act. That leaves two other elements of the Bill: the new offence, and the liability of online content hosts.

The new offence (clause 19)

I have less to say about this. I accept that sending deliberately harmful communications about someone, which in fact do serious emotional harm, may be conduct that warrants criminalising. But I note that the threshold for this criminal offence, in some ways, seems lower than that for the complaints regime. There is no requirement that any principle be breached, so its reach is in that respect greater. Nor does the breach have to be “serious or repeated”. The list of contextual factors in clause 19(2) is shorter than that in clause 17(4), since it’s aimed at assessing whether harm has been caused, so it doesn’t address contextual factors that may render the communication significant. Nor does it explicitly require that any assessment be consistent with the Bill of Rights Act, which makes the reader wonder whether this omission is deliberate, since it is (twice) set out for the complaints regime. (On the other hand, the offence is narrower than the complaints regime in other respects: it only involves digitally sending or posting information about someone, it must be deliberately harmful, and the bar for “harm” may be set higher).

Much of what is published on the internet is “information, whether truthful or untruthful” about someone else. A significant amount of it may be said to cause that person serious emotional distress. A news story about a funeral or accident, for example. A piece of critical investigative journalism. A critical blog post. Coverage of a court case. Tweets about Charlotte Dawson. It may be thought that these are not deliberately harmful. But the criminal law can treat as intentional something that is done with foresight of consequences, even if those consequences weren’t the motivation for the action.

The offence essentially criminalises (among other things) deliberate defamations and privacy invasions. But it does not provide any of the defences that constrain those torts, such as honest opinion or legitimate public concern. The law of tort has struck a careful balance between speech and the harms caused by speech. This provision seems unnuanced.

I am concerned that the offence as drafted is too wide.

Online content host liability (clause 20)

I note that on its face, this seems to create a general “safe harbour” regime for ISPs and other content hosts. That is, it is not restricted to preventing liability arising under the Bill’s complaints regime or its new offence. It would also apply to other civil or criminal liability, including defamation, invasion of privacy, and breach of confidence. This conclusion is strengthened by the specific exclusion of copyright breaches in clause 20(6). If so, this seems a very significant provision.

In general, I support the creation of a general safe harbour provision, provided it is accompanied by requirements that the host behave responsibly, does not impose unworkable burdens on ISPs, and balances the rights of authors. I make the following observations:

It is not clear to me whether content hosts include, for example, bloggers that host comment threads, or mainstream media sites that host bloggers but don’t edit them. The definition seems to cover them. This should be clarified.

It creates a powerful incentive for the content host to remove material upon receipt of a complaint. It is the easiest thing for them to do. They may well penalise speech that isn’t unlawful at all.

What is the threshold for a complaint? It seems extraordinarily low. The complaint can allege unlawfulness, but need not do so. It can merely allege that the hosted material “ought to be taken down because it is harmful or otherwise objectionable”. It does not even have to allege that the content breaches a communications principle (which itself would be a low threshold, given that the communications principles do not themselves create legal obligations, and can only be used as stepping stones to court orders that must pass other thresholds). This seems too low, and open to abuse by complainants. I think complainants should have to allege that a law has been broken, not merely that a communications principle has been breached.

I welcome the provision that requires complainants to identify the precise material they challenge, and explain why they think it is unlawful (or harmful/objectionable).

The requirement to take “reasonable steps” does a lot of work in this provision. If an ISP concludes in good faith, but wrongly, that the post complained about doesn’t break the law, and so can remain up, is that taking “reasonable steps”? I think it should be.

There seems to be no requirement for the author of the material to be informed about these complaints, or given a right to challenge them, or any process by which the author can argue that the content is not unlawful, harmful or objectionable. There should be a mechanism to challenge them, but one that does not involve the ISP making a ruling on legality.

The Bill doesn’t say so, but it creates an argument that any content host that does not respond to a complaint under this section thereby becomes liable for the content it allows to remain up. I am inclined to think that it should be made clear that there is no such presumption, which would allow ISPs and their ilk to rely on the developing law relating to when they can be regarded as publishers.
