On Anzac Day 2018, comedian and writer Catherine Deveny made controversial comments on Twitter and Facebook describing the day as "Halloween for Bogans" and a "fetishisation of war and violence". Thirty years ago protests against Anzac Day were commonplace, but times have changed.
In response to her commentary, Deveny was doxxed multiple times. Her home address was posted all over the internet, and she received an avalanche of credible rape and death threats. She became the focus of several Facebook hate groups. One night, five men in a ute turned up to her house. One of them knocked on her door and videoed himself doing it.
The situation was so serious that Victoria's counter-terrorism police became involved in protecting Deveny and her family.
There is no doubt that online disinformation and hate speech can fuel violence. In 2018, a Facebook hate speech campaign led to the slaughter of thousands of Muslim Rohingyas in Myanmar. Facebook whistleblower Frances Haugen recently alleged the company's algorithms are used to fan hate speech and violence.
A new law dealing with cyberhate, the Online Safety Act, came into effect on January 23. It seeks to address serious online abuse that meets a test of being "menacing, harassing or offensive" in all the circumstances. It would capture doxxing, defined as revealing personal information to deliberately make someone feel unsafe, as well as encouraging violence against someone else.
The scheme requires adults alleging serious online abuse to report it to the online platform involved. Then, if that doesn't result in quick action, a report must be made to the online regulator, the eSafety Commissioner. The eSafety Commissioner can then investigate the matter and assess whether to issue a take-down notice to the online platform, which has 24 hours to comply. A failure to comply may result in a significant financial penalty of up to $555,000.
Under this scheme, where an anonymous troll threatens the safety of others, you are required to go through a bureaucratic complaints process. That process may involve delays, particularly if the eSafety Commissioner is busy. The troll inciting violence is able to retain their anonymity, and you can't sue either the social media platform or the troll for compensation for the damage caused. In contrast, if the troll defames you - that is, harms your reputation - you can sue either the individual or the social media platform.
Although well-intentioned, the Online Safety Act will fail.
For many years, workplace safety regulators held an effective monopoly in addressing workplace bullying. In 2010, the Productivity Commission examined the issue of workplace bullying and found regulators like Worksafe were unable to deal with the issue effectively and investigated only a tiny proportion of cases. Prosecutions were rarer still. The situation was finally remedied by the Gillard government, which enacted anti-bullying laws in 2013, allowing individuals targeted by workplace bullying quick access to a tribunal to seek appropriate orders. In recent months, the Morrison government has recognised the success of that system by extending it to victims of workplace sexual harassment.
Laws targeting a widespread problem will always fail where they confer a monopoly upon a regulator to enforce them. Regulators simply don't have the resources to investigate and/or prosecute. The Fair Work Ombudsman can't investigate or prosecute all cases of underpayment of wages. It pursues a tiny percentage of complaints. In the corporate sphere, ASIC pursues only a small number of prosecutions of companies that break the law. The immutable fact is that state regulators invariably have insufficient resources to investigate all credible complaints, let alone take action.
Cyber abuse and other unsafe behaviour on our social media platforms are a daily occurrence. The online world is vast. One dangerous troll can wreak havoc upon the lives of many with relative ease and impunity.
The eSafety Commissioner will never have the resources to adequately police cyberspace. It will never have the resources to assist in most cases of serious harm. And yet under the Online Safety Act it has a monopoly on enforcement.
Those people who are targeted and whose safety is jeopardised are unable to take steps to protect themselves.
For the last five years, we have been advocating for laws that would impose a duty of care on big technology companies to the users of their platforms. The legal duty would require the big tech companies to proactively take steps to keep their platforms safe for users. Crucially, such a duty of care could be enforced by individuals impacted by dangerous trolling, allowing the individual to sue either the individual troll or the company or both. Such an approach has recently been taken in the British Parliament's Online Safety Bill, which promises to "call time on the Wild West online".
Imposing a duty of care will be fiercely resisted by big tech companies, which will undoubtedly argue that it would impede free speech and innovation. This criticism should be stared down. Employers only focused on safety once a duty of care was imposed on them to ensure the safety of their workers. Supermarkets and other public places also extend a duty of care to their users.
It's time to disrupt the disruptors.
Some ideas and sentiments discussed in this article are raised in Ginger's book, Troll Hunting.