SYDNEY • Social media executives risk jail for failing to take down violent extremist content quickly, under controversial laws passed in Australia yesterday — a “world first” in the wake of the Christchurch mosques massacre.
Lawmakers voted overwhelmingly in favour of the laws, which hold firms like Facebook Inc and YouTube — and their executives — responsible for removing “abhorrent material” quickly.
The companies face fines of up to 10% of their global annual turnover, potentially running to billions of dollars, for failing to ensure the “expeditious removal” of footage of terrorism, murder and other serious crimes, while executives could face up to three years in jail.
Technology companies, policy experts and lawyers pilloried the legislation — which was jammed through Parliament in two days and faces an uncertain future beyond elections expected in May.
Prime Minister Scott Morrison, who is facing a difficult re-election battle, said: “Big social media companies have a responsibility to take every possible action to ensure their technology products are not exploited by murderous terrorists.”
Attorney General Christian Porter said the legislation is “most likely a world first”.
The Opposition Labor party expressed serious misgivings, but voted in favour of the legislation — in a step that echoed the bipartisan passage of a similarly controversial law forcing technology firms to weaken encryption.
With those two reforms, Australia has put itself at the forefront of global efforts to regulate social media giants more closely. But both measures have been roundly condemned by industry and experts as “knee-jerk” and ill-conceived.
It will be left to juries to decide whether the platforms acted quickly enough in taking down offending content, raising questions about how the law will be implemented.
“No one wants abhorrent content on their websites, and Digital Industry Group Inc members work to take this down as quickly as possible,” said Sunita Bose, managing director of the Digital Industry Group, which represents Google LLC, Facebook, Twitter, Amazon.com Inc and others.
“But with the vast volumes of content uploaded to the Internet every second, this is a highly complex problem that requires discussion with the technology industry, legal experts, the media and civil society to get the solution right — that didn’t happen this week.”
She also warned that the law would encourage companies to “proactively surveil” users and slammed Parliament’s “pass it now, change it later” approach. “This is not how legislation should be made in a democracy like Australia.”
Technology companies now face the task of developing fail-safe moderation tools that can quickly detect offensive material among the hundreds of billions of media files uploaded to their platforms.
In the immediate aftermath of the Christchurch shootings, Facebook alone said it had taken down 1.5 million videos of the attack.
Current tools like Microsoft’s Content Moderator API “cannot automatically classify an image, let alone a video”, according to Monash University’s Robert Merkel.
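At a technical level, the workhorse for catching re-uploads of known footage is hash matching: each upload is fingerprinted, and the fingerprint is compared against a database of fingerprints taken from previously identified material. A minimal Python sketch of the idea follows; the 8x8 grid, the threshold and the function names are illustrative assumptions, not any platform's actual system.

```python
# Toy perceptual-hash matcher (illustrative only, not a vendor API).
# An "average hash" reduces an image to 64 bits: each bit records whether
# a pixel is brighter than the image's mean. Near-duplicate copies differ
# in only a few bits, so they can be caught by Hamming distance.

def average_hash(pixels):
    """Compute a 64-bit hash from an 8x8 grid of 0-255 grayscale values."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        # Shift in a 1 for bright pixels, 0 for dark ones.
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming_distance(a, b):
    """Count the bit positions where two hashes differ."""
    return bin(a ^ b).count("1")

def is_flagged(upload_hash, known_bad_hashes, threshold=10):
    """Flag an upload whose hash is within `threshold` bits of known material."""
    return any(hamming_distance(upload_hash, h) <= threshold
               for h in known_bad_hashes)
```

Because lightly altered copies land within a few bits of the original fingerprint, this style of matching helps explain how mass re-uploads of a known video can be removed at scale; a cropped, filtered or re-filmed copy, however, can drift outside the threshold and evade detection, which is the gap Merkel highlights.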
The legislation could be a particular problem for smaller platforms used by the far-right, like 4Chan and 8Chan.
Scott Farquhar, chief executive of Australian technology firm Atlassian, warned of a broader economic impact. “The legislation is flawed and will unnecessarily cost jobs and damage our tech industry,” he said on Twitter.
News organisations also worry that they could face legal action.
The Law Council of Australia warned the legislation could have “serious unintended consequences”, such as muzzling whistleblowers, and “could also lead to censorship of the media, which would be unacceptable”.
The laws are expected to be followed by steps toward treating social media giants more like publishers, which would make them legally responsible for any content on their platforms. — AFP