Is the government finding creative ways to legislate online censorship?
Following comprehensive criticism of the Federal Government’s proposed misinformation bill (Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2023), the government was forced to withdraw the bill late last year, announcing it would ‘take on board the response to the public consultation and improve the bill’. The bill’s introduction to parliament has now been delayed, and it is unlikely to be introduced until next year.
The bill came under fire for its use of overly broad and vague terminology, with concerns that the laws could be used to restrict public debate and censor unpopular opinions. It was also heavily criticised for failing to strike the correct balance between combating untrue information and protecting freedom of speech and expression.
What many people may not be aware of, however, is that the government has quietly finalised consultation on the Online Safety (Basic Online Safety Expectations) Amendment Determination 2023 (“the Amendment Determination”), a determination made under the Online Safety Act 2021, which, on its face, appears to utilise delegated legislation to try to achieve some of the objectives of the now-thwarted misinformation bill.
It is concerning that the Amendment Determination has received relatively little attention to date.
While some of the provisions of the Amendment Determination – such as those which impose new obligations on service providers to deal with issues raised by generative AI technologies and to safeguard ‘the best interests of the child’ – are to be commended, there are others contained within the instrument which are of grave concern.
In particular, the Amendment Determination purports to prevent the dissemination of ‘harmful’ material as well as ‘hate speech’.
The Determination adds a new definition of hate speech as:
“communication by an end-user that breaches a service’s terms of use and, where applicable, breaches a service’s policies and procedures or standards of conduct mentioned in section 14, and can include communication which expresses hate against a person or group of people on the basis of race, ethnicity, disability, religious affiliation, caste, sexual orientation, sex, gender identity, disease, immigrant status, asylum seeker or refugee status, or age” [emphasis added].
This definition will inform the process of “detecting and addressing hate speech” which breaches a service’s terms of use. It may capture lawful material which is categorised by the service provider as hate speech under its own policies for moderating content, with no external accountability or assessment of the policies or their application.
The concept of harm is likewise excessively broad. Under the Misinformation Bill, “harm” would include “hatred against a group in society on the basis of ethnicity, nationality, race, gender, sexual orientation, age, religion or physical or mental disability”. This subjective and unclear definition could capture lawful, legitimate, and harmless expression, and is likely to apply under this Determination as well.
The mechanisms by which the Amendment Determination regulates online service providers are comparable to those under the Misinformation Bill in various respects, including their extensive reliance on the good conduct of those providers. Both measures would empower service providers to censor content without public awareness, and without the service providers being held to account.
The Amendment Determination sets a low bar for interference with free speech, which presents a real risk that the law will be misused to engage in ideological and political censorship. It contains no balancing provisions or safeguards to preserve free speech, is presented without any consideration of its potential impacts on free speech, and gives no comfort that it will not result in overzealous intervention. On the contrary, it licenses excessive restriction of content, applying open-ended, undefined terminology and giving no clear parameters for pivotal concepts such as ‘unlawful’ or ‘harmful’.
With no oversight mechanisms, the Determination has the potential for serious misuse without the public ever being aware. It produces unjustified restrictions on free speech through the subjective, vague and uncertain assessment by non-experts of what is unlawful or, though perfectly lawful, deemed harmful.
The Human Rights Law Alliance has provided a submission to the consultation process, expressing its strong opposition to the Determination.