Interview with Ali Moore, ABC Radio Melbourne

ALI MOORE, HOST: Well, do you watch your social media feed and wonder just how much of what comes your way is plain wrong, and not just factually incorrect, but active disinformation?

The question is, I guess, how do you always know, and how do you stop it? The Government is planning to give the Media Regulator new legislative powers to try to reduce the spread of misinformation and disinformation. Michelle Rowland is the Federal Communications Minister. Michelle Rowland, good morning.

MICHELLE ROWLAND, MINISTER FOR COMMUNICATIONS: Good morning.

MOORE: What exactly is being planned? 

ROWLAND: What we are proposing to do here is respond to significant community concerns, and concerns that we have as a Government, about the safety of Australians online. We recognise more needs to be done by the online platforms to ensure that the spread of mis- and disinformation is minimised.

We know that misinformation can actually pose a threat to the safety and wellbeing of Australians; it can cause civil disruption, it can impact on health and safety and electoral integrity, and we are pleased that there is a voluntary Code of Practice that the industry has in force at the moment.

However, what we are proposing now as a Government is to empower the Regulator, the ACMA, to have the ability to enforce that Code. What that will mean is essentially the Regulator holding the platforms to account under their own Code, and there being penalties and other measures that could be taken if they fail to do so.

MOORE: So, is the Code, the voluntary nature of the Code, failing?

ROWLAND: The Code really fits into the overall framework that we've had in this sphere for a very long time for telco and broadcasting. It is a co-regulatory framework. It's one that says, "We'll let the industry work out some of the mechanisms that it needs to meet certain goals."

What we also have in telco and broadcasting is the backstop of the Regulator to step in when industry codes are not being complied with and to take action. Essentially, what we intend to do with these new laws is to bring the online environment into the same realm as telco and broadcasting, which currently exist under the ACMA's remit.

MOORE: And am I right, though, that since the Code was introduced, ACMA has actually found that more than 80 per cent of Australians have experienced or encountered misinformation, which would seem to indicate that the voluntary part of the Code is not working?

ROWLAND: Certainly as a Government we are concerned that more can be done, and the safety and wellbeing of Australians is always our paramount concern here.

We know that the platforms have implemented certain measures to combat mis- and disinformation, but we are also concerned that the scale, the scope and the speed at which this misinformation is disseminated is something that we need to address now. We think the time is opportune: having had a number of reviews of the voluntary Code, with the Regulator examining how it's operating in practice, now is the time to legislate to provide that extra enforcement capability.

MOORE: You're listening to the Federal Communications Minister, Michelle Rowland. Michelle Rowland, you talk about scale, scope and speed, and I guess that's the challenge. So in a very practical sense, if you give the Media Regulator legislative powers to enforce the voluntary Code, what would that mean? Can you give me a practical example? Say I come across a story that I think is blatant disinformation; who would I contact, and what would happen from there?

ROWLAND: Certainly, and maybe I can use an actual example that has happened in Australia. It arose out of the pandemic, where we had pieces of mis- and disinformation online associating 5G technology with the cause of coronavirus. What we had was the platforms responding in a variety of ways to ensure that that misinformation was suppressed as much as possible; artificial intelligence and other mechanisms were being used.

But what the Regulator would do if that was to happen today would be threefold. Firstly, the Regulator can ask the platforms to show how they are meeting their obligations under the Code, so there are additional information-gathering powers. Secondly, in terms of complaints handling, the first complaints would be made to the platforms themselves, but we envisage that the ACMA will be able to monitor those processes and ensure that complaints are handled as the platforms say they are going to be. Thirdly, having that level of transparency, that enforceable transparency, will be the difference here.

Having these new enforcement powers puts greater incentives on the platforms to do everything they can, and everything that they commit to do under this Code, to suppress mis- and disinformation.

MOORE: But again, it goes back to scale, scope and speed, and to get that level of transparency actually requires huge resources. Twitter is just one example of what's happening at the moment: various people's feeds have changed, and the amount and the type of information on those feeds have changed. So you might be able to monitor just how transparent they're being, but what happens if you find that you're not happy with what they're picking up? What can you do?

ROWLAND: Well, we envisage that, just like we currently have in the telco and broadcasting space, the Regulator will have directions-making and information-gathering powers to actually say to the platforms, "You need to comply with this particular aspect to ensure this outcome." We also have a penalty regime that already applies in other parts of the telco and broadcasting sectors, and we envisage that that will operate similarly for the ACMA.

I will point out that there have been ongoing developments in how these platforms are actually monitoring this. A lot of it is done through artificial intelligence, and through complaints as a data-monitoring tool, but there are other instances where, for example, different behaviours can be identified by the platform.

For example, if there is a large amount of traffic on a particular issue and that is something that is causing concern, then that can already be monitored by the platforms, and they do have the ability to do that. Under the Code, there are no formal directions as to the precise technologies that can be used; some examples are given, but certainly the technology being developed by the platforms has come a long way in the last couple of years as well. We want to make sure that they continue to develop that to keep up with the scale and scope of the problem.

MOORE: And are you going to be giving ACMA more resources to do this? 

ROWLAND: The ACMA does have additional resources in this area, and we will continue to monitor what sort of resources they need so this is more effective; there's no point having a law and having these powers if they're not going to be effective.

But I would point out that we intend to release a discussion paper in the first half of this year which will also go to the practical elements of actually implementing the legislation.

We are very mindful of the need not only to have the most robust laws, but to be able to actually administer them for the purpose for which they are legislated.

MOORE: Minister, obviously when it comes to misinformation and disinformation there are extremes, and there are extremes that, you know, most reasonable people would be able to say, "Yep, that's wrong" or "Yep, that's deliberate disinformation." But there's a whole lot of stuff in the middle that might be misinformation to you, but is not misinformation to me. Who's going to arbitrate that?

ROWLAND: Well, it is not going to be the role of the Government, the Minister or the Regulator to arbitrate that. We are approaching this from the perspective of, and utilising, the mechanisms that are recognised by like-minded democracies, in particular some of the leading regulatory principles that the EU has articulated, which put the onus on the platforms. We're very mindful that part of a functioning democracy is ensuring freedom of speech, and ensuring that news content is freely and fairly available.

But certainly, the misinformation being considered here does have specific elements: it needs to be information or content that is verifiably false, misleading or deceptive; it is disseminated on those digital platforms, including by automation; and it is reasonably expected to cause or contribute to serious harm, or the risk of serious harm.

So there are those criteria there. Certainly, in a very practical sense, this is not about suppressing free speech, it is not about suppressing opinion. To the contrary, it will specifically exclude news content and it will specifically exclude authorised electoral material.

What this is about is protecting our democracy and ensuring that the digital platforms are doing everything they commit to do to suppress the amount of misinformation that is being disseminated.

MOORE: So that was a definition, or some of the principles behind misinformation. If it's reasonably expected to cause harm and verifiably false, what's disinformation? 

ROWLAND: So disinformation is basically the same, but the key difference is that there needs to be the intent to mislead or deceive and cause harm. You have examples of rogue states or bad actors that do this; the key difference is that the intent is there.

MOORE: Michelle Rowland, thank you very much for joining us this morning.