COMMUNICATIONS MINISTER ANIKA WELLS: We know Australians naturally have a lot of questions about this world-leading law, the first of its kind. And I was grilled yesterday by the Five Dock Public School press gang with Member for Reid Sally Sitou, a budding group of journalists who I think could join the Canberra Gallery one day. Today, I’m answering more of those questions with Australia’s eSafety Commissioner, Julie Inman Grant, and announcing the publication of the regulatory guidance for Australia’s landmark social media laws.
The Albanese Government is on the side of families. We want kids to know who they are before platforms assume who they are. From 10 December, social media platforms will have a responsibility to remove young Australians under the age of 16 from their platforms. With the support of the regulatory guidance being published today, there is no excuse for non-compliance. Platforms must take reasonable steps to detect and deactivate underage accounts, to prevent re-registration or circumvention, and to provide an accessible complaints process for their users. Platforms must implement the laws in a way that is effective, private, and fair. They must communicate openly and transparently with their users about the age assurance systems that they are using. And they must use age assurance technology that is private, data-minimising, secure and trustworthy, and must comply with the existing protections under the Privacy Act.
We know the platforms have the capability to do this. These are some of the biggest and best resourced companies in the world. We will always keep backing Australian families and keeping Australian kids safe. We cannot control the ocean but we can police the sharks, and today, we are making clear to the rest of the world how we intend to do this.
I’ll now ask the eSafety Commissioner to speak in a little bit more detail about the guidance released today.
ESAFETY COMMISSIONER JULIE INMAN GRANT: As the Minister indicated, eSafety today has released its regulatory guidance outlining the reasonable steps platforms must take to prevent children under 16 from holding or having accounts on age-restricted social media platforms.
This is not new. The guidance is informed by extensive consultation we’ve undertaken with both industry and the community, and we believe we’ve arrived at regulatory guidance that is reasonable, feasible, and fair. It’s worth noting that this is not the beginning of the conversation we’ve had with industry. These discussions, in fact, started right after the legislation was passed in late 2024. And our engagement with industry on understanding and adoption will continue: as has been reported, we will be meeting with a number of the major platforms, as well as AI companies, in Silicon Valley and San Francisco next week.
The Minister has outlined the key areas eSafety expects platforms to focus on in their initial implementation. But it’s really important to emphasise that the guidance is not prescriptive. It recognises the diversity of platforms, technologies and approaches available, and the fact that most age-restricted social media platforms have already developed their own proprietary age assurance tools or already use third-party tools. We now have an additional range of evidence on independently tested technologies through the Age Assurance Technical Trial, which gives platforms some additional options. The guidance also recognises the same thing the government recognised through that trial: there is no one-size-fits-all approach.
Now, it may be a surprise to some that the guidance does not mandate a single technology approach to age assurance or set a required accuracy rating for age estimation. On the contrary, it suggests a multi-layered waterfall approach, with the proviso that the provision of government or digital ID can never be the sole or final choice. There must always be alternatives. This is why the guidance is also clear that companies should first focus on the goal of deactivating or removing accounts of under-16s on 10 December, make discoverable user reporting available so that parents and educators have recourse when under-16 accounts are missed, and provide a rigorous appeals and redress mechanism for accounts that have been mistakenly removed. We also recognise that preventing future under-16s from joining platforms will take longer and be more complex in terms of building systems. The guidance further recognises that both age-based and location-based circumvention attempts are likely to take place. It includes very specific technical information on how we expect that circumvention to happen, but also on what we expect the companies to do to mitigate these risks and, when necessary, re-verify the ages of users who may be using VPNs, for example.
You’ll see that eSafety is taking a principles-based approach to guide the platforms in their implementation. The principles are laid out in the guidance, but key among them is that the technologies, systems and processes platforms are using are reliable, accurate, robust and effective; and privacy-preserving, data-minimising and minimally invasive. No company wants to create user friction on their platforms. We also want to make sure implementation is inclusive, accessible and fair; transparent; proportionate; and evidence-based and responsive to emerging technology and risk.
Now, this is a bold statement, but I believe that a feature of this legislation is that it encourages the captured entities to innovate, to invest, and to aim for consistent improvement. So, this will be regulation that encourages innovation and recognises the dynamism of the burgeoning age assurance industry, demonstrating that advanced AI technologies, deployed purposefully, can lead to much better safety outcomes. Beyond social media, just yesterday the dating app Tinder announced that it would be implementing Face Check, its proprietary facial age estimation technology. The online gaming platform Roblox has done the same, thanks to our engagement with them, using a third-party solution called Persona that was evaluated through the technology trial. This morning, Apple also announced that it would be introducing much more granular age attribute information: signals it can give to platforms so they can be sure users are over 13, over 16 or over 18.
So, all of this is creating an ecosystem that suggests this is the way the world is going. It’s also the way global regulation is going. And I’ve been in touch with the European Commission following President Ursula von der Leyen’s State of the Union speech, which indicated that they would be working closely with Australia and potentially emulating what we’re doing here.
I just want to raise one other example for us all to think about, because different people at different ages have different levels of comfort with things like taking selfies, using biometrics or even digital ID. It really varies. But few of us remember now when Apple introduced the facial analysis that enabled us to unlock our phones. That was actually November 2017. We used to use our thumbprints, and that felt very strange. When we were forced to wear masks during COVID, Apple retrained its Face ID models to focus on the eye region. And now we almost take for granted that Face ID is used to access a range of mission-critical services, including our bank accounts. And I believe that over time, facial age estimation, liveness tests and other forms of age assurance will become normalised as well.
In wrapping up, I just want to reinforce that privacy was front of mind in everything we did, and we’ve been engaging deeply with the Privacy Commissioner and the OAIC on a range of issues. The Commissioner is up in Korea right now for the Global Data Protection Commissioner’s Conference, but she expects to have her privacy guidance out in the coming weeks.
We also know that this is going to be a monumental event for a lot of children. A lot of children welcome this, as parents certainly do, but we know it will be difficult for some kids. So, we have also released today our commitment to protecting and upholding children’s digital rights, recognising that children, their parents and educators will continue to need education and resources to prepare them for this moment, and that’s precisely what we’re prepared to provide. Thank you.
WELLS: Thank you so much, Commissioner. We’ll go to questions now and, given that we have the eSafety Commissioner with us, we’ll start with today’s announcement and guidelines before going to broader, around-the-grounds questions.
JOURNALIST: Minister, so what has the response of tech companies been to the guidelines so far?
WELLS: Well, I think they’re probably busy reading them given that they’ve been published today. We’ve been working very closely with them. I understand that the eSafety Commissioner is meeting weekly with the big ones now. Our office is certainly working closely with them to make sure that there are no surprises on either side about what’s coming.
JOURNALIST: Have they had a lot of input into the way that they were developed?
GRANT: As I said from the beginning, my first meetings with senior officials at the technical and leadership levels were in December 2024, back in Washington DC. I think we had consultations with 165 different entities throughout the consultation process. We are talking to the companies on a regular basis. They did not input directly into this; it was done independently, as it needs to be. But as the Minister said, I will be meeting with a number of them in California next week, with their technical teams and compliance teams and those who are rolling this out.
This is really part of the fairness and due diligence work that I think we need to do to make sure that they understand what they’re required to do and by when. And if there are challenges or impediments, we need to understand that from a compliance and enforcement perspective as well. The best way for this to work is for it to be done together, as we have just done with the industry codes. Again, the world is moving this way. I know the industry is interested in what we call regulatory coherence: when we’re working with other regulators around the world, like the 27 Digital Services Coordinators in the European Union, companies would rather be building solutions towards one specific outcome than towards 40 different outcomes.
JOURNALIST: Just a question for the Minister. Given that the onus and the responsibility are on the social media companies to make sure that these children are not on these social media apps, how can you guarantee that they will not be, when it’s up to those companies?
WELLS: Because it is the Australian law, and these social media companies will have had 12 months’ notice by the time 10 December rolls around to comply with the Australian law, as we expect any company who conducts their business on these shores to do.
JOURNALIST: You’ve rolled out these guidelines today, which gives them around three months to get things ready. Do you think they’ll be ready come 10 December?
WELLS: They have no excuse not to be ready. As the eSafety Commissioner has already touched on this morning, age assurance technology is used increasingly and prolifically by these social media platforms for other purposes, predominantly commercial purposes, to protect their own interests. There is no excuse for them not to use that same technology to protect Australian kids online.
JOURNALIST: You’ve said that eSafety is not asking platforms to verify the age of all users. Why don’t you want social media companies doing that?
WELLS: We want these rules and the delivery of these laws to be as data-minimising as possible, to keep people’s data as private as possible. And I would say the reason we’re not asking everybody to verify their age is that these social media platforms already know an awful lot about us, of our own volition. If you have been on, for example, Facebook since 2009, then they know you’re over 16. There’s no need to verify.
JOURNALIST: But if you’re minimising those chances, are you also creating an opportunity for people to slip through the cracks? I mean, what about all the kids that are going to come who haven’t had Facebook since 2009?
GRANT: One of the scare tactics we’ve already seen one of the platforms use is the claim that every Australian is going to have to verify their age. They of course don’t want to do that, because it would create a lot of friction and inconvenience for everyone. So, we have actually said in the guidance that this would be an unreasonable step to take. Again, the platforms are using very granular, often AI-based age inference tools that tell them very basic things. Thirteen-year-olds very often are speaking to 13-year-olds. The tools use natural language processing, looking at emojis and acronyms and the way language is used. They’ll look at when kids are logging in before school and after school. There is a whole range of signals they can use to identify under-16s. Now, I think we all recognise that in the past they’ve been training their models on ages 13 and under, which is probably easier to achieve because that has been the de facto age for 30 years. And of course, once you’re an adult and you’re 18, it’s much easier to tell. That doesn’t mean they will not need to continue improving their classifiers, but even if you look at the TRL results of the Age Assurance Technical Trial, there are at least 10 technologies at the highest levels, with a 9 or an 8 score. So, these technologies are improving every day. What we would encourage the platforms to do is make sure they’re testing more on the 13-to-15 age range and also capturing all of the ethnicities that represent Australia.
JOURNALIST: And what’s the threshold for eSafety to launch a legal action or something like that?
GRANT: Well, we do set out our compliance and enforcement approach in the guidance. Now, we don’t expect that every under-16 account is magically going to disappear on December 10. What we will be looking at is systemic failures to apply the technologies, policies and processes needed to comply. As I mentioned earlier, the first order of action really is to tackle the under-16 accounts that are on the platforms now, on December 10. We recognise that building the systems, technologies and processes to recognise new users coming on board will take more time. We’ve always been a fair regulator, and we will take those factors into consideration. As a procedural fairness step, we have also created and published a self-assessment tool that all of the companies can use to assess whether or not they believe they’re an age-restricted social media service. Those assessments are due on 18 September. We’ll of course do our own assessment based on the final rules that the Minister tabled on 29 July.
And there may be differences of opinion. There have been a couple who have said, yes, we do think we’re an age-restricted service, but most of them are saying they are not. So, there may be those who claim not to be captured and force a legal fight, or who choose to do nothing. But I would be very surprised if companies choose to do nothing, because none of them are doing nothing around the globe. For instance, the day the Minister tabled her final rules with the Prime Minister, Google put out a blog indicating that it was rolling out age assurance on YouTube in the United States. They will be using that same infrastructure to roll out age assurance technologies for Australians. So, we know they’re capable of it today, and we will certainly be watching and expect that they will do the same in compliance with Australian law.
JOURNALIST: Just a few more details, if you don’t mind. With the ‘take reasonable steps’ language, are you expecting all platforms to take the same steps, or will what’s judged to be reasonable differ from platform to platform?
GRANT: Yes, what’s deemed to be reasonable will differ, and that’s why we’re having these continued conversations. We want to dig deeper with the technical and engineering teams to understand what is in the realm of the possible and what is going to be challenging from an implementation perspective. Every platform is different and will take a somewhat different approach, and that’s why we framed the guidance in that way.
JOURNALIST: And does that mean a company like Meta, for example, is going to face a higher threshold or expectation for the action it takes, because it is a more sophisticated company than a smaller platform?
GRANT: Well, I would say that we’ll see how they implement the reasonable steps we’ve put forward. In many respects, you might say Meta has a head start, because they have had teen accounts in place for some time, putting under-16s into protected accounts. Again, we’ll walk through some of those issues in my conversations with Meta this week. They’re interested in talking about things like what a good appeals or redress process looks like. So yes, some of these more advanced companies will already have very sophisticated systems in place. We will of course make allowances for those that are smaller, for whom this could become a much more resource-intensive exercise. But again, this is a really great reason to have done the Age Assurance Technical Trial, because there are a lot of third-party tools that work at a very high accuracy level, so companies will not have to build internally. And a number of sophisticated companies, like Reddit and Roblox, have decided to use third-party solutions.
JOURNALIST: When it comes to 10 December, you said there’s an expectation that companies will use their existing age inference tools to take a bunch of accounts off their platforms. If I’m a 15-year-old on 11 December and my account hasn’t been removed, should I expect that I’ve managed to slip through? Or will there be second and third waves where the tech platforms continue removing accounts that perhaps fall through the cracks of that initial …?
GRANT: As I said, I don’t expect it will be instantaneous, with accounts magically disappearing. This is not a complaints scheme, so we will not be responding to every complaint. But we will want to hear from the public: parents and educators can report to us if there are specific platforms where their children or students are under 16 and their accounts are still there. We will of course triage those reports and send them to the appropriate platform. That’s why we’ve asked the platforms to make discoverable and responsive user reporting tools available, because we know people will be missed. So, this will be an iterative process, but we will continue to engage on a regular basis. And again, if we detect really egregious oversights, or too much being missed, then we’ll talk to the companies about needing to retune their technology. On either side there could be extensive over-blocking or under-deactivating.
JOURNALIST: Just one more, on the $50 million fine – that’s the upper limit. What is the offence that would incur that maximum fine? And are you expecting to, say, issue a bunch of warnings before you get to that level? What does that framework look like to you?
GRANT: We always take a graduated approach, where we will engage informally with the platform first if we see something concerning, tell them what our observations are, and ask them why they think that’s happening. There are of course certain companies that aren’t willing to engage, or that will be more likely to move to judicial review or a lawsuit. We need to be prepared for that as well. But in most cases, these companies want to operate in Australia and they respect Australian law. That said, there may be a few who decide to do nothing just to test our resolve. And if that is what they plan to do, we will meet them with force.
I think the Minister said the other week that she had sharp elbows, so it’s good to know that the Government is behind us on this.
WELLS: And I’d add to both your question and yours, Digby, that we are not anticipating perfection here. These are world-leading laws, but we are requiring meaningful change through reasonable steps that will see cultural change and a chilling effect that will keep kids safe online.
JOURNALIST: So just further to that: come 10 December, companies that haven’t taken, or don’t look like they’re taking, reasonable steps won’t be fined on the spot. But it sounds like the engagement with those companies is quite flexible. What’s the lead time in terms of: they haven’t taken reasonable steps, we’ve engaged with them, they’re not engaging, so now we’re going to fine them?
GRANT: Well, again, this is why we’re doing these direct engagements with the companies most likely to be impacted, now that they have the regulatory guidance, so that we can have really open conversations about where they are, what the challenges are, and what they see as potential impediments. One of the reasons I’m in this role is that I spent 22 years in the technology industry. I’ll be bringing my COO, who is also our chief technology strategist, and we’ll be having highly technical discussions. We’ve been observing these companies for a long time; we did a transparency report called Behind the Screen in February to see where they were placed. I don’t think they’re going to be able to pull the wool over our eyes, and I think they know this is a marquee, very important policy for the Albanese Government. I expect most of them will step up and come into line.
JOURNALIST: Is it possible to know as well, when you do your trip to Silicon Valley, which platforms specifically you will be meeting?
WELLS: Well, I’m off to the UN with the PM, so I’ll be covering the East Coast and the Commissioner will be covering the West Coast. So, I defer to you on West Coast matters, Julie.
GRANT: We will be meeting with a range of companies, from Apple to Discord to Character AI to OpenAI, and, given the codes, Anthropic as well. We’re still finalising meetings with Google and Meta, but I’ll also be participating in the Stanford Trust and Safety Research Conference, where there will be trust and safety personnel from companies across the United States, so pretty much everyone will be there. For companies like TikTok, most of their work is not done in the United States; a lot of their development and infrastructure work is based in Singapore or China. We’ll also be spending a half day with Snap in Los Angeles.
WELLS: And I can say that I’ll be supporting the Prime Minister and meeting with other world leaders about our world-leading social media laws. You would have seen, as we’ve already discussed this morning, that the President of the European Commission has said they’re watching us closely. They want to talk to us about the how and the why. We’ve had a lot of interest from other world leaders, so we’ll be taking those meetings and talking about the how and the why next week.
JOURNALIST: Minister, just on your trip to the East Coast, who are you meeting with on that trip?
WELLS: That’s sort of as much as I can give you at the moment, but I welcome your interest and stay tuned.
JOURNALIST: For adults using any of the platforms, will they see changes to how they use the platform going forward now?
GRANT: Adults should not see huge changes to the platforms, and that’s why we put into the regulatory guidance that we think it would be unreasonable for platforms to re-verify everyone’s age. We want them to focus on the under-16s. We know they have the targeting technology to do this: they can target us with deadly precision when it comes to advertising, so they can certainly do the same when it comes to the age of a child.
JOURNALIST: Just one last thing. Have you had any discussions with TikTok?
GRANT: Yes, on a fairly regular basis. They are actually quite good at age assurance. I don’t know if you saw my 7.30 Report interview, but when I met with the TikTok CEO a couple of years ago, he told me they can identify the age of a child in three seconds – meaning that a lot of companies are using biometrics now.