Written and contributed to The Smashed Avocado by Haylie.
On Tuesday 16 September, the eSafety Commissioner published the Social Media Minimum Age Regulatory Guidance, a 55-page guide for social media platforms to follow when the ban takes effect on 10 December. The guidance comes after the Online Safety Amendment (Social Media Minimum Age) Act 2024 passed federal parliament at the end of last year, amending the Online Safety Act 2021 to raise the minimum age to hold a social media account to 16.
The Commissioner's guidance is a significant development for the ban, coming after many months of uncertainty around which age-assurance technologies platforms would be required to use and what would count as ‘reasonable steps’ to make sure underage users don’t hold accounts on their platforms.
Expectations of social media platforms
The guidance clarifies what is expected of platforms in the lead-up to the ban and its implementation.
Initially, platforms will be expected to focus on the ‘detection and deactivation/removal of existing accounts held by children under 16,’ which includes adding options to report underage users to their existing reporting tools. Platforms are also expected to ensure, from the start of the ban, that methods are in place to prevent underage users from creating new accounts.
Platforms will then need to inform underage users about what will happen to their account, how to challenge decisions, how to obtain their information, and where to seek mental health support if needed.
Age verification methods
A trial of age verification methods was conducted by the UK-based assessment body Age Check Certification Scheme (ACCS), funded by the Department of Infrastructure, Transport, Regional Development, Communications, Sport and the Arts, to research a variety of options for reducing underage social media use in Australia.
The trial found that no single method is clearly more effective than the others, as each has its own limitations in accurately determining a user’s age, but it concluded that age assurance technology is feasible for use in Australia and can be effective in keeping underage users off social media.
As a result, it has been left up to the platforms to decide which methods they use, although it was recommended that they use ‘successive validation’ (applying more than one method rather than relying on a single check).
Any methods used need to be user-friendly, accessible, and account for common issues with these technologies, such as cultural differences, a lack of documentation, and language barriers.
Platforms need to be transparent, consider any risks, have measures in place to prevent basic workarounds such as VPNs and deepfakes, and allow users to request a review or make a complaint if their account was deactivated despite them being over 16.
The guidance also states that platforms using government-issued ID or a third-party provider to check ID need to have other verification alternatives available for users. Normal privacy laws must still be followed when verifying users’ ages, and platforms aren’t expected to retain any information used to check ages.
If asked, platforms will be required to prove that they are taking reasonable steps to ensure underage users aren’t able to set up accounts, along with providing any other information that may be requested as part of a review.
To be considered reasonable, platforms can’t:
rely on self-declarations
wait long periods of time before confirming a user’s age
allow recently deactivated users to immediately make a new account
block a large number of users over the age of 16 from accessing the platform as a result of the method used.
There may also be other circumstances, not listed above, in which the eSafety Commissioner deems a platform not to be taking reasonable steps.
Any changes made as a result of these new requirements need to be reflected in each platform’s terms of use, and platforms need to be prepared for the extra work the changes will likely bring, such as an increase in reports of underage accounts.
What happens after the ban starts?
The eSafety Commissioner and the other departments involved in preparing the guidance for platforms have set review dates one to two years out, when the requirements will be reassessed based on how the initial roll-out goes.
Any platform found not to be complying with the new legislation in line with the published guidance will receive a written statement, which will also be published on the eSafety website, and may be fined up to $49.5 million.