Another social media 'ban' update
Regulatory guidance for social media platforms has been published, and YACSA Young Member Haylie is back to unpack it!
Written and contributed to The Smashed Avocado by Haylie.
On Tuesday 16 September, the eSafety Commissioner published the Social Media Minimum Age Regulatory Guidance, a 55-page guide for social media platforms to follow when the ban takes effect on 10 December. The guidance comes after the Online Safety Amendment (Social Media Minimum Age) Act 2024 passed federal parliament at the end of last year, amending the Online Safety Act 2021 to raise the minimum age to hold a social media account to 16.
The Commissioner's guidance is a big announcement when it comes to the ban, following many months of uncertainty around which age assurance technologies platforms would be required to use, and what would count as ‘reasonable steps’ to make sure underage users don’t hold accounts on their platforms.
Expectations of social media platforms
The guidance clarifies what is expected of platforms in the lead-up to the ban and during its implementation.
Initially, platforms will be expected to focus on the ‘detection and deactivation/removal of existing accounts held by children under 16’, which includes adding options to report underage users to their existing reporting tools. Platforms are also expected to ensure, from the start of the ban, that methods are in place to prevent underage users from creating new accounts.
Platforms will then need to tell underage users what will happen to their accounts, how to challenge decisions, how to retrieve their information, and where to seek mental health support if needed.
Age verification methods
A trial of age verification methods was conducted by UK-based assessment body the Age Check Certification Scheme (ACCS), funded by the Department of Infrastructure, Transport, Regional Development, Communications, Sport and the Arts, to research a variety of options for reducing underage social media use in Australia.
The trial found that no single method is clearly more effective than the others, as each has its own issues when it comes to accurately determining a user’s age. It concluded, however, that age assurance technology is feasible for use in Australia and can be effective in keeping underage users off social media.
As a result, it has been left up to the platforms to decide which methods they use, although ‘successive validation’ (using multiple methods, one after another) was recommended.
Any methods used need to be user-friendly, accessible, and able to account for common issues with these technologies, such as cultural differences, a lack of documentation, and language barriers.
Platforms need to be transparent, consider any risks, have measures in place to prevent basic workarounds such as VPNs and deepfakes, and allow users to request a review or make a complaint if their account was deactivated despite them being over 16.
The guidance also states that platforms using government-issued ID, or a third-party provider to check ID, need to have other verification alternatives available for users. Normal privacy laws must still be followed when verifying users’ ages, and platforms aren’t expected to retain any information used in age checks.
If asked, platforms will be required to prove that they are taking reasonable steps to ensure underage users can’t set up accounts, along with providing any other information that may be requested as part of a review.
For their steps to be considered reasonable, platforms can’t:
rely on self-declarations
wait long periods of time before confirming a user’s age
allow recently deactivated users to immediately make a new account
stop a large number of users over the age of 16 from accessing the platform as a result of the method used.
There may also be other circumstances, not listed above, where the eSafety Commissioner deems a platform not to be taking reasonable steps.
Any changes made to meet these new requirements need to be reflected in a platform’s terms of use, and platforms need to be prepared for the extra work the changes will likely bring, such as an increase in reports of underage accounts.
What happens after the ban starts?
The eSafety Commissioner and the other government departments involved in developing the guidance have set review dates one to two years out, so the requirements can be reassessed based on how the initial roll-out goes.
Any platform found not to be complying with the new legislation, in line with the released guidance, will receive a written statement, which will also be published on the eSafety website, and could be fined up to $49.5 million.
Social media ban update
There aren’t many updates to deliver, but what have we learnt about the social media ban for under-16s so far this year?
We are mere months out from the nationwide ban on under-16s holding social media accounts, and there are very few updates to give.
As YACSA young member Haylie explained here earlier in the year, decisions on which platforms would and wouldn’t be included under the ban, and how it would be implemented, only began being made after the legislation passed.
Haylie also touched on age verification technology, noting that other countries’ earlier attempts to introduce their own versions had been unsuccessful.
With parliament still awaiting a report from a government-funded age assurance trial due to be released later in August, not much has changed on this front.
While the government has been looking into this, the responsibility for verifying users’ ages sits with the platforms, and the ban won’t dictate how they do it.
So, while they’ll probably differ between platforms, age assurance methods could include technology that estimates your age from images of your face, or that matches your photo against your ID.
But while ID checks can be used for age assurance, under the legislation platforms won’t be able to make them the only option available for users to verify their age.
As the early-December start of the ban gets closer, the federal government has clarified which platforms will, and will not, need to remove accounts held by under-16s.
While early discussions focused on messaging platforms, attention has now turned to YouTube, with the government deciding that the video-sharing platform won’t get an exemption from the ban.
This is despite Google (which owns YouTube) threatening legal action in response, on the basis of the platform’s educational uses.
The government has said platforms would be exempt if their primary purpose is:
messaging, emailing, voice calling or video calling
playing online games
sharing information about products or services
professional networking or professional development
education
health
communication between educational institutions and students or their families
facilitating communication between providers of healthcare and people using those providers’ services.
But ultimately, while the government set these rules, it will be up to the eSafety Commissioner to enforce this legislation, including determining which platforms meet these criteria for an exemption.
The social media ban – what is it?
Young member Haylie reports on the federal social media ban and where it originated.
Written and contributed to The Smashed Avocado by Haylie.
The social media ban, created under the Online Safety Amendment (Social Media Minimum Age) Act 2024, is a change to the Online Safety Act 2021 that raises the minimum age to hold a social media account to 16.
The ban originated in South Australia, after the Premier commissioned an independent report from the Honourable Robert French AC, a former Chief Justice of the High Court of Australia. The 276-page report provided ideas on what the ban could look like, which went through further discussion and changes, and it went on to inform the Children (Social Media Safety) Bill 2024 in SA.
The initial idea, outlined in the report, was a ban for under-14s, with 14- and 15-year-olds needing parental permission to hold social media accounts. When federal parliament took up the idea and planned a nationwide ban, the age was raised to under-16s in the federal Online Safety Amendment (Social Media Minimum Age) Act 2024, which passed at the end of 2024.
There is still a lot to be decided about the ban, though: parliament needs to decide which platforms are actually covered and how the ban will be enforced, among other things.
Deciding which platforms are covered is the tricky part, because messaging platforms aren’t part of the ban, but some platforms, like Snapchat, do fall under that category. These cases are still being discussed, but once decisions are made they’ll be made public.
Age verification has been mentioned many times as a potential requirement for platforms to introduce when the ban takes effect. This isn’t an easy thing to do, though: several other countries have attempted to introduce this technology before, without success. The age verification roadmap previously produced by the eSafety Commissioner highlighted that the technology wasn’t yet developed enough to work.
There is, however, some potential in age prediction technology. While age estimates based on a person’s face aren’t very accurate, other methods, such as using artificial intelligence to analyse app usage and predict a user’s age, haven’t yet had their reliability tested.
These decisions will come as recommendations from the eSafety Commissioner before being passed by federal parliament and included in the ban. More information will be released throughout the year as decisions are made, ahead of the ban coming into place at the end of the year.