Tech giant Meta will begin removing the Facebook and Instagram accounts of children aged 13 to 15 a week before the government’s social media ban comes into effect, notifying teen users of the changes from Thursday.
While vowing to block as many underage users as it can, the powerful multinational corporation warned there were still significant errors in age verification services, suggesting the rollout of the world-first teen ban would face complex hurdles.
Messages Meta will send to account holders it suspects are under 16 before the social media ban starts on December 10. Credit: Matt Davidson
The federal government has ordered TikTok, Instagram, Snapchat, YouTube, Facebook, X, Reddit, Kick and Threads to boot under-16s from their platforms by December 10.
The ban was lauded by European Commission president Ursula von der Leyen, who called it “plain common sense”, and the New Zealand parliament is set to debate similar legislation.
From December 4, Meta will begin removing accounts it believes are held by underage users from Instagram, Facebook and its X competitor, Threads. New accounts for under-16s will be blocked from the same date.
Teenage users of the platforms will this week be sent a combination of in-app messages, notifications, SMS messages and emails. The notifications will give children two weeks' notice before their profiles are removed, and encourage them to download any photos, videos or posts that will be deleted when the accounts are shut down.
An example message from Meta.
Users who are incorrectly classified as underage will have to verify their identity either through a “video selfie” or by providing government ID to age verification platform Yoti. Users who have altered their age to above 16 in settings will also have to verify their age.
In an email alert about the measures, Meta acknowledged the "shortcomings and significant margin of error" in age verification, describing the age of 16 as a "globally novel" boundary.
