Even as Turkey joins the list of countries that intend to ban social media access for children and teens, Meta, which finds itself impacted the most, is again trying to convince parents that its Teen Accounts proposition is safe. Turkish lawmakers this week passed a bill that restricts social media access for children under the age of 15, continuing a global trend of protecting children from often dangerous activity on these platforms. Turkish President Recep Tayyip Erdogan now has 15 days to sign the bill, after which it becomes law.

The bill will mandate social media platforms, including Meta’s Facebook, Threads and Instagram, as well as X, Pinterest, TikTok, YouTube, Twitch, Kick, Reddit and Snapchat, to put in place age-verification systems and extensive parental controls. Governments also expect social media companies to respond to and take down content they deem harmful to children and the wider societal fabric. The addition of artificial intelligence (AI) chat features within these apps adds another layer of possibly dangerous complexity.
“We are living in a period where some digital sharing applications are corrupting our children’s minds and social media platforms have, to put it bluntly, become cesspools,” Erdogan said, mincing no words in a televised address this week, ahead of the bill being passed by lawmakers. Turkey isn’t the only country concerned about the risks and pressures facing young users on social media platforms, with often-documented instances of suicide, cyberbullying, addiction and exposure to violent content.
Case in point: in 2024, Meta CEO Mark Zuckerberg was asked by lawmakers in the U.S. why Meta allowed drug dealers to post ads on the company’s social media platforms—a report by watchdog group Tech Transparency Project had found more than 450 ads on Instagram and Facebook selling an array of pharmaceutical and other drugs.
Turkey’s move adds momentum to what is now a global response to often uncontrolled content sharing on social media, which children otherwise find incredibly easy to access. Australia, in December, became the first country to ban social media for children under the age of 16. In January, France passed a bill to ban social media access for children aged 15 or younger. Greece has already announced a ban on social media apps for anyone under 15, effective from January next year. Indonesia and Malaysia have banned access for anyone under the age of 16, while Germany, Poland, Spain and the UK are weighing bans as well.
The Indian government is expected to propose a tiered approach to limiting social media access by age group up to the age of 18, with the focus expected to be on enforcing age verification using Aadhaar and verifiable parental consent. Despite attempts over the years, content regulation on social media has never succeeded to the extent that these platforms could be considered safe for children.
While it is obvious that social media companies would oppose any restrictions on access to their platforms, since restrictions cut into user numbers, eyeballs and revenue, they have been forced to respond to the bans imposed in many countries, as others contemplate similar moves.
In the days leading up to the ban in Australia, Meta sprang into action, deactivating more than 550,000 accounts suspected to belong to minors and putting in place a new content filtering system inspired by movie ratings. YouTube deactivated more than a million accounts linked to minors globally, and continues to argue that it isn’t technically a social media company. TikTok took down more than 200,000 accounts in Australia alone, while Snapchat saw its teen user base drop as much as 14% after Australia’s ban.
Meta makes a pitch to parents
Even as more countries ban social media platforms for children, something that particularly hurts Meta, the company is now trying to convince parents that its reinforced tools for supervising children’s social media accounts can get the job done. This week, it announced detailed insights for parents into their child’s use of Meta AI, now rolling out to parents supervising Teen Accounts in the US, UK, Australia, Canada and Brazil—with a global rollout expected in the coming weeks.
“Parents using supervision on Facebook, Messenger, or Instagram will now see a new Insights tab within supervision, both in-app and on web. From there, parents will be able to see the topics their teen has been asking Meta AI about in that specific app over the past week. Topics can range from School, Entertainment, and Lifestyle to Travel, Writing, and Health and Wellbeing, among others,” the company says, in a statement.
Earlier this month, Meta had updated Instagram Teen Accounts with supposedly stronger safeguards defining the age-appropriate content visible in the feeds of teenage users. First, a content filter, on by default, that Meta says is inspired by 13+ movie-rating criteria and parent feedback. Second, teens will no longer be able to follow accounts that are noted to regularly share age-inappropriate content, or whose name or bio suggests the account is inappropriate for teens.
But then again, Meta also warned that no system is perfect and that parents must continue to give feedback.
Despite governments in many countries attempting to keep children away from risks on social media by banning access, there is the risk of VPNs, or virtual private networks, being used to circumvent these restrictions by spoofing the user’s location. There is also the reality of children bypassing social media platforms for apps such as WhatsApp and Telegram, where mini communities morph into social networks to share the same type of content that may otherwise have been shared on Instagram, TikTok or Snapchat.
An example of this comes from the UK’s Online Safety Act, which came into force in July last year, mandating age checks on web platforms. Proton VPN reported a 1,400% hourly increase in new user registrations, while NordVPN reported a 1,000% rise in subscriptions from UK-based users. In March this year, analytics firm Apptopia reported that daily active VPN sessions in Australia had peaked at 1.32 million since the age restrictions were put in place.
