Last updated: 28.03.24

Facebook vs Twitter: The Social Media Platforms’ Fight Over Safeguarding


It is no secret that tech billionaires Mark Zuckerberg and Elon Musk have spent much of the last decade in a public feud, with animosity apparent ever since one of Musk’s rockets exploded while carrying a satellite contracted by Facebook.

Things came to a head last year, though, when Zuckerberg launched his new platform Threads and openly billed it as a direct competitor to Musk’s platform, X (formerly known as Twitter).

Speaking of the bold move, Zuckerberg stated that he felt X had missed its opportunity to become a mega app because of the amount of hostility on the microblogging platform. The claim came just months after insiders at X told the BBC that the company was no longer able to protect users from trolling, state-co-ordinated disinformation and child sexual exploitation.

And despite Threads becoming the fastest consumer app ever to reach 100 million downloads, doing so in an impressive five days, the platform has also come under criticism for safeguarding issues, with research highlighting it as the worst social media platform for protecting user privacy.

The public social media smackdown between the two companies sparked threats of a trade secrets lawsuit in July 2023, but the controversy has since died down. Interestingly, so has the popularity of Threads itself, with user numbers dwindling and Facebook remaining the core platform of Zuckerberg’s company, Meta.

But what these figures do highlight is that in their race to have the ‘best’ social media platform, both tech giants appear to have been overlooking one thing: proper safeguarding.

We’ve taken a deep dive into the safeguarding policies of both Twitter and Facebook over the last decade to understand what measures have been put in place throughout the years to protect their users.


Key Findings

  • Twitter didn’t appear to have any policies relating to conduct on the platform, whereas Facebook already had a dedicated space for this.
  • In 2015, the European Union failed to agree on a uniform policy that would prevent tech companies from offering their services to people under 16 years old. Facebook (now Meta) lobbied against the proposal, with the company having designed a system the year prior that would allow children under 13 years old to join the platform.
  • A ‘code of conduct’ was launched by the European Union in 2016 in conjunction with Facebook, Twitter and YouTube that included a requirement to review the “majority of valid notifications for removal of illegal hate speech” in less than 24 hours. Two years later, Amnesty International revealed that over 1.1 million abusive or problematic tweets had been sent to women in the year prior.
  • Lockdown caused a surge in the number of people contacting the government-funded Revenge Porn Helpline: traffic to the website nearly doubled in the week beginning Monday 23 March 2020, and more cases were opened in the following four weeks than in any previous four-week period. The same year, Facebook updated its policy guidelines on what constitutes ‘revenge porn’, or ‘non-consensual intimate images’, and the actions that are considered attempts to sexually exploit individuals.

Online Safety Act

After years of debate, the UK government's controversial ‘Online Safety Bill’ - which aims to make the internet safer for children - became law as the Online Safety Act in 2023.

The legislation aims to force tech firms to take more responsibility for the content shared on their platforms and the UK’s Technology Secretary Michelle Donelan said it "ensures the online safety of British society not only now, but for decades to come." 

The goal of the Online Safety Act is to protect children from viewing legal but harmful material through measures such as requiring pornography sites to verify users’ ages, and to ensure social platforms show a commitment to removing illegal content, including material promoting child sexual abuse, controlling or coercive behaviour, and self-harm and suicide.

However, according to Parliament’s Public Accounts Committee, years may pass before the Online Safety Act (OSA) makes a noticeable difference to people’s online experience, so it’s important that parents, teachers and support workers keep their finger on the pulse.

Tips for safeguarding children online

Virtual College by Netex are specialists in online learning, and demand for our range of safeguarding courses indicates a clear appetite for understanding more about online safety and safeguarding.

Here are some key pieces of safeguarding advice for when children start using the internet.

Using parental controls

Parental controls enable you to block and filter inappropriate content on your child’s device, reducing their exposure to information beyond their ability to understand. They can be applied to applications and video streaming sites, and set in web browsers to allow only pre-approved websites.

Establishing boundaries

As in other areas of their lives, it is important to establish clear boundaries around children’s online activity. These can relate to how long they spend online, which websites or applications they are allowed to use and what actions they must take if they encounter something inappropriate.

Staying informed

Our digital world is fast-evolving, so if you are allowing your child to be part of it, it’s important to stay up to speed with the latest news and developments. Look at what is popular on social media, be knowledgeable about the latest games and applications, and be aware of any trends that could expose your child to danger. Online safeguarding training is also a helpful way to feel better prepared to protect your child online.


Methodology

We used publicly available data on both of the platforms’ dedicated policy pages and then looked back on the information shared on these pages over the last decade to see how they have changed.

We are by no means claiming that this study is exhaustive, and there may be other information that we are not aware of.