X (formerly Twitter) has revealed its content moderation practices and how much content violates its rules.
In its Global Transparency Report for H1 2024, X disclosed that 0.0123% of its posts violated its rules over the six-month period. That works out to roughly one in every 8,000 posts.
The most common violations were categorised as hateful conduct, abuse and harassment, and violent content, in that order. Hateful conduct accounted for almost half of all violations, with a post violation rate of 0.0057%.
X follows a “freedom of speech, not freedom of reach” enforcement philosophy: when content violates its rules, the first tier of enforcement is to restrict the post’s reach, making it less discoverable, rather than removing it.
Posts with restricted reach see an 82% to 85.6% reduction in impressions.
The next tier of enforcement is removal of the post or suspension of the account. In the first half of 2024, 10.7 million posts were removed or labelled.
In comparison, TikTok removed 980 million video comments in the first quarter of 2024 alone for violating its community guidelines. This represents 1.6% of total video comments published during that period.
TikTok’s videos and livestreams had removal rates of 0.9% and 1.7% respectively for content violating its guidelines. Unlike X, TikTok removes violating content immediately.
X stated in its report: “Our policies and enforcement principles are grounded in human rights, and we have been taking an extensive and holistic approach towards freedom of expression by investing in developing a broader range of remediations, with a particular focus on education, rehabilitation and deterrence.”
Platform manipulation is defined by X as engaging in "bulk, aggressive, or deceptive activity that misleads others and/or disrupts their experience."
The most common reason for account suspension was platform manipulation and spam (464 million accounts), followed by child safety (2.8 million).
X owner Elon Musk has been on a mission to purge the platform of spam and bot accounts. He tried to back out of buying Twitter in 2022, arguing that the company had not provided the information he requested about bots and spam accounts.
X’s defences for manipulation and spam are “primarily proactive or automated”. The platform's content moderation for all categories is a combination of machine learning and human review. Its systems either take action automatically or surface content to human moderators based on user reports or proactive detection methods.
Meta, the owner of Facebook and Instagram, reports its transparency figures by policy area and measures the prevalence of violations, on the assumption that the harm caused by violating content is proportional to the number of times that content is viewed.
For hate speech, prevalence averaged 0.025% on Instagram in H1, and slightly less on Facebook at 0.02%.
Snapchat, which has a younger user base than the others, enforced against more than 5.7 million pieces of content in H2 2023. This accounted for 0.01% of content views on the platform.
When asked about brand safety for advertisers, a spokesperson for X told Campaign: "X is proud to be a platform advertisers can trust to safely market to consumers. X is 99% brand safe, backed by both third-party verification partners, DoubleVerify and Integral Ad Science."