The wildfire spread of trends across social media is nothing new. All it can take is one influencer or celebrity to push an item, and its popularity snowballs. Take the Stanley Cup: Despite being on sale for over a decade, this insulated flask gained TikTok notoriety overnight, netting Stanley $750 million in global revenue—and leaving me with an $85 deficit.
However, not all trends are this harmless. A recent spike in children using skincare products has worried both parents and dermatologists worldwide, with beauty retailer Mecca forced to address the rise in young customers buying its products.
I should know; after finding myself on the receiving end of demands from my 10-year-old for anti-ageing products, I discovered that some pharmacies in Sweden have banned the sale of these items to under-15s altogether. Experts and parents like me worry the trend could lead to long-term skin damage, including an increased risk of sunburn, and could have a psychological impact if children become obsessed with ageing.
As younger and younger children, dubbed ‘Sephora Kids’, are exposed to online environments (despite parental controls) and products not intended for them, more unintended consequences are likely to emerge. If Gen Z were the first generation to grow up as true digital natives, then Gen Alpha are even more plugged in and susceptible to trends like these. The risk is only growing.
It's easy for brands to point the finger at social media when harmful online trends appear. While platforms do bear some responsibility for the spread of these trends, brands must also acknowledge their role. On TikTok, where this dangerous skincare obsession has gone viral, the hashtag #teenageskincare has garnered over 26.4 million views. Much of this content is user-generated, and while moderation remains a thorny issue, platforms must make a concerted effort to curb such trends and take responsibility beyond simple age restrictions for account holders.
Similarly, governments worldwide could be doing more to address the situation. Legislation like the Online Safety Act protects internet users from some damaging content but struggles to keep pace with rapidly evolving trends. In Australia, the Therapeutic Goods Administration (TGA) aims to prevent influencers from making false claims about skincare products, yet this regulation does not apply internationally—a challenge when dealing with global social media platforms. The TGA has acknowledged the scale of the problem, issuing a warning to beauty manufacturers earlier this year about the dangers of overselling products to younger consumers.
Despite these efforts, brands also need to reflect on their own role in the rise of damaging trends. Too often, on platforms like TikTok, Snapchat, or Instagram, brands dive in with little regard for the audience they are actually speaking to. Imprecise targeting puts ads in the feeds of audiences they were never meant for, effectively legitimising the surrounding influencers or users through mere proximity. Ultimately, this is a brand safety issue: brands finding their ads placed in inappropriate, and sometimes harmful, environments.
Social media trends come and go, but if brands don’t take direct action to mitigate their impact, it won’t be long before the next one breaks, raising questions about where this will end. It is time for stronger measures to protect users and to reduce advertising’s role in perpetuating harmful trends. Not every platform is right for every brand. The lure of chasing scale over quality puts reputations at risk and endangers poorly targeted audiences. Brands must slow down and ensure continuous monitoring and content scanning, not just of their own content but also of the organic and user-generated material surrounding it. Premium publishers with known and carefully targeted audiences should be a strong consideration for brands seeking safer campaign environments.
Creating a safer online environment, particularly for children, requires collaboration. Brands must engage openly with platforms and legislative bodies and ensure their partners are aligned with their brand safety values. Greater advocacy is needed for clearer advertising guidelines tailored to social media, and cross-industry cooperation is crucial to making those guidelines impactful. Industry-wide frameworks, like those laid out by the now-defunct Global Alliance for Responsible Media (GARM), highlight the power of collective action.
Brands can leverage their influence to educate audiences, moving beyond simple public service announcements. Influencers are incredibly powerful in swaying opinions, but without awareness of sensitive issues, they can cause more harm than good. Brands must carefully select influencers and ensure proper training and guidelines for responsible content creation. Where influencers are not suitable, premium publishers can provide authoritative voices for awareness campaigns on social platforms.
With issues as complex as underage skincare trends and the rapid pace of platforms like TikTok, brands cannot rely on outdated tools like keyword blocklists to avoid placing ads near contentious topics. AI-powered brand safety tools that can read content nuance and sentiment quickly and at scale offer a more sophisticated approach. These technologies guide marketers to safer environments, helping them reach the right audience while avoiding vulnerable ones.
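To make that difference concrete, here is a rough, purely illustrative sketch in Python. The cue lists, thresholds, and function names are invented for this example and do not represent any vendor’s actual model; the point is simply that a blocklist reacts to single words, while a contextual score, however crude, weighs topic, audience, and framing together.

```python
# Purely illustrative sketch: contrasts a keyword blocklist with a toy
# context-aware score. The cue lists, thresholds, and function names are
# made up for this example and do not reflect any vendor's actual model.

BLOCKLIST = {"retinol", "anti-ageing", "anti-aging", "exfoliating acid"}

def blocklist_flags(text: str) -> bool:
    """Crude keyword match: flag the post if any blocked term appears."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

def toy_context_score(text: str) -> float:
    """Toy stand-in for a contextual model: higher means riskier adjacency.

    A real AI brand safety tool would use trained classifiers for nuance and
    sentiment; this hard-coded version only shows the idea of weighing topic,
    audience, and framing together instead of matching single words.
    """
    lowered = text.lower()
    skincare_topic = any(w in lowered for w in ("serum", "retinol", "skincare", "glow"))
    young_audience = any(w in lowered for w in ("tween", "10-year-old", "kids", "children"))
    expert_framing = any(w in lowered for w in ("dermatologist", "sunscreen", "spf"))
    if skincare_topic and expert_framing:
        return 0.1  # authoritative sun-safety advice: safe to appear beside
    if skincare_topic and young_audience:
        return 0.9  # skincare pitched at children: avoid adjacency
    return 0.5      # ambiguous: send for further review

posts = [
    # Safe expert advice that a blocklist wrongly flags (it contains "retinol").
    "Dermatologist here: children only need sunscreen, not retinol or exfoliating acids.",
    # Risky tween-targeted pitch that a blocklist misses (no blocked term appears).
    "The grown-up glow serum every 10-year-old needs in her morning routine!",
]

for post in posts:
    print(f"blocklist={blocklist_flags(post)}  context_risk={toy_context_score(post):.1f}  {post[:48]}...")
```

Real brand safety tools replace these hard-coded cues with trained models, but the principle is the same: judge the whole post, not the presence of a single word.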
If the serious impacts of social media and advertising on children are to be addressed, finger-pointing alone will not suffice. True change requires all players to commit to proactive protection, open communication, and innovative solutions.
Fiona Salmon is the managing director for Mantis.