Facebook, Twitter and YouTube are expected to adopt a uniform set of standards and definitions for hate speech and harmful content by the end of the year, according to the Global Alliance for Responsible Media (GARM).
The initiative, led by the World Federation of Advertisers, could offer marketers consistency when buying on these platforms, as many seek common definitions and standards for brand safety—especially in recent months, when social justice movements and politics have created a challenging environment for social media platforms and publishers.
GARM—an organization that Facebook has committed to in the face of brand safety criticism—said the three major social sites, along with TikTok, Pinterest and Snap, have given “firm commitments” to develop plans for “similar controls.”
The announcement, made last week, did not reveal exactly what the new policy will be, or how precisely it differs from what the platforms already have in place. Those organizations didn’t immediately offer clarification on the changes. But media buyers say it promises to be an improvement.
Brand safety—and more specifically, hate speech—has been “one of the challenges of our generation,” said Stephan Loerke, CEO of the WFA, in a statement. “A safer social media environment will provide huge benefits not just for advertisers and society, but also to the platforms themselves,” he continued.
The agreement comes after a summer of social unrest that pushed more people online and, in an election year, put an additional lens on those platforms’ policies toward hate speech and misinformation.
At Facebook and sister company Instagram, the criticism came to a head less than two months ago when more than 1,000 advertisers boycotted those sites for a month. The Stop Hate for Profit movement, as it was called, was led by civil rights groups like the NAACP and the Anti-Defamation League. As a result, Facebook began labeling false posts from world leaders, released a long-delayed civil rights audit and agreed to a brand safety audit by the Media Rating Council.
At the time, media buyers said those actions were a step in the right direction. This time, other buyers told Adweek that this, too, could improve their relationships with the social media giants, especially as brands are taking extra precautions to prove to consumers (and their own employees) their values with their media spends, according to Scott Madden, senior partner and director of strategy at independent agency Connelly Partners.
“Having common definitions and reporting will allow media buyers a sense of relief when buying for integrated media plans because everyone is speaking the same language when it comes to ensuring brand safety,” said Bridget Jewell, creative director at the Minneapolis-based agency Periscope.
“It’s about time the platforms rise with solutions. We know this is a stop on a long journey and can’t wait to see these changes actually in action,” Jewell continued.
Still, the announcement drew criticism of the platforms’ overall approach to moderating content (which relies on a combination of AI, third parties and human moderation).
While the changes are “certainly welcomed,” Michelle Capasso, director of media services at Connelly Partners, said the platforms have missed opportunities to take “preemptive brand safety measures” to “protect a client from harmful adjacencies.”
Criticism has also come recently from Procter & Gamble, one of the largest international advertisers. The CPG giant didn’t join the formal Stop Hate for Profit boycott this summer, but has been critical of social media platforms like Facebook.