In early 2018, YouTube “star” Logan
Paul earned notoriety for airing a video that included the body of a dead man, apparently a suicide, in a forest at the foot of Japan’s Mount Fuji. Paul went there intentionally — the
forest is known for the frequent suicides that are committed there — looking for gruesome footage for his YouTube channel.
Paul became a poster child for brand safety — or rather,
the lack of it — and his actions put a spotlight on YouTube and other social networks for what was seen as a lack of stewardship when it came to protecting brands and their ads from being
exposed to harmful and offensive content. YouTube in particular was raked over the coals as many advertisers paused their spending on the channel.
But now, according to a new audit and report
from Mediabrands, YouTube has the best overall brand-safety measures in place among all the major social media networks.
The IPG media management unit announced the report — The Media
Responsibility Audit — today. The full report is not being made public, but it will be updated quarterly, presumably for the benefit of clients.
The report is an offshoot of the media
responsibility principles the company unveiled in June. It is spearheaded by
performance marketing unit Reprise and assesses the major social media platforms: Facebook, LinkedIn, Pinterest, Reddit, Snapchat, TikTok, Twitch, Twitter, and YouTube.
“What this audit
shows is that there is work to be done across all platforms from a media responsibility perspective, and that the different platforms each need to earn their place on a brand’s marketing
plan,” stated Elijah Harris, global head of social at Mediabrands’ agency Reprise.
The audit, he added, “is a tool to hold platforms accountable for improving their media
responsibility policies and enforcement and to ensure we can track progress over time.”
The company issued these “key findings” from the report:
- Enforcement Matters: Platforms fall short by not backing up their policies with consistent enforcement of those policies. Most platforms have some level of enforcement reporting, but these reports are inconsistent and limited in scope. They rarely focus on the platforms holding themselves accountable for their own enforcement of policies. There is a need to better define expectations and metrics to be included within future policy enforcement reporting.
- Lack of Consistency Across Platforms: Given broad regulations that surround anti-discrimination and data privacy (e.g.,
GDPR/CCPA), there are opportunities to become even more consistent in how data collection policies are enacted across the various social platforms.
- Eradicating Hate Speech Is A Common Goal: There is a shared recognition across platforms that eliminating hate speech is important, but there are inconsistent definitions of what qualifies as hate speech, inconsistent identification of protected classes of people, and a lack of prevalence reporting and independent auditing of hate speech reports. GARM’s proposed work to resolve these issues will help.
- Misinformation Is A Challenge: Misinformation is a challenge across most platforms. While certain platforms work with many organizations to combat misinformation, others
work with none at all. Some platforms cited their unique engagement models as a reason to de-prioritize fact-checking, but our desktop research shows that even minor instances can lead to unsafe ad
placement for advertisers.
- Non-Registered User Experiences Vary: For platforms that allow access to their services without user registration, there is an opportunity to be more
consistent with that user experience. Some platforms still allow certain advertising placements to be viewed by a non-registered user, which may not result in responsible media delivery.
- Urgent Need For Third-party Verification: Only a few partners have specific controls for protecting advertisers from adjacency to content in objectionable or harmful categories (as in
GARM’s brand safety framework). The industry needs to promote and use third-party verification partners more widely, so we are not at the mercy of the platforms’ lack of controls.