Audit assesses social-media platforms’ progress toward responsibility

August 14, 2020

From policy enforcement to misinformation, a study conducted by IPG Mediabrands agency Reprise highlights the progress (or lack thereof) social-media platforms are making on brand safety.

In an era of rapid social-media growth and a similarly rapid uptick in misinformation, an IPG Mediabrands study examines the progress social platforms are making, or failing to make, in providing a brand-safe environment.

Led by Mediabrands’ performance-marketing agency, Reprise, the study reveals that although social platforms have made some progress on improving their performance and perception, they have a long way to go.

“What this audit shows is that there is work to be done across all platforms from a media responsibility perspective, and that the different platforms each need to earn their place on a brand’s marketing plan,” says Elijah Harris, global head of social at Reprise. “The audit is a tool to hold platforms accountable for improving their media responsibility policies and enforcement and to ensure we can track progress over time.”

The Media Responsibility Audit, based on the Media Responsibility Principles Mediabrands recently released, assesses social-media platforms (Facebook, LinkedIn, Pinterest, Reddit, Snapchat, TikTok, Twitch, Twitter, and YouTube) against the 10 principles, checking each platform’s current status and accountability on each one. The audit comprised 250 questions and aimed to establish a benchmark for what a responsible platform looks like.

Here’s what we learned from this audit:

  • YouTube tops the overall rankings and performs best against several principles. This is a testament to the changes YouTube made in response to advertiser brand-safety concerns three years ago. In contrast, the embattled TikTok has the most work to do.
  • Platforms fall short by not backing up their policies with consistent enforcement of those policies. Most platforms have some level of enforcement reporting, but these are inconsistent and limited in scope.
  • Misinformation is a challenge across most platforms. While certain platforms work with many organisations to combat misinformation, others work with none at all. The audit showed that even minor instances of misinformation can lead to unsafe ad placements for advertisers.
  • Marketers are wrestling with the challenge of ad placements served to non-registered users, an experience that varies widely across platforms.
  • There is an urgent need for third-party verification, given the massive amount of disinformation on these platforms. Worryingly, few platforms have controls for protecting advertisers from adjacency to content in objectionable or harmful categories (as in GARM’s brand-safety framework). The audit shows that the industry needs to promote and use third-party verification partners more widely.
