GARM launches its first measurement report for digital brand safety
The Global Alliance for Responsible Media (GARM) has launched its first report tracking performance on brand safety across seven platforms, including Facebook, Instagram, Twitter and YouTube, as the next step in its mission to improve the safety, trustworthiness, and sustainability of media.
By aggregating existing platform transparency reports and adding in policy-level granularity, the new document creates a common framework that enables advertisers to assess progress against brand safety for each platform member of GARM.
The new framework also drives simplicity and focus, and highlights the use of best-practice methodology.
The GARM Aggregated Measurement Report is based around four key questions marketers can use to assess progress over time.
The report is consistent with the common framework used to define harmful content not suitable for advertising, and it introduces aggregated reporting. Together, these frameworks of common definitions and aggregated reporting deliver consistency in well-established practices while advancing best practices into industry standards.
Ultimately, the report provides a common and focused framework for advertising industry stakeholders to make more informed decisions about their advertising investment.
Highlights from the latest data show that more than eight in ten of the 3.3 billion pieces of content removed across the platforms participating in the report come from three leading categories – Spam, Adult & Explicit Content, and Hate Speech & Acts of Aggression.
The data also highlights growth in action taken on Hate Speech & Acts of Aggression across platforms. GARM platforms have reported increased activity and impact, with significant progress by YouTube in the number of account removals, Facebook in the reduction of prevalence, and Twitter in the number of pieces of content removed.
These initial improvements have occurred amid an increased reliance on automated content moderation to help manage blocking and reinstatements due to COVID-19 disruptions that resulted in moderation teams working with limited capacity.
GARM is a cross-industry initiative founded and led by the World Federation of Advertisers (WFA) and supported by other trade bodies, including the Association of National Advertisers (ANA), Incorporated Society of British Advertisers (ISBA) and the American Association of Advertising Agencies (4A’s).
Stephan Loerke, CEO of the WFA: “We have built on our agreed definitions to produce a detailed database of the progress that’s being made on reducing harmful content and the potential for monetization across the digital platforms. The collaboration between advertisers, agencies and platforms has been very constructive and we now have common ground to drive even greater progress for the benefit of society, marketers and the long-term health of the digital ecosystem.”
The report follows nine months of collaborative workshops between major advertisers, agencies and key global platforms working together as one of GARM’s Working Groups, bringing together for the first time, in a single agreed location, data around four core questions and eight authorised metrics that have been agreed as critical to tracking progress on brand safety.
The Aggregated Measurement Report provides a simple and transparent framework based around four core questions that advertisers can use to understand how well the platforms are enforcing their policies in the context of the brand safety floor:
How safe is the platform for consumers? The prevalence of harmful content will be reported as the number of views of harmful content as a percentage of all views of content.
How safe is the platform for advertisers? The incidence of advertising appearing in the context of harmful content will be reported as the number of ad impressions on harmful content as a percentage of all ad impressions. For newsfeed environments, the overall consumer prevalence measure above will be reported.
How effective is the platform at enforcing its safety policy? This will be reported as the total number of pieces of harmful content removed and the number of times that content has been viewed.
How responsive is the platform at correcting mistakes? This will be reported as the total number of appeals made by users and the number of reinstatements made by platforms.
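To make the arithmetic behind these four metrics concrete, the sketch below computes each one from hypothetical platform figures. All variable names and numbers here are invented for illustration; they are not GARM data, and this is not the official reporting methodology.

```python
# Hypothetical, illustrative figures only -- not real GARM or platform data.
views_harmful = 1_200_000       # views of content later judged harmful
views_total = 2_000_000_000     # all content views in the period
ad_impr_harmful = 300           # ad impressions served against harmful content
ad_impr_total = 50_000_000      # all ad impressions in the period
removals = 45_000               # pieces of harmful content removed (Q3)
appeals = 1_800                 # user appeals against enforcement (Q4)
reinstatements = 240            # removals reversed after appeal (Q4)

# Q1 (consumer safety): prevalence = views of harmful content
# as a percentage of all content views
prevalence_pct = 100 * views_harmful / views_total

# Q2 (advertiser safety): incidence = ad impressions on harmful content
# as a percentage of all ad impressions
ad_incidence_pct = 100 * ad_impr_harmful / ad_impr_total

print(f"Prevalence: {prevalence_pct:.4f}% of views")
print(f"Ad incidence: {ad_incidence_pct:.4f}% of impressions")
print(f"Removals: {removals:,}; appeals: {appeals:,}; reinstated: {reinstatements:,}")
```

Questions 3 and 4 are reported as raw counts rather than rates, so they appear here as plain totals alongside the two percentage measures.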
Independent oversight and measurement are critical to the GARM initiative, helping create accountability on the challenge of harmful content. They enable each member to ask: how are we progressing collectively? How are we progressing individually? How are we tackling each of these topic areas?
The report includes self-reported data from Facebook, Instagram, Pinterest, Snap, TikTok, Twitter and YouTube. The full data set can be downloaded here. Twitch, which only joined GARM in March, will join the reporting process for the next report, due later this year.
GARM Working Groups continue to work on other areas of focus, including better adjacency controls for brands, and hope to announce further initiatives later in the year.
Raja Rajamannar, Chief Marketing and Communications Officer, Mastercard and WFA President: “This report is great progress for our joint efforts, bringing together consistent and reliable data that marketers can depend on. It establishes common and collective benchmarks that reinforce our goals and help brand leaders, organizations and agencies make sure we keep media environments safe and secure.”
Marc Pritchard, Chief Brand Officer, P&G: “There’s no place for harmful online content in media that’s monetized by advertising, and we need to understand the size of the problem and track progress over time. The GARM Aggregated Measurement Report is an important step forward in helping brands advertise in safe and suitable places—a critical element for consumer trust.”
Conny Braams, Chief Digital and Marketing Officer, Unilever: “When we launched Unilever’s Media Responsibility Framework, we called for collective action to rebuild trust within the digital ecosystem, including the need for consistent metrics and measurement to evaluate and eliminate harmful content across platforms. The launch of GARM's Aggregated Measurement Report signals the accelerated progress and collective commitment across the industry to address the growing challenges. While we don’t yet have a perfect solution, it's the first time we can quantify and compare the harmful content that exists across platforms, to learn collectively and act. This is a welcome and important step forward in our mission to protect people and our brands online.”