Media buyers on alert as misinformation about the Israel-Hamas war worsens on X

By Kendra Barnett, Associate Editor

November 14, 2023 | 11 min read

New data indicates that misinformation about the war is spreading like wildfire on the Elon Musk-owned platform. The company denies the claims, but media buyers remain wary of the brand safety risks of advertising on X.

X, like other social platforms, is suffering an influx of hateful content and misinformation amid the Israel-Hamas war / Adobe Stock

Misinformation about the Israel-Hamas war is proliferating on social media.

Research published today by the nonprofit Center for Countering Digital Hate (CCDH) indicates that X, the platform formerly known as Twitter, is failing to crack down on misinformation or to adhere to its own content moderation policies and rules concerning hate speech.

The organization found that, of a sample of 200 rule-violating posts on X about the conflict, 98% remained live a week after being reported. The posts that remain live have garnered over 24m views, and out of the 101 accounts in the study, just one was suspended and two others ‘locked.’ The CCDH also found that 43 of the 101 sampled accounts are verified, which guarantees that their posts will be algorithmically boosted on the platform.

In response to the report, X today published a lengthy blog post detailing the actions it’s taken to combat the spread of misinformation amid the worsening conflict in Gaza. The company claims it has “actioned” more than 325,000 pieces of content that violate the platform’s terms of service. “Actioning” may include account suspension or removing or restricting the reach of a post. X claims it’s removed some 3,000 accounts, including some associated with Hamas. It’s suspended an additional 375,000 accounts as part of its efforts to crack down on synthetic, manipulated and misleading content. The company went on to explain that it is also working to automate moderation of antisemitic content and to provide content moderation staffers with “a refresher course on antisemitism.” The post shared a handful of other updates as well.


“Today we shared an update on our comprehensive efforts to safeguard X for all users and partners in response to the Israel-Hamas conflict,” an executive at X tells The Drum in a statement. “We’ll continue to engage with communities, governments, nonprofits, customers and others who have constructive feedback and ideas to strengthen our approach.”

The X executive who spoke with The Drum acknowledges that the blog post was published in response to the CCDH’s research, saying, “Yesterday we were made aware that the CCDH planned to issue a report evaluating a sample of 200 posts. As you can read … X has taken action on hundreds of thousands of posts in the first month following the terrorist attack on Israel.”

The executive also suggests that the CCDH’s definition of “actioning” a post is much narrower than X’s and may not include actions like restricting the reach of a post. “By choosing to only measure account suspensions, the CCDH will not represent our work accurately.”

The details shared in X’s update today have not been verified by independent research.

Concerns about content moderation on X have grown in the year since billionaire Tesla executive Elon Musk acquired the platform and promptly slashed about half of the company’s workforce, including most of the content moderation team.

And the impact has been widely reported: in December of last year, the New York Times detailed the rise of hate speech on the platform, citing research by the CCDH and other organizations, including the Anti-Defamation League. That research found that slurs against Black Americans more than doubled and that antisemitic posts referring to Jews or Judaism spiked by 61% following Musk’s takeover.

Beyond the problem of hateful content, the issue of misleading information has been similarly detailed. The research published today by the CCDH has been underscored by similar findings from other organizations. Last month, NewsGuard – which tracks misinformation and the reliability of various media outlets – released a report analyzing social media content during the week following Hamas’ October 7 attack on Israel. It found that verified accounts were responsible for 74% of all unverified claims related to the conflict on X during that week – and that those posts were viewed 100m times globally.

In short, research from various organizations indicates that the platform’s pay-to-play verification model is exacerbating the spread of misinformation.

X, however, has largely pushed back on the notion that the dissemination of hateful content has accelerated; it has said that impressions of hate speech content are, on average, 30% lower than they were pre-acquisition.

It’s also worth noting that NewsGuard found that unverified claims about the Israel-Hamas conflict have spiked across other social platforms, including Facebook and TikTok.

“While it is challenging to quantify the full scope of misinformation specifically pertaining to the Israel-Hamas conflict across all social media platforms, we know that it is a significant concern,” says Andrew Serby, chief commercial officer at brand safety and suitability platform Zefr. Zefr, like NewsGuard, has also “seen an increase in the volume of war-related misinformation after the conflict broke out,” according to Serby.

In any case, X’s relatively lax content moderation policies (the X executive who spoke with The Drum says “[We] only suspend accounts for serious violations of our rules”) have put users and advertisers on edge.

Advertisers – who, despite new paid subscription plans on the platform, still generate most of X’s revenue – are especially wary. Brands like Coca-Cola, Ford, General Motors and Unilever – once among the platform’s top spenders – pulled spend in the months following Musk’s acquisition. And although some brands have since returned (in September, the company’s CEO, Linda Yaccarino, claimed that 90% of X’s top advertisers had returned in the previous 12 weeks), spend rates remain critically low. The company’s US ad revenue has dropped at least 55% year-over-year every month since Musk’s acquisition, according to an October report from Reuters.

And with growing concerns about the proliferation of misinformation and hate speech on social media during the Israel-Hamas war, advertisers’ wariness of X is only growing.

“I’m getting the same message from my clients – [many of] which are major advertisers – which is that they are watchfully waiting, being cautious,” says Matt Navarra, a leading social media consultant and industry analyst. “They have strategic plans in place for them to exit the platform should things deteriorate further. Many of them are telling me that they’re not placing any significant budget – some of them not at all – towards X for its advertising plans for next year. So, that says a lot about the lack of confidence they have in the platform’s ability to tackle the misinformation problem and brand safety as a whole.”

The notion is echoed by other leaders in the ad space. “Anecdotally, we have heard of advertisers pulling back or pausing ads immediately following the attacks,” says Erik Hamilton, vice-president of search and social at Good Apple, a media buying firm. Of course, neither the scope of this trend nor the material impact on X or other social platforms is yet clear.

In general, Hamilton advises a “safety-first approach” for brands at all times – not for “sensitive or traumatic current events” alone. “Advertisers should take precautions to avoid ads being served adjacent to sensitive content. Recommendations include utilizing blocklists, inventory and category filtering and negative keywords or exclusions,” he says.
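To make those recommendations concrete, here is a minimal, hypothetical sketch (in Python) of the kind of pre-bid screening Hamilton describes: checking a candidate placement against a publisher blocklist and a set of negative keywords before any spend is committed. The publisher names, keywords and Placement structure are illustrative assumptions for this sketch, not any real buying platform’s API.

from dataclasses import dataclass

# Hypothetical illustration of pre-bid brand safety screening.
# The blocklist, negative keywords and sample placements below are
# invented for this sketch, not drawn from any real vendor feed.

@dataclass
class Placement:
    publisher: str       # domain or account where the ad would appear
    adjacent_text: str   # content the ad would be served next to

BLOCKLIST = {"example-unsafe-site.com"}                      # publishers excluded outright
NEGATIVE_KEYWORDS = {"graphic footage", "unverified claim"}  # topic exclusions

def is_brand_safe(placement: Placement) -> bool:
    """Return True only if the placement clears both exclusion checks."""
    if placement.publisher in BLOCKLIST:
        return False
    text = placement.adjacent_text.lower()
    return not any(keyword in text for keyword in NEGATIVE_KEYWORDS)

candidates = [
    Placement("example-news-site.com", "Markets rally after strong earnings"),
    Placement("example-news-site.com", "Graphic footage spreads on social media"),
    Placement("example-unsafe-site.com", "Celebrity gossip roundup"),
]

approved = [p for p in candidates if is_brand_safe(p)]
print([p.publisher for p in approved])  # only the first placement survives

In practice, checks like these run inside buying platforms or the tools of verification partners such as DoubleVerify and Integral Ad Science; the point of the sketch is simply that exclusions are applied before an impression is bought, rather than after an unsafe adjacency has already occurred.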

Dominic Masi, a paid social media planner, tells The Drum that his company, media agency Exverus, “doesn’t do any advertising on X because the brand safety risk is too high.”

However, he clarifies that this is not a new decision: the agency hasn’t bought ads on the platform since 2021. Like Good Apple, Exverus employs standard brand safety measures for all media buys, auditing sentiment to ensure clients’ ads appear only in safe and appropriate environments, says Masi.

Despite widespread caution, X has worked in recent months to win back advertiser trust. Earlier this year, the company debuted a slate of new brand safety tools, offered to advertisers in partnership with the Global Alliance for Responsible Media, ad verification firm DoubleVerify and media measurement company Integral Ad Science.

What’s more, X’s Community Notes feature – which crowdsources context and helps to debunk misleading content – has helped to counteract some misinformation on the platform. Navarra calls the feature a “bright spot” for X.

Suggested newsletters for you

Daily Briefing

Daily

Catch up on the most important stories of the day, curated by our editorial team.

Ads of the Week

Wednesday

See the best ads of the last week - all in one place.

The Drum Insider

Once a month

Learn how to pitch to our editors and get published on The Drum.

Still, Community Notes are “no match for the volume that the platform faces in terms of dangerous content and misinformation,” Navarra says. “I hear people saying that X continues to be full of all sorts of undesirable content from different sources. I don’t seem to hear anything suggesting it’s gotten any better.”

Plus, he argues that reports on hate speech and misinformation on the platform – such as the one published this morning by X – are likely to be met with skepticism since so many users and advertisers feel they “can never be sure that the figures [reported offer a] true and accurate representation of what’s really going on on the platform.”

Navarra, like many other industry leaders, believes that X faces a long road ahead if it hopes to win back advertisers’ trust amid an influx of misinformation about the war.

“The narrative in the media, whether it’s right or wrong, has already been set – that X is a toxic fire pit of hell for people, that Elon Musk is destroying the platform and that advertisers don’t want to be on the platform,” he says. “Some of that may be true, some of that may not be fair at all, but it is the narrative that is being spun. It’s going to be very hard … for that to be countered and overcome.”
