Investigation Uncovers Extensive Bot Network Behind Target DEI Backlash
A recent investigation has brought to light a significant manipulation campaign that fueled much of the online backlash against retailers, most prominently Target, after the company scaled back its Diversity, Equity, and Inclusion (DEI) initiatives. Our analysis, based on a groundbreaking report, reveals that a substantial portion of the outrage that appeared to be authentic public sentiment was, in fact, orchestrated by a sophisticated network of bogus social media accounts.
The controversy emerged when Target announced a rollback of its DEI initiatives, programs that had previously drawn criticism alongside merchandise such as “tuck-friendly bathing suits” and certain children’s products. While there was genuine public reaction against those earlier moves, our findings indicate that the backlash against Target for cancelling its DEI policies was heavily inflated by fabricated accounts.
Key Findings from Cyabra’s Report:
An analysis by the Israeli tech firm Cyabra, reviewing thousands of posts on X (formerly Twitter) between January 1st and April 21st, uncovered shocking figures:
- Nearly a third (over 30%) of the accounts driving the outrage over Target’s policy changes were fake.
- Across all reviewed accounts, 27% were classified as completely fake, many playing a major role in pushing boycott narratives against the retailer.
- Data showed that inauthentic posts surged by an astounding 764% following Target’s announcement, saturating the platform with calls for consumer boycotts and viral hashtags like “economic blackout” (a brief sketch of this percent-change arithmetic follows the list).
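To make the 764% figure concrete, here is a minimal sketch of how such a surge is typically computed: the percentage change in inauthentic post volume between two comparison windows. The counts below are hypothetical placeholders for illustration, not figures from Cyabra’s report.

```python
def percent_change(before: int, after: int) -> float:
    """Percentage change in post volume between two comparison windows."""
    if before == 0:
        raise ValueError("baseline window is empty; percent change is undefined")
    return (after - before) / before * 100

# Hypothetical counts for illustration only (not figures from the report):
# a 764% surge means volume grew to roughly 8.6x its baseline.
baseline_posts = 500             # inauthentic posts before the announcement
post_announcement_posts = 4_320  # inauthentic posts after the announcement

print(f"{percent_change(baseline_posts, post_announcement_posts):.0f}% surge")
# -> 764% surge
```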
Cyabra’s CEO characterized this backlash as a “calculated effort to manufacture outrage” and a “masterclass in manufactured outrage”. These fake accounts effectively “hijacked the narrative,” becoming the primary voice of a boycott that appeared grassroots but was anything but. This tactic blurs the lines between genuine and fabricated online discourse, making it difficult for the public to discern authenticity.
Tactics and Impersonation:
Many of the fraudulent accounts were meticulously designed to mimic real users, impersonating Black consumers or conservative commentators. Examples include accounts pushing slogans like “target fast” and “40-day boycott,” or profiles accusing Target of “bending the knee to Trump”. One purported user, “nilsback,” wrote that Target’s decision left them feeling “betrayed by a family member”. Other profanity-laced posts fiercely condemned Target for caving to “Trump demands about DEI programs”. Importantly, our investigation confirmed that several of these highly visible and impactful posts, including those claiming to be from Black individuals or “American Patriot Minnesota,” originated from entirely fake accounts.
Cyabra employs advanced AI to detect coordinated manipulation campaigns, flagging accounts that post repetitively and recycle the same hashtags and slogans, or that interact only within closed loops of other suspicious accounts. Even with these detection methods, however, the firm did not find clear evidence linking the campaign to a specific foreign or domestic actor.
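Cyabra has not published its detection pipeline, but the behaviors described above, recycled slogans and closed interaction loops, map onto simple frequency and graph heuristics. The Python sketch below is a minimal illustration under those assumptions; the schema, account names, and thresholds are invented for the example and do not represent Cyabra’s actual system.

```python
from collections import Counter

# Hypothetical post records; the schema, accounts, and thresholds are
# invented for illustration and are not Cyabra's actual data or method.
posts = [
    {"account": "acct_1", "hashtags": ["economicblackout"], "replied_to": ["acct_2"]},
    {"account": "acct_1", "hashtags": ["economicblackout"], "replied_to": ["acct_2"]},
    {"account": "acct_1", "hashtags": ["economicblackout"], "replied_to": ["acct_3"]},
    {"account": "acct_2", "hashtags": ["targetfast"], "replied_to": ["acct_1"]},
    {"account": "acct_2", "hashtags": ["targetfast"], "replied_to": ["acct_1"]},
    {"account": "acct_2", "hashtags": ["targetfast"], "replied_to": ["acct_1"]},
    {"account": "acct_3", "hashtags": [], "replied_to": ["acct_1", "acct_2"]},
    {"account": "acct_4", "hashtags": ["sale", "deals"], "replied_to": ["acct_9"]},
]

def recycling_score(account: str) -> float:
    """Share of an account's hashtag uses taken by its single most-used tag.
    Values near 1.0 mean the account recycles one slogan over and over."""
    tags = [t for p in posts if p["account"] == account for t in p["hashtags"]]
    if len(tags) < 3:  # too little activity to judge
        return 0.0
    return Counter(tags).most_common(1)[0][1] / len(tags)

def closed_loop(account: str, flagged: set) -> bool:
    """True if the account replies only to already-flagged accounts,
    i.e. it interacts within a closed loop of suspicious profiles."""
    targets = {t for p in posts if p["account"] == account for t in p["replied_to"]}
    return bool(targets) and targets <= flagged

accounts = {p["account"] for p in posts}
flagged = {a for a in accounts if recycling_score(a) >= 0.8}  # pass 1: slogan recycling
flagged |= {a for a in accounts if closed_loop(a, flagged)}   # pass 2: closed loops

print(sorted(flagged))  # ['acct_1', 'acct_2', 'acct_3']
```

In this toy run, acct_1 and acct_2 are flagged for recycling a single slogan, and acct_3 is then caught because it replies only to those two flagged profiles; the organic acct_4 passes both checks. A production system would of course weigh many more signals, such as account age, posting cadence, and content similarity.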
Impact and Broader Implications:
The orchestrated campaign had tangible consequences. Target’s market value dropped by roughly $12 billion, and real consumers, believing there was a genuine “massive groundswell” of opposition, joined the boycott. This “playbook” of hijacking a polarizing moment, flooding the online space with fake voices, and letting real users inadvertently amplify the manufactured outrage proved highly effective.
Our channel has consistently observed that influencing culture-war debates through synthetic engagement is becoming increasingly common, particularly in polarized consumer environments. A follow-up analysis by Cyabra (May 27th to June 3rd) revealed that the campaign not only persisted but intensified, with fake accounts at times making up nearly 40% of the online conversation. Similar deceptive tactics have been witnessed against other major brands, from fast food chains to tech companies.
This phenomenon lends significant credence to the “Dead Internet Theory,” which posits that the internet is increasingly composed of bot activity and automatically generated content, algorithmically curated to control populations and minimize organic human interaction. We believe that much of the ongoing “culture war” content online is, in fact, “bot farmed,” impacting both “pro” and “anti” narratives. This raises critical questions about the authenticity of online interactions and the true nature of public discourse in the digital age.