Their plans included psychological operations, debanking, and changing social media platforms’ Terms of Service
Dec 4, 2023
John Kelly, CEO of Graphika (left); John Brennan, former CIA Director (center); Rand Waltzman, RAND Corporation (right) (Getty Images)
During last Thursday’s Congressional hearing on the Weaponization of the Federal Government, Democratic members of Congress insisted that censorship efforts of groups like the Cyber Threat Intelligence League (CTIL), the Election Integrity Partnership (EIP), and the Virality Project (VP) were benign and not a violation of the First Amendment.
“It’s not the First Amendment!” said Rep. Dan Goldman, “It’s the [social media platforms’] Terms of Service…. And they are flagging it for the social media companies to make their own decisions. That is not the First Amendment. That is the Terms of Service.”
But the CTIL Files, a trove of documents that a whistleblower provided to Public and Racket, reveal that US and UK military contractors developed and used advanced tactics — including demanding that social media platforms change their Terms of Service — to shape public opinion about Covid-19, and that getting content removed was just one strategy used by the Censorship Industrial Complex.
The CTI League, which partnered with the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA), aimed to implement something called “AMITT,” which stood for “Adversarial Misinformation and Influence Tactics and Techniques.”
AMITT was a disinformation framework that included many offensive actions, including working to influence government policy, discrediting alternative media, using bots and sock puppets, pre-bunking, and pushing counter-messaging.
The specific “counters” to “disinformation” in AMITT and its successor framework, DISARM, include many we have observed in our study of the Censorship Industrial Complex:
- “Create policy that makes social media police disinformation”
- “Strong dialogue between the federal government and private sector to encourage better reporting”
- “Marginalize and discredit extremists”
- “Name and Shame influencers”
- “Simulate misinformation and disinformation campaigns, and responses to them, before campaigns happen”
- “Use banking to cut off access”
- “Inoculate populations through media literacy training”
On issues ranging from the Russiagate hoax to the Hunter Biden laptop to Covid-19, organizations within the Censorship Industrial Complex have used many of DISARM’s offensive methods, including tabletop exercises, psychological inoculation, propaganda messaging, and punishment of dissent. Even its most extreme proposal, debanking, was used against Canada’s Freedom Convoy.
Far from simply protecting the public from falsehoods, both government and non-profit actors within the Censorship Industrial Complex have followed CTIL’s exact playbook and waged a full-fledged influence operation against Americans.
This influence operation has deep ties to security and intelligence agencies, as many examples of collaboration show. In one instance, supposedly independent “disinformation researchers” like Renée DiResta coordinated a 2020 election tabletop exercise with military officials.
Defense and intelligence funding supports much of the Censorship Industrial Complex. For instance, Graphika, which was involved in both EIP and VP, receives grants from the Department of Defense, DARPA, and the Navy.
Pentagon-affiliated entities are heavily involved in “anti-disinformation” work. Mitre, a major defense contractor, received funding to tackle “disinformation” about elections and Covid. The US government paid Mitre, an organization staffed by former intelligence and military personnel, to monitor and report what Americans said about the virus online, and to develop vaccine confidence messaging. This government-backed military research group, Public discovered, was present in the EIP and VP misinformation reporting system, and in election disinformation report emails to CISA.
The AMITT framework also includes many counters we have yet to find concrete evidence for, but which we suspect may have been attempted:
- “Infiltrate the in-group to discredit leaders”
- “Honeypot with coordinated inauthentics”
- “Co-opt a hashtag and drown it out (hijack it back)”
- “Dilute the core narrative – create multiple permutations, target/amplify”
- “Newsroom/Journalist training to counter influence moves”
- “Educate high profile influencers on best practices”
- “Create fake website to issue counter narrative”