Twitter’s Jack Dorsey-led management had taken recourse to unethical means to constantly harass and then remove former US President Donald Trump, journalist and author Matt Taibbi says in the latest episode of the scandal series published on the platform itself, which fellow journalist Bari Weiss had announced her colleague would release. Taibbi and Weiss are receiving documents produced under the previous Twitter management, on behalf of the Elon Musk-led new management of the social media company, which is studying how the firm was run earlier.
“The world knows much of the story of what happened between riots at the Capitol” on 6 January and the removal of President Donald Trump from Twitter on 8 January, Taibbi wrote today.
But there was more to it, says Taibbi, as he writes: “… the erosion of standards within the company in months before J6 (6 January), decisions by high-ranking executives to violate their own policies, and more, against the backdrop of ongoing, documented interaction with federal agencies.”
It is no longer a matter of speculation, Taibbi says, as “the internal communications at Twitter between January 6th-January 8th have clear historical import. Even Twitter’s employees understood” right then that “it was a landmark moment in the annals of speech”.

After banning Trump, Twitter executives “started processing new power. They prepared to ban future presidents and White Houses – perhaps even Joe Biden”, Taibbi says. The “new administration,” says one exec, “will not be suspended by Twitter unless absolutely necessary.” He shared a screenshot with this tweet in the thread too:

Twitter executives removed Trump partly because of what an executive called the “context surrounding”, Taibbi shared. This executive cited actions by Trump and supporters “over the course of the election and frankly last 4+ years.” “In the end, they looked at a broad picture. But that approach can cut both ways,” Taibbi wrote.

Much of the internal debate leading to Trump’s ban “took place in those three January days,” said the journalist to whom the old Twitter management’s documents were handed over. “However, the intellectual framework was laid in the months preceding the Capitol riots.”
Before 6 January, Taibbi says, Twitter was a unique mix of automated, rules-based enforcement, and more subjective moderation by senior executives. As Bari Weiss reported, “the firm had a vast array of tools for manipulating visibility, most all of which were thrown at Trump (and others) pre-J6.”
Twitter Files 2: Conservative voices were marked, muzzled
“As the election approached, senior executives — perhaps under pressure from federal agencies, with whom they met more as time progressed — increasingly struggled with rules, and began to speak of ‘vios’ as pretexts to do what they’d likely have done anyway,” Taibbi wrote.
After 6 January, “internal Slacks show Twitter executives getting a kick out of intensified relationships with federal agencies. Here’s Trust and Safety head Yoel Roth, lamenting a lack of ‘generic enough’ calendar descriptions to conceal his ‘very interesting’ meeting partners,” the journalist wrote, furnishing the following piece of evidence:

“These initial reports,” Taibbi wrote, “are based on searches for docs linked to prominent executives, whose names are already public. They include Roth, former trust and policy chief Vijaya Gadde, and recently plank-walked Deputy General Counsel (and former top FBI lawyer) Jim Baker.” [Jim Baker’s antecedents appear in a link above]
A certain Slack channel offers a unique window into the evolving thinking of top officials in late 2020 and early 2021, the scribe and author tweeted.
On 8 October 2020, executives of the older Twitter management opened a channel called “us2020_xfn_enforcement”, Taibbi informs. Up to 6 January, “this would be home for discussions about election-related removals, especially ones that involved ‘high-profile’ accounts.” The journalist said these were often called “VITs” or “Very Important Tweeters”.

“There was at least some tension between Safety Operations — a larger department whose staffers used a more rules-based process for addressing issues like porn, scams, and threats — and a smaller, more powerful cadre of senior policy execs like Roth and Gadde,” the journalist tweeted.
The second group comprised “a high-speed Supreme Court of moderation, issuing content rulings on the fly, often in minutes and based on guesses, gut calls, even Google searches, even in cases involving the President (Trump)”, it is now known.

During this time, Twitter executives were clearly liaising with federal enforcement and intelligence agencies about the moderation of election-related content, Taibbi says. “While we’re still at the start of reviewing the Twitter Files, we’re finding out more about these interactions every day,” he wrote.
Policy Director Nick Pickles is asked if they should say Twitter detects “misinfo” through “ML, human review, and partnerships with outside experts?” The employee asks, “I know that’s been a slippery process… not sure if you want our public explanation to hang on that.”


Pickles quickly asks if they could “just say ‘partnerships’.” After a pause, he says, “e.g. not sure we’d describe the FBI/DHS as experts”, Taibbi shared with the following screenshot:

“This post about the Hunter Biden laptop situation,” Taibbi wrote, “shows that Roth not only met weekly with the FBI and DHS, but with the Office of the Director of National Intelligence (DNI)”:

Repeating the screenshot above, Taibbi wrote:
Roth’s report to FBI/DHS/DNI is almost farcical in its self-flagellating tone: “We blocked the NYP story, then unblocked it (but said the opposite)… comms is angry, reporters think we’re idiots… in short, FML” (f*ck my life).
Some of Roth’s later Slacks indicate his weekly confabs with federal law enforcement involved separate meetings. Here, he ghosts the FBI and DHS, respectively, to go first to an “Aspen Institute thing,” then take a call with Apple, Taibbi shared.

Here, the FBI sends reports about a pair of tweets, the second of which involves a former Tippecanoe County, Indiana Councilor and Republican named John Basham claiming “Between 2% and 25% of Ballots by Mail are Being Rejected for Errors”, Taibbi wrote.

The FBI’s second report concerned this tweet by the Republican named above.

The FBI-flagged tweet then got circulated in the enforcement Slack. Twitter cited Politifact to say the first story was “proven to be false,” then noted the second was already deemed “no vio on numerous occasions”, the journalist shared.

The group then decided to apply a “Learn how voting is safe and secure” label because one commenter says, “it’s totally normal to have a 2% error rate.” Roth then gives the final go-ahead, Taibbi says, to the process initiated by the FBI:

This episode of Twitter Files is the first part of the third story [the first story was about the burying of the scandal surrounding Hunter Biden, Joe Biden’s son, with a supplement saying a former FBI attorney was the censor, and the second story was about restricting the reach of users whose ideologies did not suit the then-Twitter management]. This story covers the period from before the election through the run-up to 6 January.

“Examining the entire election enforcement Slack, we didn’t see one reference to moderation requests from the Trump campaign, the Trump White House, or Republicans generally. We looked. They may exist: we were told they do. However, they were absent here,” Taibbi said.
One exchange concerned a joke tweet by former Arkansas governor Mike Huckabee about filling out mail-in ballots for his dead parents. “I agree it’s a joke,” concedes a Twitter employee, “but he’s also literally admitting in a tweet a crime,” the journalist shared.
The group declares Huck’s an “edge case,” and though one notes, “we don’t make exceptions for jokes or satire,” they ultimately decide to leave him be, because “we’ve poked enough bears,” Taibbi wrote.
“Could still mislead people… could still mislead people,” the humour-averse group declares, before moving on from Huckabee, Taibbi quipped.
Roth suggests moderation even in this absurd case could depend on whether or not the joke results in “confusion.” This seemingly silly case actually foreshadows serious later issues, Taibbi says, showing another document where Roth admits what the group is doing is wrong.
In the documents, executives often expand criteria to subjective issues like intent (yes, a video is authentic, but why was it shown?), orientation (was a banned tweet shown to condemn, or support?), or reception (did a joke cause “confusion”?). This reflex will become key on 6 January, Taibbi wrote.
In another example, Twitter employees under Dorsey prepared to slap a “mail-in voting is safe” warning label on a Trump tweet about a postal screwup in Ohio, before realising “the events took place,” which meant the tweet was “factually accurate”, the journalist said, furnishing evidence to corroborate his claim.
Trump was being “visibility filtered” as late as a week before the election. Here, senior execs didn’t appear to have a particular violation, but still worked fast to make sure a fairly anodyne Trump tweet couldn’t be “replied to, shared, or liked”.
“VERY WELL DONE ON SPEED”: the group is pleased the Trump tweet is dealt with quickly.
A seemingly innocuous follow-up involved a tweet from actor James Woods, whose ubiquitous presence in argued-over Twitter data sets is already a Twitter Files in-joke. After Woods angrily quote-tweeted about Trump’s warning label, Twitter staff – in a preview of what ended up happening after J6 – despaired of a reason for action, but resolved to “hit him hard on future vio”, Taibbi wrote in the thread.


Here a label is applied to Georgia Republican congressman Jody Hice for saying, “Say NO to big tech censorship!” and, “Mailed ballots are more prone to fraud than in-person balloting… It’s just common sense,” the journalist shared.

Twitter teams went easy on Hice, only applying “soft intervention,” with Roth worrying about a “wah wah censorship” optics backlash:

Meanwhile, there are multiple instances of pro-Biden tweets warning that Trump “may try to steal the election” being surfaced, only to be approved by senior executives. This one, they decide, just “expresses concern that mailed ballots might not make it on time.”


“THAT’S UNDERSTANDABLE”: Even the hashtag #StealOurVotes – referencing a theory that a combo of Amy Coney Barrett and Trump will steal the election – is approved by Twitter brass, because it’s “understandable” and a “reference to… a US Supreme Court decision,” the journalist noted.


In this exchange, again unintentionally humorous, former Attorney General Eric Holder claimed the U.S. Postal Service was “deliberately crippled,” ostensibly by the Trump administration. He was initially hit with a generic warning label, but it was quickly taken off by Roth:


Later in November 2020, Roth asked if staff had a “debunk moment” on the “Scytl/Smartmatic” vote-counting stories, which his DHS contacts told him were a combination of “about 47” conspiracy theories:

On 10 December, as Trump was in the middle of firing off 25 tweets saying things like, “A coup is taking place in front of our eyes,” Twitter executives announced a new “L3 deamplification” tool. This step meant a warning label now could also come with deamplification:
Some executives wanted to use the new deamplification tool to silently limit Trump’s reach further, right away, beginning with the following tweet:

However, in the end, the team had to use older, less aggressive labeling tools at least for that day, until the “L3 entities” went live the following morning.
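The thread does not detail how these tiers were implemented; conceptually, though, the change can be pictured as a tiered enforcement model in which higher levels stack extra actions on top of a warning label. The Python sketch below is purely illustrative of that idea, with assumed level definitions and action names rather than Twitter’s actual tooling.

```python
# Purely illustrative sketch of a tiered enforcement model: higher levels stack
# additional actions on top of a warning label. Level definitions and action
# names are assumptions for illustration, not Twitter's real internal tooling.
ENFORCEMENT_LEVELS = {
    1: {"warning_label"},                                               # label only
    2: {"warning_label", "restrict_replies_shares_likes"},              # label + interaction limits
    3: {"warning_label", "restrict_replies_shares_likes", "deamplify"}, # "L3": label + deamplification
}

def actions_for(level: int) -> set[str]:
    """Return the set of actions applied at a given enforcement level."""
    return ENFORCEMENT_LEVELS.get(level, set())

print(sorted(actions_for(3)))
# ['deamplify', 'restrict_replies_shares_likes', 'warning_label']
```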


The significance is that it shows that Twitter, in 2020 at least, was deploying a vast range of visible and invisible tools to rein in Trump’s engagement, long before J6. The ban will come after other avenues are exhausted, Taibbi wrote.
In Twitter documents, executives frequently refer to “bots”, as in “let’s put a bot on that.” A bot here is just any automated, heuristic moderation rule. It can be anything: every time a person in Brazil uses “green” and “blob” in the same sentence, action might be taken, Taibbi said with this screenshot:

In this instance, it appears moderators added a bot for a Trump claim made on Breitbart. The bot ends up becoming an automated tool invisibly watching both Trump and, apparently, Breitbart (“will add media ID to bot”). Trump by J6 was quickly covered in bots, the journalist said.
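To make the “bot” idea concrete, here is a minimal Python sketch of what such a keyword co-occurrence rule might look like, assuming a simple all-terms-present check; the rule names, fields and actions are illustrative assumptions, not Twitter’s actual code.

```python
# Hypothetical sketch of a keyword-based moderation "bot" as described above:
# a rule that fires whenever certain terms co-occur in a tweet. Names, fields
# and actions are illustrative assumptions, not Twitter's actual code.
from dataclasses import dataclass

@dataclass
class HeuristicRule:
    name: str
    required_terms: set[str]  # all of these must appear for the rule to fire
    action: str               # e.g. "label", "deamplify", "bounce"

def evaluate(tweet_text: str, rules: list[HeuristicRule]) -> list[str]:
    """Return the actions triggered by a tweet under the given rules."""
    words = set(tweet_text.lower().split())
    return [r.action for r in rules if r.required_terms <= words]

if __name__ == "__main__":
    # Mirrors the article's example: act whenever "green" and "blob" co-occur.
    rules = [HeuristicRule("green-blob", {"green", "blob"}, "label")]
    print(evaluate("the green blob is coming", rules))  # ['label']
    print(evaluate("an unrelated tweet", rules))        # []
```

In this picture, “adding a media ID to the bot” would simply mean widening a rule’s match criteria so the same automated check also fires on a particular article or outlet.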


Twitter Files jargon explained
“There is no way to follow the frenzied exchanges among Twitter personnel” between 6 and 8 January “without knowing the basics of the company’s vast lexicon of acronyms” and Orwellian jargon, Taibbi said. He explained that to “bounce” an account is to put it in timeout, usually for a 12-hour review/cool-off, as in:

Explaining another piece of jargon, Taibbi said, “Interstitial,” one of many nouns used as a verb in Twitterspeak (“denylist” is another), means placing a physical label atop a tweet, so it can’t be seen.
PII has multiple meanings, one being “Public Interest Interstitial,” i.e. a covering label applied for “public interest” reasons. The post below also references “proactive V,” i.e. proactive visibility filtering. [VF has been explained in the previous story of the series.]
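For readers keeping track of the vocabulary, the short Python sketch below restates these terms as a hypothetical glossary; the enum and its descriptions merely paraphrase Taibbi’s explanations and are not Twitter’s real internal identifiers.

```python
# Hypothetical glossary of the Twitter Files jargon described above, expressed
# as an enum. Descriptions paraphrase Taibbi's explanations; the names are not
# Twitter's real internal identifiers.
from enum import Enum

class EnforcementTerm(Enum):
    BOUNCE = "put an account in timeout, usually for a 12-hour review/cool-off"
    INTERSTITIAL = "place a physical label atop a tweet so it can't be seen"
    PUBLIC_INTEREST_INTERSTITIAL = "covering label applied for 'public interest' reasons (one meaning of PII)"
    PROACTIVE_VISIBILITY_FILTERING = "pre-emptively limit a tweet's reach ('proactive V')"

for term in EnforcementTerm:
    print(f"{term.name}: {term.value}")
```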

“This is all necessary background to J6. Before the riots, the company was engaged in an inherently insane/impossible project, trying to create an ever-expanding, ostensibly rational set of rules to regulate every conceivable speech situation that might arise between humans,” Taibbi wrote.
“This project was preposterous yet its leaders were unable to see this, having become infected with groupthink, coming to believe – sincerely – that it was Twitter’s responsibility to control, as much as possible, what people could talk about, how often, and with whom,” said the journalist who was handed the old Twitter management’s documents.
The firm’s executives on day 1 of the 6 January crisis at least tried to pay lip service to its dizzying array of rules. “By day 2, they began wavering. By day 3, a million rules were reduced to one: what we say, goes,” Taibbi narrated from what he observed in the documents.
Other screenshots shared by Taibbi in the course of the tweet thread appear in the media gallery below:






Author Michael Shellenberger will share the second part of this third story tomorrow, Taibbi has promised.