Identifying a large-scale astroturfing attack

One of the first ways to tell whether you, your subject, or your brand is being targeted by a massive astroturfing attack (the manipulation of discourse via bots) is to look at the accounts discussing your topic and identify those with a history of spamming.

To check if a large number of spam-heavy accounts are involved, you can start with a simple sort by “Activity” in the “User” tab.


The results here were straightforward: multiple pages of newly created accounts (at the time of data collection) had already tweeted thousands of times.

And this wasn’t just a small wave!

Even by page five of the results, new accounts were still appearing, having been created right in the middle of the January 2025 analysis period, posting nearly 1,000 times per day (roughly one post every two minutes).

Worse: Clicking on these profiles revealed that a significant number of them had been deleted from the platform within weeks of the analysis.

This means these accounts posted thousands of messages, gained visibility, possibly attracted hundreds or thousands of followers, and then—just weeks later—were removed by Twitter/X’s moderation.

 

Going beyond simple activity sorting

Beyond scrolling through pages of active accounts, Visibrain allows users to filter accounts based on their activity levels, such as the number of posts they publish per day.

For example, sorting by activity level “very high” revealed 33,000 accounts discussing the German elections, each averaging more than 20 tweets per day since their creation.

Compared to the 3,000 daily tweets from the most prolific bots, this might seem low—but in reality, it's already huge when compared to the behavior of an average Twitter/X user.

To narrow the focus to the most suspicious accounts, a manual activity filter can be applied.

For instance, applying a filter revealed that 6,398 accounts discussing the German elections were posting more than 100 times per day—five times the threshold for being considered “highly active” on Twitter/X.

These numbers should be taken with caution. Yes, an ultra-engaged human user can physically post 100 times per day for months. However, the presence of thousands of such accounts—bot or human—already serves as a major red flag, indicating that the topic is not merely a point of discussion but a target for massive, coordinated messaging efforts.
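The posts-per-day metric behind these filters is easy to reproduce yourself from any account export. Here is a minimal sketch in Python; the field names (`created`, `tweets`) and the sample accounts are invented for illustration, since the exact export schema isn't specified here:

```python
from datetime import date

# Hypothetical sample rows mimicking an account export
# (field names and values are assumptions, not real data).
accounts = [
    {"handle": "user_a", "created": date(2025, 1, 10), "tweets": 15000},
    {"handle": "user_b", "created": date(2019, 6, 1), "tweets": 4000},
    {"handle": "user_c", "created": date(2025, 1, 20), "tweets": 1200},
]

ANALYSIS_DATE = date(2025, 1, 31)  # end of the January 2025 collection window
THRESHOLD = 100  # posts per day: 5x the usual "highly active" bar

def posts_per_day(acct):
    """Average posts per day since the account was created."""
    age_days = max((ANALYSIS_DATE - acct["created"]).days, 1)
    return acct["tweets"] / age_days

suspicious = [a["handle"] for a in accounts if posts_per_day(a) > THRESHOLD]
print(suspicious)  # → ['user_a', 'user_c']
```

Note that dividing by account age is what makes freshly created accounts stand out: a two-week-old account with 1,200 tweets is far more anomalous than a five-year-old account with 4,000.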

Even without looking at accounts posting thousands of times per day, the presence of bots flooding conversations is undeniable.



Key observations:

  1. The German elections attracted massive numbers of spam accounts, many posting thousands of tweets per day.
  2. A portion of these accounts were created specifically for this event, particularly in January 2025. They spammed about the German elections until they were detected and quickly removed by Twitter/X’s automated moderation due to their high content volume.

Identifying the narrative and political affiliation of spam accounts

How can we determine what these spam accounts are saying and whom they support?

The first step is to keep the activity filter in place and check the “Tweets” tab for visible content.

Here, the accounts posting more than 100 times per day overwhelmingly retweeted Elon Musk and the AfD (Alternative for Germany).
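This kind of check can also be approximated outside the tool: given the raw tweet texts of the filtered accounts, counting who they retweet takes a few lines. A sketch assuming the classic `RT @user:` prefix convention, with invented sample tweets:

```python
from collections import Counter

# Hypothetical sample of tweet texts from high-activity accounts
# (invented for illustration; a real export would supply these).
tweets = [
    "RT @elonmusk: Make Europe Great Again",
    "RT @AfD: Join us",
    "RT @elonmusk: Vote",
    "Original post about the election",
]

# Extract the retweeted handle from each "RT @user: ..." tweet.
retweeted = Counter(
    t.split()[1].rstrip(":").lstrip("@")
    for t in tweets
    if t.startswith("RT @")
)
print(retweeted.most_common(2))  # → [('elonmusk', 2), ('AfD', 1)]
```

A ranking like this makes the amplification pattern explicit: if one or two handles dominate the retweets of thousands of hyperactive accounts, the conversation is being steered, not held.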


Digging deeper

We can analyze the bios of these spam accounts to understand how they present themselves, who they mention, or whom they support.

By keeping the activity filter in place, we can export a CSV file (which opens directly in Excel) containing all the information on these spam accounts, including their bios.


Using this data, we can perform statistical analysis in Excel or use online tools to automate part of the work.

Example:

The simplest method: ChatGPT (or equivalent).

  • Just copy and paste the "bio" column from the CSV file and ask specific questions.
  • The results won’t be perfect, nor 100% accurate or reliable, but this approach allows for a quick analysis of hundreds or thousands of bios in under a minute (with manual verification afterward).


Findings:

Using ChatGPT’s summary, we can quickly verify results in Excel.

For example, by counting how often specific words appear in the bios (e.g., MAGA, Trump, Musk, Crypto, Woke, AfD, etc.), we can pinpoint patterns.

Each of these keywords surfaces example bios and accounts worth verifying manually, to confirm that the AI-generated analysis is accurate. It also lets us track down specific accounts and review their tweets to determine whether the German elections were just a passing topic for them or their primary focus.
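The keyword counting described above is also straightforward to script. A sketch using Python's standard library, with an invented CSV excerpt (the `bio` column name is an assumption about the export format):

```python
import csv
import io

# Hypothetical CSV excerpt; a real export would be read from a file.
csv_text = """handle,bio
acct1,"MAGA forever, Trump 2024, crypto investor"
acct2,"AfD supporter, anti-woke"
acct3,"Musk fan, MAGA, AfD"
"""

KEYWORDS = ["maga", "trump", "musk", "crypto", "woke", "afd"]

bios = [row["bio"].lower() for row in csv.DictReader(io.StringIO(csv_text))]

# Count how many bios mention each keyword (crude substring matching:
# fine for a first pass, but it can over-match inside longer words).
counts = {kw: sum(kw in bio for bio in bios) for kw in KEYWORDS}
print(counts)
```

Keyword counts like these are exactly the kind of quick cross-check that lets you validate (or reject) a ChatGPT summary against the raw data in seconds.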

Third key observation

These spam accounts—whether bots or highly active users—share a common ideological stance, supporting Trump, the AfD, and Elon Musk.

The power of simple filtering


All of this information was uncovered using just a few filters on Visibrain.

These filters and techniques do not replace a full-fledged social data analysis study. A professional analyst would go much deeper in terms of evidence and methodology.

However, with just a few clicks, you can gather strong indicators that something is happening around your topic or brand, warranting further investigation (or even action).

In this case, just a few clicks and filters revealed that thousands of bots posted thousands of tweets in support of the AfD, primarily by amplifying messages from Elon Musk. And if spam bots exist for other German political parties, they were statistically drowned out by the mass of bot support for the AfD.

The full study provides additional levels of proof and precision, but the key takeaways remain clear even with these few tricks.