Can You Detect Bots On Social Media?

Bots are a common sight on the most popular social networks, but as their sophistication grows, our ability to spot them diminishes. A new paper from the University of Southern California suggests that a ‘human signature’ may be the most effective way of detecting bots among human users.

“Remarkably, bots continuously improve to mimic more and more of the behavior humans typically exhibit on social media. Every time we identify a characteristic we think is prerogative of human behavior, such as sentiment of topics of interest, we soon discover that newly-developed open-source bots can now capture those aspects,” the researchers say.

Human behavior

The researchers examined how the behavior of humans and bots changed over the course of a political event as it unfolded on Twitter. Various factors were used to capture user behavior, including the propensity of the user to engage in social interactions and the volume of content they produced.

These measures included the length of each tweet, together with the number of replies, retweets, mentions, and so on, and were used to build a classification system to inform the researchers' bot detection.
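To make this concrete, here is a minimal sketch of how per-user behavioral features of this kind might be assembled. The column names (user_id, text, n_replies, n_retweets, n_mentions) and the pandas aggregation are illustrative assumptions, not the authors' actual pipeline.

```python
import pandas as pd

# Hypothetical tweet-level data; column names are illustrative assumptions.
tweets = pd.DataFrame({
    "user_id":    ["a", "a", "b", "b", "b"],
    "text":       ["hello world", "ok", "a longer original post here", "rt", "thanks!"],
    "n_replies":  [0, 1, 2, 0, 3],
    "n_retweets": [1, 0, 5, 0, 2],
    "n_mentions": [0, 1, 1, 0, 2],
})

tweets["tweet_len"] = tweets["text"].str.len()

# Aggregate to one behavioral feature vector per user: content volume
# (tweet count, mean length) and social interaction (replies, retweets, mentions).
user_features = tweets.groupby("user_id").agg(
    n_tweets=("text", "size"),
    mean_len=("tweet_len", "mean"),
    mean_replies=("n_replies", "mean"),
    mean_retweets=("n_retweets", "mean"),
    mean_mentions=("n_mentions", "mean"),
)
print(user_features)
```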

Machine learning was then used to train two different classifiers: one that included the features highlighted above, and one without them.
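A minimal sketch of that comparison is below, assuming a feature matrix and bot/human labels are already to hand. The random-forest model, the feature split, and the AUC comparison are assumptions made for illustration rather than the paper's exact setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical data: 200 accounts, 8 baseline features plus
# 4 behavioral features of the kind described above.
X_baseline  = rng.normal(size=(200, 8))
X_behaviour = rng.normal(size=(200, 4))
X_full      = np.hstack([X_baseline, X_behaviour])
y           = rng.integers(0, 2, size=200)   # 1 = bot, 0 = human (toy labels)

# Classifier A: baseline features only.
auc_baseline = cross_val_score(
    RandomForestClassifier(random_state=0), X_baseline, y,
    cv=5, scoring="roc_auc").mean()

# Classifier B: baseline plus behavioral features.
auc_full = cross_val_score(
    RandomForestClassifier(random_state=0), X_full, y,
    cv=5, scoring="roc_auc").mean()

print(f"AUC without behavioral features: {auc_baseline:.3f}")
print(f"AUC with behavioral features:    {auc_full:.3f}")
```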

The analysis revealed that humans displayed a number of trends that bots did not, such as a gradual increase in interaction during the course of a session.  This coincided with a decrease in the amount of content produced.

The researchers believe that this is largely because human users tend to grow fatigued as the session unfolds, and are therefore less likely to engage in complex tasks, such as writing original content. An alternative suggestion is that we become exposed to more posts as the session progresses, making it more likely that we will interact with existing content than create our own. Whatever the rationale, it was clear that this behavior distinguished human users from bot accounts.
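One way to quantify such a within-session trend is sketched below: tweets are grouped into sessions by gaps in activity, and a simple slope of the interaction (retweet) indicator against position in the session is computed. The 30-minute gap threshold and the linear-slope summary are assumptions for illustration, not the authors' method.

```python
import numpy as np
import pandas as pd

# Hypothetical timeline for one user; columns are illustrative assumptions.
df = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2020-01-01 10:00", "2020-01-01 10:05", "2020-01-01 10:20",
        "2020-01-01 10:30", "2020-01-01 14:00", "2020-01-01 14:10",
    ]),
    "is_retweet": [0, 0, 1, 1, 0, 1],   # 1 = interaction, 0 = original content
}).sort_values("timestamp")

# Start a new session whenever the gap since the previous tweet exceeds 30 minutes.
gap = df["timestamp"].diff() > pd.Timedelta(minutes=30)
df["session"] = gap.cumsum()

def session_slope(s: pd.Series) -> float:
    """Slope of the interaction indicator over position within one session."""
    if len(s) < 2:
        return np.nan
    x = np.arange(len(s))
    return np.polyfit(x, s.to_numpy(), 1)[0]

# A positive average slope means interaction grows (and original content
# shrinks) as the session unfolds -- the human-like pattern described above.
print(df.groupby("session")["is_retweet"].apply(session_slope).mean())
```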

A model based upon this classification system was then able to significantly outperform a baseline model for bot detection, suggesting that there is sufficient differentiation in how humans and bots behave to enable accurate detection of the latter.

“Bots are constantly evolving – with fast paced advancements in AI, it’s possible to create ever-increasingly realistic bots that can mimic more and more how we talk and interact in online platforms,” the researchers conclude. “We are continuously trying to identify dimensions that are particular to the behavior of humans on social media that can in turn be used to develop more sophisticated toolkits to detect bots.”
