Automated Software Spread Covid Misinformation On Facebook

Misinformation during the Covid pandemic reached such levels that the World Health Organization branded the situation an "infodemic." While the role of automated bots in spreading traditional misinformation has long been understood, a new study from the University of California San Diego suggests their role during Covid has been underestimated.

The researchers identified a number of public Facebook groups with a strong bot presence. They analyzed the roughly 300,000 posts made in those groups, which contained over 250,000 shared links. In groups heavily influenced by bots, the same link was commonly shared by multiple accounts within seconds of each other. On average, less than five seconds elapsed between shares of the same link in bot-heavy groups, compared with over four hours in groups with little bot infiltration.
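The timing heuristic described above — the same link recurring within seconds versus hours — can be sketched roughly as follows. This is a minimal illustration, not the study's actual method; the data layout and function name are hypothetical:

```python
from collections import defaultdict
from statistics import mean

def mean_intershare_seconds(shares):
    """shares: list of (link, unix_timestamp) tuples for one group.
    Returns the mean gap in seconds between consecutive shares of
    the same link, or None if no link was shared more than once."""
    by_link = defaultdict(list)
    for link, ts in shares:
        by_link[link].append(ts)
    gaps = []
    for times in by_link.values():
        times.sort()
        # gaps between consecutive shares of one link
        gaps.extend(t2 - t1 for t1, t2 in zip(times, times[1:]))
    return mean(gaps) if gaps else None

# A group where one link is reposted seconds apart looks coordinated;
# a gap measured in hours looks organic.
bot_like = [("http://example.com/study", t) for t in (0, 3, 5, 9)]
organic = [("http://example.com/study", 0),
           ("http://example.com/study", 20_000)]
print(mean_intershare_seconds(bot_like))  # 3.0 (seconds)
print(mean_intershare_seconds(organic))   # 20000.0 (~5.5 hours)
```

Under this sketch, a group averaging under five seconds between repeat shares would be flagged as bot-influenced, while one averaging hours would not.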

Automatic influence

The researchers focused especially on links to a Danish study, DANMASK-19, that assessed the effectiveness of face masks.

“We selected DANMASK-19 for our study because masks are an important public health measure to potentially control the pandemic and are a source of popular debate,” the researchers say.

The analysis found that 39% of all Facebook group posts sharing the study were made to groups influenced by bots, compared with just 9% to groups with little bot influence.

Of the posts made to the bot-influenced groups, 20% claimed that masks actually made things worse, contrary to the scientific evidence, and 50% promoted various conspiracy theories alongside the study. Indeed, posts sharing the study were 2.3 times more likely to claim that masks were harmful and 2.5 times more likely to include conspiracy theories when made to groups influenced by bots.

An infodemic

“COVID-19 misinformation propaganda appears to be spreading faster than the virus itself,” the researchers say. “This is fueled by bots that can amplify misinformation at a rate far greater than ordinary users.”

The authors also argue that bots are having a pernicious effect on public health institutions, as their messaging often seeks to undermine public confidence in those institutions, whether by sending out inaccurate content or by falsely attributing falsehoods to them.

“The amount of misinformation from bots we found suggests that bots’ influence extends far beyond our case study,” the researchers continue. “Could bots be fostering vaccine hesitancy or amplifying Asian discrimination too?”

What’s more, the authors believe the impact of automated misinformation is significant because it is likely to spill over into organic conversations on social media.

“Bots sharing misinformation could inspire ordinary people to propagate misinformed messages,” they explain. “For example, bots may make platforms’ algorithms think that automated content is more popular than it actually is, which can then lead to platforms actually prioritizing misinformation and disseminating it to an even larger audience.”

Addressing the problem

The problem is undoubtedly significant, not least because the entities behind such campaigns are largely unknown and their impact is considerable. The researchers believe, however, that solutions are at hand.

“Our work shows that social media platforms have the ability to detect, and therefore remove, these coordinated bot campaigns,” they say. “Efforts to purge deceptive bots from social media platforms must become a priority among legislators, regulators, and social media companies who have instead been focused on targeting individual pieces of misinformation from ordinary users.”

It’s likely that the infodemic won’t be countered until bot activity on social media is eliminated entirely. Given the money tied up in inflated user and activity levels, however, it remains to be seen quite how inclined the platforms are to tackle the issue.
