How To Reduce Misinformation On Social Media

With the World Health Organization warning of an infodemic during the Covid crisis, the issue of misinformation was thrust into the spotlight like never before. Significant attention was given to how governments and social media platforms could control its spread.

Research from Duke University explores the delicate balancing act that must be struck between freedom of speech and censorship for the greater good, while also questioning whether the standard toolkit is the only way forward.

“A tacit assumption has been that censorship, fact-checking and education are the only tools to fight misinformation,” the researchers explain.

Improving quality

The researchers examine various ways that the quality of information shared on social networks can be improved without requiring the networks themselves to determine what is true or false. The model they develop suggests that misinformation can be significantly reduced if networks set clear limits on how widely messages can be shared, without overly restricting users.

“We show that caps on either how many times messages can be forwarded (network depth) or the number of others to whom messages can be forwarded (network breadth) increase the relative number of true versus false messages circulating in a network, regardless of whether messages are accidentally or deliberately distorted,” the researchers explain.

“For example, Twitter could limit the breadth of sharing on its site by limiting how many people see any given retweet in their Twitter feeds.”
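To see why such caps help, consider a toy model in the spirit of the researchers’ argument: copies of a message multiply with every hop, and each hop gives an accidental or deliberate distortion a chance to turn a true copy false. The Python sketch below is a minimal illustration, not the researchers’ actual model; the breadth, depth and distortion parameters are assumptions chosen purely for demonstration.

    # Toy model of how forwarding caps affect the mix of true and false
    # copies circulating in a network (illustrative sketch only, not the
    # researchers' actual model). Assumptions: one true message is seeded;
    # every copy is forwarded to `breadth` recipients per hop, up to `depth`
    # hops; each hop distorts a still-true copy with probability `distort`,
    # and distorted copies stay false.

    def true_false_ratio(depth: int, breadth: int, distort: float = 0.15) -> float:
        """Expected ratio of true to false copies across all hops."""
        true_copies = false_copies = 0.0
        for hop in range(1, depth + 1):
            copies_at_hop = breadth ** hop         # copies created at this hop
            p_still_true = (1 - distort) ** hop    # survived `hop` distortion chances
            true_copies += copies_at_hop * p_still_true
            false_copies += copies_at_hop * (1 - p_still_true)
        return true_copies / false_copies

    # Capping either dimension improves the true-to-false ratio:
    for depth, breadth in [(8, 4), (4, 4), (8, 2)]:
        print(f"depth={depth}, breadth={breadth}: "
              f"true:false = {true_false_ratio(depth, breadth):.2f}")

With these assumed numbers, cutting depth from 8 to 4 or breadth from 4 to 2 both raise the ratio of true to false copies, because the deep, heavily multiplied hops are where distorted copies accumulate. This mirrors the paper’s qualitative claim without reproducing its actual mathematics.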

Growing popularity

While this approach is not yet widespread, the researchers explain that it has been used by Meta on the Facebook and WhatsApp platforms. For instance, Facebook introduced limits on how many people or groups users could forward messages to in a bid to curb the spread of misinformation around topics such as voting and Covid. WhatsApp has introduced similar limits.
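Enforcing such limits is mechanically simple. The sketch below is a hypothetical illustration of a forwarding cap, not Meta’s actual implementation; the cap values, the Message shape and the hop counter are all assumptions.

    # Hypothetical forward-cap enforcement (not Meta's code). Each message
    # copy carries a hop count; a forward is rejected once the depth cap is
    # reached, and the recipient list is truncated to the breadth cap.

    from dataclasses import dataclass

    MAX_HOPS = 5        # depth cap: forwarding-chain length (assumed value)
    MAX_RECIPIENTS = 5  # breadth cap: recipients per forward (assumed value)

    @dataclass
    class Message:
        text: str
        hops: int = 0   # how many times this copy has been forwarded

    def forward(msg: Message, recipients: list[str]) -> list[tuple[str, Message]]:
        """Deliveries produced by one forward, with both caps enforced."""
        if msg.hops >= MAX_HOPS:
            raise ValueError("message has reached the forwarding depth cap")
        allowed = recipients[:MAX_RECIPIENTS]   # truncate to the breadth cap
        copy = Message(msg.text, msg.hops + 1)  # each delivery carries the hop count
        return [(user, copy) for user in allowed]

WhatsApp’s real behaviour differs in its details, for example labelling messages “Forwarded many times” and restricting them to one chat at a time, but the caps-and-counter pattern above captures the core idea.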

While this method didn’t eliminate misinformation entirely, it did at least help slow its spread through a network. It can serve as a useful stopgap until better methods are developed that target the heart of the problem.

“When misinformation spreads through a social network, it can cause harm,” the researchers explain. “Some people might start believing things that are false and that can harm them or others.”

It’s also important that platforms address the issue themselves, as failing to do so can undermine the trust people place in them.

“If you limit sharing, you could also be limiting the spread of good information, so you might be throwing the baby out with the bathwater and that doesn’t really help you,” the authors conclude. “Our analysis explores how to strike that balance.”
