
Is there a way to counter fake news on WhatsApp?

  New research suggests that corrective messages may need to be frequent rather than sourced or sophisticated

If you were on WhatsApp in the months leading up to the 2019 general election in India, you likely came across a story claiming that cow urine cures cancer. Or perhaps you were forwarded a photo of electronic voting machines (EVMs), with a message stating they were being hacked. If you are a regular WhatsApp user, you have almost certainly borne witness to a seemingly unending barrage of misinformation. And this misinformation is abundant. India is now one of the largest and fastest-growing markets for digital consumers, with 560 million Internet subscribers in 2018, second only to China. However, the Internet - and WhatsApp in particular - is a fruitful environment for the massive diffusion of unverified information and rumours.
Survey data collected online from a sample of over 5,000 Facebook users show that belief in misinformation and rumours can be fairly high. More than 75% of the sample said that polygamy is very common in the Muslim population (this is inaccurate). A similar proportion stated that they believed drinking gaumutra (cow urine) can help build one's immune system (also not true). Survey data collected in person from a sample of 1,200 paint a similar picture. About 48% of the sample believed in the power of gaumutra to cure terminal illnesses, while about 45% of the sample believed India hasn't experienced a single terror attack since 2014 (you guessed it - not true).
To combat misinformation disseminated through the platform, WhatsApp has encouraged user-driven fact checking. WhatsApp bought full-page advertisements in multiple Indian dailies ahead of the 2019 elections, exhorting users to fact-check fake news. To what extent should we expect such a strategy - so far the only known strategy to correct misinformation on encrypted discussion apps - to be effective?
In June 2019, we ran a study to test whether user-driven corrections work to counteract fake news on WhatsApp. Participants in our study saw different versions of a fictitious, but realistic, WhatsApp group chat screenshot. In it, a first user posts a rumour, which a second user subsequently corrects. The corrections in different versions varied in their level of sophistication. In some cases, the user cited a source and referred to an investigation by that source to correct the first user. These sources were varied too. The "correcting" user may, for instance, refer to an authoritative source, such as the Election Commission of India, to refute a claim about EVM hacking. Alternatively, they may have cited a fact-checking service, such as Alt News. In other cases, the attempt to correct was extremely minimal, with the second user merely stating a phrase such as "I don't think that's true, bro", and providing no evidence as to why. Importantly, everyone who received a correction of any kind was compared to a control group in which the second user made no attempt to correct the first user's claim.
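The between-subjects design described above can be summarised in a few lines of code. The sketch below is purely illustrative: the condition labels and the random-assignment helper are our own hypothetical constructions, not the study's actual materials or analysis code.

```python
import random

# Hypothetical labels for the experimental conditions described in the study.
CONDITIONS = [
    "control_no_correction",   # second user does not respond to the rumour
    "minimal_unsourced",       # e.g. "I don't think that's true, bro"
    "sourced_authority",       # correction cites an authority such as the ECI
    "sourced_fact_checker",    # correction cites a service such as Alt News
]

def assign_condition(rng: random.Random) -> str:
    """Randomly assign one participant to a single condition
    (between-subjects: each participant sees only one screenshot version)."""
    return rng.choice(CONDITIONS)

# Example: assigning a small hypothetical batch of participants.
rng = random.Random(2019)
assignments = [assign_condition(rng) for _ in range(8)]
```

Belief in the rumour is then compared across these groups; the control group provides the baseline against which every correction variant is measured.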
Results from the study show that participants who were exposed to a correction of any kind were significantly less likely to believe the false information posted by the first user, relative to those who did not receive a correction. But interestingly, the results also demonstrate that the degree of sophistication of the correction made no difference. Simply put, unsourced corrections such as "I don't think that's true, bro" achieved an effect comparable to that of corrections based on fact-checking by credible sources.
These findings have important implications. They suggest that corrective messages may need to be frequent rather than sourced or sophisticated, and that merely signalling a problem with the credibility of a claim (regardless of how detailed this signalling is) may go a long way in reducing overall rates of misinformation. For users, these results imply that expressing doubts in a group chat setting should be encouraged; for encrypted chat apps such as WhatsApp, they imply that creating a simple option to express doubt may be a complementary, cost-effective way to limit rates of belief in rumours.



© All rights reserved. The South Asian, published weekly from New York.