Twitter lifted its ban on COVID misinformation—research shows this is a grave risk to public health

Twitter's decision to no longer enforce its COVID-19 misleading information policy, quietly posted on the site's rules page and listed as effective Nov. 23, 2022, has researchers and experts in public health seriously concerned about the possible repercussions.
Health misinformation is not new. A classic case is the misinformation about a purported but now disproven link between vaccines and autism, based on a discredited study published in 1998. Such misinformation has severe consequences for public health. Countries with stronger anti-vaccine movements against the diphtheria-tetanus-pertussis (DTP) vaccine in the late 20th century, for example, saw vaccination rates fall and whooping cough outbreaks follow.
As a researcher who studies social media, I believe that reducing content moderation is a significant step in the wrong direction, especially in light of the uphill battle social media platforms face in combating misinformation and disinformation. And the stakes are especially high in combating medical misinformation.
Misinformation on social media
There are three key differences between earlier forms of misinformation and misinformation spread on social media.
First, social media enables misinformation to spread at far greater scale, speed and reach than earlier media.
Second, content that is sensational and likely to trigger emotions is more likely to go viral, making falsehoods easier to spread than the truth.
Third, digital platforms such as Twitter play an active role in the way they aggregate, curate and amplify content. This means that misinformation on emotionally triggering topics such as vaccines can readily gain attention.
The spread of misinformation during the pandemic has been dubbed an "infodemic" by the World Health Organization. There is considerable evidence that COVID-19-related misinformation on social media reduces people's willingness to get vaccinated. Public health experts have cautioned that misinformation on social media hinders progress toward herd immunity, weakening society's ability to deal with new COVID-19 variants.
Misinformation on social media fuels public doubts about vaccine safety. Studies show that COVID-19 vaccine hesitancy is driven by belief in such misinformation.
Combating misinformation
Social media platforms' content moderation policies and stances toward misinformation are crucial for combating it. In the absence of strong content moderation policies on Twitter, algorithmic content curation and recommendation are likely to boost the spread of misinformation by creating echo chambers, for example, exacerbating partisan differences in exposure to content. Algorithmic bias in recommendation systems can also contribute to racial disparities in vaccine uptake.
There is evidence that some less-regulated platforms such as Gab may amplify anti-vaccine narratives and increase COVID-19 misinformation. There is also evidence that misinformation originating on less moderated platforms can lure in people who use social media platforms that do invest in content moderation.
The danger then is that not only will there be greater anti-vaccine discourse on Twitter, but that such toxic speech can spill over into other online platforms that may be investing in combating medical misinformation.
The Kaiser Family Foundation COVID-19 vaccine monitor reveals that public trust in COVID-19 information from authoritative sources such as governments has eroded, with serious consequences for public health. For example, the share of Republicans who said they trust the Food and Drug Administration fell from 62% to 43% between December 2020 and October 2022.
In 2021, public health experts identified that social media platforms' content moderation policies need to:
- pay attention to the design of recommendation algorithms.
- prioritize early detection of misinformation.
- amplify information from credible sources of online health information.
These priorities require collaboration between platforms and public health organizations to develop best practice guidelines for addressing health care misinformation. Developing and enforcing effective content moderation policies takes planning and resources.
In light of what researchers know about COVID-19 misinformation on Twitter, I believe that the announcement that the company will no longer ban COVID-19-related misinformation is troubling, to say the least.
Provided by The Conversation
This article is republished from The Conversation under a Creative Commons license. Read the original article.