Some politicians who share harmful information are rewarded with more clicks, study finds

What happens when politicians post false or toxic messages online? My team and I found evidence that suggests U.S. state legislators can gain more attention online by sharing unverified claims or using uncivil language during times of high political tension. This raises questions about how social media platforms shape public opinion and, intentionally or not, reward certain behaviors.
I'm a researcher, and my team builds tools to study political communication on social media. In our latest study, we looked at what types of messages made U.S. state legislators stand out online during 2020 and 2021, a period marked by the pandemic, the 2020 election and the Jan. 6 Capitol riot. We focused on two types of harmful content: low-credibility information and uncivil language such as insults or extreme statements. We measured impact by how widely a post was liked, shared or commented on across Facebook and X, known at the time as Twitter.
Our study found that this harmful content is linked to greater online visibility. However, the effects vary. For example, Republican legislators who posted low-credibility information were more likely to receive greater online attention, a pattern not observed among Democrats. In contrast, posting uncivil content generally reduced visibility, particularly for lawmakers at ideological extremes.
Why it matters
Social media platforms such as Facebook and X have become one of the main arenas of political communication. Politicians use them to reach voters, promote their agendas, rally supporters and attack rivals. But some of their posts get far more attention than others.
Earlier research showed that false information spreads faster and farther than factual content. Platform algorithms often push content that makes people angry or emotional. At the same time, uncivil language can deepen polarization and make people less willing to engage in political discussion.
When platforms reward harmful content with increased visibility, politicians have an incentive to post such messages, because that visibility can translate into greater media attention and potentially more voter support. Our findings raise concerns that platform algorithms may unintentionally reward divisive or misleading behavior.
When harmful content becomes a winning strategy for politicians to stand out, it can distort public debates, deepen polarization and make it harder for voters to find reliable information.
How we did our work
We gathered nearly 4 million tweets and half a million Facebook posts from over 6,500 U.S. state legislators during 2020 and 2021. We used machine learning techniques to estimate causal relationships between content and visibility.
The techniques allowed us to compare posts that were similar in almost every aspect except that one had harmful content and the other didn't. By measuring the difference in how widely those posts were seen or shared, we could estimate how much visibility was gained or lost due solely to that harmful content.
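For readers who want a concrete picture of the matching idea, here is a minimal sketch in Python. It assumes a nearest-neighbor matching approach on synthetic data; the covariates (follower count, post length), the numbers and the scikit-learn implementation are illustrative assumptions, not our study's actual variables or pipeline.

# Minimal sketch of matching-based effect estimation (illustrative only):
# pair each post containing harmful content with the most similar "clean"
# post, then average the engagement differences across pairs.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Synthetic posts: covariates that should be "similar in almost every aspect"
n = 2000
followers = rng.lognormal(8, 1, n)       # audience size
post_length = rng.integers(20, 280, n)   # characters
treated = rng.random(n) < 0.3            # True if post contains harmful content

# Synthetic engagement with a built-in +5 effect for treated posts
engagement = (0.001 * followers + 0.05 * post_length
              + 5 * treated + rng.normal(0, 3, n))

# Standardize covariates so distances are comparable across scales
X = np.column_stack([followers, post_length])
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Match each treated post to its nearest untreated neighbor in covariate space
nn = NearestNeighbors(n_neighbors=1).fit(X[~treated])
_, idx = nn.kneighbors(X[treated])

# Average engagement gap across matched pairs estimates the visibility effect
att = (engagement[treated] - engagement[~treated][idx.ravel()]).mean()
print(f"Estimated visibility effect of harmful content: {att:.2f}")

The key design choice is that the comparison happens within matched pairs, so any remaining engagement gap is attributed to the harmful content itself rather than to audience size or other measured differences between posts.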
What other research is being done
Most research on harmful content has focused on national figures or social media influencers. Our study instead examined state legislators, who significantly shape state-level laws on issues such as education, health and public safety but typically receive less public and scholarly attention.
State legislators often face less scrutiny from fact-checkers and platform moderators, which creates opportunities for misinformation and toxic content to spread unchecked. This makes their online activities especially important to understand.
What's next
We plan to continue these analyses to determine whether the patterns we found during the intense years of 2020 and 2021 persist over time. Do platforms and audiences continue rewarding low-credibility information, or is that effect temporary?
We also plan to examine how changes in content moderation policies at platforms such as X and Facebook affect what gets seen and shared. Finally, we want to better understand how people react to harmful posts: Are they liking them, sharing them in outrage, or trying to correct them?
Building on our current findings, this line of research can help shape smarter platform design, more effective digital literacy efforts and stronger protections for healthy political conversation.
Provided by The Conversation
This article is republished from The Conversation under a Creative Commons license. Read the original article.