News Warner
Some politicians who share harmful information are rewarded with more clicks, study finds

  • A study found that U.S. state legislators who post harmful information, such as low-credibility claims or uncivil language, receive more online attention and visibility on social media platforms like Facebook and Twitter.
  • The study suggests that platform algorithms may unintentionally reward divisive or misleading behavior, incentivizing politicians to post such messages to gain more visibility and support from voters.
  • Republican legislators who posted low-credibility information were more likely to receive greater online attention than Democrats, while posting uncivil content generally reduced visibility for lawmakers at ideological extremes.
  • The study highlights the importance of understanding how social media platforms shape public opinion and the need for smarter platform design, effective digital literacy efforts, and stronger protections for healthy political conversation.
  • Future research plans include analyzing whether the patterns found in this study persist over time, examining the impact of changes in moderation policies on what gets seen and shared, and understanding how people react to harmful posts.

The likes pour in for some politicians who post misinformation. J Studios/DigitalVision via Getty Images

What happens when politicians post false or toxic messages online? My team and I found evidence that suggests U.S. state legislators can increase or decrease their public visibility by sharing unverified claims or using uncivil language during times of high political tension. This raises questions about how social media platforms shape public opinion and, intentionally or not, reward certain behaviors.

I’m a computational social scientist, and my team builds tools to study political communication on social media. In our latest study, we looked at what types of messages made U.S. state legislators stand out online during 2020 and 2021, a period marked by the pandemic, the 2020 election and the Jan. 6 Capitol riot. We focused on two types of harmful content: low-credibility information and uncivil language such as insults or extreme statements. We measured impact by how widely a post was liked, shared or commented on across Facebook and X, then known as Twitter.

Our study found that this harmful content is linked to increased visibility for posters. However, the effects vary. For example, Republican legislators who posted low-credibility information were more likely to receive greater online attention, a pattern not observed among Democrats. In contrast, posting uncivil content generally reduced visibility, particularly for lawmakers at ideological extremes.

Why it matters

Social media platforms such as Facebook and X have become one of the main stages for political debate and persuasion. Politicians use them to reach voters, promote their agendas, rally supporters and attack rivals. But some of their posts get far more attention than others.

Earlier research showed that false information spreads faster and reaches wider audiences than factual content. Platform algorithms often push content that makes people angry or emotional higher in feeds. At the same time, uncivil language can deepen divisions and make people lose trust in democratic processes.

When platforms reward harmful content with increased visibility, politicians have an incentive to post such messages, because increased visibility can lead directly to greater media attention and potentially more voter support. Our findings raise concerns that platform algorithms may unintentionally reward divisive or misleading behavior.

Political misinformation has burgeoned in recent years.

When harmful content becomes a winning strategy for politicians to stand out, it can distort public debates, deepen polarization and make it harder for voters to find trustworthy information.

How we did our work

We gathered nearly 4 million tweets and half a million Facebook posts from over 6,500 U.S. state legislators during 2020 and 2021. We then used machine learning techniques to estimate the causal effect of a post’s content on its visibility.

The techniques allowed us to compare posts that were similar in almost every aspect except that one had harmful content and the other didn’t. By measuring the difference in how widely those posts were seen or shared, we could estimate how much visibility was gained or lost due solely to that harmful content.
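The matched-comparison logic described above can be sketched in code. This is an illustrative example, not the authors' actual pipeline: it uses synthetic data, two stand-in confounding features, and simple nearest-neighbor matching, where the study used more sophisticated machine learning methods. All variable names here are hypothetical.

```python
# Illustrative sketch (not the study's code): estimate the visibility effect
# of harmful content by matching each harmful post to its most similar
# non-harmful post and averaging the engagement differences.
import numpy as np

rng = np.random.default_rng(0)

n = 1000
# Stand-ins for confounders (e.g., follower count, posting frequency).
features = rng.normal(size=(n, 2))
# 1 = post contains harmful content (low-credibility or uncivil).
harmful = rng.integers(0, 2, size=n)
# Synthetic engagement: depends on features plus a true +5 boost for
# harmful posts, so the matching estimate should recover roughly 5.
engagement = 10 + 3 * features[:, 0] + 5 * harmful + rng.normal(size=n)

treated = np.where(harmful == 1)[0]
control = np.where(harmful == 0)[0]

# For each harmful post, find the closest non-harmful post in feature
# space (Euclidean distance) and record the engagement gap.
diffs = []
for i in treated:
    dists = np.linalg.norm(features[control] - features[i], axis=1)
    j = control[np.argmin(dists)]
    diffs.append(engagement[i] - engagement[j])

# Average treatment effect on the treated: visibility gained (or lost)
# attributable to the harmful content itself.
att = float(np.mean(diffs))
print(f"Estimated visibility change from harmful content: {att:.2f}")
```

Because the matched pairs agree on the confounding features, the remaining engagement gap isolates the contribution of the harmful content, which is the comparison the passage above describes.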

What other research is being done

Most research on harmful content has focused on national figures or social media influencers. Our study instead examined state legislators, who significantly shape state-level laws on issues such as education, health and public safety but typically receive less media coverage and fact-checking.

State legislators often escape broad scrutiny, which creates opportunities for misinformation and toxic content to spread unchecked. This makes their online activities especially important to understand.

What’s next

We plan on conducting ongoing analyses to determine whether the patterns we found during the intense years of 2020 and 2021 persist over time. Do platforms and audiences continue rewarding low-credibility information, or is that effect temporary?

We also plan to examine how changes in moderation policies, such as X’s shift to less oversight or Facebook’s end of human fact-checking, affect what gets seen and shared. Finally, we want to better understand how people react to harmful posts: Are they liking them, sharing them in outrage, or trying to correct them?

Building on our current findings, this line of research can help shape smarter platform design, more effective digital literacy efforts and stronger protections for healthy political conversation.

The Research Brief is a short take on interesting academic work.

The Conversation

Yu-Ru Lin receives funding from external funding agencies such as the National Science Foundation (NSF).


Q. What did researchers find about how U.S. state legislators’ online behavior affects their public visibility?
A. Researchers found that sharing unverified claims or using uncivil language can increase or decrease a legislator’s public visibility, depending on the type of content.

Q. Which type of harmful content was studied in the research?
A. The study focused on two types of harmful content: low-credibility information and uncivil language such as insults or extreme statements.

Q. Did the researchers find that all politicians who shared harmful content received more attention online?
A. No. The effects varied: Republican legislators who posted low-credibility information were more likely to receive greater online attention, a pattern not observed among Democrats, while uncivil content generally reduced visibility, particularly for lawmakers at ideological extremes.

Q. What is a concern raised by the study’s findings about social media platforms?
A. The study raises concerns that platform algorithms may unintentionally reward divisive or misleading behavior, which can distort public debates and deepen polarization.

Q. How did the researchers gather data for their study?
A. The researchers gathered nearly 4 million tweets and half a million Facebook posts from over 6,500 U.S. state legislators during 2020 and 2021, then applied machine learning techniques to estimate the causal effect of content on visibility.

Q. Why is it important to study the online behavior of state legislators?
A. State legislators often escape broad scrutiny, which creates opportunities for misinformation and toxic content to spread unchecked, making their online activities especially important to understand.

Q. What are some potential implications of the study’s findings for social media platforms?
A. The study’s findings suggest that platform algorithms may need to be redesigned to prevent the spread of harmful content and promote healthier political conversation.

Q. Will the researchers continue to analyze the data from their study?
A. Yes, the researchers plan to conduct ongoing analyses to determine whether the patterns they found during 2020 and 2021 persist over time.

Q. What other research is being done on harmful content in social media?
A. Most research on harmful content has focused on national figures or social media influencers, but this study examined state legislators who are often overlooked in broader discussions of online behavior.

Q. How can the findings of this study contribute to shaping smarter platform design and promoting healthier political conversation?
A. The study’s findings can help shape smarter platform design, more effective digital literacy efforts, and stronger protections for healthy political conversation by identifying ways to prevent the spread of harmful content.