Recent research has revealed that even brief exchanges with politically biased AI chatbots can noticeably alter people’s political attitudes.

The findings, published on Thursday in Science and Nature, indicate that voters can be swayed by persuasive chatbot messages, whether or not the arguments are factual.

In tests involving advanced generative AI systems such as OpenAI’s GPT-4o and China’s DeepSeek, U.S. participants who favored Donald Trump shifted nearly four points toward Kamala Harris on a 100-point scale after a conversation with a partisan AI.

This experiment was conducted in the lead-up to the 2024 U.S. presidential election.

Similar studies tied to the 2025 elections in Canada and Poland showed even stronger effects.

In those countries, people who initially supported opposition parties shifted their views by up to 10 points after speaking with bots intentionally programmed to persuade.

David Rand, a professor at Cornell University and one of the senior authors of the papers, noted that the impact could have meaningful political implications.

According to him, about one in ten participants in Canada and Poland said they would change their vote if the election were held immediately after the chatbot interaction.

In the United States, the figure was roughly one in 25.

He cautioned that voting intentions do not necessarily translate into actual ballots, but follow-up surveys revealed that the influence persisted.

Roughly half of the effect was still detectable a month later in the UK, while one-third remained in the U.S.

“Seeing any lasting influence a month later is quite rare in social science,” Rand said.

The research also highlighted the strategies that made AI chatbots most persuasive.

Bots that used a respectful tone and appeared to support their arguments with evidence were the most effective.

In contrast, chatbots instructed to avoid using facts had far less impact.

These results challenge a widely held idea in political psychology: that people ignore information contradicting their political identity.

Instead, the studies suggest that individuals may be more persuadable than previously believed, even when the “evidence” provided is unreliable or false.
