OpenAI bans multiple ChatGPT accounts linked to Iranian operation creating false news reports

2025-05-06 12:31:12 | source: Henri Lumière | category: reviews

OpenAI deactivated several ChatGPT accounts that were using the artificial intelligence chatbot to spread disinformation as part of an Iranian influence operation, the company reported Friday.

The covert operation, called Storm-2035, generated content on a variety of topics including the U.S. presidential election, the American AI company announced Friday. However, the accounts were banned before the content garnered a large audience.

The operation also generated misleading content on "the conflict in Gaza, Israel’s presence at the Olympic Games" as well as "politics in Venezuela, the rights of Latinx communities in the U.S. (both in Spanish and English), and Scottish independence."

The scheme also included some fashion and beauty content possibly in an attempt to seem authentic or build a following, OpenAI added.

"We take seriously any efforts to use our services in foreign influence operations. Accordingly, as part of our work to support the wider community in disrupting this activity after removing the accounts from our services, we have shared threat intelligence with government, campaign, and industry stakeholders," the company said.

No real people interacted with or widely shared disinformation

The company said it found no evidence that real people interacted with or widely shared the content generated by the operation.

Most of the identified social posts received few to no likes, shares or comments, the news release said. Company officials also found no evidence of the web articles being shared on social media. The disinformation campaign landed on the low end of the Breakout Scale, which measures the impact of influence operations on a scale of 1 to 6. The Iranian operation scored a Category 2.

The company said it condemns attempts to "manipulate public opinion or influence political outcomes while hiding the true identity or intentions of the actors behind them." The company will use its AI technology to better detect and understand abuse.

"OpenAI remains dedicated to uncovering and mitigating this type of abuse at scale by partnering with industry, civil society, and government, and by harnessing the power of generative AI to be a force multiplier in our work. We will continue to publish findings like these to promote information-sharing and best practices," the company said.

Earlier this year, the company reported similar foreign influence efforts based in Russia, China, Iran and Israel that used its AI tools, but those attempts also failed to reach a significant audience.
