OpenAI Disrupts Iranian Influence Campaign with ChatGPT-Generated Fake News

Friday, 16 August 2024, 20:25

OpenAI has thwarted an Iranian influence campaign that utilized ChatGPT to produce fake news articles. This operation aimed to shape public opinion on sensitive issues in the U.S. and has raised significant concerns about the use of AI in misinformation. The initiative, named 'Storm-2035,' generated content across multiple platforms but failed to gain traction.

OpenAI Disrupts Iranian Influence Operation

OpenAI announced it had dismantled an Iranian influence campaign, dubbed Storm-2035, that leveraged ChatGPT to create fabricated news stories aimed at audiences in the United States.

The Campaign’s Tactics and Focus

  • The operation disseminated misleading narratives via five websites masquerading as legitimate news sources.
  • Topics included the U.S. presidential election, LGBTQ+ rights, and geopolitical tensions in Gaza.
  • Content also focused on high-profile political figures, misrepresenting their actions and statements.

Engagement and Impact

Despite the breadth of its output, OpenAI reported that the operation achieved little engagement on social media: most posts received few or no reactions, suggesting it had little effect on public discourse.

On the Breakout Scale, a framework for assessing influence operations published by the Brookings Institution, the campaign rated a Category 2, indicating minimal real-world impact.


This article was prepared using information from open sources in accordance with the principles of the Ethical Policy. The editorial team cannot guarantee absolute accuracy, as it relies on data from the referenced sources.

