OpenAI Disrupts Iranian Influence Campaign with ChatGPT-Generated Fake News
OpenAI announced that it disrupted an Iranian influence operation, known as Storm-2035, which used ChatGPT to generate fabricated news stories aimed at audiences in the United States.
The Campaign’s Tactics and Focus
- The operation disseminated misleading narratives via five websites masquerading as legitimate news sources.
- Topics included the U.S. presidential election, LGBTQ+ rights, and geopolitical tensions in Gaza.
- Content also focused on high-profile political figures, misrepresenting their actions and statements.
Engagement and Impact
Despite the breadth of the operation, OpenAI reported that it gained little traction on social media. Most posts received few or no reactions, suggesting the campaign had little effect on public discourse.
On the Breakout Scale, a framework for assessing influence operations published by the Brookings Institution, the campaign rated a Category 2, indicating activity across multiple platforms but no evidence of meaningful real-world impact.