Artificial Intelligence and Elections: The Struggle Against Deepfakes

Monday, 2 September 2024, 11:30

Artificial intelligence has emerged as a critical factor in elections globally, as deepfake technology poses risks to electoral integrity. Detecting AI-generated content remains difficult, particularly in the Global South. Biases in training datasets limit the effectiveness of current detection tools, creating challenges for journalists and researchers confronting a surge of disinformation.
Source: WIRED

Artificial Intelligence's Role in Elections

Artificial intelligence is reshaping elections worldwide, making it increasingly difficult to identify AI-generated content amid rising disinformation. As machine learning evolves, many countries struggle to counter the influence of deepfake technologies, which distort public perception and erode voter trust.

The Detection Dilemma

Despite a surge in generative AI applications, tools designed to identify manipulated media are often biased towards certain demographics, rendering them less effective in regions like Bangladesh and Senegal. Sam Gregory, a program director for the nonprofit Witness, highlights the limitations of current models, which are primarily trained on Western datasets. This results in a high rate of false positives and negatives, jeopardizing the integrity of media verification.

Addressing Global Inequities

  • AI Detection Models: Bias Concerns - Many tools rely on data from predominantly English-speaking, Western populations, limiting their application globally.
  • Quality of Media - Inferior-quality video and audio from common-use devices in the Global South complicates the detection process.
  • Response Delay - Sending data to universities for verification introduces lag time that may facilitate the spread of misinformation.

Moving Beyond Detection

Detection, however, is only part of the problem. Sabhanaz Rashid Diya argues for building a trustworthy information ecosystem: funding should prioritize news organizations and institutions that empower communities and foster resilience against misinformation, rather than detection technologies alone.


This article was prepared using information from open sources in accordance with the principles of Ethical Policy. The editorial team is not responsible for absolute accuracy, as it relies on data from the sources referenced.

