AI and Machine Learning: The US Government's Initiative for Ethical Algorithm Evaluation
At the 2023 Defcon hacker conference in Las Vegas, prominent AI tech companies partnered with algorithmic integrity groups to engage attendees in identifying flaws in generative AI platforms.
Participate in AI Evaluations
The US government backs this 'red-teaming' initiative, inviting citizens to contribute to the assessment of AI technologies.
- Red-Teaming Collaboration: In partnership with Humane Intelligence, organizers have issued a nationwide call for participants to evaluate AI office productivity software.
- Widespread Involvement: Open to developers and the general public, the initiative aims to democratize AI evaluations.
- In-Person Event: Qualified participants will join a red-team/blue-team exercise at the Conference on Applied Machine Learning in Information Security (CAMLIS) in Virginia.
Theo Skeadas, CEO of Tech Policy Consulting, emphasized that the average person using these models lacks the means to judge whether they are fit for purpose. The goal is to empower users to evaluate AI for their personal and professional needs.
Expanding Participation in AI Evaluation
Participants will measure the effectiveness of their evaluations against NIST AI 600-1, the generative AI profile of NIST's AI Risk Management Framework.
Rumman Chowdhury, founder of Humane Intelligence, said that NIST's ARIA program (Assessing Risks and Impacts of AI) leverages structured user feedback to deepen understanding of real-world AI applications. The partnership marks a step toward rigorous evaluation of generative AI and fosters collaboration with governments and NGOs globally.
As Skeadas notes, involving a broad community, including policymakers and civil society, is crucial on the path toward transparency in AI practices.
This article was prepared using information from open sources in accordance with the principles of Ethical Policy. The editorial team is not responsible for absolute accuracy, as it relies on data from the sources referenced.