Regulation on Artificial Intelligence and Cloud Computing: A New Era of Compliance
US Proposes New Reporting Regulations for AI and Cloud Computing
The US Department of Commerce’s Bureau of Industry and Security (BIS) plans to introduce mandatory reporting requirements for developers of advanced artificial intelligence models and cloud computing providers. The proposed rules would require companies to report on development activities, cybersecurity measures, and results from red-teaming tests, which assess risks such as AI systems aiding cyberattacks or enabling non-experts to create hazardous materials.
“This proposed rule would help us keep pace with new developments in AI technology to bolster our national defense and safeguard our national security,” Gina M. Raimondo, secretary of commerce, stated.
Implications for Enterprises and Innovation
The proposed regulations follow a pilot survey by the BIS earlier this year and come amid global efforts to regulate AI. Following the EU’s landmark AI Act, countries such as Australia are introducing their own proposals to oversee AI development and usage.
According to Charlie Dai, VP and principal analyst at Forrester, enterprises will need to invest in additional resources to meet the new compliance requirements, which may include expanding compliance workforces and implementing new reporting systems.
Concerns about stifling innovation have also surfaced. Swapnil Shende, associate research manager at IDC, notes, “The proposed AI reporting requirements seek to bolster safety but risk stifling innovation.” The comment underscores the need to balance compliance with creativity in the evolving AI landscape.