Charleston Leader

Wednesday, December 18, 2024

Bipartisan VET AI Act aims to create guidelines for third-party audits of artificial intelligence

Senator Shelley Moore Capito, U.S. Senator for West Virginia | Official U.S. Senate headshot

U.S. Senators Shelley Moore Capito (R-W.Va.) and John Hickenlooper (D-Colo.), both members of the Senate Commerce, Science, and Transportation Committee, have introduced the Validation and Evaluation for Trustworthy Artificial Intelligence (VET AI) Act.

The bipartisan bill directs the National Institute of Standards and Technology (NIST) to collaborate with federal agencies and stakeholders across industry, academia, and civil society to develop detailed specifications, guidelines, and recommendations. These would be used by third-party evaluators working with AI companies to provide robust independent external assurance and verification of how their AI systems are developed and tested.

“This commonsense bill will allow for a voluntary set of guidelines for AI, which will only help the development of systems that choose to adopt them. I look forward to getting this bill and our AI Research Innovation and Accountability Act passed out of the Commerce Committee soon,” Senator Capito said.

Currently, AI companies make claims about how they train their AI models, conduct safety red-team exercises, and carry out risk management without any external verification.

The VET AI Act aims to create a pathway for independent evaluators, similar to those in the financial industry, to work with companies as neutral third parties and verify that their development, testing, and use of AI comply with established guardrails. As Congress moves toward establishing AI regulations, evidence-based benchmarks for independently validating AI companies' claims about safety testing are expected to become more essential.

Specifically, the bill directs NIST, in coordination with the U.S. Department of Energy and the National Science Foundation (NSF), to develop voluntary specifications and guidelines for developers and deployers of AI systems. These guidelines will cover internal assurance processes as well as collaboration with third parties on external assurance, including verification and red-teaming of AI systems. Considerations will include data privacy protections, mitigations against potential harms from an AI system, dataset quality, and governance processes throughout the development lifecycle.

Additionally, the bill proposes establishing a collaborative Advisory Committee to review and recommend criteria for individuals or organizations seeking certification to conduct internal or external assurance of AI systems. NIST is also required to conduct a study examining various aspects of the AI assurance ecosystem, including current capabilities, methodologies used, facilities or resources needed, and overall market demand.

Full text of the bill can be found here.

# # #
