The Information Technology Industry Council has issued recommendations on fulfilling the responsibilities of the National Institute of Standards and Technology to support the safe and trustworthy development and use of artificial intelligence technology.
ITI said on Friday that it recommended NIST work with international peers to develop a risk management framework for generative artificial intelligence systems to enhance consistency of approach, and consider the role of developers and deployers in increasing the transparency of artificial intelligence.
The global tech trade association also recommended that the agency ensure assessment and audit requirements are aligned with the risks posed by AI systems and understand the differences between cybersecurity red teams and AI red teams.
“We believe that generative AI RMFs or profiles, standards and guidance for AI red teams and model evaluation, and plans to participate in and advance international standards development are integral to advancing the development and deployment of trustworthy AI,” ITI said in response to NIST’s request for information.
In December, NIST solicited industry feedback on developing AI security guidance and best practices as part of the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence.
Join the Potomac Officers Club’s 5th Annual Artificial Intelligence Summit on March 21 to hear more from government and industry experts on cutting-edge artificial intelligence innovations. Register here.