Singapore has launched what it touts as the world's first AI governance testing framework and toolkit, which lets companies demonstrate responsible AI in an objective and verifiable manner.
A.I. Verify
Announced by Singapore’s Minister for Communications and Information Josephine Teo at the World Economic Forum Annual Meeting in Davos two weeks ago, A.I. Verify was designed to promote transparency between companies and their stakeholders through a combination of technical tests and process checks.
Developed by the Infocomm Media Development Authority (IMDA) and the Personal Data Protection Commission (PDPC), it is currently available as a minimum viable product. Companies can leverage it to measure and demonstrate how reliable and safe their AI products or services are.
Specifically, developers and owners can assess the performance of their AI systems against a set of principles through standardized tests. The Toolkit bundles open-source testing solutions for these technical tests, along with process checks for convenient self-assessment.
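To make "technical tests" more concrete, the sketch below shows the kind of fairness check such a test might compute, here a demographic-parity gap between groups. It is a hypothetical Python illustration only and does not use or represent the A.I. Verify Toolkit's actual tests or API.

```python
# Hypothetical illustration only: a minimal demographic-parity check of the
# kind a technical fairness test might perform. NOT the A.I. Verify API.
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Largest difference in positive-prediction rates across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return float(max(rates) - min(rates))

# Toy data: model decisions (1 = approved) and a sensitive attribute.
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

print(f"Demographic parity gap: {demographic_parity_gap(y_pred, group):.2f}")
# 0.75 (group A) - 0.25 (group B) = 0.50
```

A real governance test suite would report many such metrics alongside process checks, but the basic pattern of comparing model behaviour against a quantitative threshold is the same.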
The Toolkit will generate reports for developers, management, and business partners, covering the major areas that affect AI performance, such as transparency, safety, and resilience, as well as accountability and oversight of AI systems.
“If you put yourself in the shoes of an organization that deploys AI, even if they wanted to subscribe to this idea of trustworthy and reliable AI, even if they had the intention to be transparent with their stakeholders, how are they going to do that?” said Mrs. Teo at a media briefing at the launch.
“And that's how the idea for A.I. Verify came about. We decided that it would be helpful to organizations that make use of AI if they had a toolkit that would help in self-assessment, and they can do it voluntarily.”
According to a Channel NewsAsia report, she acknowledged that regulation needs to keep pace with fast-moving developments in the digital space. However, any rule-setting will require the involvement of both policymakers and technology leaders.
“You have to talk to the people who are at the forefront of implementing the technology. Otherwise, you're going to set up rules and regulations that may not be viable, or they're not effective.”
More details about A.I. Verify are available from the IMDA.
Image credit: iStockphoto/IGOR SVETLICHNYI