Singapore Doubles Down on Responsible AI

Singapore today launched the AI Verify Foundation to harness the collective power and contributions of the global open-source community, aiming to develop AI testing tools for the responsible use of AI.

Speaking at the ATxAI conference at Capella Singapore, Mrs Josephine Teo, Singapore’s Minister for Communications and Information, officially announced the launch.

“[AI] is the power of human-like intelligence, potentially a very high form of it at far reduced costs. This is especially valuable for Singapore, where human capital makes all the difference,” said Teo.

“If we can harness this power to make it augmented intelligence to support rather than to replace people, our citizens will have a lot to gain.”

AI Verify Foundation

According to IMDA, the Foundation will help to foster an open-source community to contribute to AI testing frameworks, code base, standards, and best practices. Additionally, it seeks to create a neutral platform for open collaboration and idea sharing on testing and governing AI.

This includes supporting both the development and use of AI Verify. Launched last year, AI Verify is an AI governance testing framework and software toolkit that was developed by IMDA in consultation with companies from various sectors.

The AI Verify Foundation has seven pioneering premier members: the Infocomm Media Development Authority (IMDA), Aicadium (Temasek's AI Centre of Excellence), IBM, Microsoft, Google, Red Hat, and Salesforce. These premier members will guide the strategic direction and development of the AI Verify roadmap.

There are also more than 60 general members on board, including well-known brands and tech firms such as Adobe, DBS, Meta, Huawei, SenseTime, and Singapore Airlines.

AI Verify toolkit

As part of the morning session, representatives from IBM, UBS, and Singapore Airlines were invited on stage to share their experiences using AI Verify.

“The packaging of AI Verify was amazing. It came as a Docker container. So we just deployed it. And that means in the first 15 minutes, you're getting an evaluation report,” said Anup Kumar, distinguished engineer and APAC CTO for Data and AI at IBM.

Kumar lauded the toolkit for its intuitiveness, highlighting its feedback on possible bias, explainability, and robustness. “We didn’t have a lot of back and forth. We were given the tools, our data scientists deployed it, and then we have a report out,” he said.

Kumar noted, however, that different models require different approaches. While the documentation is clear about which kinds of models are supported, he looks forward to broader model support as a member of the AI Verify Foundation.

“At the click of a button, the toolkit automatically generates a technical report that gives you detailed explanations, definitions, and illustrations that tell you how the model does in terms of its possible underlying bias, its robustness in the face of perturbation in its training data, as well as its feature importance that helps with the model explainability,” said Helen Wang, an AI model validation quantitative analyst at UBS.

Towards trustworthy AI

According to IMDA, AI Verify has attracted interest from over 50 local and multinational companies since it launched as a minimum viable product for its pilot last year.

AI Verify, now accessible to the open-source community, will provide benefits globally by offering a testing framework and toolkit in line with internationally recognized AI governance principles, such as those from the EU, OECD, and Singapore.

“The scale and pace of AI innovation in this new modern technology era requires at the very core, foundational AI governance frameworks to be made mainstream in ensuring the appropriate guardrails are considered while implementing responsible AI algorithmic systems into applications,” noted Ashley Fernandez, the chief data and AI officer at Huawei.

“AI Verify Foundation serves this core mission and as we progress as an advancing tech society substantiates the need to advocate for the deployment of greater trustworthy AI capabilities,” said Fernandez.

The open-source code for the AI Verify toolkit is publicly available.

Paul Mah is the editor of DSAITrends. A former system administrator, programmer, and IT lecturer, he enjoys writing both code and prose. You can reach him at [email protected].

Image credit: ATxSG