Ever since consumers started experiencing AI's wonders and potential horrors firsthand, responsible AI has become a hot-button topic.
It's still early days, but we already have two competing approaches to operationalizing responsible AI, and data scientists worry that AI innovation will be the victim.
Yet, as Eric Yu, SAS’s principal solutions architect, said on the sidelines of SAS Innovate 2023 in Singapore, it’s time to get the conversation going among multiple stakeholders.
The real test is in the operationalization
Responsible AI, also known as ethical or accountable AI, is not rudderless. The call to make AI systems transparent, fair, secure and inclusive is backed by principles.
Under its responsible innovation charter, SAS has six: human centricity, inclusivity, accountability, transparency, robustness, and privacy and security.
Generative AI tools like ChatGPT add new challenges; by definition, they generate new data. Then there is the issue of pre-trained models that may have been trained on biased data, leaving companies open to new lawsuits.
This was why SAS's Yu felt that the key to responsible AI lies in how we operationalize it. "In the past, there's been this movement for defining the principles and the values. But now, the objective is to identify how to operationalize those principles. It's about how you operationalize ethics in business processes and, at the same time, enable employees to take actions when appropriate," said Yu.
Start with your entire business
You can’t operationalize responsible AI without aligning all your stakeholders. That means not just your senior management or AI teams but, at the very least, all business groups.
"We need to have teams that could think about the systems or the capabilities we're building out, and that requires collaboration from all parts of the organization," said Yu.
That's because successfully operationalizing responsible AI needs the active participation of the business teams. They need to understand how AI could impact their decisions and actions.
"So often, we talk about the whole organizational readiness to address responsible AI. We need active participation from within our company. But that's only one side," said Yu.
In his presentation at the one-day conference, Yu also urged companies to look at the impact on society.
“We need to be mindful of the concerns and the risks [to society]," said Yu. This matters because regulators are increasingly interested in how AI products affect citizens and the rule of law.
Yu noted that it was up to "individual organizations to determine the risk appetites for the systems they're putting out into the market."
"For SAS, we were in all parts of the globe. We do business within all industry sectors of industries…and we then collect the necessary information to help our decision-makers determine whether we want to pursue those opportunities or not," said Yu.
The two approaches to managing responsible AI
SAS has a Data Ethics Practice, which the company characterizes as the "guiding hand" for its responsible AI efforts (SAS broadens the term to responsible innovation and includes its Data for Good efforts).
Within its latest SAS Viya iteration, the company is also making it easier for its customers to meet the responsible AI principles with bias detection, explainability, decision auditability, model monitoring, governance and accountability.
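To make one of those capabilities concrete, a bias-detection check often boils down to comparing a model's outcomes across sensitive groups. The sketch below is a minimal, generic illustration of demographic parity in Python; it is not SAS Viya's implementation, and the data, group labels, and threshold are hypothetical.

```python
# A minimal sketch of a bias-detection check via demographic parity.
# Illustrative only: data, groups, and threshold are hypothetical.
import pandas as pd

# Hypothetical loan-approval predictions with a sensitive attribute.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,    1,   0,   1,   0,   0,   0,   1],
})

# Approval rate per group.
rates = df.groupby("group")["approved"].mean()

# Demographic parity gap: difference between best- and worst-treated groups.
parity_gap = rates.max() - rates.min()
print(rates.to_dict(), f"parity gap = {parity_gap:.2f}")

THRESHOLD = 0.2  # hypothetical policy tolerance
if parity_gap > THRESHOLD:
    print("Bias check failed: route model to human review before deployment.")
```

In a governance pipeline, a check like this would run automatically before deployment so a failing model never reaches production unreviewed.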
SAS also sees value in driving industry efforts. It recently hosted a meeting of the National Artificial Intelligence Advisory Committee (NAIAC), which advises the U.S. President and the National AI Initiative Office.
The company also contributed to the Business Roundtable’s Roadmap for Responsible Artificial Intelligence, which laid out 10 core principles. The Business Roundtable reads like a who's who of U.S. industry, with SAS prominently represented.
Recently, SAS made good on its responsible AI promises, collaborating with Erasmus University Medical Center, a leading European academic hospital, and Delft University of Technology. The effort culminated in the launch of the first Responsible and Ethical AI in Healthcare Lab (REAHL).
Yet an opt-in, collaborative strategy is only one way to go. The E.U. is taking another approach with its AI Act.
Many see this as a rerun of GDPR's approach to privacy, but it's not. In fact, like the U.S. collaborative approach, it tries not to stifle innovation by focusing on the use case. But it also means that even if a company follows all the principles, the use case and the final application will still matter.
“If you have the best intentions and have the most robust responsible AI office and oversight committees with more robust and safe applications and platforms, and yet the people who are actually implementing it are not in alignment with those values, then you're going to see issues come up,” said Yu.
The rise of explainability
The problem with a legal or principles-based framework is often precedent. Just look at the challenges lawyers faced with discovery arising from conflicting judgments (even though they were dealing with predominantly structured data).
This is why Yu pointed to explainability as a significant issue in the coming years. Even here, there are challenges.
For example, Yu noted that we need to understand “who is asking for clarification on transparency.” He explained that data scientists with deep knowledge of mathematical models view AI models differently from business users. This is why Yu believed context would matter for explainability.
"That's a different level of higher-level extraction. And that's one of the things that we're trying to build into our SAS Viya platform, i.e., the ability to showcase and explain what these models are doing within the context of the role that you are in," he said.
More than tech chatter
Yu did not want to comment on which approach to responsible AI (industry collaboration or regulation) was better. While SAS is part of the U.S. collaborative approach, he acknowledged that countries like Singapore and China are also actively participating in developing legal frameworks.
"So it's going to be a balance. On the one hand, we see the value and the need for a regulatory perspective from the E.U. regions. But at the same time, you don't want to stop innovation. I think the E.U. is acknowledging that as well," said Yu.
What's clear is that now is the right time to start the conversation. Companies are still chasing clear AI use cases even as generative AI becomes widely available. Yet we are already talking about operationalizing responsible AI, having experienced generative AI firsthand and seen the value of broadening our AI use cases.
Besides, the lessons we learn from operationalizing responsible AI will matter as we tackle generative adversarial networks (GANs, one of the three popular approaches to generative AI) and slowly journey toward artificial general intelligence (AGI).
Winston Thomas is the editor-in-chief of CDOTrends and DigitalWorkforceTrends. He’s a singularity believer, a blockchain enthusiast, and believes we already live in a metaverse. You can reach him at [email protected].
Image credit: iStockphoto/wildpixel