The use of AI has gone wide and deep within enterprises. It is facilitating everything from virtual patient care at CVS to the development of Nestlé Nespresso's digital baristas. Forrester predicts that in 2022 the use of real-time systems infused with AI will increase by 20%, removing the latency between insight, decisions, and results.
As AI becomes the beating heart of the business, a new demand for responsible and ethical AI arises. At the recent Chief Digital Officer Hong Kong Summit, business leaders in analytics shared their AI journeys and the rising concern over biased AI in Asia.
The journey of trust
To understand the issue of bias in AI, one should start by looking at the adoption journey from data analytics to immersive AI.
“The use of data is really a journey,” said Adam Wielowieyski, managing director, head of data & analytics at HKEX, Hong Kong’s market operator for trading equity, commodity, fixed income, and currencies. He said most enterprises started by managing transactional data to identify patterns and insights. Then, as data validity and integrity improved, businesses began to make decisions based on the identified trends and patterns.
“Over time, you have more confidence in the data [and the algorithms], and you started to believe the answers that it gives you,” he said. “This is when organizations managed to move into a predictive state.”
Wielowieyski said HKEX is also going through a similar journey. It monitors activity data from different trading platforms to identify patterns and ensure a fair and orderly market. With the rising data trust, it is starting to use data to identify new products and business opportunities.
Another local business seeing a broadening use of data and AI is The Hong Kong and China Gas Company (Towngas). The company’s general manager, business analytics and e-development Queenie Chan, said AI supports its maintenance operations.
“We use AI and video analytics to determine the pipe corrosion level,” said Chan. With data becoming more trustworthy, Towngas has also developed a spare-parts prediction engine using AI. It recommends the spare parts technicians will need for an on-site visit by predicting the likely maintenance issue in each household.
For global reinsurance provider Swiss Re, the use of data has always been the core of its business. The company develops risk assessment models for all lines of business—from the risk level of different drivers to the business impact of climate change—to help insurance companies predict and manage risk.
“We collect data from smart cities, financial institutes, utility providers. All these data help us to understand the risk that individuals or corporations are exposed to,” said Yannick Even, global analytics business partner, Swiss Re. “That’s basically what we do, transforming data into risk assessment.”
The domino effect of biased AI
With the ease of consolidating data from different sources and building algorithms, organizations worldwide are increasingly relying on these algorithms to run mission-critical operations, noted Geoff Soon, managing director for South Asia at Snowflake.
Soon said the company has recently worked with the Philippines government to roll out COVID-19 vaccinations across a population of 110 million. “We used analytics to work out how you can get that final 10% of the population vaccinated,” said Soon.
Nevertheless, the level of trust towards data and analytics is not necessarily the same as towards algorithms and AI, particularly when making major business decisions. Wielowieyski said the complexity of intertwined algorithms is creating a new set of challenges.
“Our governance team is [under] massive pressure to ensure data quality and integrity. As soon as one tiny thing goes wrong, you can [completely change] the direction of decision making,” he said. “Another part is about trusting the output of your black box and ensuring that you don’t have an inherent bias.”
“If the data changes during input, or if the model is doing something else [than what it was] designed for, things can get out of control really quickly,” added Even. “It’s very important to understand how to manage this live during the development cycle of an AI model to ensure that trust is not broken.”
Guides, governance, and data literacy
These challenges are not unique to HKEX and Swiss Re. Aiming to guide financial institutions in the responsible use of AI, the Monetary Authority of Singapore (MAS) recently released five white papers.
The objective is to establish a framework to quantify ethical practices in addition to existing qualitative practices. It also offers a methodology to determine how much transparency is needed to explain and interpret predictions of machine learning models. Swiss Re, together with Accenture, participated in developing the Fairness Assessment Methodology and applied it to insurance predictive underwriting.
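To make the idea of quantifying fairness concrete, the sketch below computes one common fairness metric, the demographic parity gap: the difference in approval rates between two applicant groups. This is an illustrative example only — the function, data, and group labels are hypothetical and do not come from the MAS papers or Swiss Re's methodology, which cover a far broader set of measures.

```python
# Minimal sketch of one fairness metric (demographic parity gap).
# All names and data below are hypothetical illustrations.

def demographic_parity_gap(predictions, groups):
    """Absolute difference in positive-outcome rates between two groups.

    predictions: list of 0/1 model outputs (1 = approved)
    groups: parallel list of group labels, e.g. "A" / "B"
    """
    rates = {}
    for label in set(groups):
        outcomes = [p for p, g in zip(predictions, groups) if g == label]
        rates[label] = sum(outcomes) / len(outcomes)
    a, b = rates.values()  # assumes exactly two groups
    return abs(a - b)

# Hypothetical underwriting decisions for two applicant groups:
# group A is approved 3 of 4 times (0.75), group B 1 of 4 (0.25).
preds = [1, 1, 0, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(f"Demographic parity gap: {demographic_parity_gap(preds, groups):.2f}")
# -> Demographic parity gap: 0.50
```

A gap of zero means both groups receive positive outcomes at the same rate; in practice, a methodology like the one MAS describes would weigh several such metrics alongside contextual judgment rather than rely on any single number.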
Even noted the company heavily relies on data, analytics, and AI to develop different risk models. Thus, tackling the issue of the “black box” in AI is a priority, and the company has developed a six-pillar global structure to ensure trust in AI.
These six pillars, comprising five enablement pillars and one analytics strategy pillar, help apply algorithms across different business units. The five enablement pillars — data foundation, governance, people and culture, data analytics platform, and partnership — are the foundations that ensure data integrity by embedding corporate governance and culture in the use of the latest technologies with the right partners.
In addition to the six-pillar structure, Even said Swiss Re has also developed an advanced analytics peer-review framework. It manages data integrity and AI model context throughout the ideation, development, production, and maintenance stages. He added that this model is made possible by increased data literacy among business users.
“Our domain experts are the actuaries and underwriters, who understand the business we work in. But they also need to understand what the AI model does, how it works, and what needs to be tracked,” he said. “They need to understand data bias and data quality because if they don’t, [the AI model] becomes a black box.”
As AI becomes immersive at Swiss Re, Even added, the company and the industry have started to transform. The AI models are prompting a new definition of transparency and intellectual property rights, and even raising questions about the company’s business model.
“How do you get rid of old processes that were not very ethical or not very fair?” he said. “It’s really up to the executives to review these decision-making processes about being a data-driven organization.”
“I think when you arrive at this level, this is when the real value of AI starts to happen. This is still a long journey, and we are not done. No organization is done with this journey,” Even concluded.
Sheila Lam is the contributing editor of CDOTrends. Covering IT for 20 years as a journalist, she has witnessed the emergence, hype, and maturity of different technologies but is always excited about what's next. You can reach her at [email protected].
Image credit: iStockphoto/kentoh