Fighting Bias in AI With Ethics Training

The use of artificial intelligence (AI) has skyrocketed over the last few years and is set to accelerate further. According to a recent study, total AI investment reached USD77.5 billion in 2021, a sharp increase from USD36 billion in 2020.

But even as organizations rush to roll out machine learning (ML) at scale, the challenge of bias remains a thorny issue with no clear answer.

The problem of bias

As we noted before, AI brings with it a set of risks in the form of ethical quandaries and skewed outcomes stemming from bias. While organizations don’t intentionally train ML models to manipulate outputs, the inequalities inherent in the world around us practically guarantee that the data used to build these models will end up perpetuating bias or amplifying prejudice in some way.

Ultimately, the fault can hardly be pinned on the data scientists building the models, the business analysts evaluating the results, or the executives acting on the insights. Yet ignoring bias is not an option either, due to the very real possibility of legal repercussions or other unintended consequences arising from biased AI models.

When confronted with complex ethical considerations and the potential for bias stemming from increasingly sophisticated AI systems, even technology giants such as Google and Microsoft have reportedly turned down AI-centric projects or attached special conditions to them.

Training must start in school

Ultimately, addressing bias cannot be a one-time measure; it necessitates a holistic, cross-industry approach.

According to Kevin Goldsmith, the chief technology officer of Anaconda, a highly popular data science platform, one way to prepare for the future of AI is to address its social and ethical implications as part of formal education.

In a contributed piece on Datanami, Goldsmith noted that a recent study conducted by his firm found that only 17% of data science educators said they were teaching about ethics, and 22% said they were teaching about bias. Clearly, there is ample room for improvement.

Universities should look to more established professional fields for guidance, Goldsmith suggests. Just as the Code of Medical Ethics has become required learning for those seeking professional accreditation as doctors and nurses, he calls for the creation of dedicated centers in education to guide teaching on topics such as fairness and impartiality.

“More educational institutions should follow the University of Oxford in creating dedicated centers that draw on multiple fields, like philosophy, to guide teaching on fairness and impartiality in AI,” he wrote.

Continuous upskilling at work

Yet formal education on AI ethics can only be the first step. Realistically, employees must receive similar training in the workplace.

Unfortunately, data from Anaconda’s 2021 “State of Data Science” report shows that this is not happening: a majority (60%) of data science organizations have either yet to implement plans to ensure fairness and mitigate bias in data sets and models, or have failed to communicate those plans to staff.

“Boilerplate ethics training” is not adequate, says Goldsmith. Efforts should permeate the organization, be backed by regular assessments, and be treated as a boardroom priority.

Training on AI ethics should also form the cornerstone of organizations’ long-term recruitment strategies. Goldsmith believes this will create a feedback loop in which companies use their training programs and hiring priorities to signal to universities the kinds of skills they are looking for.

Finally, employees must know that AI ethics genuinely matters to the organization and is not a public relations ploy. For that to happen, Harvard Business Review says organizations should empower workers to “raise important questions at crucial junctures and raise key concerns to the appropriate deliberative body”.

The issue of bias and ethics in AI deployments is generating a swell of headlines and increased regulatory scrutiny around the world, and it is unlikely to fade away anytime soon. As organizations make their initial forays into AI, it is time to start a serious conversation about the implications of bias – and do something about it.

Or as Goldsmith summed up: “By offering training on these topics today, leaders can help build a workforce that is ready and able to confront issues that will only become more complex.”

Paul Mah is the editor of DSAITrends. A former system administrator, programmer, and IT lecturer, he enjoys writing both code and prose. You can reach him at [email protected].

Image credit: iStockphoto/wildpixel