AI's impact on productivity is undeniable. Yet we seem short-sighted about its ethical implications.
Rob LoCascio, chief executive of NASDAQ-listed customer engagement company LivePerson, argued that it is time we changed our approach.
At a roundtable on "ethical AI" that he hosted, LoCascio laid out the dilemma he saw for AI, which is pulled between two diverging views: it is seen either as an existential threat or as a transformer of lives.
One anecdote, involving a call center employee, stood out for this writer. She worked for a big U.S. company in the Dominican Republic, and her job was to sell BBQ grills over the phone to customers in the U.S.
She was then given an AI bot to help with her customer interactions. The result was spectacular, said LoCascio: she increased the number of grills she sold, doubled her salary, and was transformed from a call center worker into a "ChatBot manager."
“She deploys the AI, she watches it, and if it fails she steps in and improves it,” said LoCascio.
"She is managing it like an employee, and it is giving her scale on her time, and now she and her Bot both sell grills."
The story illustrates a win-win implementation of AI: one where AI does not replace the human but augments the human's work.
Asking the Hard Ethics Questions
While it is a good example of AI, LoCascio also warned of the serious implications of having no rigorous global ethical frameworks.
There are already examples of customers' "bad experiences." These were the result of corporations with patronizing views of their customers: they believed that their technology, often created by teams of white males, was unassailable. In other words, people simply had to get used to it.
“This is a very egotistical and self-centered way of looking at how technology can help the world,” said LoCascio, whose company has helped to found a not-for-profit lobby group called EqualAI.
The organization includes in its leadership Wikipedia founder Jimmy Wales and Miriam Vogel. Both worked for the Obama administration on equal opportunity in U.S. federal employment.
The growth-at-all-costs strategy that currently drives AI deployment treats ethics as a side matter. All this, LoCascio believed, is leading to a reckoning.
“The first thing we will see is CEOs shutting this AI down because they will say they are just not using it,” he said.
“You are going to see this happen first, and then governments will step in. Because you need a level playing field, and this will never happen with self-regulation; you need government regulations.”
One sticky issue in the ethical debate is responsibility. Who is ultimately responsible when an AI behaves unethically? Is it the brand in whose name the action took place, or the vendor who created the AI and built the algorithm?
LoCascio used the analogy of a bullet or gun manufacturer whose product is used in a crime. To what extent are they responsible?
He noted that these are questions that companies and regulators need to address.
Avoiding the Upcoming Showdown
Some companies are developing bias auditing as part of their AI solutions. They see it as a commercial advantage. However, LoCascio said he believed this was short-sighted.
“If tech companies do not put aside their perspective on competing, then we are going to have problems,” he said.
"That can create years of slowdown. It will go to the lawyers and regulators, and you will see a big showdown and that will hurt us."
Collaboration across the industry, he argued, was the way forward. This requires companies to be proactive on AI ethics, which would help build trust with consumers, reassure regulators, and keep AI adoption moving forward.
“I see the huge opportunity for AI as something that can sit beside us to help humans understand the world we live in better,” said LoCascio.
"If the machine can make me do my work better, it can also help me interpret the reality that we live in, and I think that is where we should focus our energies with AI.”