Confronted with complex ethical considerations stemming from increasingly sophisticated AI systems and a lack of clear regulation on their use, technology giants such as Google and Microsoft are turning down or attaching special conditions to AI-centric projects deemed ethically risky, according to Reuters.
In an example cited by the report, Google Cloud considered using AI to help a financial firm decide who to lend money to but turned down the idea after “weeks” of internal discussions due to the possibility of inadvertently perpetuating bias around race and gender.
Microsoft reportedly spent more than two years internally weighing the benefit of its voice mimicry technology, which could help restore the speech of people with impaired voices, against the risk of enabling political deepfakes, while IBM rejected a client request for an advanced facial recognition system.
Amid growing public scrutiny, firms such as Google, Microsoft, and IBM now have ethics committees that review new AI services right at the start.
In Google’s case, the ethics committee voted against the project after concluding that any AI system created to assess the creditworthiness of individuals would need to learn from past data and patterns. The risks of replicating discriminatory practices against marginalized groups were thought to be too high.
The Google committee also enacted a blanket policy to skip financial services deals that revolve around establishing creditworthiness “until such concerns could be resolved”.
The deliberations at these tech firms as they attempt to balance the lucrative use of AI with social responsibility underscore broader concerns around the technology’s expanding use.
But even though each ethics committee has as many as 20 employees drawn from various levels of the company, dissenters argue that such committees, by virtue of who they work for, cannot be truly independent or free of competitive pressure to approve projects.
Ultimately, as AI systems become more pervasive and easier to deploy, concerns over ethical use will only increase.
Last year, the Singapore Computer Society (SCS) launched a reference document entitled “AI Ethics & Governance Body of Knowledge”, or BoK, to aid in the responsible, ethical, and human-centric deployment of AI for competitive advantage. Built on Singapore’s voluntary Model AI Governance Framework, it incorporates the expertise of 60 individuals in the field of AI ethics and governance.
Image credit: iStockphoto/wildpixel