Everywhere you look, it’s all about AI. You might think that implies we, as an ecosystem, are near peak AI, but we still have a long way to go on that front.
My evidence? The massive interest in AI in the B2B market is now spreading to the B2C mass market, and we are seeing the beginnings of “AI-washing”: marketing that implies brands and products involve artificial intelligence technologies, even when the connection is tenuous, because it has the potential to cast an innovative light on almost any brand.
AI-washing is gaining momentum as consumers are increasingly exposed to new features in, for example, digital photography, in retail stores or over many of the XaaS (anything-as-a-service) tools provided to the mass market. You’re likely to have come across it anytime you have logged a trouble ticket for your phone line.
Experience is Everything
Away from the marketing exuberance, Fujitsu is witnessing solid progress in enterprises integrating AI solutions within business processes, with a clear momentum from experimentation to production. Organizations with AI projects already under their belt have developed a mature understanding of the expertise and technology capabilities involved, and a more realistic view of what can be achieved. If those experiments were hard-fought, the risk-taking early adopters now have a competitive advantage in their industries.
These companies are starting to roll out AI solutions into their production workflows. In the process, they are discovering the need for an end-to-end service as they move from so-called “petri dish experiments” to integration into their business workflows and environments. They also face the first issues linked with data security. AI security breaches are now on the horizon, and organizations are waking up to the need to secure the reliability of their AI-based systems, against external attack as well as internal biases. This has become a strategic topic in AI.
The Ethics of AI
Biases point to the underlying ethics of AI. Algorithms learn from data, and data often reflects human decisions that are not always fair. Being able to show that automatic decisions taken by AI systems are based on fair underlying appraisals is crucial from this point on.
So much so, that we expect to see a rising demand for regulation of AI. Indeed, the EU has recently issued a consultation paper, “Draft Ethics guidelines for trustworthy AI”, which notes: “Trustworthy AI has two components: (1) it should respect fundamental rights, applicable regulation and core principles and values, ensuring an ‘ethical purpose’ and (2) it should be technically robust and reliable since, even with good intentions, a lack of technological mastery can cause unintentional harm” – principles that closely match Fujitsu’s focus on human-centric AI.
Transparency and ethics are one aspect of this debate, but AI in a GDPR context is still awaiting judicial clarification and holds the potential to disrupt the widespread development of AI technologies. This is about more than processes and Corporate Social Responsibility: for example, IDC predicts that algorithm opacity, decision bias, malicious use of AI, and data regulations will result in a doubling of spending in the next two years on relevant governance and compliance staff and what they call “explainability teams”. My own view is that justification and auditability are more relevant concepts here, within a context that privileges justice and due process.
Not that explaining AI is going to get any simpler. So far, AI hype has mostly been linked with relatively “simple” machine learning or the more convoluted neural networks underlying deep learning. If you thought that was complex, the next step is the rise of alternative algorithms such as Generative Adversarial Networks (GANs), in which two networks are trained against each other: one generates candidate data while the other learns to judge whether it is real. Although these will not be rolled out in production environments before 2021, expect ethical and regulatory issues to reach an even higher state of complexity as a result.
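To make the adversarial idea concrete, here is a minimal, purely illustrative sketch in Python: a one-dimensional “generator” (an affine transform of noise) tries to mimic samples from a target distribution, while a logistic “discriminator” tries to tell real from generated. The target distribution, hyperparameters and update rules are arbitrary choices for illustration, not a production GAN recipe.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Real data comes from N(4, 1). The generator maps noise z ~ N(0, 1)
# through g(z) = a*z + b; the discriminator D(x) = sigmoid(w*x + c)
# tries to distinguish real samples from generated ones.
a, b = 1.0, 0.0          # generator parameters
w, c = 0.1, 0.0          # discriminator parameters
lr, batch = 0.05, 64

for step in range(3000):
    z = rng.standard_normal(batch)
    x_real = 4.0 + rng.standard_normal(batch)
    x_fake = a * z + b

    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    w += lr * np.mean((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator step (non-saturating loss): ascend log D(fake).
    d_fake = sigmoid(w * x_fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

# After training, generated samples should drift toward the real data.
samples = a * rng.standard_normal(1000) + b
print("generated mean:", samples.mean())
```

The key point for the ethics discussion above: neither network is given an explicit description of the target, so the behaviour of the trained generator emerges from the contest between the two models, which is part of what makes such systems hard to explain and audit.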
And if that isn’t enough of a challenge for you, real-time data analysis will become the new field of experimentation. Most AI experimentation so far has involved use cases where it was realistic to process data offline, such as forensics or other lengthy processes that offline analysis could shorten. That’s changing, and we can see new avenues of AI experimentation in organizations requiring live analysis of data for safety issues and decision-making.
What data?
The unwritten assumption behind most of what you read about AI is that all organizations are sitting on a mountain of data that they are now about to dig into to extract actionable insights. But what about those that don’t have the required volume of data? Are they locked out of the opportunity to leverage AI? No – in fact, in 2019 one of the things we expect to see more of is the use of synthetic training data for AI models, combining small amounts of actual data with large amounts of simulated data.
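A toy version of that idea can be sketched in a few lines. In this invented scenario, only ten labelled examples exist per class; a simple Gaussian model is fitted to each class and sampled to produce a much larger synthetic training set, on which a minimal nearest-centroid classifier is then trained. The class positions, sample counts and model are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical scenario: only 10 labelled real examples per class, in 2-D.
real_a = rng.normal([0.0, 0.0], 1.0, size=(10, 2))
real_b = rng.normal([3.0, 3.0], 1.0, size=(10, 2))

def synthesize(real, n):
    """Fit a per-dimension Gaussian to the few real points, sample n synthetic ones."""
    mean, std = real.mean(axis=0), real.std(axis=0, ddof=1)
    return rng.normal(mean, std, size=(n, 2))

# Augment each class with 500 simulated samples.
train_a = np.vstack([real_a, synthesize(real_a, 500)])
train_b = np.vstack([real_b, synthesize(real_b, 500)])

# A minimal nearest-centroid classifier trained on the augmented data.
centroid_a, centroid_b = train_a.mean(axis=0), train_b.mean(axis=0)

def predict(x):
    return np.where(
        np.linalg.norm(x - centroid_a, axis=1)
        < np.linalg.norm(x - centroid_b, axis=1), "a", "b")

# Evaluate on fresh samples the model never saw.
test = np.vstack([rng.normal([0, 0], 1.0, size=(100, 2)),
                  rng.normal([3, 3], 1.0, size=(100, 2))])
labels = np.array(["a"] * 100 + ["b"] * 100)
accuracy = (predict(test) == labels).mean()
print(f"accuracy: {accuracy:.2f}")
```

The caveat, of course, is that synthetic data is only as good as the model that generates it: any bias in the handful of real examples is amplified, not corrected, by the simulation.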
On the subject of unwritten assumptions, it’s never entirely clear who is going to be doing all this experimentation and building. The required skills and specific job-scope definitions emerging around AI are evolving as enterprises shift from experimentation to production. The people who are dealing with these real-world implementations are gaining a significant understanding of the technologies and applications, but there’s a bottleneck. According to industry analysts Forrester, two-thirds of AI decision makers struggle with finding and acquiring AI talent, and 83 percent struggle with retention. Ironically perhaps, some firms are now looking at tackling the AI talent shortage by applying AI to automate AI workflows and also to replace data scientist activities. However, the talent shortage goes beyond technical and data science requirements. Businesses need to consider an organizational change in order to adapt their teams, knowledge and cadence to integrate technology into their existing and next-generation systems. This comes with a big cultural implication for organizations, as it means decision makers, HR, legal, sales and marketing experts all need to become more AI-savvy.
Within this scramble for talent, architecture expertise is as important as AI expertise. Analysts at research firm IDC said that AI-based IT implementation project automation will drive new demand for consulting, technical and business services from firms, like Fujitsu, that provide deep industry and functional expertise. As AI moves into production, the need for architecture expertise to build environments capable of handling AI will become both integral and paramount. Building an integrated, scalable platform that can be considered futureproof and does not create a bottleneck at a later stage will be a key requirement. Workload placement, for example – be it at the edge, on premises or in the cloud – is a crucial economic factor at a time when most deep learning algorithms still require enormous compute power to run at scale.
Mature Next Steps
I have outlined a number of areas where the recent tremendous excitement – during what has been the beginning of AI use in business – is morphing into a mature business-level realization of what is needed to bring the promise of the technology to fruition. This is the end of the beginning.
Laurent Heurtin, deputy head of Artificial Intelligence CoE, Fujitsu EMEA authored this article, which was originally published here.
The views and opinions expressed in this article are those of the author and do not necessarily reflect those of CDOTrends.