4 Predictions for 2023: From the Great Correction to Practical AI

After years of irrational exuberance around artificial intelligence, with moonshot projects like self-driving cars, we are now entering the era of the Great Correction - one in which companies take more realistic approaches to artificial intelligence (AI) and its attendant machine learning (ML) models, algorithms and neural networks. This notion of Practical AI is set to rise in 2023.

Under the umbrella of practicality, companies will strategically rethink how they use artificial intelligence, an attitudinal shift that will filter down to implementation, AI and machine learning model management, and governance. Here are my predictions for Practical AI in 2023:

1. Novelty applications will be out, practical applications will be in

Generative AI has been a big buzzword lately, with slick image generation capabilities grabbing headlines. While it isn't a new technology, 2023 could be the year when practical uses of it take off.

A practical use case for Generative AI can be found in open banking. While open banking is rising in prominence in Asia Pacific, banks still face the challenge of collecting a corpus of data large enough to build real-time, customer-aware analytics. Generative AI can be applied practically to produce realistic, relevant transaction data for developing real-time credit risk decisioning models. This could greatly benefit buy now, pay later (BNPL) lenders in Asia Pacific, which are currently exposed to high default rates due to inadequate analytics, jeopardizing open banking's potential to better serve the underbanked in credit evaluation.
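To make the idea concrete, here is a minimal sketch (not FICO's method) of generating synthetic transaction records by fitting a simple generative model to a small corpus of real data. All feature names, distributions and parameters are hypothetical stand-ins:

```python
# Sketch: fit a generative model to a small corpus of "real" transactions,
# then sample synthetic ones for model development.
# All feature names and numbers below are hypothetical.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Stand-in for real transaction features: amount, hour of day, merchant risk score.
real = np.column_stack([
    rng.lognormal(3.0, 1.0, 500),   # transaction amount
    rng.integers(0, 24, 500),       # hour of day
    rng.beta(2, 5, 500),            # merchant risk score
])

# Fit a simple generative model (a Gaussian mixture) to the real corpus...
gmm = GaussianMixture(n_components=4, random_state=0).fit(real)

# ...then sample as many synthetic transactions as the modeling team needs.
synthetic, _ = gmm.sample(n_samples=2000)
print(synthetic.shape)  # (2000, 3)
```

In practice, production-grade synthetic data would use far richer generative models and validation of realism, but the workflow - fit on scarce real data, sample at volume - is the same.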

2. AI and ML development processes will become productionalized

Practical AI is incompatible with the modus operandi that many data science teams fall into: building bespoke AI models and putting them into production without spending enough time on making them production-ready. This leads to teams clawing models back or, worse, letting them run with unforeseen and unmonitored consequences.

To achieve production-quality artificial intelligence, the development processes themselves will need to be stable, reliable and productionalized. This comes back to model development governance, frameworks for which will increasingly be provided and facilitated by new artificial intelligence and machine learning platforms now entering the market. These platforms will set standards, provide tools, define application programming interfaces (APIs) for properly productionalized analytic models, and deliver built-in capabilities to monitor and support them.

We will see artificial intelligence platforms and tools increasingly become the norm for facilitating in-house Responsible AI development and deployments, providing the necessary standards and monitoring.

3. Proper model package definition will improve the operational benefits of AI

Productionalizing AI includes directly codifying, during the model creation process, how and what to monitor in the model once it’s deployed. Setting an expectation that no model is properly built until the complete monitoring process is specified will produce many downstream benefits, not the least of which is smoother artificial intelligence operations.

These benefits include enabling AI platforms to reduce model management struggles and producing machine learning models that are transparent and defensible.
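One way to picture "no model is properly built until monitoring is specified" is a model package that carries its monitoring specification from creation. This is a hypothetical sketch, not any particular platform's API; every field name is illustrative:

```python
# Sketch: a model "package" that codifies, at creation time, what to
# monitor once the model is deployed. All field names are hypothetical.
from dataclasses import dataclass


@dataclass
class MonitoringSpec:
    input_features: list        # features whose drift should be tracked
    score_range: tuple          # expected bounds on the model's output
    alert_threshold: float = 0.1  # e.g. max tolerated distribution shift


@dataclass
class ModelPackage:
    name: str
    version: str
    monitoring: MonitoringSpec  # required: no monitoring spec, no package

    def is_production_ready(self) -> bool:
        # Enforce the expectation: a model without a complete
        # monitoring specification is not considered built.
        return bool(self.monitoring.input_features)


pkg = ModelPackage(
    name="credit_risk_v1",
    version="1.0.0",
    monitoring=MonitoringSpec(
        input_features=["utilization", "delinquency_count"],
        score_range=(300, 850),
    ),
)
print(pkg.is_production_ready())  # True
```

Because the monitoring contract travels with the model artifact, a serving platform can wire up drift and score-range checks automatically at deployment rather than bolting them on afterward.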

4. There will be a handful of enterprise-class AI cloud services

Clearly, not every company that wants to safely deploy AI has the resources to do so. The software and tools required can simply be too complex or too costly to assemble piecemeal. As a result, only about a quarter of companies globally have AI systems in widespread production.

To solve this challenge, 2023 will see the emergence of a handful of enterprise-class AI cloud services that will provide industry-specific leverage for getting to market quickly, safely and responsibly. Readily accessible via API connectivity, these professional AI software offerings will allow companies to develop, execute and monitor their models and algorithms while also demonstrating proper AI governance and recommending when to drop down to a simpler model to maintain trust in decision integrity.

Scott Zoldi, chief analytics officer at FICO, wrote this article.

The views and opinions expressed in this article are those of the author and do not necessarily reflect those of CDOTrends. Image credit: iStockphoto/tortoon