The Barrier to AI Has Never Been Lower. It Has Never Been Higher

Artificial intelligence is everywhere today. From voice assistants and text-to-image generators to AI-powered recommendations at our favorite online retailers, we now encounter it practically on a daily basis – often without being aware of it.

Indeed, AI software is expected to grow 50 percent faster than the overall software market over the next two years, according to a recent report by analyst firm Forrester.

AI is easier than ever

As tech firms and researchers seek to democratize the technology by releasing their research or models for free, the barrier to AI has never been lower. Just this week, we reported on how technology firms such as Microsoft and Adobe are bringing new AI-infused capabilities to mainstream apps such as Office 365 and Photoshop.

But even those who want to develop their own AI-powered apps will find that it is relatively easy to take a crash course in Python programming, install Jupyter Notebook, import data-handling libraries such as pandas and NumPy, and get the lowdown on machine learning from readily available online course materials.
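As a taste of what such a crash course covers, here is a minimal sketch of the kind of data handling done with pandas and NumPy (the dataset and column names below are made up purely for illustration):

```python
import numpy as np
import pandas as pd

# A small, made-up dataset of the kind a beginner might load from a CSV file
# (in practice: df = pd.read_csv("sales.csv"))
df = pd.DataFrame({
    "product": ["widget", "gadget", "widget", "gadget"],
    "units": [10, 5, 8, 12],
    "price": [2.50, 4.00, 2.50, 4.00],
})

# Derive a revenue column using vectorized, NumPy-style arithmetic
df["revenue"] = df["units"] * df["price"]

# Summarize revenue per product: the bread and butter of data exploration
summary = df.groupby("product")["revenue"].sum()
print(summary)
```

The same few operations – load, transform, aggregate – underpin most introductory machine learning work, since preparing data this way is usually the first step before any model is trained.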

Those impatient to get started, but armed with a modicum of technical knowledge, can even jumpstart their journey by leveraging existing open-source AI tools or packages to transcribe audio right from their laptops, or perhaps put together an AI-powered motorized cannon designed to send ouch-inducing Lego bricks sliding underfoot as they walk by.

For enterprises with more resources, there are also AI-powered IoT systems such as the Nvidia Jetson AGX Orin module for industrial or edge AI deployments. Measuring slightly larger than a Raspberry Pi, a Jetson AGX Orin 32GB unit packs processing power that would have been unimaginable just a decade ago, capable of a staggering 200 trillion operations per second.

It is also harder

Ironically, it is also getting increasingly difficult to understand the latest AI developments. In a Reddit post titled “The current and future state of AI/ML is shockingly demoralizing with little hope of redemption”, an anonymous writer with about five years of experience as an AI/ML engineer lamented the state of AI today.

“Sometimes I wonder what the original pioneers of AI... would think if they could see the state of AI that we’ve gotten ourselves into. 67 authors, 83 pages, 540 [billion] parameters in a model, the internals of which no one can say they comprehend with a straight face, 6,144 TPUs in a commercial lab that no one has access to, on a rig that no one can afford, trained on a volume of data [that no human could] process in a lifetime.”

The post has generated over 300 comments to date, an incredible number given the highly specialized niche of machine learning.

The anonymous poster also highlighted how mainstream AI adoption brings with it the need for responsible AI practices. Unfortunately, the common practice today is to paper over such concerns “with a single page” containing the same rudimentary – and rehashed – ideas on ethics, but no real attempt at a solution.

Are we on the wrong track?

But are we even on the right track? As I previously wrote, deep learning might be experiencing diminishing returns despite the prodigious resources poured into it, according to AI scientist and entrepreneur Gary Marcus. In Marcus's view, deep learning is fundamentally a technique for recognizing patterns, well suited for rough-and-ready results but one that struggles when a single mistake is intolerable.

And while some, such as DeepMind founder Demis Hassabis, believe that our current approach will inevitably bring us closer to achieving artificial general intelligence (AGI), others are less sanguine.

In a recent interview, Yann LeCun, chief AI scientist at Meta, said he “views with great skepticism” many of the most successful avenues of research in deep learning at the moment. Of current efforts in AI, he said: “I think they're necessary but not sufficient.”

But perhaps the greatest concern, as articulated by our Reddit poster, is the unfettered use of AI. In essence, current generations of AI models are trained on swaths of data generated by humans. But if such systems are permitted to continually generate new data that is then reused to train the next generation of AI models, what happens four or five generations down the road?

Another commenter, tongue in cheek, summarized this concern using the facebook/bart-large-cnn summarization model.

“Eventually we encounter this situation where the AI is being trained almost exclusively on AI-generated content. By the time that happens, what will we have lost in terms of the creative capacity of people, and will we be able to get it back?”
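One way to build intuition for this feedback loop is a deliberately tiny toy simulation – an illustration only, not a claim about any real model. Fit a simple distribution to some data, then let each successive "generation" learn only from samples drawn from the previous generation's fit:

```python
import numpy as np

rng = np.random.default_rng(42)

# Generation 0: "human" data, drawn from a standard normal distribution
data = rng.normal(loc=0.0, scale=1.0, size=20)
initial_var = data.var()

# Each generation fits a mean and spread to the previous generation's
# output, then produces its own samples: data about data about data...
for generation in range(500):
    mu, sigma = data.mean(), data.std()
    data = rng.normal(loc=mu, scale=sigma, size=20)

final_var = data.var()
print(f"variance: {initial_var:.3f} -> {final_var:.6f}")
```

In this toy setup, the fitted spread slightly underestimates the true spread at every step, so the errors compound and the samples gradually lose the variation present in the original data. That, writ small, is the worry the commenter raises about AI trained almost exclusively on AI-generated content.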

Paul Mah is the editor of DSAITrends. A former system administrator, programmer, and IT lecturer, he enjoys writing both code and prose. You can reach him at [email protected].

Image credit: iStockphoto/Radachynskyi