The Next Generation of AI Could Model Living Beings

The road to AI is paved with groundbreaking research and breakthroughs that culminated in a computer program defeating the world's best players at Go, a complex board game requiring a blend of intuition, creativity, and strategic thinking.

Today, the use of ML models in businesses is par for the course. Indeed, the ready availability of tools, algorithms, and pre-trained models means that the bulk of the work entails identifying the right datasets and preparing them for use.

Progress in AI continues apace, of course, regularly making the news, such as when models are applied to predict code for programmers or to explain jokes. These models are typically trained on extremely powerful hardware that can fairly be described as supercomputers – in an age where an entry-level smartphone is already more powerful than a supercomputer from 1988.

The next great AI leap

But what’s next for AI? Sure, tech giants such as Google are already seeking to scale up AI processing further by automating the splitting of single AI workloads across multiple racks of specialized AI chips. Elsewhere, more powerful AI hardware is on the horizon, promising to put trillion-parameter models within reach very soon.

But as AI makes its way deeper into our lives, is that all there is – bigger models that predict better? And can we expect another great AI leap soon, or is the future of AI tied inextricably to growth in computing prowess?

Earlier this year, I wrote about how AI scientist and entrepreneur Gary Marcus believes that the next level of progress in AI will materialize through “hybrid AI”: the melding of traditional deep learning with the symbolic reasoning used in software engineering.

Hybrid AI could help systems overcome their lack of understanding of how things work – a weakness demonstrated when a non-AI, rule-based system beat AI-based ones at the game NetHack. Well, two recent research studies appear to illustrate, if not validate, his point.
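To make the hybrid idea concrete, here is a minimal, hypothetical sketch – not Marcus's proposal or any real NetHack bot – in which a neural model handles fuzzy perception while hand-written symbolic rules supply the explicit logic the network lacks. The `neural_perception` stub stands in for a trained deep network.

```python
def neural_perception(pixels):
    # Stand-in for a trained vision model: returns (label, confidence).
    # In a real hybrid system, this would be a deep network's prediction.
    return ("door", 0.92)

def symbolic_policy(label, confidence, inventory):
    # Hand-coded domain knowledge, in the spirit of rule-based NetHack
    # bots: explicit if/then logic over discrete symbols.
    if confidence < 0.5:
        return "explore"
    if label == "door" and "key" in inventory:
        return "unlock door"
    if label == "door":
        return "search for key"
    return "move on"

label, conf = neural_perception(pixels=None)
print(symbolic_policy(label, conf, inventory={"key"}))  # unlock door
```

The division of labor is the point: the learned component deals with messy sensory input, while the symbolic layer encodes knowledge about how the world works that would be expensive to learn from data.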

Learning from animals

Have you ever wondered why quadruped animals learn to walk so quickly? It turns out that animals are born with muscle coordination networks located in their spinal cord, according to a report on Tech Xplore.

Learning to precisely coordinate leg muscles and tendons takes time, during which baby animals rely heavily on hard-wired spinal cord reflexes to avoid falling and hurting themselves. As they practice more advanced and precise muscle control, the nervous system gradually adapts to the young animal’s limbs, allowing it to keep up with the adults.

In a bid to model this, researchers built a four-legged robot the size of a dog and successfully trained it to walk in the short span of an hour. Crucially, the controlling computer that modeled a virtual spinal cord for the robot dog ran on an i7-based machine, while the robot itself ran on a Raspberry Pi drawing just five watts of power.
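A toy sketch can convey the flavor of such a virtual spinal cord – this is my own simplified illustration, not the study's actual controller. A central pattern generator (CPG) produces a rhythmic gait signal, and a crude random search tunes its parameters to reduce a stand-in "stumble" penalty, loosely mirroring how the robot adapts its gait within an hour:

```python
import math
import random

def cpg(t, freq, amp):
    # Central pattern generator: rhythmic target angle for one leg joint,
    # analogous to the hard-wired oscillations in an animal's spinal cord.
    return amp * math.sin(2 * math.pi * freq * t)

def stumble_penalty(freq, amp):
    # Toy stand-in for real sensor feedback: pretend the gait is most
    # stable near freq = 2.0 Hz and amp = 0.4 rad (made-up numbers).
    return (freq - 2.0) ** 2 + (amp - 0.4) ** 2

def tune(iterations=200, seed=0):
    # Simple hill climbing: keep any parameter tweak that stumbles less.
    rng = random.Random(seed)
    best = (rng.uniform(0.5, 4.0), rng.uniform(0.1, 1.0))
    best_cost = stumble_penalty(*best)
    for _ in range(iterations):
        cand = (best[0] + rng.gauss(0, 0.1), best[1] + rng.gauss(0, 0.05))
        cost = stumble_penalty(*cand)
        if cost < best_cost:
            best, best_cost = cand, cost
    return best

freq, amp = tune()  # parameters drift toward the stable gait
```

The real study is far more sophisticated, but the structure is similar: an innate rhythmic controller plus feedback-driven tuning, rather than learning locomotion from scratch.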

Drawing from developmental psychology

Imagine I show you a pen in my hand before placing my hand behind my back. Does the pen still exist? Common sense dictates yes, notes Susan Hespos, a professor of psychology at Northwestern University in the United States.

Even a two-month-old infant understands this, which is why a magic trick that appears to violate the rules of physics will make them stare significantly longer at the unexpected event – or burst out laughing.

Hespos pointed to a recent research study by the Google DeepMind team (we wrote about it here) that adopted developmental psychology concepts to give an AI model an intuitive understanding of physics.

“[The] deep-learning model that started with a blank slate did a good job, but the model based on object-centered coding inspired by infant cognition did significantly better,” she wrote in The Conversation.

“The latter model could more accurately predict how an object would move, was more successful at applying the expectations to new animations, and learned from a smaller set of examples.” (For those taking notes, this was managed with the equivalent of just 28 hours of video.)
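Here is a hedged illustration of what “object-centered coding” buys you – my own simplification, not DeepMind's model. When a scene is represented as discrete objects with positions and velocities rather than raw pixels, expectations such as continuity of motion become easy to state, and a “violation of expectation” (the thing that makes infants stare) becomes a simple check:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Obj:
    # Object-centered representation: each object is a slot with
    # explicit properties, not a blob of pixels.
    x: float
    y: float
    vx: float
    vy: float

def step(obj, dt=0.1):
    # Intuitive-physics expectation: objects move continuously.
    return replace(obj, x=obj.x + obj.vx * dt, y=obj.y + obj.vy * dt)

def surprising(before, after, dt=0.1, tol=1e-6):
    # Violation of expectation: the object is not where continuity
    # predicts -- the computational analogue of an infant's longer stare.
    pred = step(before, dt)
    return abs(pred.x - after.x) > tol or abs(pred.y - after.y) > tol

ball = Obj(x=0.0, y=1.0, vx=1.0, vy=0.0)
print(surprising(ball, step(ball)))               # False: expected motion
print(surprising(ball, Obj(5.0, 5.0, 1.0, 0.0)))  # True: the ball "teleports"
```

With a pixel-level blank slate, the model must first discover that objects exist at all; starting from object slots, it only has to learn how they behave – which is one plausible reading of why the object-centric model needed fewer examples.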

I liked how Hespos summed up the study, which I think applies to the robot research, too. She wrote: “It’s clear learning through time and experience is important, but it isn’t the whole story.”

Paul Mah is the editor of DSAITrends. A former system administrator, programmer, and IT lecturer, he enjoys writing both code and prose. You can reach him at [email protected].

Image credit: iStockphoto/torwai