Deci Points to CPUs for AI

Data scientists should seriously consider using CPUs for machine learning, according to Israel-based AI specialist Deci. Last year, the company announced the development of a new set of image classification models that it says deliver more than twice the runtime performance of the most powerful publicly available models, such as Google’s EfficientNets.

To be clear, deep learning models run much more slowly on a CPU than on a GPU. That is why GPUs are traditionally the hardware of choice for ML workloads, while CPUs handle general-purpose computing tasks.
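The scale of that gap is easy to see first-hand. The sketch below is a minimal, non-rigorous PyTorch timing comparison (the model, input shape, and run counts are illustrative assumptions, not anyone’s published benchmark); on typical hardware, the GPU figure comes in many times lower:

```python
# Minimal CPU-vs-GPU inference timing sketch (illustrative, not rigorous:
# model, input shape, and run counts are arbitrary assumptions).
import time
import torch
from torchvision import models

model = models.resnet50(weights=None).eval()  # random weights; speed only
x = torch.randn(1, 3, 224, 224)

def bench(m, inp, runs=20):
    """Average forward-pass latency in milliseconds."""
    with torch.inference_mode():
        for _ in range(3):  # warm-up passes
            m(inp)
        if inp.is_cuda:
            torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(runs):
            m(inp)
        if inp.is_cuda:
            torch.cuda.synchronize()
    return (time.perf_counter() - start) / runs * 1000

print(f"CPU: {bench(model, x):.1f} ms/image")
if torch.cuda.is_available():
    print(f"GPU: {bench(model.to('cuda'), x.to('cuda')):.1f} ms/image")
```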

But by closing the gap between CPU and GPU performance for convolutional neural networks (CNNs), Deci argues that it lowers the barrier to adopting deep learning, frees up precious GPU resources for other types of AI applications, and further democratizes the use of AI.

Of course, that announcement was made a year ago. Why should data scientists bother now that the cryptocurrency downturn has eased the supply of GPUs?

CPUs for greater accessibility

In response to a query from CDOTrends, Yonatan Geifman, CEO and co-founder of Deci, noted that CPUs are still far more readily available at scale for processing requirements and “tend to be generally more power efficient”.

“In regards to AI workloads, we are seeing new generation CPUs catching up with GPUs. Look at Intel’s 4th Gen Sapphire Rapids, for instance: by optimizing the AI models which run on Intel’s new hardware, we can enable AI developers to achieve GPU-like inference performance on CPUs in production for both Computer Vision and Natural Language Processing (NLP) tasks,” said Geifman.
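Geifman’s point maps onto tooling Intel already ships. One published route to this kind of CPU-side optimization is the Intel Extension for PyTorch; the sketch below assumes that package is installed and uses an illustrative torchvision model, so treat it as a flavor of the workflow rather than Deci’s own pipeline:

```python
# Sketch of CPU inference through Intel Extension for PyTorch (ipex).
# Assumes the intel_extension_for_pytorch package is installed; bf16 paths
# benefit from AMX on 4th Gen Xeon (Sapphire Rapids) but run elsewhere too.
import torch
import intel_extension_for_pytorch as ipex
from torchvision import models

model = models.resnet50(weights=None).eval()  # illustrative model choice
model = ipex.optimize(model, dtype=torch.bfloat16)  # kernel/graph rewrites

x = torch.randn(1, 3, 224, 224)
with torch.inference_mode(), torch.autocast("cpu", dtype=torch.bfloat16):
    out = model(x)
print(out.shape)  # torch.Size([1, 1000])
```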

Geifman says his firm’s AutoNAC (Automated Neural Architecture Construction) engine will enable AI teams to easily design hardware-aware deep learning models that deliver powerful and efficient inference, even on older-generation CPUs.

“With these models, tasks that previously could not be carried out on a CPU because they were too resource-intensive can now be undertaken. Additionally, these tasks will see a marked performance improvement: by redesigning better models, the gap between a model’s inference performance on GPUs versus older-generation CPUs is cut in half, without sacrificing the model’s accuracy,” he said.
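AutoNAC itself is proprietary, so the redesign step cannot be reproduced here. As a generic stand-in that conveys the flavor of making a model cheaper to serve on CPUs, the sketch below applies PyTorch’s post-training dynamic quantization to a toy model (the model and numbers are assumptions for illustration; this is explicitly not Deci’s technique, which redesigns the architecture itself):

```python
# Generic illustration only: PyTorch post-training dynamic quantization
# (int8 weights for Linear layers). A stand-in example, not Deci's AutoNAC.
import os
import torch
import torch.nn as nn

model = nn.Sequential(  # toy classifier head, purely illustrative
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 512),
    nn.ReLU(),
    nn.Linear(512, 10),
).eval()

quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def size_mb(m, path="tmp.pt"):
    """Serialized model size in megabytes."""
    torch.save(m.state_dict(), path)
    mb = os.path.getsize(path) / 1e6
    os.remove(path)
    return mb

print(f"fp32: {size_mb(model):.2f} MB, int8: {size_mb(quantized):.2f} MB")
```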

Finally, Geifman says the ability to run AI models on CPUs could well allow AI applications to be deployed in far more places without the need for cost-prohibitive upgrades. In a nutshell, it can democratize AI and put it within the reach of even more organizations.

“CPU augmentation would allow startups with limited access to financial capital [to] be able to reap the benefits of Deep Learning inference without the risk of overspending. The practical use cases for this are endless: hospitals and medical centers, for instance, that wish to integrate AI into common procedures like X-rays could do this while still being fiscally responsible.”

Paul Mah is the editor of DSAITrends. A former system administrator, programmer, and IT lecturer, he enjoys writing both code and prose. You can reach him at [email protected].

Image credit: iStockphoto/master1305