Nvidia GPUs Now Available in Cloudera Private Cloud

Nvidia GPU support on Cloudera Data Platform (CDP) is now generally available for enterprise customers, meeting the summer timeline set out in an announcement earlier this year.

This means that CDP is now integrated with the RAPIDS Accelerator for Apache Spark. Created at Nvidia, RAPIDS is a suite of open-source libraries, layered on top of Nvidia’s CUDA parallel computing architecture, that enables GPU acceleration of data science pipelines.
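
As a rough illustration of the RAPIDS approach, the sketch below uses cuDF, one of the RAPIDS libraries, which mirrors the familiar pandas API while executing on the GPU. The file name and column names are illustrative assumptions, not details from the announcement.

```python
# Minimal sketch of the RAPIDS idea: cuDF keeps the pandas-style API
# but runs the work on the GPU via CUDA.
import cudf

# Read a CSV directly into GPU memory and run a familiar groupby.
taxi = cudf.read_csv("trips.csv")  # hypothetical dataset
fares = taxi.groupby("passenger_count")["fare_amount"].mean()
print(fares)
```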

With Apache Spark serving as the de facto unified engine for big data processing, data science, machine learning, and data analytics workloads, the Nvidia integration makes it easier for Cloudera customers to add GPU-accelerated processing to their data science pipelines and train models for data-driven insights.

Accelerate data pipelines

Enterprises can leverage the integration to accelerate data pipelines and push the performance limits of data and machine learning (ML) workflows, speeding AI adoption and delivering better business outcomes without changing existing code.
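
A minimal sketch of what "without changing existing code" looks like in practice, assuming a PySpark job submitted against a cluster where the RAPIDS Accelerator jar is already available: acceleration is switched on purely through Spark configuration, while the DataFrame code itself stays the same. The jar reference, data path, and column names below are illustrative assumptions.

```python
# The same PySpark job runs on CPU or GPU; the RAPIDS Accelerator is
# enabled via configuration only, so the DataFrame logic is unchanged.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("rapids-accelerated-etl")
    # Load the RAPIDS Accelerator plugin (the rapids-4-spark jar must be
    # on the classpath, e.g. passed via --jars to spark-submit).
    .config("spark.plugins", "com.nvidia.spark.SQLPlugin")
    .config("spark.rapids.sql.enabled", "true")
    # Standard Spark 3 GPU resource scheduling: one GPU per executor.
    .config("spark.executor.resource.gpu.amount", "1")
    .config("spark.task.resource.gpu.amount", "0.25")
    .getOrCreate()
)

# Ordinary DataFrame code -- identical whether or not the plugin is active.
df = spark.read.parquet("hdfs:///warehouse/transactions")  # hypothetical path
summary = (
    df.groupBy("region")
      .agg(F.sum("amount").alias("total"), F.count("*").alias("orders"))
      .orderBy(F.desc("total"))
)
summary.show()
```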

The CDS GPU system runs as a private cloud, with an annual license pegged at USD7,500 per GPU for the latest generation of Nvidia Ampere GPUs. Depending on configuration, Nvidia GPUs such as the A30 and A100 will run on enterprise servers from Cisco, Dell, and others.

In a virtual press conference last week, Nvidia compared a modern, CPU-only four-node cluster costing USD160,000 with the same configuration fitted with two Nvidia A30 GPUs for USD247,000. Nvidia says the latter offers substantial speedups for a comparatively small increase in cost.

According to Joe Ansaldi, a technical branch chief for research, applied analytics and statistics at the US Internal Revenue Service (IRS), the agency used CDP with Nvidia GPUs and saw a tenfold speed improvement in its workflows at half the typical cost of running them.

Though at least one ML researcher spent USD29,000 to build a multi-GPU system for machine learning, enterprises jumping on the data science bandwagon typically prefer complete, more scalable solutions. And enterprise demand appears to be surging.

“We run on more than 400,000 servers and have over five exabytes of data under management. Customers with machine learning and artificial intelligence work are screaming for GPUs,” said Sushil Thomas, vice president of machine learning at Cloudera.

Image credit: iStockphoto/metamorworks