Databricks Launches Real-Time ML for the Lakehouse

Databricks last week announced Databricks Model Serving, a service that brings simplified production machine learning natively to the Databricks Lakehouse Platform on AWS and Azure.

The new service is designed for businesses that want to leverage AI/ML to uncover insights from their data, make accurate, instant predictions that deliver business value, and drive new AI-led experiences for their customers. For example, a bank can quickly identify and combat fraudulent charges on a customer's account, and a retailer can instantly suggest complementary accessories based on a customer's clothing purchases.

However, implementing real-time ML systems has remained a challenge for many organizations because of the burden placed on ML experts to design and maintain an infrastructure that can dynamically scale to meet demand.

The data and AI firm says Databricks Model Serving removes the complexity of building and maintaining complicated infrastructure for intelligent applications. This means organizations can now leverage the Databricks Lakehouse Platform to integrate real-time ML systems across their business without the need to configure and manage the underlying infrastructure.

Databricks Model Serving

Under the hood, Databricks Model Serving integrates with Lakehouse Platform capabilities such as the Feature Store, for automated online feature lookups that prevent online/offline skew; MLflow integration, for fast and easy deployment of models; and unified data governance, for managing and governing all data and ML assets with Unity Catalog.
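The MLflow integration means a model registered in the Model Registry can be exposed as a serving endpoint through the Databricks Serving Endpoints REST API. The sketch below shows roughly what the request body for creating such an endpoint looks like; the workspace URL, token, endpoint name, and model name are placeholder assumptions, and the exact API fields should be checked against the Databricks documentation.

```python
import json

# Hypothetical values -- substitute your own workspace URL and token.
DATABRICKS_HOST = "https://<workspace>.cloud.databricks.com"
API_TOKEN = "<personal-access-token>"

def build_endpoint_config(endpoint_name, model_name, model_version):
    """Build the JSON body for creating a serving endpoint
    (POST /api/2.0/serving-endpoints)."""
    return {
        "name": endpoint_name,
        "config": {
            "served_models": [
                {
                    "model_name": model_name,        # registered MLflow model
                    "model_version": model_version,  # version in the Model Registry
                    "workload_size": "Small",        # serverless compute tier
                    "scale_to_zero_enabled": True,   # pay only for compute used
                }
            ]
        },
    }

# Example payload for a hypothetical fraud-detection model.
payload = build_endpoint_config("fraud-detector", "fraud_model", "1")
print(json.dumps(payload, indent=2))
```

In practice this payload would be POSTed to `{DATABRICKS_HOST}/api/2.0/serving-endpoints` with a bearer-token header; the service then provisions and scales the underlying compute automatically.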

In addition, deep integration with the Lakehouse Platform means businesses can access features such as data and model lineage, governance, and monitoring throughout the ML lifecycle, from experimentation to training to production.

Fully managed by Databricks, the service delivers highly available, low-latency model serving, letting businesses easily integrate ML predictions into their production workloads – with customers paying only for the compute they use. Organizations gain the reduced complexity of building and operating ML systems alongside native integrations across the lakehouse.
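Integrating predictions into a production workload then amounts to an HTTPS call against the endpoint's invocations URL. Below is a minimal sketch of building such a scoring request; the host, token, endpoint name, and feature fields are illustrative assumptions, not values from the announcement.

```python
import json
import urllib.request

# Hypothetical values -- substitute your own workspace URL, token, and endpoint.
DATABRICKS_HOST = "https://<workspace>.cloud.databricks.com"
API_TOKEN = "<personal-access-token>"
ENDPOINT_NAME = "fraud-detector"

def build_scoring_request(records):
    """Build the URL and JSON body for scoring a serving endpoint
    (POST /serving-endpoints/{name}/invocations)."""
    url = f"{DATABRICKS_HOST}/serving-endpoints/{ENDPOINT_NAME}/invocations"
    body = json.dumps({"dataframe_records": records}).encode("utf-8")
    return url, body

# Illustrative feature record for a hypothetical fraud model.
url, body = build_scoring_request([{"amount": 129.99, "merchant_id": 42}])
req = urllib.request.Request(
    url,
    data=body,
    headers={
        "Authorization": f"Bearer {API_TOKEN}",
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(req) would return the model's predictions as JSON;
# the call is left commented out because it requires a live workspace.
```

Because the endpoint is serverless, the same request path works whether the endpoint is serving one request per hour or thousands per second.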

Ultimately, organizations can manage the entire ML process in one place – from data preparation and experimentation to model training, deployment, and monitoring.

“Databricks Model Serving accelerates data science teams’ path to production by simplifying deployments, reducing overhead, and delivering a fully integrated experience directly within the Databricks Lakehouse,” said Patrick Wendell, co-founder and vice president of engineering at Databricks.

“This offering will let customers deploy far more models, with lower time to production, while also lowering the total cost of ownership and the burden of managing complex infrastructure.”

Image credit: iStockphoto/Who_I_am