There are many scenarios where we need to build and run a large number of machine learning models. For example: in retail, a separate revenue forecast model is needed for each store and brand; in supply chain, inventory optimization is done for each warehouse and product; in the restaurant industry, demand forecasting models are needed across thousands of restaurants. This pattern is commonly referred to as Many Models. …
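To make the pattern concrete, here is a minimal sketch of Many Models training: the data is grouped by key and one model is fit independently per group. The records, the (store, brand) keys, and the naive per-group mean "model" are all illustrative placeholders, not a specific Azure API.

```python
# Sketch of the Many Models pattern: one independent model per (store, brand)
# group. In practice each group would get a real forecaster (ARIMA, gradient
# boosting, etc.); a group mean stands in here to keep the sketch self-contained.
from collections import defaultdict
from statistics import mean

# Toy historical revenue records: (store, brand, revenue)
records = [
    ("store_1", "brand_a", 100.0),
    ("store_1", "brand_a", 120.0),
    ("store_1", "brand_b", 80.0),
    ("store_2", "brand_a", 200.0),
]

def train_many_models(rows):
    """Train one model per group; here a 'model' is just the group mean."""
    grouped = defaultdict(list)
    for store, brand, revenue in rows:
        grouped[(store, brand)].append(revenue)
    # Each group is trained independently, so the groups can be fanned out
    # across workers or cluster nodes without coordination.
    return {key: mean(values) for key, values in grouped.items()}

models = train_many_models(records)
print(models[("store_1", "brand_a")])  # 110.0
```

Because the groups share no state, this loop parallelizes trivially, which is what makes the pattern attractive at the scale of thousands of stores or products.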


Now that your enterprise data science is on Azure, you can take advantage of cloud agility and a set of great tools to improve development productivity and simplify ML operationalization.

However, even on a great platform with built-in support for modern ML practices, there are still practical challenges to address due to the unique nature of data science.

Some practical challenges are:


Azure Databricks and Azure Synapse Analytics are two flagship big data solutions in Azure, and many customers use both. Databricks is commonly used as a scalable engine for complex data transformation and machine learning tasks built on Spark and Delta Lake technologies, while Synapse is loved by users who are familiar with SQL and native Microsoft technologies, with great support for high-concurrency, low-latency queries. When the two are used together, output from Databricks pipelines is sent to Synapse for downstream analytics use cases. Most users would store Databricks' output data in the high-performance Delta Lake format but so far, Synapse…


While it is well known that training a Deep Learning model requires lots of data to produce good results, rapidly growing business data often requires a deployed Deep Learning model to process larger and larger datasets. It is not uncommon nowadays for Deep Learning practitioners to find themselves operating in a big data world.

To handle large datasets in training, distributed Deep Learning frameworks were introduced. On the inference side, machine learning models, particularly deep learning models, are usually deployed as REST API endpoints, and scalability is achieved by replicating the deployment across multiple nodes with frameworks such as Kubernetes.
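The replication idea above can be sketched in a few lines: requests are fanned out across a pool of workers, each playing the role of one endpoint replica behind a load balancer. The toy scoring function and worker count are assumptions for illustration, not part of any real deployment.

```python
# Sketch of scaling inference by fanning requests out across workers,
# mimicking a REST endpoint replicated behind a load balancer.
from concurrent.futures import ThreadPoolExecutor

def score(record):
    """Stand-in for a deployed model's predict call on one record."""
    return record * 2.0

def batch_inference(records, workers=4):
    # Each worker plays the role of one endpoint replica; the executor
    # acts as the load balancer distributing requests among them.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(score, records))

print(batch_inference([1.0, 2.0, 3.0]))  # [2.0, 4.0, 6.0]
```

In a real Kubernetes deployment the "workers" are separate pods and the fan-out happens at the service layer, but the throughput-by-replication principle is the same.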

These mechanisms usually require a lot of engineering effort to set up correctly and are not always efficient, especially at very large data volumes.

In this article, I’d like to present two technical approaches to address these two challenges of Deep Learning on Big Data:

1. Parallelize large volume…

James-Giang Nguyen
