Here’s my official description for the talk, which I realised only after a while reminds me of this.
A tale as old as time: boy meets girl, girl asks boy to train a predictive model, boy succeeds. But when she discovers that he cannot operationalize said model, he is forced to go through a perilous journey into the strange and unfamiliar world of Azure Machine Learning deployment options. He learns to face his fears and evaluate the advantages and disadvantages of Azure Machine Learning Studio and Azure Machine Learning services, all the while hoping to redeem himself in the process.
Will he succeed? Will he manage to find an option that allows him to deploy a trained model as a web service, scale it, and consume it easily? Let’s find out!
The resources used during the talk are available on GitHub.
- BmG.ipynb - the Jupyter notebook used to train and serialize the model
- iris.data.csv - the Iris data set, downloaded from here
- The Azure ML Studio experiment used to load the pickled model is available on the Azure AI Gallery
- conda_dependencies.yml - the Conda configuration file needed to create the Docker image in ML Service
- score.py - the interface for our model running on the Docker image
- input.json - input sample using the standard structure for Azure ML Studio; can also be used to invoke the web service deployed using Azure ML Service (this is why the code in score.py looks the way it does 🤓)
- During the talk I demoed the code using Visual Studio Code, with the Azure Machine Learning and REST Client extensions
- I’m also linking to two tutorials, one for Machine Learning Studio and the other for Machine Learning Service, in case you want to learn more.
Thing is, I also wanted to wrap this in a fun story, and after a while it struck me that maybe, just maybe, I could draw a parallel between a junior data scientist who builds a model and then has absolutely no idea what to do with it, and a “the day after they started living happily ever after” kind of thing. No, it’s not that big of a stretch, why would you think that? 🤨. But really, I wanted to highlight the need to pay attention to making models operational, as opposed to just making them crazy-accurate. I figured that drawing this analogy would help people remember the idea, and anyway I thought it was a nice spin 🙃.
So I carried on and told the story of P.C., a junior developer with a couple of TensorFlow beginner tutorials under his belt, who is assigned the highly important task of training a predictive model on his company’s data, a model that needs to be consumed by the myriad of internal applications used within the company. Which he promptly does, but then he has no idea what to do with said model. Motivated by performance review season drawing closer, he starts evaluating some of the options in the Azure cloud, which of course leads him to look at ML Service as opposed to ML Studio. Which gives me the vehicle I need to compare the two. It ends with the audience deciding which tool they liked best.