This is a more detailed version of my Boy meets Girl talk, created specially for Microsoft Ignite | The Tour Amsterdam 2019. Whereas Boy meets Girl was mostly focused on how to deploy a trained model using either Azure ML Service or ML Studio, here I wanted to create a more in-depth comparison of the two tools. This is what led me to the concept of having multiple rounds, with the audience voting for their favourite tool (truth be told, I think I just wanted another go at delivering something similar to my TypeScript versus CoffeeScript talk 🤓).

Once the concept was clear, I spent a significant amount of time just polishing the examples and making sure they’re as exhaustive as possible. And then, of course, another significant amount of time was spent just cutting out things because they didn’t fit with the rest of the story 🙄. All worth it of course, since it allowed me to also have a meaningful conversation with the audience (which was really really really active and involved), answering questions and going into more detail if necessary, instead of just rushing to go through all of the slides.

Recording

Resources

The resources used during the talk are available on GitHub; below is a quick rundown of what you’ll find there:

  • First things first, I used the training dataset from Kaggle’s Petfinder competition, available here; you will need it in order to run the code.
  • A sample configuration file is available in aml_config; all you need to do is fill in your own subscription/workspace details here (there’s a short sketch of what this file feeds into after this list).
  • Code for Round 1 - Look and Feel is available here, including the training script and the Jupyter notebook used for integrating with Machine Learning Service (see the training sketch after this list).
  • Code for Round 2 - Analysing and Preparing Data is here, just a simple notebook with some very light data analysis.
  • Code for Round 3 - Training and Evaluating Models is here, again just a simple training script and the corresponding Jupyter notebook.
  • Last but not least, the code for Round 4 - Deploying and Consuming Models is here, where we also have the score.py and conda_dependencies.yml files needed to build the Docker image. And of course, the input.json file used for invoking the scoring web service (this uses the standard structure for Azure ML Studio, which is why the code in score.py looks the way it does; see the scoring and invocation sketches after this list).
  • The Machine Learning Studio experiments are available in the Azure AI Gallery: Round 1, Round 2, and Round 3. Since Round 4 was all about deploying the experiment as a web service, you can reuse the Round 3 experiment.
  • The slides are available on Speaker Deck.
  • I’m also linking to two tutorials, one for Machine Learning Studio and the other for Machine Learning Service, in case you want to learn more.
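
If you’re wondering what the aml_config sample actually needs, here’s a minimal sketch, assuming the config.json format read by the azureml-core SDK; the placeholder values are obviously not real subscription details:

```python
# Minimal sketch: the Azure ML Service SDK (azureml-core) reads a config.json
# from aml_config and connects to the workspace via Workspace.from_config().
#
# config.json is expected to look roughly like:
# {
#     "subscription_id": "<your-subscription-id>",
#     "resource_group":  "<your-resource-group>",
#     "workspace_name":  "<your-workspace-name>"
# }
from azureml.core import Workspace

ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, sep="\n")
```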
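
For the training side (Rounds 1 and 3), this is a rough sketch of how a training script gets submitted to Machine Learning Service from a notebook; the experiment name, script name, and local run configuration are placeholders, not the exact values from the repo:

```python
# Rough sketch of submitting a training script to Azure Machine Learning Service.
# "petfinder-training" and "train.py" are placeholders for the repo's actual names.
from azureml.core import Experiment, ScriptRunConfig, Workspace
from azureml.core.runconfig import RunConfiguration

ws = Workspace.from_config()
experiment = Experiment(workspace=ws, name="petfinder-training")

# Runs locally by default; a remote compute target could be configured here instead.
run_config = RunConfiguration()

src = ScriptRunConfig(source_directory=".", script="train.py", run_config=run_config)
run = experiment.submit(src)
run.wait_for_completion(show_output=True)
```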
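
As for why score.py “looks the way it does”: the script follows the init()/run() contract expected by Machine Learning Service, while parsing the ML Studio-style request body from input.json. The sketch below captures that shape; the model name and the idea of loading a scikit-learn model with joblib are assumptions rather than the repo’s actual details:

```python
# Sketch of a score.py following the Azure ML Service init()/run() contract,
# parsing an ML Studio-style request body ({"Inputs": {"input1": {...}}}).
# The model name is a placeholder.
import json

import joblib
import pandas as pd
from azureml.core.model import Model

model = None


def init():
    # Called once when the service container starts: load the registered model.
    global model
    model_path = Model.get_model_path("petfinder-model")  # placeholder model name
    model = joblib.load(model_path)


def run(raw_data):
    # Called for every request; raw_data is the raw JSON body of the POST.
    payload = json.loads(raw_data)["Inputs"]["input1"]
    df = pd.DataFrame(payload["Values"], columns=payload["ColumnNames"])
    predictions = model.predict(df)
    # Return a response mirroring the ML Studio output structure.
    return {"Results": {"output1": predictions.tolist()}}
```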
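
Finally, invoking the deployed web service with input.json boils down to a plain HTTP POST; the scoring URI below is a placeholder for the one printed after deployment:

```python
# Hedged example of calling the scoring web service with the input.json payload.
# The scoring URI is a placeholder; use the one reported by your own deployment.
import requests

with open("input.json") as f:
    body = f.read()

scoring_uri = "http://<your-service>.azurecontainer.io/score"  # placeholder
response = requests.post(scoring_uri, data=body, headers={"Content-Type": "application/json"})
print(response.json())
```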