Consuming your Azure Machine Learning Model

Introduction

Azure Machine Learning (AML) is an exciting technology in the Microsoft Data Platform that is on the radar of more and more organisations. Azure Machine Learning enables organisations to take their first steps into predictive analytics by offering a platform-as-a-service (PaaS) with a simple but powerful user interface that supports both simple and complex workflows. There are many use cases for the technology, whether you are performing regression, classification or tackling clustering problems. This article focusses on creating a minimum viable product from your predictive models by turning them into a consumable production service.

Setup

Azure Machine Learning models are consumed by end users via a web service, and Azure Machine Learning Studio has the workflow to create a web service built into its interface.
There are three steps to the process: creating the predictive experiment, executing the predictive experiment and then deploying the web service.
To create the predictive experiment, click on “Set Up Web Service” from within AML Studio, and then click on “Predictive Web Service”. This starts the following workflow:

  1. Create a trained model from training experiment
  2. Create a predictive experiment

Once this has completed, the next step is to run the predictive experiment. This run verifies that the predictive experiment is ready to be used by the web service.
When it completes, the web service can be deployed. Within AML Studio, click on “Deploy Web Service” to create the new web service. The workflow generates the web service endpoint together with the primary and secondary web API keys.

Deployment

The web service can be consumed in one of two modes: request/response for single predictions, or batch mode for multiple predictions. Request/response suits streaming analytics or Logic Apps, whilst batch mode suits asynchronous processing of bulk data, such as Data Lake Analytics or Data Factory pipelines.
AML Studio provides a simplified dashboard and configuration tabs to control the usage and configuration of the web service. The dashboard also links to the AML Web Services portal (in preview at the time of writing), which gives the end user enhanced options for configuring and consuming the web service.
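As an illustration of request/response mode, the Python sketch below builds the JSON body a classic AML request/response endpoint expects and posts it with the web API key. The endpoint URL, API key, input table name ("input1") and column names are all hypothetical placeholders; substitute the values shown on your own web service dashboard.

```python
import json
import urllib.request

# Hypothetical values -- copy the real ones from your service's dashboard.
ENDPOINT_URL = ("https://ussouthcentral.services.azureml.net/workspaces/"
                "<workspace-id>/services/<service-id>/execute"
                "?api-version=2.0&details=true")
API_KEY = "<your-primary-or-secondary-key>"

def build_request_payload(column_names, rows):
    """Build the JSON body for a classic AML request/response call.

    The input table name ("input1") and the column names are assumptions;
    they must match the web service input defined in your predictive
    experiment.
    """
    return {
        "Inputs": {
            "input1": {
                "ColumnNames": column_names,
                # Values are sent as strings, one inner list per row.
                "Values": [[str(v) for v in row] for row in rows],
            }
        },
        "GlobalParameters": {},
    }

def score(column_names, rows):
    """POST a single scoring request and return the parsed prediction."""
    body = json.dumps(build_request_payload(column_names, rows)).encode("utf-8")
    headers = {
        "Content-Type": "application/json",
        "Authorization": "Bearer " + API_KEY,
    }
    request = urllib.request.Request(ENDPOINT_URL, body, headers)
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())

# Example (requires a live endpoint and valid key):
# result = score(["age", "income"], [[42, 55000]])
```

The same payload shape is what the portal's generated C#, Python and R samples produce; only the endpoint URL and key differ per service.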

It provides consumption examples for both request/response and batch execution, generating the code needed to embed the web service in applications or analytics jobs. Customised code is generated for:

  • https URLs
  • Excel workbooks that interact with the web service
  • C# code
  • Python code
  • R code
  • WebApp Template

It provides a consumption dashboard giving:

  • Request History over time
  • Total Batch requests
  • Total Request/Response requests
  • Average Compute Time
  • Average WebAPI Latency

It provides a batch request log of all operations (if configured).

It provides settings to:

  • Configure Logging
  • Enable sample data

It allows testing of the web service from within the portal:

  • Interactive testing of request/response – enter parameters to get an interactive prediction
  • Upload a file for batch prediction

It allows the setup of a Swagger API, allowing:

  • Testing of the endpoint
  • Automatic generation of documentation for the endpoint

Consume

Once the web service has been deployed, it can be used to generate predictive output. There are many ways of consuming this output: directly from within applications, via a web URL, from Azure Logic Apps, from analytics jobs, or interactively with Excel.
One example use case risual have deployed is the consumption of an AML web service using Azure Data Factory pipelines. This involves extracting data from an Azure SQL Data Warehouse, passing it through the AML web service and then returning the predictive output to the source Azure SQL Data Warehouse.
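For bulk scenarios like the pipeline above, batch mode is driven through the batch execution service (BES). The Python sketch below submits a job that reads input rows from one blob and writes predictions to another, then polls until the job completes. The URL, key, storage connection string and response field names are assumptions modelled on the sample code the portal generates; verify them against your own service's batch execution page.

```python
import json
import time
import urllib.request

# Hypothetical values -- none of these are real; take them from your
# service's batch execution page and your Azure Storage account.
BASE_URL = ("https://ussouthcentral.services.azureml.net/workspaces/"
            "<workspace-id>/services/<service-id>/jobs")
API_KEY = "<your-api-key>"
STORAGE_CONNECTION = ("DefaultEndpointsProtocol=https;"
                      "AccountName=<account>;AccountKey=<key>")

def build_job_request(input_blob, output_blob):
    """Build the body for a classic AML batch execution (BES) job.

    input_blob holds the rows to score; output_blob is where the
    predictions are written. Both are relative blob paths in the
    storage account named in the connection string above.
    """
    return {
        "Input": {
            "ConnectionString": STORAGE_CONNECTION,
            "RelativeLocation": input_blob,
        },
        "Outputs": {
            "output1": {
                "ConnectionString": STORAGE_CONNECTION,
                "RelativeLocation": output_blob,
            }
        },
        "GlobalParameters": {},
    }

def _call(url, method, body=None):
    """Small helper: authenticated JSON call to the BES endpoint."""
    headers = {"Authorization": "Bearer " + API_KEY,
               "Content-Type": "application/json"}
    data = json.dumps(body).encode("utf-8") if body is not None else None
    request = urllib.request.Request(url, data, headers, method=method)
    with urllib.request.urlopen(request) as response:
        raw = response.read()
        return json.loads(raw) if raw else None

def run_batch_job(input_blob, output_blob, poll_seconds=5):
    """Submit a BES job, start it, and poll until it reaches a final state."""
    job_id = _call(BASE_URL + "?api-version=2.0", "POST",
                   build_job_request(input_blob, output_blob))
    _call("{0}/{1}/start?api-version=2.0".format(BASE_URL, job_id), "POST")
    while True:
        status = _call("{0}/{1}?api-version=2.0".format(BASE_URL, job_id), "GET")
        if status["StatusCode"] in ("Finished", "Failed", "Cancelled"):
            return status
        time.sleep(poll_seconds)
```

In a Data Factory pipeline the same submit/start/poll sequence is handled for you by the AML batch execution activity; the sketch simply makes the underlying calls visible.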

Conclusion

Azure Machine Learning models are consumed by using a web service. The process for creating the web service is simple, and Microsoft help you to consume the model by providing consumption examples, custom code and a management dashboard to monitor its usage. Take the next step and turn your models into production services for your organisation to consume and benefit from.

About the author