Azure Machine Learning — An Overview

As part of our series exploring machine learning cloud services, in this post we look at Azure Machine Learning, provided by Microsoft.

Training Overview

Azure Machine Learning provides several ways to train your models, from code-first solutions using the SDK to low-code solutions such as the visual designer.

Python SDK

You can interact with the service from an interactive Python session, Jupyter Notebooks, Visual Studio Code, or another IDE.

The Python SDK provides several ways to train models. The most important one is estimators, which make it easy to train models using popular ML frameworks:

  • Scikit-learn
  • PyTorch
  • TensorFlow
  • Chainer

There is also a generic estimator that can be used with frameworks that do not already have a dedicated estimator class.
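As a rough sketch, submitting a training run with the scikit-learn estimator might look like the following. This assumes the azureml-sdk package is installed; the workspace config file, the compute target name, and the training script are placeholders for illustration:

```python
def submit_training_run():
    # Hypothetical sketch: assumes azureml-sdk is installed, a config.json
    # for an existing workspace, a compute target named "cpu-cluster", and
    # a train.py script inside ./train.
    from azureml.core import Workspace, Experiment
    from azureml.train.sklearn import SKLearn

    ws = Workspace.from_config()
    estimator = SKLearn(
        source_directory="./train",
        entry_script="train.py",
        compute_target="cpu-cluster",
    )
    # Submit the estimator as an experiment run and stream its logs.
    run = Experiment(ws, "sklearn-demo").submit(estimator)
    run.wait_for_completion(show_output=True)
```

The framework-specific estimators (SKLearn, PyTorch, TensorFlow, Chainer) all follow this same pattern; the generic Estimator class additionally takes the dependencies to install.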

There are also Azure Machine Learning Pipelines, which define reusable machine learning workflows that can be used as templates for your machine learning scenarios.
Your Pipeline steps should focus on machine learning tasks such as data preparation, training, validation, deployment, and so on.
You can use various toolkits and frameworks, such as PyTorch or TensorFlow, for each step in your pipeline.
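A two-step pipeline might be sketched as follows; the script names and compute target are assumptions for illustration, and this assumes azureml-sdk with its pipeline packages is installed:

```python
def build_and_submit_pipeline():
    # Hypothetical sketch: assumes azureml-sdk is installed and that
    # prep.py and train.py exist in ./scripts.
    from azureml.core import Workspace, Experiment
    from azureml.pipeline.core import Pipeline
    from azureml.pipeline.steps import PythonScriptStep

    ws = Workspace.from_config()
    prep_step = PythonScriptStep(
        name="data-preparation",
        script_name="prep.py",
        source_directory="./scripts",
        compute_target="cpu-cluster",
    )
    train_step = PythonScriptStep(
        name="training",
        script_name="train.py",
        source_directory="./scripts",
        compute_target="cpu-cluster",
    )
    # Run training only after data preparation has finished.
    train_step.run_after(prep_step)

    pipeline = Pipeline(workspace=ws, steps=[prep_step, train_step])
    Experiment(ws, "pipeline-demo").submit(pipeline)
```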

Azure Machine Learning Designer

The designer lets you train models using a drag and drop interface in your web browser.

Azure Machine Learning CLI

The machine learning CLI is an extension of the Azure CLI. It provides cross-platform CLI commands for working with Azure Machine Learning.
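For illustration, a typical workflow with the CLI extension might look like this; the workspace, resource group, experiment, and script names are placeholders, and the commands require an authenticated Azure CLI session:

```shell
# Install the machine learning extension for the Azure CLI.
az extension add -n azure-cli-ml

# Attach the current folder to an existing workspace
# (workspace and resource group names are placeholders).
az ml folder attach -w myworkspace -g myresourcegroup

# Submit a training script as an experiment run.
az ml run submit-script -e myexperiment train.py
```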

Compute Targets

Each of these training methods can run on different types of Azure training compute targets.

Some compute targets are managed (as of this writing, only the Azure Machine Learning compute cluster), which means they are created and managed by Azure Machine Learning and optimized for machine learning workloads.
With this compute resource, you have:

  • Single- or multi-node cluster.
  • Autoscaling.
  • Automatic cluster management and job scheduling.
  • Support for both CPU and GPU resources.
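Creating such a managed compute cluster from the SDK might be sketched like this; the cluster name, VM size, and node counts are assumptions for illustration:

```python
def create_compute_cluster():
    # Hypothetical sketch: assumes azureml-sdk is installed and a
    # config.json for an existing workspace.
    from azureml.core import Workspace
    from azureml.core.compute import AmlCompute, ComputeTarget

    ws = Workspace.from_config()
    config = AmlCompute.provisioning_configuration(
        vm_size="STANDARD_D2_V2",  # CPU VM size; pick a GPU size for GPU jobs
        min_nodes=0,               # scale down to zero nodes when idle
        max_nodes=4,               # autoscale up to four nodes
    )
    cluster = ComputeTarget.create(ws, "cpu-cluster", config)
    cluster.wait_for_completion(show_output=True)
```

Setting min_nodes to zero means you pay for the cluster only while jobs are running, which pairs well with the autoscaling behavior described above.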

Saving & Loading Models

Saving trained models and loading pre-trained models is possible in Azure ML. However, the model to load should be in iLearner format, and similarly, the saved model would be in iLearner format. For more information about loading and saving models, you can refer to this. Note that we could not find any resources on handling the iLearner format outside Azure ML; if that is not possible, it would be a significant downside.

Data Storage

On Azure, your training dataset and models are stored in either an Azure blob container or an Azure file share.
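Uploading local data to the workspace's default blob-backed datastore and registering it as a dataset might be sketched like this; the local folder, file name, and dataset name are assumptions for illustration:

```python
def upload_and_register_dataset():
    # Hypothetical sketch: assumes azureml-sdk is installed, a config.json
    # for an existing workspace, and a local ./data/train.csv file.
    from azureml.core import Workspace, Dataset

    ws = Workspace.from_config()
    # The default datastore is backed by the workspace's blob container.
    datastore = ws.get_default_datastore()

    # Upload local files into the blob container behind the datastore.
    datastore.upload(src_dir="./data", target_path="datasets/demo")

    # Register the uploaded file as a tabular dataset for reuse in training.
    dataset = Dataset.Tabular.from_delimited_files(
        path=(datastore, "datasets/demo/train.csv")
    )
    dataset.register(workspace=ws, name="demo-train")
```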

Deployment Overview

Trained machine learning models are deployed as web services in the cloud. Deployments can use CPU and GPU for inferencing.

When deploying a model as a web service, you provide the following items:

  • The model(s).
  • An entry script. This script accepts requests, uses the model(s) to score the data, and returns a response.
  • A conda environment file that describes the dependencies required by the model(s) and entry script.
  • Any additional assets such as text, data, etc. that are required by the model(s) and entry script.
  • The configuration of the target deployment platform: for example, the VM family type, available memory, and number of cores.
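A minimal sketch of what an entry script can look like is shown below. The "model" here is just a stand-in pair of weights; a real script would load the registered model (for example, from the AZUREML_MODEL_DIR environment variable) inside init():

```python
import json

# Minimal sketch of an entry script (score.py). The model is a stand-in:
# a real script would load a trained model from AZUREML_MODEL_DIR in init().
MODEL = None

def init():
    # Runs once when the service starts; load the model here.
    global MODEL
    MODEL = [0.5, -0.25]  # hypothetical weights for a linear scorer

def run(raw_data):
    # Runs per request: parse the JSON payload, score each row,
    # and return a JSON-serializable response.
    rows = json.loads(raw_data)["data"]
    preds = [sum(w * x for w, x in zip(MODEL, row)) for row in rows]
    return {"predictions": preds}
```

The service calls init() once at startup and run() for every incoming request, so expensive work like model loading belongs in init().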

Two kinds of inference are available:

  • Batch prediction: supported through ML pipelines.
  • Real-time web services (online prediction).
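Putting the pieces together, deploying a real-time web service from the SDK might be sketched like this; the model, environment, and service names are assumptions for illustration, and Azure Container Instances is used here as a simple deployment target:

```python
def deploy_realtime_service():
    # Hypothetical sketch: assumes azureml-sdk is installed, a registered
    # model named "demo-model", a score.py entry script, and a registered
    # environment named "demo-env" describing the dependencies.
    from azureml.core import Workspace, Environment
    from azureml.core.model import InferenceConfig, Model
    from azureml.core.webservice import AciWebservice

    ws = Workspace.from_config()
    model = Model(ws, name="demo-model")
    env = Environment.get(ws, name="demo-env")

    # Tie the entry script and its dependencies together.
    inference_config = InferenceConfig(entry_script="score.py", environment=env)
    # Describe the target platform: here, 1 CPU core and 1 GB of memory.
    deployment_config = AciWebservice.deploy_configuration(cpu_cores=1,
                                                           memory_gb=1)

    service = Model.deploy(ws, "demo-service", [model],
                           inference_config, deployment_config)
    service.wait_for_deployment(show_output=True)
```

For production workloads, the same call works with an AksWebservice deployment configuration targeting an AKS cluster instead.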

A summary of the compute targets available for deploying models with Azure ML:

  • Azure Kubernetes Service (AKS): real-time inference; GPU support: yes. Use for high-scale production deployments. Provides fast response time and autoscaling of the deployed service. Cluster autoscaling isn’t supported through the Azure Machine Learning SDK.
  • Azure Functions (in preview): real-time inference; GPU support: no. Run real-time inference on serverless compute.
  • Azure Machine Learning compute clusters (in preview): batch inference; GPU support: yes. Run batch scoring on serverless compute.


On Azure Machine Learning, you pay an ML surcharge for both training and inferencing, which differs depending on the VM type you choose and its region.

For detailed information about the pricing, you can refer to this page.

You can also use their pricing calculator to estimate your ML service usage cost.

Did you know that we use all this and other AI technologies in our app? Take a look at what you’re reading now applied in action. Try our Almeta News app; you can download it from Google Play.

Further Reading

  1. Azure Machine Learning Documentation
