
Machine Learning Operations is not just about Artificial Intelligence; it is about changing the way you do business


Deloitte's latest Tech Trends 2021 report notes that only 8 percent of organisations achieve the anticipated return on investment from Machine Learning programmes. At the same time, the MLOps market is expected to expand to nearly US$4 billion by 2025. In the UK alone, 71% of adopters expect to increase their investment in the next fiscal year by an average of 26%.
 

Introduction to Machine Learning Operations (‘MLOps’)

Deloitte’s Tech Trends 2021 found MLOps to be the key emerging approach for scaling Artificial Intelligence (‘AI’) applications, unifying data engineering, machine learning (ML) and DevOps (software development and IT operations). Better modelling practices such as MLOps go hand in hand with building trustworthy and ethical AI. For more information, please read our Trustworthy AI framework.

Read on to find out more about MLOps and the benefits of scaling AI.

Moving from improvised learning to continuous learning

To adopt MLOps, an organisation needs to align its data science capability with business-as-usual processes so that ML systems can track shifts in business priorities and continue to deliver value. This means moving from improvised learning to semi-autonomous learning, and finally towards continuous learning. Each of these is detailed in Figure 1 below.

Figure 1. Moving from improvised learning to continuous learning

The Deloitte AI Institute UK has identified ten AI application dimensions that reflect the maturity of your system and are important to consider when moving from improvised learning to continuous learning. Each dimension, and its typical stages of maturity, is described below.


Figure 2. MLOps 2.0 dimensions



Data collection

Select and integrate the relevant data from various data sources for the ML task. Catalogue and organise the data in a way that allows for simple and fast analysis.

  • Ad-hoc data ingestion: Ingest data manually whenever the original data source is updated
  • Data pipeline: A defined set of actions ingests raw data from disparate sources and moves it to a destination store (see the sketch below)
  • Real-time ingestion: The data collection process is fully automated, and raw data is ingested whenever there is an update
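
To make the data pipeline stage concrete, here is a minimal sketch in Python. The two sources (a CSV extract and a JSON export), the file paths and the column names are all illustrative assumptions; a real pipeline would add scheduling, error handling and cataloguing:

```python
import pandas as pd

def ingest() -> pd.DataFrame:
    # Source 1: a CSV extract from an operational system (hypothetical path)
    orders = pd.read_csv("orders.csv", parse_dates=["order_date"])
    # Source 2: a JSON export, e.g. from an API (hypothetical path)
    customers = pd.read_json("customers.json")
    # Integrate the sources on a shared key, then land the result in a
    # destination store where it can be catalogued and analysed
    df = orders.merge(customers, on="customer_id", how="left")
    df.to_parquet("landing/orders_enriched.parquet")
    return df
```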

Data validation

Validate the input data fed into the pipeline. Check descriptive statistics, inferred schema and data anomalies to reduce errors in data.

  • Manual checks: Manually implement different checks to validate the input data
  • Data validation tests: A structured and defined process to check the input data
  • Real-time data triaging: Input data is validated automatically against pre-defined checks as it enters the system
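
As a concrete illustration of a data validation test, here is a minimal sketch, assuming a pandas DataFrame of order data; the expected schema, column names and thresholds are illustrative assumptions:

```python
import pandas as pd

EXPECTED_SCHEMA = {"order_id": "int64", "amount": "float64"}  # assumed schema

def validate(df: pd.DataFrame) -> list:
    """Return a list of validation errors; an empty list means the data passed."""
    errors = []
    # Schema check: every expected column present with the expected dtype
    for col, dtype in EXPECTED_SCHEMA.items():
        if col not in df.columns:
            errors.append(f"missing column: {col}")
        elif str(df[col].dtype) != dtype:
            errors.append(f"{col}: expected {dtype}, got {df[col].dtype}")
    # Descriptive-statistics checks: missing values and simple anomaly bounds
    if "amount" in df.columns:
        if df["amount"].isna().mean() > 0.01:
            errors.append("amount: more than 1% missing values")
        if (df["amount"] < 0).any():
            errors.append("amount: negative values found")
    return errors
```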

Feature engineering

Add and construct additional variables or features to the dataset to improve machine learning model performance and accuracy.

  • Ad-hoc: Add or create features for the machine learning model from time to time, as required
  • Feature store: An ML-specific data system that runs pipelines transforming raw data into feature values, and that stores and manages the feature data itself
  • Automated: New features are automatically created from the dataset
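
A minimal sketch of ad-hoc feature creation, assuming the hypothetical order data from the earlier examples (column names are illustrative):

```python
import pandas as pd

def add_features(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    # Temporal feature: day of week often captures seasonality
    out["order_dow"] = out["order_date"].dt.dayofweek
    # Aggregate feature: each customer's average order amount
    out["customer_avg_amount"] = (
        out.groupby("customer_id")["amount"].transform("mean")
    )
    return out
```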

Experimentation

Multiple model training experiments can be run before deciding which model will be promoted to production (a minimal sketch follows the list below).

  • Improvised trials: Implement one-off or randomised experimentation
  • Catalogued trials: Run defined sets of experiments and record their configurations and results
  • Reproducible pipelines: A pipeline that can be executed repeatedly to run model training experiments using a structured, pre-defined method
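
Here is a minimal sketch of catalogued trials using scikit-learn: every configuration runs through the same reproducible routine and every result is recorded. The dataset and the parameter grid are illustrative:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

trials = []
for C in (0.01, 0.1, 1.0, 10.0):  # candidate regularisation strengths
    model = LogisticRegression(C=C, max_iter=5000)
    score = cross_val_score(model, X, y, cv=5).mean()
    trials.append({"C": C, "cv_accuracy": round(score, 4)})  # catalogue the trial

best = max(trials, key=lambda t: t["cv_accuracy"])  # candidate for promotion
print(best)
```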

Training

Implement different algorithms with prepared data to train various ML models.

  • Siloed training: An unstructured and one-off way of training the model
  • Scalable pipelines: A defined set of methods in the pipeline is used to train the model
  • Continuous training: Automatically and continuously retrain the model to adapt to changes that might occur in the data
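
A minimal sketch of a defined training pipeline: one retrain() entry point that a scheduler or data-update trigger could call whenever new data arrives, in place of siloed one-off scripts. The function name and pipeline steps are illustrative assumptions:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

def retrain(X, y) -> Pipeline:
    # The same defined steps run on every (re)training, so results are
    # repeatable and the model adapts as new data arrives
    pipeline = Pipeline([
        ("scale", StandardScaler()),
        ("model", RandomForestClassifier(n_estimators=200, random_state=0)),
    ])
    pipeline.fit(X, y)
    return pipeline
```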

Model validation

Confirm the model is adequate for deployment and that its predictive performance is better than a certain baseline.

  • Offline validation: Model validated manually offline
  • Model validation suite: A set of model validation tests has been defined to perform the task
  • Automated validation tests: An automated process runs the validation tests before the model is deployed
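
A minimal sketch of an automated validation test: the candidate model is approved for deployment only if it beats a defined baseline on held-out data. The 2% margin is an illustrative assumption:

```python
from sklearn.metrics import accuracy_score

def approve_for_deployment(candidate, baseline, X_test, y_test, margin=0.02):
    """Return True only when the candidate clearly outperforms the baseline."""
    candidate_acc = accuracy_score(y_test, candidate.predict(X_test))
    baseline_acc = accuracy_score(y_test, baseline.predict(X_test))
    return candidate_acc >= baseline_acc + margin
```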

Integration

The pipeline and its components are built, tested, and packaged when new code is committed or pushed to the source code repository.

  • Manual build and test: Manually build and test the pipeline when new code is committed and pushed
  • Automated build and test: The pipeline gets updated automatically when new code is committed and pushed
  • Continuous integration: Merging all developers’ working copies into a shared repository several times a day
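
As an illustration, here is the kind of unit test a continuous integration server could run automatically on every commit; it exercises the hypothetical add_features() function sketched earlier:

```python
import pandas as pd

def test_add_features_creates_expected_columns():
    df = pd.DataFrame({
        "customer_id": [1, 1, 2],
        "order_date": pd.to_datetime(["2021-01-04", "2021-01-05", "2021-01-06"]),
        "amount": [10.0, 20.0, 30.0],
    })
    out = add_features(df)  # assumes add_features() is importable
    assert {"order_dow", "customer_avg_amount"} <= set(out.columns)
    assert out["customer_avg_amount"].iloc[0] == 15.0  # mean of 10.0 and 20.0
```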

Deployment

Deploy ML models to Cloud infrastructure and expose an API that enables other applications and users to consume the model. Deployment is the engineering task of exposing an ML model to real use (a minimal serving sketch follows the list below).

  • Manual deployment: Manually deploy the model when required
  • Automated deployment: The deployment process is automated so it can be executed promptly when required
  • Continuous deployment: Every change that passes the automated tests is deployed to production automatically, making changes visible to the software users
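
A minimal model-serving sketch using Flask: load a trained model artefact and expose a /predict API. The artefact file name and input format are illustrative assumptions:

```python
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.joblib")  # artefact produced by the training step

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]  # e.g. [[5.1, 3.5, 1.4, 0.2]]
    return jsonify({"prediction": model.predict(features).tolist()})

if __name__ == "__main__":
    app.run(port=8080)  # in production, run behind a proper WSGI server
```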

Model management

Monitor the model’s performance metrics (predictive accuracy, throughput, uptime, etc.) to potentially prompt a new iteration in the ML process.

  • Manual inventory: Manually collect the model predictive performance metrics
  • Model registry: A centralised store that holds model lineage, versioning and other configuration information
  • Model monitoring suite: A centralised monitoring system that tracks the performance of ML models in production and collects/stores performance metrics to identify potential issues
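
A minimal model-registry sketch: each version is stored alongside its metrics and lineage metadata so it can be audited and rolled back. Paths and metadata fields are illustrative; a dedicated registry product adds storage, UI and access control on top of this idea:

```python
import json
import time
from pathlib import Path

import joblib

def register(model, name: str, version: str, metrics: dict, root="registry"):
    # Store the artefact and its metadata under registry/<name>/<version>/
    path = Path(root) / name / version
    path.mkdir(parents=True, exist_ok=True)
    joblib.dump(model, path / "model.joblib")
    (path / "metadata.json").write_text(json.dumps({
        "registered_at": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "metrics": metrics,  # e.g. {"accuracy": 0.94}
    }, indent=2))
```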

Feedback

Collect feedback on how well the model is performing.

  • Ad-hoc collection: Collect feedback when required
  • Feedback data stored: Feedback gets collected and stored
  • Feedback loop to pipeline: Feedback is collected systematically and looped back into the pipeline for further improvements
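
A minimal feedback-loop sketch: each prediction is stored with its eventual outcome so the pipeline can later retrain on fresh labelled data. The CSV store is an illustrative stand-in for a proper feedback table:

```python
import csv
from pathlib import Path

def record_feedback(prediction, outcome, store="feedback.csv"):
    is_new = not Path(store).exists()
    with open(store, "a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["prediction", "outcome"])
        # Rows are later joined back into the training data for retraining
        writer.writerow([prediction, outcome])
```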

Your MLOps potential based on AI applications and what you want out of your AI

The critical value-generating areas to prioritise in the ML system lifecycle are determined by the business context and domain constraints. Four such areas determine how close to continuous learning an organisation ought to progress across the ten AI application dimensions discussed earlier in this article. The higher the demand in each of these four areas, the more value can be gained from continuous learning. Each area is described below.

Figure 3. MLOps value generating areas

Rate of data refresh

The rate of data refresh reflects how quickly the underlying objectives and observations change. When raw data is refreshed often, more frequent intervention is needed in the data collection layer and the validation process.

Materiality

Materiality refers to the impact a potential AI application would have on the business. Higher materiality leaves less room for mistakes and demands a quicker response when issues arise.

Model governance

Model governance refers to the business processes that govern model deployment and how heavily regulated an industry is. For example, a Financial Services organisation may need more rigorous model governance structures in place to adhere to local, national and international financial authority regulations.

Feedback latency

Feedback latency is the delay between the input, the model’s prediction and the outcome feedback; consider how costly a delay in this loop is to the business or decision process. More efficient operational processes are needed where low latency is required.

These demands differ across application archetypes, from on-demand consumer applications to regulated consumer applications and regulated enterprise applications.

Benefits of MLOps

MLOps enhances collaboration among the data scientists, engineers and IT professionals who collectively develop, test and deploy ML applications. By streamlining and automating the AI lifecycle, an organisation can realise value across the whole ML workflow.

MLOps has been found to help organisations across all aspects of productionising models, from automating data preparation, model training and evaluation through to tracking model versions, monitoring model performance and making models reusable. This creates specific business benefits, identified in our MLOps framework.

For further detail on how to start your journey to MLOps, read Deloitte AI Institute’s report “ML-Oops to MLOps”.
 

Summary

MLOps will become increasingly important to AI practice as organisations apply AI to ever larger challenges. Deloitte has developed multiple assets to accelerate your MLOps journey and shorten the time to realise the benefits of enterprise AI on the Cloud.

To learn more about how Deloitte can support your MLOps journey, please get in touch.

