ML-Oops to MLOps
Cloud migration & operation
A blog post by Rohit Tandon, managing director, Strategy and Analytics, GM – ReadyAI™, Deloitte Consulting LLP; and Sanghamitra Pati, managing director, Strategy and Analytics, Deloitte Consulting LLP
As artificial intelligence (AI), machine learning (ML), and cloud technologies evolve, they are becoming more ubiquitous and affecting market microstructures. Massive proliferation of data, rapid advancements in data storage and computational availability, and the rise of auto-ML have accelerated AI and ML adoption to inform business decisions.
From smart manufacturing to finance transformation to omnichannel customer experience, AI and ML can now be widely adopted and operationalized. These technologies can drive stronger outcomes through human and machine collaboration and help an organization realize scale with speed, use data with understanding, make decisions with confidence, and achieve outcomes with accountability.1 This is the Age of With™.
ML-Oops…
Machine learning often conjures images of data scientists designing sophisticated algorithms and writing highly technical code. Yet those of us who have worked at the center of an applied AI practice, solving real-world business problems at scale, know there is far more to ML and modeling than the code or the algorithm.
Success hinges on the combination of data, technique, process, and training. Organizations that want to scale AI and ML should implement a set of standards and develop a framework for building production-capable AI and ML building blocks. This is the realm of ML operations (MLOps).
There is a chasm between ML and MLOps that can be tricky to cross, and MLOps can quickly turn into ML-Oops. We’ve seen clients face significant challenges with MLOps due to several factors:
- Suboptimal processes can cause long lead times that render models ineffective by the time they reach production.
- The effects of poor data quality are amplified when models are put into production.
- Inadequate monitoring and maintenance of models leads to faster degradation and decay.
- Governance and multilevel metrics vary across business units and functions, leading to unclear and suboptimal linkages to business outcomes.
…To MLOps
MLOps drives outcomes by focusing on the entire life cycle of design, implementation, and management of ML models.
MLOps aims to achieve the core principles of DevOps: automation (as opposed to siloed custom development); deployment (proliferation, as opposed to one-time use); process (integration, testing, and releasing); and infrastructure considerations.
MLOps then builds on, and goes beyond, DevOps:
- Successful MLOps requires a more diverse and cross-functional team.
- ML models are iterative and involve many experiments in their development phase, and they need to align to core business issues.
- In addition to the standard unit and integration testing, ML testing needs to validate ML models and retrain them.
- When models are in production, a lot can change. Data profiles evolve and affect downstream processes, so revalidation of critical assumptions and parameters needs to be built in (a minimal drift check is sketched below).
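To make that last point concrete, here is a minimal sketch of a production drift check, assuming feature data is available as NumPy arrays; the feature values, threshold, and retraining trigger are illustrative assumptions, not a prescribed implementation.

```python
# Minimal sketch of a production data-drift check.
# The threshold and the synthetic data below are illustrative only.
import numpy as np
from scipy.stats import ks_2samp

DRIFT_P_VALUE = 0.01  # hypothetical significance threshold


def detect_drift(train_col: np.ndarray, prod_col: np.ndarray) -> bool:
    """Flag drift when the production distribution differs from training."""
    statistic, p_value = ks_2samp(train_col, prod_col)
    return p_value < DRIFT_P_VALUE


# Example: compare one feature from the training set against a recent
# production batch and decide whether to trigger revalidation/retraining.
train_feature = np.random.default_rng(0).normal(0.0, 1.0, 5_000)
prod_feature = np.random.default_rng(1).normal(0.4, 1.0, 5_000)  # shifted

if detect_drift(train_feature, prod_feature):
    print("Data drift detected: schedule model revalidation and retraining.")
```

In practice, checks like this typically run on a schedule against each incoming batch and feed the monitoring dashboards and retraining pipelines described in the practices below.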
A few representative MLOps practices deployed with our clients
- We designed an end-to-end MLOps process at a large pharmaceutical client and reduced the time to turn real-world data into analytical insights from four months to two weeks.
- With our effective versioning standards, a life sciences client, which relies on several hundred models, can now perform gradual, staged deployment across the life cycle.
- Automated testing has helped a health care client to quickly discover problems in the early phases of development, reducing testing time and overall response time for any new change in model, code, or data.
- With automated CI/CD pipelines, a retail client can train, build, and deploy ML and data pipelines daily (if not hourly), update them in minutes, and redeploy on thousands of servers simultaneously.
- Our end-to-end design and development of model operations for a consumer client surfaces degradation in model behavior well before it becomes a problem. With dedicated, centralized dashboards, the client can monitor all global pipelines.
- A framework of Docker and Kubeflow deployments enabled a pharmaceutical client to build environments once and ship its training and deployment quickly and easily at any time. The client can easily reproduce the working environment and orchestrate ML pipelines on Kubernetes (a minimal pipeline definition is sketched below).
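As an illustration of that Docker-and-Kubeflow pattern, the sketch below defines a two-step pipeline with the Kubeflow Pipelines SDK (kfp v2) and compiles it to a spec that Kubernetes can orchestrate. The component logic, base images, and names are placeholder assumptions, not the client’s actual implementation.

```python
# Minimal Kubeflow Pipelines (kfp v2) sketch: containerized steps compiled to
# a pipeline spec that a Kubeflow installation can run on Kubernetes.
# Images, names, and logic are placeholders for illustration only.
from kfp import dsl, compiler


@dsl.component(base_image="python:3.11")
def prepare_data(rows: int) -> int:
    """Stand-in for a data preparation step; returns the row count it 'prepared'."""
    return rows


@dsl.component(base_image="python:3.11")
def train_model(rows: int) -> str:
    """Stand-in for a training step; returns a fake model identifier."""
    return f"model-trained-on-{rows}-rows"


@dsl.pipeline(name="illustrative-training-pipeline")
def training_pipeline(rows: int = 10_000):
    prep = prepare_data(rows=rows)
    train_model(rows=prep.output)


if __name__ == "__main__":
    # Compiling produces a portable YAML spec that can be uploaded to a
    # Kubeflow Pipelines installation and orchestrated on Kubernetes.
    compiler.Compiler().compile(training_pipeline, "training_pipeline.yaml")
```

Because each step runs in its own container image, the environment is built once and reproduced identically for every training or deployment run.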
How to make the journey
The path to MLOps and more effective ML development and deployment hinges on selecting the right people, processes, technologies, and operating models with a clear linkage to business issues and outcomes.
- Align people with common goals: Companies should bring AI practitioners and data scientists together into one practice while also investing in preconfigured solutions. Business and domain experts can build use cases around signature issues, data science experts can drive innovation in machine learning models, and data and ML engineers can use auto-ML tools to stitch together quick ML models.
- Embed automation and efficiency into the process: MLOps aspires to deliver reusable plugins and frameworks, automated data preparation and collaboration, and versioning of models, so a data scientist can reuse existing models as-is or build on them to accelerate new use cases (a minimal versioning sketch follows this list).
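One common way to implement the model versioning mentioned above is a model registry. The sketch below uses MLflow purely as an example; the experiment name, model name, local SQLite backing store, and toy model are all illustrative assumptions rather than a recommended stack.

```python
# Minimal model-versioning sketch using MLflow tracking and its model registry.
# Names, the SQLite store, and the toy model are illustrative assumptions.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Local database-backed store so the model registry works out of the box.
mlflow.set_tracking_uri("sqlite:///mlflow.db")
mlflow.set_experiment("illustrative-mlops-demo")

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

with mlflow.start_run() as run:
    mlflow.log_param("max_iter", 1000)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, artifact_path="model")

# Registering the logged model creates a new, immutable version in the
# registry, which downstream teams can reuse or promote through stages.
model_uri = f"runs:/{run.info.run_id}/model"
registered = mlflow.register_model(model_uri, "illustrative-demo-model")
print(f"Registered version {registered.version}")
```

Each registered version is tied to the run that produced it, so lineage, parameters, and metrics travel with the model as it moves through staged deployment.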
The way forward
- Align metrics and measures to the vision: Articulating the vision, conducting a readiness assessment, and integrating performance standards are critical.
- “The boat matters more than the rowing”:2 MLOps sits at the intersection of skills and process. It pulls together a range of skills and relies on automation, workflows, and systems to drive impact on a sustained basis. Focus on the process and the system.
- Design for innovation and change: Process-centricity can sometimes obscure the fact that innovation is at the core of AI and ML. The MLOps framework should promote innovation so that ML itself stays relevant and useful in the future.
- Change management: MLOps clearly requires many teams, but it also requires teams to consume models developed by others. This is not easy to implement and calls for deliberate change management.
MLOps is central to industrialized AI3
As AI and ML are adopted enterprisewide, models need to be explainable in their construct; trustworthy in their genesis and underlying data; measurable in their impact; sustainable in their outcomes; scalable in their design; and self-correcting in their behavior.
ML is like any other powerful tool. Used correctly, it can help build data-driven decisioning processes; deployed incorrectly, it can damage the very business outcomes it was meant to improve. A major advantage of ML is the speed and scale of its analysis and insights, but a misdirected model can produce suboptimal, even harmful, decisions at that same speed and scale. To avoid this, what we call ML-Oops, we need to embed MLOps into all of our AI and ML efforts at scale from the design phase itself.
For more information on this topic, read our full report.
Endnotes:
2 Rolf Dobelli, The Art of Thinking Clearly (New York: Farrar, Straus and Giroux, 2013).
3 Deloitte, “MLOps: Industrialized AI,” in Tech Trends 2021.
Please visit our cloud services webpage to discover a full suite of service offerings and capabilities available to accompany you throughout your cloud journey.