AI Complexity Calculator

The AI Complexity Calculator is designed to help you measure the complexity of an AI project and determine the key requirements for success.
This online calculator measures the intrinsic complexity of an AI project. Each question considers time complexity, techniques used, and risk of failure: three embedded factors that contribute to an AI project's overall complexity. Upon completion, you will receive an overall complexity score representing the degree of complexity in your AI project, adjusted by mitigating factors that may raise or lower the overall level. The complexity score ranges from 1 to 10, where 1 represents the least complexity and 10 the most. Alongside the score, you will also receive a summary of suggested requirements for succeeding in your AI project, with helpful links for more detail. There are 27 multiple-choice questions; please select one answer for each. The assessment should take no more than 5 minutes. Discover your project's complexity.
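Deloitte does not publish the scoring formula, but as an illustration only, here is a minimal sketch of how a 1-10 score with a mitigating adjustment could be computed. All weights and adjustment values are hypothetical, not the calculator's actual methodology:

```python
# Hypothetical scoring sketch: each answer carries a complexity weight,
# and mitigating factors nudge the aggregate up or down. The real
# calculator's weights and adjustments are not published.

def complexity_score(answer_weights, mitigating_adjustment=0.0):
    """Map 27 per-question weights (each 0..1) to a 1-10 score."""
    raw = sum(answer_weights) / len(answer_weights)   # 0..1 average
    score = 1 + 9 * raw + mitigating_adjustment       # scale to 1..10
    return max(1, min(10, round(score)))

# Example: mostly mid-complexity answers, one mitigating factor (-0.5)
print(complexity_score([0.5] * 27, mitigating_adjustment=-0.5))  # -> 5
```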
This interactive assessment tool saves progress in your browser so you can complete it over time. To clear saved answers and start again, press the "Reset Calculator" button. Your answers are private and are not sent to Deloitte unless you press the "Get in touch" button in the results section.
Get your results

Add your details to download your PDF report.
This will submit your details, including your responses, to us. For more information about how we use data, please see our privacy policy.
Thank you

Thanks for sending us your details. One of our team will get back to you shortly.
This section concerns the business and strategic nature and scope of your AI projects.
- Your organization's sector according to the Global Industry Classification Standard (GICS).
- The overall strategic importance of AI projects to your organization, from simple incremental improvement to next-generation technology and business models.
- The expected financial impact of AI projects.
- The scale of the audience your machine learning models will serve once deployed.

This section concerns the data needed to build, test, and deploy machine learning models in your AI projects.
- Expected data format(s) and data source(s) used in model training.
- Type(s) of data used in model training.
- The proportion of correctly labelled data used in building supervised machine learning models.
- Data source considerations when training and testing your machine learning models.
- The level of domain expertise available to AI projects.

This section concerns training machine learning models in your AI projects.
- How many models will need to be built. More models, especially of different types, will likely increase the overall complexity of AI projects.
- The type of task(s) you are trying to solve with machine learning.
- Whether models are trained on accumulated data in batches from time to time, or on data received as a continuous stream (live), requiring rapid or autonomous adaptation.
- The types of resources and environments available to the AI project team.

This section concerns performance verification and model reproducibility.
- Availability of existing performance metrics for verifying model performance on previously unseen data.
- Whether Monte Carlo simulation will be used to test model performance.
- Availability of existing protocols ensuring machine learning models are not developed in a vacuum but stay closely aligned with business objectives throughout the project lifecycle.
- Consideration of additional requirements when building machine learning models.
- The reasons behind those additional requirements, which may add a further layer of complexity to AI projects.

This section concerns model deployment.
- The environment in which machine learning models will be deployed.
- The environment in which predictions will be made.
- The degree of change in the input data fed into machine learning models to make predictions.
- Availability of engineering support for model deployment. More software engineering expertise may increase the likelihood of deployment success.

This section concerns model governance and security.
- Availability of existing protocols for how each machine learning model should be monitored and updated to maintain accuracy and alignment with business objectives.
- Availability of existing documentation on AI ethics and on how machine learning models should be built to ensure transparency, fairness, trust, and permission for all participants, and to comply with regulatory agencies.
- Plans to include a broader audience, both business and technical, developers and end users, in machine learning model development.
- Availability of existing security protocols for ensuring model safety and avoiding external attacks, plus contingency plans for when attacks occur.
- Whether your organization has successfully deployed complex machine learning models on a consistent basis.
Get in touch to discuss these results

You can submit your results to Deloitte and we will get back to you to discuss your projects in detail and how we can help deliver your AI projects successfully.
- Clear business goals and objectives
- Well-defined and quantifiable success criteria
- Good data management and governance
- Easy access to good-quality data for statistical analysis and model training
- A small data analytics and data science team
- Appropriate tools
- Manual building of ML/statistical models
- Manual model testing
- Basic framework for model verification and reproducibility
- Clear business goals and objectives
- Well-defined and quantifiable success criteria
- Good data management and governance
- Easy access to good-quality data for statistical analysis and model training
- A small data analytics and data science team
- Appropriate tools
- Basic capability in data wrangling, data transformation, and feature engineering
- Able to build relatively complex statistical models for business use cases
- Manual building of ML/statistical models
- Manual model testing and updating
- Basic framework for model verification and reproducibility
- Basic model interpretability and analysis of feature importance
- Clear business goals and objectives
- Able to prioritize multiple ML use cases against business priorities
- Well-defined and quantifiable success criteria for each project
- Good data management and governance
- Strong data science team
- Appropriate tools and architecture to build complex machine learning models using open-source libraries and frameworks
- Strong capability in data wrangling, data transformation, and feature engineering
- Able to run the entire data pipeline manually
- Able to manually build relatively complex ML models using offline batch learning
- Manual model testing and monitoring
- Able to manually detect performance deterioration and retrain ML models
- Strong model interpretability, reproducibility, and analysis of feature importance
- Clear business goals and objectives
- Senior stakeholder involvement
- Able to prioritize multiple ML use cases against business priorities
- Well-defined and quantifiable success criteria for each project
- Able to store, manage, and process large amounts of data for ML training
- Strong data science team
- Strong support in best coding practice from ML engineering
- Appropriate tools and architecture to build complex machine learning models using open-source libraries and frameworks
- Strong capability in data wrangling, data transformation, and feature engineering
- Able to build feature stores for reusability
- Able to run the entire data pipeline manually
- Able to manually build relatively complex ML models using offline batch learning
- Strong capability in model deployment
- Manual model testing and monitoring
- Able to manually detect performance deterioration and retrain ML models
- Strong model interpretability, reproducibility, and analysis of feature importance
- Clear business goals and objectives
- Senior stakeholder involvement
- Able to prioritize multiple ML use cases against business priorities
- Well-defined and quantifiable success criteria for each project
- Able to store, manage, and process large amounts of data for ML training
- Strong data science and ML engineering team
- Excellent coding practice
- Appropriate tools and architecture to build complex machine learning models using open-source libraries and frameworks
- Strong capability in data wrangling, data transformation, and feature engineering
- Able to build large feature stores for reusability
- Able to perform continuous training of models by automating the ML pipeline
- Able to add automated data and model validation steps to the pipeline
- Able to automatically build relatively complex ML models using offline batch learning
- Able to automate the process of using new data to retrain models in production
- Strong capability in model deployment in a live environment
- Able to manually detect performance deterioration and retrain ML models
- Strong model interpretability, reproducibility, and analysis of feature importance
- Clear business goals and objectives
- Senior stakeholder involvement
- Able to prioritize multiple ML use cases against business priorities
- Well-defined and quantifiable success criteria for each project
- Able to store, manage, and process large amounts of data for ML training
- Strong data science and ML engineering team
- Modularized code for components and pipelines
- Excellent coding practice
- Appropriate tools and architecture to build complex machine learning models using open-source libraries and frameworks
- Strong capability in data wrangling, data transformation, and feature engineering
- Able to build large feature stores for reusability
- Able to perform continuous training of models by automating the ML pipeline
- Able to add automated data and model validation steps to the pipeline, as well as pipeline triggers and metadata management
- Able to automatically build complex ML models using both offline batch learning and online learning
- Able to automate the process of using new data to retrain models in production
- Able to deploy ML models in a live environment
- Able to manually detect performance deterioration and retrain ML models
- Strong model interpretability, reproducibility, and analysis of feature importance
- Clear business goals and objectives, and large-scale implementation
- Very senior stakeholder involvement
- Able to prioritize multiple ML use cases against business priorities
- Well-defined and quantifiable success criteria for each project
- Able to store, manage, and process large amounts of data for ML training
- Strong capability in big data processing
- Strong data science and ML engineering team
- Modularized code for components and pipelines
- Excellent coding, testing, and experimentation practice
- Appropriate tools and architecture to build complex machine learning models using open-source libraries and frameworks
- Strong capability in data wrangling, data transformation, and feature engineering
- Able to perform continuous training of models by automating the ML pipeline
- Able to add automated data and model validation steps to the pipeline, as well as pipeline triggers and metadata management
- Able to automatically build complex ML models using both offline batch learning and online learning
- Able to automate the process of using new data to retrain models in production
- Able to deploy ML models in a live environment
- Large-scale model deployment serving millions of customers
- Able to manually detect performance deterioration and retrain ML models
- Strong model interpretability, reproducibility, and analysis of feature importance
- Clear business goals and objectives, and large-scale implementation
- Very senior stakeholder involvement
- Able to prioritize multiple ML use cases against business priorities
- Well-defined and quantifiable success criteria for each project
- Able to store, manage, and process large amounts of data for ML training
- Strong capability in big data processing
- Strong data science, ML engineering, and DevOps team
- Modularized code for components and pipelines
- Excellent coding, testing, and experimentation practice
- Appropriate tools and architecture to build complex machine learning models using open-source libraries and frameworks
- Ability to build new tools and libraries for internal use
- Strong capability in data wrangling, data transformation, and feature engineering
- Able to build large feature stores for reusability
- Able to add automated data and model validation steps to the pipeline, as well as pipeline triggers and metadata management
- Able to automatically build complex ML models using both offline batch learning and online learning
- Able to automate the process of using new data to retrain models in production
- Able to deploy ML models in a live environment
- Large-scale model deployment serving millions of customers
- Able to perform automatic performance monitoring and retraining of ML models
- A robust automated CI/CD/CT system
- Strong model interpretability, reproducibility, and analysis of feature importance
- Clear business goals and objectives for worldwide deployment
- Very senior stakeholder involvement
- Able to prioritize multiple ML use cases against business priorities
- Well-defined and quantifiable success criteria for each project
- Able to store, manage, and process large amounts of data for ML training
- Strong capability in big data processing
- Strong data science, ML engineering, and DevOps team
- Modularized code for components and pipelines
- Excellent coding, testing, and experimentation practice
- Appropriate tools and architecture to build complex machine learning models using open-source libraries and frameworks
- Strong ability to build new tools and large libraries for internal use
- Strong capability in data wrangling, data transformation, and feature engineering
- Able to build large feature stores for reusability
- Strong capability in simulation testing
- Able to add automated data and model validation steps to the pipeline, as well as pipeline triggers and metadata management
- Able to automatically build complex ML models using both offline batch learning and online learning
- Able to automate the process of using new data to retrain models in production
- Able to deploy ML models in a live environment
- Worldwide model deployment serving hundreds of millions of customers concurrently
- Able to perform automatic performance monitoring and retraining of ML models
- A robust automated CI/CD/CT system
- Strong model interpretability, reproducibility, and analysis of feature importance
- Robust protocols to deal with external attacks
- Clear business goals and objectives for worldwide deployment
- Teams of very senior stakeholders involved
- Able to prioritize multiple ML use cases against business priorities
- Well-defined and quantifiable success criteria for each project
- Able to store, manage, and process large amounts of data for ML training
- Strong capability in big data processing
- Strong data science, ML engineering, DevOps, and software engineering team
- Modularized code for components and pipelines
- Excellent coding, testing, and experimentation practice
- Strong capability in edge computing for deploying high-performing models on edge devices
- Appropriate tools and architecture to build complex machine learning models using open-source libraries and frameworks
- Strong ability to build new open-source tools and libraries
- Strong ability to build complex hybrid models
- Strong ability in simulation testing
- Strong capability in data wrangling, data transformation, and feature engineering
- Able to build large feature stores for reusability
- Able to add automated data and model validation steps to the pipeline, as well as pipeline triggers and metadata management
- Able to automatically build complex ML models using both offline batch learning and online learning
- Able to automate the process of using new data to retrain models in production
- Able to deploy ML models in a live environment
- Worldwide model deployment serving hundreds of millions of customers concurrently
- Able to perform automatic performance monitoring and retraining of ML models
- A robust automated CI/CD/CT system
- Strong model interpretability, reproducibility, and analysis of feature importance
- Robust protocols to deal with external attacks
A local real estate company is looking to use machine learning to forecast property prices. The primary objective of the project is to build a machine learning model that can assist the company's existing price-setting mechanism. All data used to train the model is tabular data from a national database, consisting of property features such as size, number of bedrooms, location, and historical sale prices. All 5,000 rows of data are of relatively good quality. The company's small team of data scientists builds simple regression models and provides insight on property prices to the sales team. Models are regularly updated according to customer feedback and the latest sale prices, as the company expects higher near-term volatility in house prices amid heightened inflation expectations and interest rate hikes from the central bank.
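As an illustration of the setup this case study describes, here is a minimal regression sketch in Python with scikit-learn; the file name and feature columns are assumptions, not the company's actual schema:

```python
# Minimal sketch: a regression model on tabular property data.
# "properties.csv" and its columns are illustrative placeholders.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("properties.csv")  # ~5,000 rows from the national database
X = df[["size_sqm", "bedrooms", "location_code"]]
y = df["sale_price"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
model = LinearRegression().fit(X_train, y_train)
print(f"R^2 on held-out sales: {model.score(X_test, y_test):.2f}")
```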
A local retail company wants to personalize offers based on customer sales patterns to drive engagement and revenue. The primary objective is to build a model that categorises subscribed customers by their online purchases so they can be added to email lists better aligned to their needs. Data used to train the model includes items bought and the date and time of purchase, all sourced from a local database of sales history. The data has only been collected recently, following the appointment of a new analytics lead, so the sales data is somewhat sparse. The company's team of data scientists built a simple classification model that assigns past data to one of four tags: bold, neutral, smart, or casual. Using Python, the models return a list of customers with their assigned category. Models are updated quarterly, along with the email lists.
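A minimal sketch of that classification step, assuming hypothetical feature columns and a customer_id field; a scikit-learn decision tree stands in for whatever model the team actually used:

```python
# Sketch of a simple classifier assigning customers to style tags.
# Column names are assumptions, not the company's schema.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

sales = pd.read_csv("customer_sales.csv")
X = sales[["items_bought", "purchase_hour", "avg_spend"]]
y = sales["style_tag"]  # one of: bold, neutral, smart, casual

clf = DecisionTreeClassifier(max_depth=5).fit(X, y)
segments = sales.assign(predicted_tag=clf.predict(X))
# Each email list receives the customers predicted for its tag
for tag, group in segments.groupby("predicted_tag"):
    print(tag, group["customer_id"].tolist()[:5])
```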
A local food company wants to understand which products their customers like best, for better layout optimization and increased customer satisfaction. The objective is to build an AI solution that can cluster items by shopping patterns. Data used for the model will be tabular, including sales information such as item, price, and date of purchase. All data is stored in a local database and is of good quality. The small team of data scientists has implemented an association rule learning model that finds dependencies between different food items, allowing the company to arrange their stock to better suit their customers. Models are updated regularly to account for changing sales habits, particularly during seasonal periods.
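A small sketch of association rule learning on basket data, using the mlxtend library with toy transactions and illustrative thresholds:

```python
# Sketch of association rule learning over basket data using mlxtend.
# The one-hot basket format and thresholds are illustrative.
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Rows = transactions, columns = items, values = bought (True/False)
baskets = pd.DataFrame(
    {"bread": [1, 1, 0, 1], "butter": [1, 1, 0, 0], "jam": [0, 1, 1, 1]}
).astype(bool)

itemsets = apriori(baskets, min_support=0.25, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.6)
print(rules[["antecedents", "consequents", "confidence", "lift"]])
```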
A pharmaceutical company wants a better way to use its sales data to predict patterns and pre-plan stock. The objective is to implement a predictive model that the team can analyse and use when ordering stock. Data used for the model will be tabular, including sales information such as drug sold and the date and time of purchase, all pulled from a local database of collected sales data. The data is of good quality and requires little preprocessing effort. The company has only a small team of data scientists, who created multiple regression models to handle the large number of different drug types sold. The team also categorized the data to group drugs with similar uses, e.g. allergies, nausea, sleep-related. The results provide predictive figures used by management. Models are updated during seasonal changes that affect customer sales habits.
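The "multiple regression models" approach can be sketched as one model per drug category; the file and column names below are assumptions:

```python
# Sketch: one regression model per drug category for stock planning.
# "drug_sales.csv" and its columns are illustrative placeholders.
import pandas as pd
from sklearn.linear_model import LinearRegression

sales = pd.read_csv("drug_sales.csv")
models = {}
for category, group in sales.groupby("drug_category"):  # e.g. allergies, sleep
    X = group[["week_of_year", "price"]]
    y = group["units_sold"]
    models[category] = LinearRegression().fit(X, y)

# Predict next-period demand per category to pre-plan stock
forecast = {c: m.predict([[40, 4.99]])[0] for c, m in models.items()}
print(forecast)
```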
A national retail bank is looking to use machine learning to better understand customers' intrinsic credit risk and propensity to default. The primary objective of the project is to build a machine learning model that can help reduce the bank's exposure to toxic debt in its loan business. All data used to train the model is tabular data from the bank's internal database, consisting of customer characteristics such as location, occupation, debt payment history, and past transactions. Most historical data is of good quality but contains many missing values. The bank's team of data scientists has tried several tree-based models and picked the one that gives the best results. Because of regulatory requirements, it is imperative that the bank not use features that are sensitive or could be perceived as prejudicial, and all models need to be explainable and auditable. All models are built on on-prem servers, and predictions are made manually to assist loan decisions. Models are updated infrequently; however, the bank expects changes in the near-term economic outlook that may influence customers' default risk amid heightened inflation expectations and interest rate hikes from the central bank.
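A hedged sketch of a tree-based approach under these constraints: scikit-learn's HistGradientBoostingClassifier handles missing values natively (which suits data with many gaps), sensitive columns are excluded up front, and permutation importance supports the audit requirement. Column names are hypothetical, and this is illustrative, not the bank's actual pipeline:

```python
# Sketch of a tree-based default-risk model with missing-value handling
# and explainability. Assumes features are already numerically encoded.
import pandas as pd
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.inspection import permutation_importance

df = pd.read_csv("loan_history.csv")
SENSITIVE = ["gender", "ethnicity"]  # excluded for regulatory compliance
X = df.drop(columns=SENSITIVE + ["defaulted"])
y = df["defaulted"]

clf = HistGradientBoostingClassifier().fit(X, y)  # tolerates NaNs natively
# Permutation importance supports the explainability/audit requirement
imp = permutation_importance(clf, X, y, n_repeats=5, random_state=0)
print(sorted(zip(X.columns, imp.importances_mean), key=lambda t: -t[1])[:5])
```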
A national retail company is looking to implement sentiment analysis to gauge how customers feel about its products. The aim of the project is to create a classification model that can detect the emotions in written reviews. All data used to train the model is tabular data, pulled together by data scientists from reviews uploaded to the company's site, including information such as text, product, and date. A team of data scientists creates knowledge-based models that teach the tool to detect words strongly related to a particular emotion. All models are built on on-prem servers, and data scientists manually use the output to create visualisations and reports on customer happiness and satisfaction. Models are updated frequently as the training data grows.
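A knowledge-based emotion detector can be as simple as matching review words against a curated lexicon; the tiny lexicon below is a placeholder for the team's real word lists:

```python
# Sketch of the knowledge-based approach: score review text against a
# hand-curated emotion lexicon. The lexicon here is a tiny placeholder.
EMOTION_LEXICON = {
    "love": "joy", "great": "joy", "awful": "anger",
    "broken": "anger", "refund": "sadness",
}

def detect_emotions(review: str) -> dict:
    counts = {}
    for word in review.lower().split():
        emotion = EMOTION_LEXICON.get(word.strip(".,!?"))
        if emotion:
            counts[emotion] = counts.get(emotion, 0) + 1
    return counts

print(detect_emotions("Love this product, but the lid arrived broken!"))
# -> {'joy': 1, 'anger': 1}
```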
A marketing client wants an AI solution that automatically pulls data mentioning the client's brand. The idea is to automate the company's media tracking to better understand the reach and engagement of its online media. All data is gathered from external sources using an API to scrape articles that reference the brand; this may include images, article titles, dates of publication, and URLs. The API returns high-quality data to be analysed by data scientists. The team of data scientists built an object detection model, based on an R-CNN architecture, that parses images and detects the frequency of the company's logo. The data is pushed to a local database where the team can deliver insights and further reporting. The API allows a continuous data influx, with results delivered less than 24 hours after an article is published. The model is updated monthly to capture any notable shifts in the brand and ensure results stay up to date.
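A sketch of how an R-CNN-family detector could count logo appearances, assuming a Faster R-CNN checkpoint already fine-tuned on the brand's logo annotations ("logo_detector.pt" and the image path are hypothetical):

```python
# Sketch of logo-frequency counting with an R-CNN-family detector.
# Assumes a checkpoint fine-tuned on logo annotations (hypothetical file).
import torch
import torchvision
from torchvision.io import read_image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=2)
model.load_state_dict(torch.load("logo_detector.pt"))  # hypothetical weights
model.eval()

img = read_image("article_image.jpg").float() / 255.0
with torch.no_grad():
    pred = model([img])[0]
logo_count = int((pred["scores"] > 0.8).sum())  # confident detections only
print(f"Logo appears {logo_count} time(s)")
```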
A national technology firm is looking to use AI to better protect its employees. The objective of the project is to create a model that can classify whether an email is a phishing scam, improving data protection and reducing the risk of data leaks. Data used to train the model is a collection of example emails with attributes such as email domain, number of links, and commonly used phrases, drawn from both internal and external sources for maximum reliability. A team of data scientists uses multilayer perceptron neural networks to detect whether the words in an email are more characteristic of spam or of a genuine email. All models are built on the cloud and deployed on employee computers. Models are updated regularly to incorporate new sources and phrases as the style of phishing emails shifts.
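A minimal sketch of the described approach: TF-IDF features over email text feeding a small multilayer perceptron via scikit-learn. The two training emails are placeholders:

```python
# Sketch of a phishing classifier: TF-IDF text features into an MLP.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

emails = ["Verify your account now", "Team meeting moved to 3pm"]
labels = [1, 0]  # 1 = phishing, 0 = genuine

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500),
)
clf.fit(emails, labels)
print(clf.predict(["Click this link to reset your password"]))
```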
A national retailer is looking to use machine learning to sell targeted goods and services to its customers. The primary objective of the project is to build an online recommendation system that can help increase the retailer's online revenue and market share. All data used to train the model comes from the retailer's internal database, consisting of customer characteristics, past purchases, product reviews, etc. Most historical data is stored in different cloud databases and is of relatively good quality. The team of data scientists has built neural collaborative filtering models using both explicit and implicit feedback. Model explainability is a preference rather than a requirement. All models are built on the cloud and deployed on live data to make real-time recommendations with minimal human involvement. Models are manually retrained and updated weekly based on changes in customer purchasing behaviours and the latest trends on social media. The retailer does not expect a major shift in the social and economic environment in the near term.
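Neural collaborative filtering extends classic matrix factorization with learned nonlinear layers; the toy matrix-factorization sketch below conveys the underlying embedding idea but is not the retailer's actual model:

```python
# Toy matrix-factorization sketch of collaborative filtering. Neural
# collaborative filtering would replace the dot product below with
# learned nonlinear layers over the same user/item embeddings.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 100, 50, 8
R = (rng.random((n_users, n_items)) > 0.95).astype(float)  # implicit feedback

P = rng.normal(scale=0.1, size=(n_users, k))  # user embeddings
Q = rng.normal(scale=0.1, size=(n_items, k))  # item embeddings
lr, reg = 0.05, 0.01
for _ in range(200):  # simple full-batch gradient descent
    E = R - P @ Q.T
    P += lr * (E @ Q - reg * P)
    Q += lr * (E.T @ P - reg * Q)

user0_scores = P[0] @ Q.T
print("Top recommendations for user 0:", np.argsort(-user0_scores)[:5])
```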
A national legal client wants to reduce the manual effort of sorting through and classifying a range of claims. Data used for model training includes claim types and their expected classifications. Thousands of claims a week previously had to be processed manually; this historical data feeds the classification models. The data is of relatively good quality, with minimal preprocessing required. A team of data scientists used natural language processing and neural networks so the system can understand incoming claims and return a result based on previous data, reducing human error. All models are built on the cloud and deployed on live data to make real-time classifications with minimal human involvement. Models are automatically retrained on new incoming data, and the system is tested for efficiency every week.
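A minimal text-classification sketch for claims, using TF-IDF with a linear model as a simple stand-in for the neural networks the team used; the claim texts and labels are placeholders:

```python
# Sketch of an NLP claim classifier: TF-IDF features plus a linear model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

claims = [
    "Rear-ended at a junction, bumper damage",
    "Water leak damaged kitchen flooring",
]
labels = ["motor", "property"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(claims, labels)
print(clf.predict(["Burst pipe flooded the bathroom"]))
```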
A global food supply company is looking to use data to guide its sales and marketing activities. The primary objective is to better understand who its best customers are. The data fed into the model was validated with sales teams to fine-tune it against real-world observations, accounting for gaps in the current sales data. A large team of data scientists, ML engineers, business analysts, solution architects, and business change specialists built predictive models to prioritize customers and restaurants into three tiers. The team was able to implement the prioritization models without full training data, with a fully developed app and analytics system hosted on Azure. The model is retrained and managed regularly, with a purpose-built tool managing updates in response to market changes.
A gas distribution company wants to manage its operations more effectively by implementing an analytics engine that provides useful insights for more informed business decisions. Data used to train the model includes real-time error reports, weather, leakages, costs, and asset replacements. The tool requires a mix of structured and unstructured data, demanding complex preprocessing effort. A team of data scientists uses multiple machine learning models together with business rules to direct the client to the best strategy for asset replacements. All models are built on the cloud for automatic training and updates.
A global e-commerce company is looking to build a chatbot to improve marketing, sales, and customer service. The primary objective of the project is to deploy an interactive chatbot across the customer journey when customers shop online. Data used to train the model includes product data from the company's website and customer data such as characteristics, demographics, previous transactions, products considered and purchased, and all other types of online and offline interactions. The team of data scientists and ML engineers builds a comprehensive chatbot that can provide an exact answer by understanding the user's query, and can also hold a flow-based conversation as users gradually express opinions or requests. All models are built on the cloud and deployed on the company's website. The chatbot interacts with customers without human involvement. Models are monitored for performance drift and updated based on changes in customer purchasing behaviours and the latest trends on social media.
A healthcare client wants a way to detect chest abnormalities to increase the accuracy of diagnoses. The objective of the project is to create a solution that can localize and classify chest abnormalities in radiographs. Data used to train the model includes a collection of pre-classified radiographs. A large data science team developed a model that can process an X-ray image and detect any abnormalities as well as their locations. The solution uses two computer vision architectures and image augmentation techniques for enhanced accuracy. The team built the solution on scalable AWS architecture that stores the files, processes the images, and trains the models; the architecture can be easily replicated for other use cases.
A global streaming platform wanted to increase customisation for its viewers. The idea was to build a predictive model that automatically ingests user data and transforms it into valuable recommendations. Data used to train the model includes shows watched by the viewer, reactions, day, time, searches, actions (e.g. pause, fast-forward), and re-watch rates. External data such as current trends is also collected to shape personalisation. A large team of data scientists uses multiple algorithms to create predictions and reorder the content visible to each viewer. These include a 'continue watching' ranker, which analyses which shows have been started but not completed, and a video similarity ranker, which recommends content similar to what the viewer watched previously. All models are built on the cloud and deployed on the streaming platform. Data is collected in real time from ongoing watching patterns to update and retrain the models automatically.
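As an illustration, a 'continue watching' ranker can score shows by watch progress and recency; the weighting below is invented for the sketch and is not the platform's algorithm:

```python
# Sketch of a "continue watching" ranker: score partially watched shows
# by recency and how far through the viewer is. All fields are assumptions.
from datetime import datetime, timedelta

def continue_watching_score(progress: float, last_watched: datetime) -> float:
    """Higher for shows mid-way through and watched recently."""
    days_ago = (datetime.now() - last_watched).days
    midway_bonus = 1 - abs(progress - 0.5) * 2   # peaks at 50% progress
    recency = max(0.0, 1 - days_ago / 30)        # fades over a month
    return 0.6 * midway_bonus + 0.4 * recency

shows = {
    "Show A": (0.45, datetime.now() - timedelta(days=1)),
    "Show B": (0.95, datetime.now() - timedelta(days=20)),
}
ranked = sorted(shows, key=lambda s: -continue_watching_score(*shows[s]))
print(ranked)  # Show A ranks first: mid-way through and recent
```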
A global retail company is looking to improve customer service calls by detecting the caller's emotion using AI. The idea is to create a tool for employees that identifies the customer's emotions and suggests ways to steer the conversation toward improved satisfaction. Data used to train the model includes thousands of voice clips and words associated with emotions, previously recorded calls, and external sources on current trends. A large team of data scientists created a series of models, including a speech recognition model that processes incoming recorded audio to pull out key words with associated emotions. A separate model delivers conversation suggestions to the employee in real time based on trends and previously successful calls. All models are built on the cloud and deployed locally on staff devices for real-time use during calls. Calls continue to be recorded for automatic updates and retraining of the models.
A global technology company is looking to reinvent social media using AI, letting people share ideas and visions with their friends. The objective is to use voice to generate anything the consumer wants in a virtual world. Data used to train the model comes from a variety of sources, including a wide range of images to simulate the world people will want to envision, and phrases and text to humanize the AI and bridge the gap between human and AI understanding. Large groups of data scientists and engineers work to create more natural communication between human and AI, applying affective computing strategies. A hybrid of models works together for improved accuracy: one understands speech (natural language processing) before another automatically transforms it into an image (computer vision). Rigorous testing and simulation are required to eliminate racial bias and prejudice, misunderstanding of accents, and errors in image generation. Models are deployed on the cloud, where they are automatically retrained and updated on a regular basis.
A multinational tech supplier is looking to create a product that acts as a personal assistant, combining IoT technology to refine users' home experience. The idea is to create a device that can understand and respond to spoken requests. Data used to train the model includes recorded commands and their outcomes, plus external data, e.g. web results from Google used to answer questions. A large group of scientists and engineers developed a variety of models tailored to the device's needs. These include natural language processing to dissect speech into a sequence of words for analysis before a response is given. Separate models provide interconnectivity with home devices, allowing the consumer to direct the device to, for example, turn the TV on. Another model performs sentiment analysis, letting the device detect whether its response has produced happiness, anger, etc., to fine-tune future results. The models are automatically retrained on saved recordings of the customer's voice, which are analysed to return a command or response.
A multinational e-commerce company is looking to build its own delivery infrastructure to enable a next-generation home delivery solution using autonomous drones. The primary objective of the project is to test the feasibility of the idea by building and testing an autonomous flying system for the drones. Data used to train the models comes from a variety of sources and in different formats, such as cameras, weather data, sensor data, and simulation data. The team of data scientists and engineers has assembled and trained a hybrid of different models, each with its own objective, such as identifying surrounding objects, calculating the quickest path in real time, reacting to obstacles, and making real-time directional decisions. Extensive simulation has been carried out to calibrate model stability. Model explainability is not necessary; however, issue-free autonomous navigation is one of the key success metrics. All models are built on the cloud and deployed locally in each delivery drone, where data is processed and decisions are made locally in real time without human intervention. Because there are strict local requirements on collecting and using personal data, the company has to comply with regulations allowing the data to be audited and, where applicable, anonymize the collected data.
A global motor company wants to transform its product range to introduce self-driving cars. It is currently looking for a technological boost to its brand to expand its reach further. Data used to train the models comes from a variety of sources, as demanded by the accuracy requirements of the project given the impact of driving decisions; this could include speed limits, road camera footage, location, and car sensor information. The large data science teams have combined multiple models, each with a different purpose, into the functioning of the self-driving vehicle. Examples include neural networks for live object detection, with a total of 50 neural networks operating the system. Autonomy algorithms teach the car direction and movement, creating the core functionality of the car. The software is built on the cloud and deployed on AI chips. The models make key decisions in real time without human intervention during use, and are constantly retrained with new scenarios and experiences.
Resources required to succeed

Below are the resources required to succeed at this complexity level.
Swipe to explore other complexity levels.
Case studies

Below are case studies at this complexity level. Swipe to explore other complexity levels.
AI Complexity Calculator Evaluation
The Deloitte AI Institute Team UK

Deloitte AI Institute UK Lead & Chief AI Officer
Deloitte AI Institute UK Chief of Staff