Conquering AI Risks



The age of pervasive AI is here. Since 2017, Deloitte’s annual State of AI in the Enterprise report has tracked the rapid advancement of AI technology globally and across industries. In the most recent edition, published in July 2020, a majority of respondents reported significant increases in AI investment, and more than three-quarters believe AI will substantially transform their organizations within the next three years. Organizations are also realizing the benefits of those investments: improved process efficiency, better decision-making, increased worker productivity, and enhanced products and services. These benefits have likely driven the growth in AI’s perceived value: nearly three-quarters of respondents say AI is strategically important to their organization, an increase of 10 percentage points from the previous survey. However, a growing unease threatens this rising trendline: 56 percent of surveyed organizations plan to slow, or are already slowing, AI adoption because of concern about emerging risks, a remarkable level of apprehension given AI’s acknowledged benefits and strategic importance.

Through in-depth research and analysis, the Deloitte AI Institute has grouped these concerns into three main categories: confidence in AI-driven decision-making, the ethics of AI and data use, and marketplace uncertainties. To help organizations navigate these risks, it has designed a Trustworthy AI framework whose safeguards address six key dimensions at every stage of the AI lifecycle.

China is the world’s second-largest AI powerhouse, home to more than 20,000 AI companies as of 2020. Chinese organizations therefore bear a correspondingly greater responsibility to develop responsible and trustworthy AI, and the related risk management work is urgent. Five national ministries have jointly issued the “Guidelines for the Establishment of a National Next-Generation Artificial Intelligence Standardization System,” which explicitly incorporates security and ethics into the structure of the AI standards system. These two elements cut across the system’s other elements, including supporting products and technologies, software and hardware infrastructure, key technologies, products and services, and industry applications, to complete the AI compliance framework. More standards and regulations covering data, algorithms, models, management and services, products and applications, and assessment and evaluation are highly likely to emerge over the next three years. Organizations should build the capabilities needed to respond to regulatory shifts, so that they are ready both to participate in shaping regulations and to comply effectively once they are enacted.
