AI at a crossroads: building trust as the path to scale

Fewer than one in ten organisations across Asia Pacific have the governance structures necessary to ensure trustworthy AI

Published date: 15 January 2025

A recent Asia Pacific report co-developed by Deloitte Access Economics and Deloitte’s AI Institute, AI at a crossroads: building trust as the path to scale, provides insights to C-suite and technology leaders on how they can develop effective AI governance as their adoption of AI accelerates and, with it, challenges related to risk management increase.

The report is based on a survey of nearly 900 senior leaders across 13 countries in the Asia Pacific region, whose responses were assessed against Deloitte’s AI Governance Maturity Index[1] to identify what good AI governance looks like in practice. Good AI governance enables teams to adopt AI more effectively, builds customer trust and creates paths to business value and scale.

Key findings from the report:

  • Organisations with more mature AI governance frameworks report a 28% increase in staff using AI solutions and experience nearly 5 percent higher revenue growth
  • 91% of organisations surveyed are categorised as having only ‘basic’ or ‘in progress’ governance structures, indicating a significant need for improvement in AI governance practices
  • The most pressing concerns associated with AI usage are security vulnerabilities (86%), surveillance (83%), and privacy issues (83%)
  • A quarter of organisations have experienced an increase in AI-related incidents (e.g. data breaches) in the past financial year, while two in five organisations lack a reporting mechanism for queries or incidents related to AI use in the workplace
  • 45% of businesses say that enhanced AI governance has positively impacted their reputation among customers
  • Only 58% of employees have the skills and capabilities to use AI responsibly

Navigating the risks from rapid AI adoption

The report highlights that investment in AI is projected to increase fivefold by 2030[2], reaching US$117 billion in the Asia Pacific region alone, further emphasising the need for robust governance frameworks. Behind the rapid pace of adoption are employees, who often outpace their leaders. A previous Deloitte study on Generation AI[3] found that more than two in five employees were already using generative AI at work, with young employees leading the way.

However, with this rapid growth comes significant risk: the nearly 900 senior leaders surveyed cited security vulnerabilities (86%), surveillance (83%) and privacy issues (83%) as their most common concerns.

While AI solutions offer powerful productivity gains, they can lead to data breaches, reputational damage, lost business and regulatory fines if the risks of these tools are not managed properly. The global average cost of a data breach reached nearly US$5 million in 2024, a 10% increase from the previous year[4]. For large organisations, this cost can be significantly higher.

Commenting on the report, Dr Elea Wurth, Lead Partner, Trustworthy AI Strategy, Risk & Transactions, Deloitte Asia Pacific and Australia, says, “Effective AI governance is not just a compliance issue; it is essential for unlocking the full potential of AI technologies. Our findings reveal that organisations with robust governance frameworks are not only better equipped to manage risks but also experience greater trust in their AI outputs, increased operational efficiency and ultimately greater value and scale.”

Actions to build Trustworthy AI

Developing trustworthy AI solutions is essential for senior leaders to navigate the risks of rapid AI adoption and fully embrace and integrate this transformative technology. Trustworthy AI provides assurance that the technology is ethical, lawful and technically robust, giving senior leaders the confidence to use AI solutions throughout their organisation.

Deloitte’s Trustworthy AI Framework outlines seven key dimensions necessary to build trust in AI solutions: transparent and explainable, fair and impartial, robust and reliable, respectful of privacy, safe and secure, responsible, and accountable. This framework and its criteria should be applied to AI solutions from ideation through to design, development, procurement and deployment.

Dividends from good AI Governance

The survey reveals that organisations with mature AI governance frameworks report a 28% increase in staff using AI solutions and deploy AI in more than three areas of the business. These businesses also achieve nearly 5 percent higher revenue growth compared to those with less established governance.

The report highlights organisations in technology, media and telecommunications; financial and insurance services; and professional services as generally more ‘Ready’ for trustworthy AI, while government and public sector and life sciences and health care organisations face challenges in responding flexibly and quickly to new concerns emerging around the use of AI technologies.

Building Foundations for Trustworthy AI

Effective AI governance is critical for organisations when integrating AI solutions into their operations and business models. The report highlights four high-impact actions organisation leaders can take to improve their AI governance.

Key recommendations from the report include:

  • Prioritise AI Governance to realise returns from AI: Continuous evaluation of AI governance is required across the organisation’s policies, principles, procedures and controls. This should include monitoring of changing regulations for specific locations and industries to remain at the forefront of AI governance standards. 
  • Understand and leverage the broader AI supply chain: Organisations need to understand their own use of AI as well as interactions with the broader ‘AI supply chain’—including developers, deployers, regulators, platform providers, end users, and customers—to gain a comprehensive understanding of their governance needs. Regular audits need to occur throughout the AI solution lifecycle.
  • Build risk managers, not risk avoiders: Developing employees' skills and capabilities can help organisations better identify, assess, and manage potential risks, preventing or mitigating issues rather than avoiding AI use altogether. The ‘people and skills’ pillar of the AI Governance Maturity Index often receives the lowest scores among organisations, highlighting a critical area for improvement.
  • Communicate and ensure AI transformation readiness across the business: Organisations should be transparent about their long-term AI strategy, the associated benefits and risks, and provide training for teams on using AI models while reskilling those whose roles may be affected by AI. Practical steps include scenario planning for high-risk events, developing narratives to convey the technology's impact, and conducting crisis exercises to test readiness for potential challenges.

“The erosion of consumer confidence and damage to brand reputation can have lasting effects, making it essential for businesses to effectively manage AI and cybersecurity. Consumers prefer companies that align AI use with ethical standards like transparency, with 45% of those surveyed believing strong governance enhances their organisation's reputation. However, our research shows that organisations tend to overestimate their readiness in terms of AI governance. Urgent action is required by senior leaders to enhance their current AI governance practices to unlock the benefits of AI, as well as to prepare for emerging AI regulations which will impact future business success”, says Deloitte Asia Pacific’s Consulting Businesses Leader Rob Hillard.

Deloitte China AI Institute Managing Partner Roman Fan says, “After its emergence, Gen AI generated total revenue of about US$3 billion within a year, while SaaS took nearly a decade to reach the same level. It is notable that in terms of speed and scale, the development of Gen AI differs from traditional AI. However, behind this rapid growth, as AI-generated content becomes increasingly indistinguishable from human output and even the Turing Test is no longer an immutable standard, concerns about the potential risks brought by Gen AI are also increasing. In a survey by Deloitte, 78% of leaders said more governmental regulation of AI is needed. In addition to regulatory oversight at the government level, companies should also be more proactive, establishing an enterprise-level Gen AI governance system, forming a risk matrix from underlying models to upper-level applications and building a strong safeguard for business transformation and upgrading.”

Deloitte China Trustworthy AI Partner Silas Zhu adds, “Just a couple of years ago, it seemed very challenging to achieve AGI (artificial general intelligence) when we were mostly promoting RPA (robotic process automation) as rule-based automation. Gen AI technology, which is much closer to AGI, came to the public much earlier than we could have imagined. The rapid evolution of AI has alerted us to the potential risks associated with its use, including the possibility of misuse with malicious intent. Just like the concerns raised by Isaac Asimov, who in his science fiction 80 years ago developed the ‘three laws of robotics’ to prevent AI from harming human beings, there is a need for a reliable AI governance model to ensure safe development and deployment of AI. Deloitte’s Trustworthy AI framework is one such comprehensive model designed to systematically identify and mitigate potential risk in a dynamic way that adapts to the ongoing evolution of AI, drawing on practical research from diverse industries across the globe.”

Methodology

This report was co-developed by Deloitte Access Economics and the Deloitte AI Institute to provide insights to Asia Pacific C-suite executives and technology leaders on how they can improve their governance structures and organisational settings to develop more trustworthy AI solutions.

Deloitte has created a Trustworthy AI Framework that identifies seven dimensions necessary for organisations to have trust in their AI solutions: transparent and explainable, fair and impartial, robust and reliable, respectful of privacy, safe and secure, responsible, and accountable.

Deloitte’s AI Governance Maturity Index identifies what good AI governance looks like in practice. This index contains a set of criteria based on five key pillars to assess AI governance within an organisation and was applied to the responses of nearly 900 surveyed senior leaders from Australia, China, India, Indonesia, Japan, Malaysia, New Zealand, the Philippines, Singapore, South Korea, Taiwan, Thailand, and Vietnam. A range of industries, organisation sizes and public sector organisations were included in the responses.

The survey questions aimed to understand the maturity level of AI governance across organisations, identify key enablers of effective AI governance and assess the benefits to organisations from having these arrangements in place.


Note:

[1] This index, based on 12 key indicators across five pillars (organisational structure; policy and principles; procedures and controls; people and skills; and monitoring, reporting and evaluation), assesses an organisation’s AI governance maturity.

[2] Ibid

[3] 2024 Gen Z and Millennial Survey

[4] IBM (2024), “Cost of a data breach Report”
