Building trust for successful AI scaling
Artificial Intelligence (AI) has become essential for driving business capabilities. The advent of Generative AI (GenAI) has made the technology accessible to everyone, not just domain experts. This rapid growth has also created challenges: the need for consistent and reliable governance of the ethical use of AI, alongside rising lawsuits, regulatory fines, reputational damage and erosion of shareholder value. To overcome these challenges, organisations must build trustworthy models before AI can run and scale.
Risks with creating trustworthy AI
Hallucination is a significant challenge with AI models: a model generates output that looks plausible but is incorrect or misleading, which significantly erodes user trust. Other challenges include biased training data producing biased output, the high cost of building and running AI models, unethical use of data, IP infringement and malicious behaviour. These challenges must be addressed by ensuring fairness, explainability, robustness, transparency and privacy across the entire AI lifecycle. Moreover, trustworthiness cannot be an afterthought; it must be a strategic consideration across the AI lifecycle. But how do we ensure this?
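To make one of these dimensions concrete, the minimal sketch below shows one way a fairness check might look in practice: a demographic-parity gap computed over model predictions. The field names, threshold and sample data are illustrative assumptions, not a prescribed method.

```python
# Illustrative sketch only: a simple demographic-parity check on model
# predictions, assuming binary outcomes and a single protected attribute.
# The column names ("approved", "group") and threshold are hypothetical.
from collections import defaultdict

def demographic_parity_gap(records):
    """Return the largest difference in positive-prediction rates between groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for rec in records:
        totals[rec["group"]] += 1
        positives[rec["group"]] += rec["approved"]
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Example: flag the model for governance review if the gap exceeds a chosen threshold.
predictions = [
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 0},
    {"group": "B", "approved": 0},
    {"group": "B", "approved": 0},
]
if demographic_parity_gap(predictions) > 0.2:  # threshold is an assumption
    print("Potential bias detected: route model to governance review")
```

Checks of this kind can be run at each stage of the lifecycle, so that fairness is monitored continuously rather than assessed once before launch.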
Responding to AI risks for AI to scale
The good news is that most enterprises are aware of these risks and keen to mitigate them while extracting full value from GenAI. They are increasingly confident about using AI while ensuring its responsible use aligns with the organisation’s goals and objectives. Organisations are addressing the risk and ethical challenges of AI-generated output by improving data quality, setting up AI governance and aligning teams around GenAI opportunities. Moreover, countries across the globe are designing and implementing legal frameworks to govern the ethical application and social benefits of AI, such as the General Data Protection Regulation (GDPR) in the EU, and the Cybersecurity Law and the New Generation AI Development Plan in China. This is pushing companies to be more compliant while using these technologies.
Aligning leadership to the cause of trustworthy AI
It is essential to align the entire practice and its stakeholders around AI processes, strategies and business priorities to drive maximum value. The full potential of AI can only be realised when everybody in the system is held accountable for its outcomes and watchful of side effects that could undermine trust. While AI models can be trained to be transparent and explainable, human oversight helps catch errors or biases in the data and ensures accuracy. This is not the job of technical teams alone; everyone in the organisation must know enough about AI to ask the right questions and flag potential problems, which makes base-level AI literacy a must. It is also essential to provide accessible, non-technical explanations of GenAI so that its capabilities and associated risks can be evaluated. Ensuring that AI models meet seven key dimensions should be a collective organisational effort: transparent and explainable, fair and impartial, robust and reliable, respectful of privacy, safe and secure, responsible, and accountable. At its foundation, AI governance spans all these dimensions and is embedded across technology, processes and employee training.
Building ethical framework and governance policies
Establishing clear governance frameworks helps ensure accountability in AI development and deployment. Across the lifecycle of collecting data, building, training and using AI models, it is crucial to support traceability, reduce hallucination and increase the trustworthiness, accuracy and reliability of the models. The way forward is an AI ethics board and a strong AI policy to review and oversee AI projects. A Deloitte study1 suggests that companies are hiring for specialised roles such as AI ethics researcher (53 percent), compliance specialist (53 percent) and technology policy analyst (51 percent).
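As a rough illustration of what traceability can mean in practice, the sketch below records minimal lineage metadata for a model version in a JSON-lines audit log that an ethics board could review. The schema, file names and roles are hypothetical assumptions, not part of any specific governance standard.

```python
# Illustrative sketch only: recording minimal lineage metadata for a model
# so that reviewers can trace what was trained, on which data, and who
# approved it. Field names and file paths are assumptions, not a standard.
import hashlib
import json
from datetime import datetime, timezone

def make_model_record(model_name, version, training_data_path, approver):
    with open(training_data_path, "rb") as f:
        data_hash = hashlib.sha256(f.read()).hexdigest()
    return {
        "model": model_name,
        "version": version,
        "training_data_sha256": data_hash,  # supports traceability of training data
        "approved_by": approver,            # supports accountability
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

# Example usage: append each record to an audit log reviewed during governance checks.
record = make_model_record("credit-scoring", "1.3.0", "train.csv", "ethics-board")
with open("model_audit_log.jsonl", "a") as log:
    log.write(json.dumps(record) + "\n")
```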
Building AI policies
Understanding risks and establishing ethical guidelines can be a complex process that requires knowledge and expertise across a wide range of disciplines. There is a strong connection between AI ethics and important business tenets such as revenue growth, brand reputation and marketplace trust. For an organisation to have a mature AI policy, it needs maturity in both technical and operational aspects. Robust infrastructure, embedded security and well-run models are as important as strong governance built through a cross-organisational approach, so that AI tools are correctly handled, governed and monitored. Using a multidimensional AI framework to develop ethical safeguards across the seven key dimensions is crucial to managing the risks and capitalising on the returns associated with AI.
Moreover, AI policy design must consider governance and regulatory compliance throughout the AI lifecycle, from ideation through design, development and deployment to machine learning operations (MLOps). It is not just MLOps; LLMOps also requires thoughtful and responsible deployment of AI innovations. A robust AI system can only be created if the practices used to build and run it are equally strong. By taking an ops-first approach to building trustworthy AI, enterprises can feel confident scaling AI capabilities to their full potential. AI policy design involves three phases, illustrated with a simple checklist sketch after the descriptions below:
Explore: This phase examines the current AI landscape, including the existing AI policy framework and the current and planned AI use cases in the organisation. An AI maturity assessment gauges how AI technologies are currently used and identifies potential risks and compliance issues. At this inflexion point, any existing AI policies must be strategically scrutinised so that potential risks are addressed and the necessary safeguards are in place before moving to the next stage.
Collect: This phase gathers data on the existing AI landscape and compares it with leading industry standards. Crucial steps include identifying and collecting relevant data on current AI practices, benchmarking against industry best practices for AI deployment, and reviewing existing policies for legal and regulatory compliance.
Design: The final phase designs a comprehensive AI policy that provides guardrails for ongoing use case development. This may include drafting policies that address identified gaps and integrating a framework that promotes fairness, transparency and accountability. The policy is then refined and finalised based on stakeholder feedback and validation.
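As a minimal sketch, and purely for illustration, the three phases above can be tracked as a simple checklist; the activity names paraphrase the steps described and are assumptions rather than a mandated template.

```python
# Illustrative sketch only: encoding the Explore / Collect / Design phases as a
# simple checklist so progress on an AI policy can be tracked. The activity
# names paraphrase the phases described above; nothing here is a fixed template.
POLICY_PHASES = {
    "explore": [
        "inventory current and planned AI use cases",
        "assess AI maturity and review existing AI policies",
        "identify risks and compliance issues",
    ],
    "collect": [
        "gather data on current AI practices",
        "benchmark against industry standards",
        "review policies for legal and regulatory compliance",
    ],
    "design": [
        "draft policies that address identified gaps",
        "integrate fairness, transparency and accountability safeguards",
        "refine and finalise with stakeholder feedback",
    ],
}

def next_actions(completed):
    """Return the outstanding activities, preserving the phase order."""
    return [(phase, task) for phase, tasks in POLICY_PHASES.items()
            for task in tasks if task not in completed]

# Example: after finishing the first two Explore activities, list what remains.
done = {"inventory current and planned AI use cases",
        "assess AI maturity and review existing AI policies"}
for phase, task in next_actions(done):
    print(f"[{phase}] {task}")
```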
A one-size-fits-all approach may not work
AI use cases can differ significantly across organisations, and across functions within an organisation, so the ethical implications of AI vary with how it is used. There is therefore no universal solution for creating trustworthy AI: the approach to developing an ethical framework and AI policies should be unique to each organisation. Each function must define what trustworthy AI means for it and then design the regulations and guidelines that govern it. Organisations need to determine the right use cases for them and ensure they achieve the desired outcomes while safeguarding trust and privacy. Encouragingly, organisations recognise the importance of creating trust in AI systems by mitigating potential risks: according to a Deloitte report, 60 percent of respondents said their organisation effectively integrates GenAI while mitigating potential risks.
Conclusion
Trustworthiness cannot be confined to AI systems; it is an essential criterion for the organisation as a whole. People in the organisation must therefore be assigned clear ownership of and responsibility for the various AI-related risks. Companies should have clearly defined procedures that reflect the risks of AI and the ways of addressing them. Moreover, inclusivity and equity should be built into an organisation’s AI ethics framework to ensure fair AI. Government bodies, in turn, carry the onus of defining policies and laws for the overall use of AI.