
Adopting generative AI outside of a regulatory framework

While jurisdictions are still working on their respective regulatory frameworks for AI, what can organizations do to continue adopting the technology with an eye towards future compliance requirements?

11 December 2023

By: Anna Marie Pabellon

EVEN if you are not following the chaotic boardroom movements at one of the world's most prominent AI companies, it is easy to sense the push and pull around the use of artificial intelligence as entire industries continue to discover its transformative potential. The buzz around the technology stems from both the opportunities and the risks it brings.

Just a couple of months ago, the Philippines' defense chief ordered all military personnel to refrain from using generative AI apps that, seemingly innocuously, create images of human faces. The memorandum said such apps pose "significant privacy and security risks." At the same time, the government has stepped up efforts to support the development of AI technology, including meeting with Silicon Valley officials to explore opportunities to scale up AI use in the Philippines.

Thankfully, policymakers and regulators across the globe are responding by reassessing regulatory frameworks to see if they remain fit to mitigate new technological risks. In the Asia-Pacific, strategies include laying down principles that provide high-level guidelines for managing the risks of AI use; establishing guidance and creating tools that support the implementation of those principles; drafting legislation; and crafting national strategies that treat AI as a strategic priority and promote the use of trustworthy AI.

The Philippines, like South Korea and Vietnam, has taken the legislation route as it seeks to pass a law creating an Artificial Intelligence Development Authority that will be responsible for the development of a national AI strategy and framework. While the country waits for this body to become a reality and begin its work, however, organizations cannot afford to hit the pause button on their own AI plans.

In my work with financial services firms, I have seen companies make great strides in the use of generative AI, which can carry more challenging risk-management requirements than traditional AI. To confidently continue on this path even as they anticipate the development of a regulatory framework, these firms have to establish their own AI governance frameworks as early as possible. What would those look like in practice?

Here are some use cases for generative AI, along with the considerations financial services industry (FSI) leaders should keep in mind as they build a robust governance framework:

Research-based report generation. When onboarding customers, banks undertake labor-intensive tasks to meet know-your-customer (KYC) standards. This usually involves extensive manual research covering everything from economic analysis to adverse media checks. Generative AI can enhance the efficiency of this task by performing initial data searches and meta-analysis and by providing summaries for customer relationship managers.

It is important, though, to take the necessary precautions to protect the sensitive information the AI will gather: for example, ensuring there is no information leakage and regulating access to the model and its underlying data. Officers using generative AI must also be mindful that there is always a risk of missing relevant information, which could affect meta-analysis and decision-making.
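
To make these precautions concrete, consider the minimal Python sketch below. It is illustrative only: the role list, the redaction patterns and the call_generative_model stub are assumptions standing in for a firm's own access controls, data loss prevention rules and approved model endpoint.

    import re
    from dataclasses import dataclass

    # Roles cleared to run KYC research summaries (illustrative assumption).
    AUTHORIZED_ROLES = {"kyc_analyst", "compliance_officer"}

    def call_generative_model(prompt: str) -> str:
        """Stand-in for whichever internally approved model endpoint the firm uses."""
        return f"[summary generated from a {len(prompt)}-character prompt]"

    def redact_sensitive_data(text: str) -> str:
        """Mask account numbers and email addresses before text reaches the model."""
        text = re.sub(r"\b\d{10,16}\b", "[REDACTED_ACCOUNT]", text)
        text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[REDACTED_EMAIL]", text)
        return text

    @dataclass
    class ResearchRequest:
        requester_role: str
        customer_name: str
        source_documents: list[str]

    def summarize_for_kyc(request: ResearchRequest) -> str:
        # Access control: only cleared roles may query the model at all.
        if request.requester_role not in AUTHORIZED_ROLES:
            raise PermissionError("Role not cleared for KYC research.")
        redacted = [redact_sensitive_data(d) for d in request.source_documents]
        prompt = (
            f"Summarize the research below for a KYC review of {request.customer_name}. "
            "Flag adverse media findings and note any gaps in coverage.\n\n"
            + "\n---\n".join(redacted)
        )
        return call_generative_model(prompt)

The point of this structure is that the role check and the redaction both happen before any text reaches the model, so leakage is prevented at the boundary rather than cleaned up afterwards.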

Regulatory bot. Navigating the rules of a heavily regulated industry such as financial services can be time-consuming, and human interpretations are prone to variations that can lead to oversights. Generative AI can help build a user-friendly, comprehensive directory of regulations and guidelines that can then be used to provide timely responses to compliance queries.

While this would significantly reduce the manual burden of navigating the regulations, ambiguous data and historical misinterpretations could lead the generative AI to churn out misleading insights. There are also instances when the regulatory requirements themselves are ambiguous, in which case human interpretation is necessary.
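
One common way to build such a bot is a retrieve-then-generate pipeline, sketched below in Python. Everything here is an assumption for illustration: the two-entry corpus, the naive keyword scoring and the escalation threshold stand in for a real regulatory directory, a proper retriever and a tuned confidence measure.

    # A minimal retrieve-then-generate sketch for a regulatory bot.
    # The corpus entries are hypothetical, not real issuances.
    REGULATIONS = {
        "Circular A": "Guidelines on customer due diligence and record keeping ...",
        "Circular B": "Rules on outsourcing, cloud use and IT risk management ...",
    }

    ESCALATION_THRESHOLD = 2  # minimum keyword overlap before the bot answers

    def call_generative_model(prompt: str) -> str:
        """Stand-in for the firm's approved model endpoint."""
        return f"[answer generated from a {len(prompt)}-character prompt]"

    def retrieve(query: str):
        """Rank regulations by naive keyword overlap and return the best match."""
        words = set(query.lower().split())
        scored = [(ref, len(words & set(text.lower().split())))
                  for ref, text in REGULATIONS.items()]
        return max(scored, key=lambda pair: pair[1])

    def answer_compliance_query(query: str) -> str:
        best_ref, score = retrieve(query)
        # Weakly supported or ambiguous queries go to a human officer
        # rather than risk a misleading generated answer.
        if score < ESCALATION_THRESHOLD:
            return "Routed to a compliance officer for human interpretation."
        prompt = (f"Answer the question using only {best_ref}, and cite it.\n\n"
                  f"{REGULATIONS[best_ref]}\n\nQuestion: {query}")
        return call_generative_model(prompt)

The escalation branch encodes the point above: when the retrieved support is weak or the requirement is ambiguous, the query goes to a human instead of a generated answer.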

Customer service in the virtual space. Customer expectations around services offered online or remotely will only grow as businesses continue to prioritize digitalization, which is why FSI firms have turned to automation solutions such as chatbots. Generative AI can offer more personalized, VR-driven customer interactions and, with real-time access to data, can greatly enhance service quality and speed.

In harnessing this capability, business leaders have to be mindful that chatbots can make errors, for which human stakeholders should remain accountable. In training chatbots, leaders should also be aware that datasets may contain latent biases, such as semantic deficiencies in some languages but not others, which could lead to negative customer impressions. It is also important to be upfront with customers that they are interacting with a chatbot and to be transparent about how their inputs and information are stored and used.
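
A minimal sketch of how two of those safeguards, upfront disclosure and human accountability, might appear in a chatbot's control flow is shown below. The confidence score and threshold are illustrative assumptions; in practice they would come from the model itself or a separate classifier.

    # Sketch of a customer-facing session that discloses the bot's nature
    # up front and hands off to a human agent when confidence is low.
    DISCLOSURE = ("You are chatting with an automated assistant. Your messages "
                  "are stored and may be reviewed to improve service quality.")

    CONFIDENCE_FLOOR = 0.7  # illustrative threshold, not a recommended value

    def call_generative_model(prompt: str) -> str:
        """Stand-in for the firm's approved model endpoint."""
        return f"[reply generated from a {len(prompt)}-character prompt]"

    def start_session() -> str:
        # Transparency: tell the customer what they are talking to and
        # how their inputs are used before the first exchange.
        return DISCLOSURE

    def handle_message(message: str, confidence: float) -> str:
        if confidence < CONFIDENCE_FLOOR:
            # Accountability: uncertain answers escalate to a human agent
            # instead of being presented as authoritative.
            return "Let me transfer you to a human representative."
        return call_generative_model(message)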

Even as jurisdictions play catch-up to establish guardrails around AI and its rapid developments, organizations can begin the tough but necessary work of regulating their own use of AI, not only to make sure they are compliant when government rules finally take effect, but also as part of their duty of care to stakeholders. After all, their use of AI will benefit no one if, along the way, they lose customer trust.


As published in The Manila Times on 11 December 2023. The author is the Risk Advisory Leader of Deloitte Philippines.
