Posted: 25 May 2023 4 min.

Potent Artificial Intelligence models should be an ecosystem of technology, users and society

Topic: New Tech

As Artificial Intelligence (AI) technology is becoming more and more potent, authorities are calling for increased regulation. But in fact, openness, transparency, and safety around AI models are in everyone’s interest.

When you stumble across a blog called “Jailbreaking ChatGPT while we can” and another one asking “Can we really make AI do anything we want?”, you know that the internet is up to something. That something is of course AI models whose prevalence and popularity is almost exploding at the moment, but with that, also concern for their accuracy, obscurity and possible exploitation to access information.

Recently, Deloitte and IBM welcomed more than 300 risk management professionals to our Risk Management 2.0 Summit at the Deloitte Head Quarters in Copenhagen. They deal with risk in the broadest sense – from reputational to financial risk, from IT and cyber to operations – but what we are seeing is that AI models in fact touch upon all these types of risk. If AIs are followed blindly this can have financial, operational and security-related consequences because they are poorly understood or even manipulated to extract proprietary information e.g. trade secrets. The latter is not a bug, but - when safeguards are not implemented – even a feature by design, and this should be a major concern for anyone tasked with protecting the organisation and safeguarding its assets.

The promises of AI
To dial back the clock, AI, machine learning and deep learning are not new technologies, and for years they have helped companies automate processes, solve complex tasks, structure huge amounts of data, as well as inform operational and strategic decisions.

In Deloitte, we have been part of ambitious Danish and Nordic AI projects in recent years: From automating insurance claims assessments to combatting money laundering, from predicting train traffic to quality control in industrial production and even to monitor natural habitats. Recently, a client tasked us to write an AI model to efficiently scan 12 million pieces of communication to retrieve contractual obligations – just to give you an idea of what AI can do well.

Besides some of these ground-breaking solutions, what’s also igniting companies’ AI ambitions is the fact that AI technology is now being built into core technology, for example the SAP S/4HANA® suite, allowing virtually any company to start taking advantage of AI.

So, the promise of AI is evident, and even with the current popularity, we are only at the beginning of the AI journey, which is why we should do everything we can to make this journey safe, for companies, public institution, societies and for humanity alike.

Can you trick an AI model?
Going back to the safety concerns surrounding AI models and the questions of “can we really make AI do anything we want?”, the answer is mostly yes. In fact, some will say that AI is the worst spy in the world given that it will tell you a lot, if not all, of its secrets if you are patient, somewhat clever and persistent.

People are already tricking chatbot AI models by using various methods to set the models free of their built-in constraints or alternatively trying to coerce it to do something by being someone else. When this happens, the model can not only provide you with false information or inappropriate content; it can also provide you with authentic data that you were not supposed to see. In the past we have already seen examples of simple chatbots that were manipulated into having strongly controversial opinions. At Deloitte we have succeeded ourselves in manipulating an AI model to reveal real contact information and names of people. For example, an AI model might not be allowed to tell you how to rob a bank, but we found that if you ask it to write a theatrical play about two people robbing a bank, it will promptly take you through all the steps. It is just a matter of creativity, especially at this early stage when the technology is far from mature and needs more control.

How can this happen in the first place? It is because the model has learnt the input data by heart, or what is known as being strongly overfitted to the training data, which is typically stored safely and out of reach for the public after training. This overfitting is a design feature, because the users would like to access that information instead of looking it up on Wikipedia, say. Think therefore of chatbot AI as data that literally speaks to you – often synthetizing new texts but that can also output original texts.

Finally, let us address the issue of any intellectual reasoning taking place within chatbot AI models. The internet is currently overflowing with examples that AIs are not doing any intelligent reasoning. If you ask these models e.g. about the mathematical prime factorization of the number 1022117, or about creating two antonyms from the letters in “Stolichnaya”, or play chess they typically fail. This comes from the design choice of the model being taught to simply predict the next work in a long input sequence of words by learning which input words to pay special attention to. This works well for composing prose but not for reasoning as attention plays a part in reasoning but does not encompass the full process. We may not even fully know how to make a machine truly reason.

What can we do about AI safety?
So, in many ways AI is the wild west of modelling, and for that reason it is vital for companies and public institutions to use AI wisely and in a transparent manner while also being aware of the inherent risk.

Here are some of the most important considerations:

  1. First, AI needs to be treated as a cyber risk because that is essentially what it is, given that people are likely to try and attack the model and trick it either to cause damage or for simple amusement. A compromised AI model can lead to data breaches, identity theft or financial fraud – and these harmful risks cannot be taken too seriously.
  2. Following this, documentation is vital for any AI model to understand the mathematical methods that is behind the model. Let us not forget, AI is always a physical calculation in a computer located somewhere on this planet, and if you don’t understand this process and the data being used, you’re essentially building a very large black box instead of a transparent and well-understood model.
  3. Third, take the forthcoming EU AI act seriously and accept that it’s in everyone’s interest that this very potent technology doesn’t get out of control, and that it can’t be misused by people with bad intentions. It’s not necessarily a large investment to be compliant, but rather a conscious choice to always be prepared to provide the necessary documentation.
  4. And finally, AI models are meant to be designed by experts – and sometimes also used by experts who are not just following the output blindly. When people with no experience in modelling are suddenly using AI for expert knowledge without any guidance, that is a major risk to any organisation.

As our American colleagues write in Deloitte’s latest Tech Trend report: Think of deploying AI like onboarding a new team member. We know generally what makes for effective teams: openness, a sense of connection, the ability to have honest discussions, and a willingness to accept feedback to improve performance. Implementing AI with this framework in mind may help the team view AI as a trusted co-pilot rather than a brilliant enigmatic sphinx. When applications are transparent, resilient, and dependable, they can become a natural trusted part of the workstream.

Essentially, AI is an ecosystem where everyone has something at stake. Bring developers and experts together to create the right solutions, while also taking into consideration the users and society at large. Always get a second, independent opinion on the robustness of the algorithm that was used to build the artificial intelligence. Does is still behave like it did when it was created, or has it started to spill secrets when it is subjected to various stress tests? What are the consequences if the model is compromised or misused? And are you ready to take on that risk?

P.S. the antonyms that can be formed by the letters in “Stolichnaya” are “holy” and “satanic” #stolichnayafail

 

Click here so see Jacob Bock Axelsen talk about AI Governance: Key Activities and Management: https://vimeo.com/819841723

Forfatter spotlight

Jacob Bock Axelsen

Jacob Bock Axelsen

CTO

Jacob Bock Axelsen (Snr Manager) is CTO in Deloitte Risk Advisory and is an expert in mathematical modeling and a specialist in artificial intelligence. Jacob is educated in mathematics- economics (BSc), biophysics (MSc) and physics (PhD) with nine years of research experience abroad. His scientific background has proven useful in advising both private companies and public institutions on AI, AI governance, Quantum Computing, Organizational Network Analysis, Natural Capital Management and much more. After six years in Deloitte he has developed a strong business acumen. He holds the IBM Champion title for the fourth year in a row and is part of Deloitte’s global quantum computing initiative.

$(document.head).append(''); $(document.head).append('