Posted: 14 May 2024 · 4 min read

How AgentAI will speed up workflows and why security teams should care

Topic: New Tech

If you work with process automation, you know that Robotic Process Automation (RPA) offerings can automate repetitive, rule-based tasks like database entry, form filling, and invoice processing. RPA is the use of software robots to mimic human actions such as clicking a button or entering data into fields. Even though RPA can handle simple tasks and increase efficiency, the technology is not ideal for decision-making: RPA cannot simulate human reasoning. But as we have all experienced over the last couple of years, Generative AI (GenAI) solutions can.

One of the limitations of GenAI, however, is that its cognitive level is bounded by its training data. It can connect a lot of dots and produce beautiful, convincing language that comes off as very persuasive. But if a prompt forces the LLM out of its comfort zone in terms of knowledge and “reasoning”, it will often start hallucinating, confidently fabricating content in an attempt to give its user a good answer.

The next development within GenAI is adding an orchestration layer on top of the LLM that can call internal and external ‘agents’ – sources of information, software tools, databases, etc. – on the fly, and thereby provide better cognitive connections, better answers, and better decision-making support for its users. We can call this orchestrating GenAI technology AgentAI. Examples of varying generality include watsonx Orchestrate, Amazon Q, and Microsoft Copilot.
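The orchestration idea can be sketched in a few lines of Python. This is a toy illustration, not any vendor's actual API: the `Orchestrator` class, the keyword routing, and the registered agents are all assumptions made for the example. In a real AgentAI product, the LLM itself would plan which agents to call and how to combine their answers.

```python
from typing import Callable, Dict

class Orchestrator:
    """Toy orchestration layer: routes a request to registered 'agents'.
    In a real system, an LLM would do the planning instead of keyword rules."""

    def __init__(self) -> None:
        self.agents: Dict[str, Callable[[str], str]] = {}

    def register(self, name: str, agent: Callable[[str], str]) -> None:
        self.agents[name] = agent

    def handle(self, request: str) -> str:
        # Simple keyword routing keeps the sketch runnable without an LLM.
        if "invoice" in request:
            return self.agents["erp"](request)
        if "weather" in request:
            return self.agents["weather"](request)
        return "No suitable agent found."

orch = Orchestrator()
# Stand-ins for real tools: an ERP lookup and a weather service.
orch.register("erp", lambda q: "Invoice 1042: paid")
orch.register("weather", lambda q: "Sunny, 18 °C")

print(orch.handle("What is the status of invoice 1042?"))
```

The point is the shape, not the routing logic: one entry point, many pluggable agents behind it.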

The Aleph
How do AgentAI solutions work? To answer that, allow me to make a little detour into the world of literature.

“The Aleph” is a short story by the Argentine writer and poet Jorge Luis Borges. In the story, the Aleph is a point in space that contains all other points. Anyone who gazes into it can see everything in the universe from every angle simultaneously without distortion or confusion.

Obviously, The Aleph is a piece of fiction, but the general purpose of AgentAI orchestration bears a beautiful analogy to it. Think of AgentAI as a single point of contact to all things digital that helps you sort and perform daily tasks: writing emails, creating presentations, working as an editor, producing computer-aided designs, executing laboratory procedures, or performing legal work. All without the need to manually open a bunch of software applications all the time or manually search for data. The application can do the work for you automatically.

The AgentAI application’s capabilities do not come out of thin air. It learns from you and your preferred ways of solving tasks. When the technology has “seen” a particular task performed in a particular way enough times, it can perform the task like you would and ask if you are happy with the result when it is done. You could even draw up a list of tasks for your new assistant and let it run silently in the background while you work on more demanding or risky tasks.

Do not forget security
Sounds cool, right? It is. But AgentAI is also something that should be used with the right security model in place before the technology is set loose.

In that sense, the implementation of AgentAI is no different from the implementation of GenAI. In search of optimization and efficiency gains, companies have been eager to explore the possibilities of LLMs. But if you do not have the right guardrails set up for what GenAI can and cannot access, and for what it can and cannot expose internally and externally, you can easily end up compromising data security. The same goes for AgentAI.
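As a hedged illustration of what such a guardrail can look like, here is a minimal deny-by-default access policy in Python. The policy table, agent names, and resource names are all hypothetical, and a real deployment would enforce the policy in the data layer and identity infrastructure, not in application code alone.

```python
from typing import Dict, Set

# Hypothetical policy: which agent identities may read which resources.
# Anything not listed is denied by default.
ACCESS_POLICY: Dict[str, Set[str]] = {
    "hr_records": {"internal_hr_assistant"},
    "public_docs": {"internal_hr_assistant", "customer_chatbot"},
}

def agent_may_read(agent: str, resource: str) -> bool:
    """Deny by default: an agent reads a resource only if explicitly allowed."""
    return agent in ACCESS_POLICY.get(resource, set())

print(agent_may_read("customer_chatbot", "public_docs"))   # allowed
print(agent_may_read("customer_chatbot", "hr_records"))    # denied
```

The deny-by-default stance is the key design choice: a new resource is invisible to every agent until someone deliberately grants access.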

We have already seen one example of a targeted GenAI attack in the form of a cyber worm, Morris II. The worm exploited the context of a human tasking a GenAI assistant with e-mail access. A malicious self-replicating prompt was able to extract information and spread to similar GenAI mail agents. Depending on the prompt, you can only imagine what kind of information such an attack can retrieve, and/or what kind of digital poison it can inject directly into your databases.

At Deloitte, we work with industry partners on both AgentAI and the proper security models that make the new technology safe to use. Reach out if you would like to know more.

Author spotlight

Jacob Bock Axelsen

Jacob Bock Axelsen (Snr Manager) is CTO in Deloitte Risk Advisory and is an expert in mathematical modeling and a specialist in artificial intelligence. Jacob is educated in mathematics-economics (BSc), biophysics (MSc) and physics (PhD) with nine years of research experience abroad. His scientific background has proven useful in advising both private companies and public institutions on AI, AI governance, Quantum Computing, Organizational Network Analysis, Natural Capital Management and much more. After six years at Deloitte he has developed a strong business acumen. He holds the IBM Champion title for the fourth year in a row and is part of Deloitte’s global quantum computing initiative.
