Part 4: Five technology trends that leapfrog Artificial Intelligence

Discussing five technology trends

Jurriaan Tressel, Johan van der Veen & Thomas Heeneman – June 2017 – Deloitte

Having reviewed the meaning of AI, the techniques associated with it, and some of its most appealing business applications in our previous articles, let us now elaborate on a few broader technology developments that are leap-frogging the adoption of AI: trends that bring AI to more people and organizations. The most important of these trends are described in this article.

1. Cloud

One of the first trends that enables the rapid growth of AI is cloud computing. As explained in our previous article, AI techniques are based on complex mathematical models and require large amounts of training data (examples) to learn their intelligent capabilities. Building, improving and running AI applications therefore requires immense computing power. Cloud technology offers that power in a flexible and scalable environment, at relatively low cost and without huge initial investments.

In addition, the IT infrastructure of large corporates is often too big and too rigid to experiment with AI applications within and across enterprise platforms. AI cloud services such as Amazon AWS AI, Microsoft Cortana, IBM Bluemix/Watson, Google Cloud Machine Learning and HPE HavenOnDemand allow you to quickly build and run applications. With cloud ‘Analytics as a Service’ (AaaS) offerings, organizations can experiment with AI and begin to build intelligent applications without burdening their existing IT infrastructure.

2. Big Data

Second amongst the recent accelerators for AI is big data, or more specifically: large, fast, and/or unstructured data. Think of all the information in images, text, sensor data and other data generated by, for example, mobile devices. Right now, an estimated 80% of all company data is unstructured, and it is growing much faster than its structured counterpart. In recent years, technology has become widely available to capture, store and process that data, and many companies have therefore invested in building “Data Lake” platforms to manage their Big Data.

The potential of this unstructured data is huge and still largely untapped. AI techniques make it possible to process and analyze unstructured data, allowing businesses to obtain valuable insights from the information and improve their decision making. AI can find patterns and complex relationships by sifting through billions of observations. For example, in the process of assessing insurance claims, intelligent AI applications can automatically understand natural language in texts and analyze images such as photographs. By using these techniques, AI applications have the potential to detect fraud earlier, improve the quality and consistency of claim assessments and make the process more efficient. This is only one of many example use cases.

While AI is, on the one hand, the solution for analyzing large amounts of unstructured data, on the other hand AI needs big data in order to become ‘intelligent’. As mentioned in the previous section and in our previous article, AI applications need to be trained with many examples. Think of this as personal development: during your life, you learn from experience (examples provided by your environment), which enables you to perform certain tasks and get better at them, leading to more expertise. Tesla's Fleet Learning illustrates this: all Teslas continuously share data, such as map data (where is driven, when and at what speed), with a central intelligence system. Once a significant number of cars report a changed condition (e.g. due to poor road conditions or roadworks), the system is updated, allowing other cars to anticipate the change.

Hence the relationship between AI and big data is two-fold: big data is a prerequisite for AI and AI is the solution to process unstructured data and derive insights from it.

3. APIs

Perhaps the easiest way to start building intelligent applications is by using Application Programming Interfaces (APIs). An API is a piece of out-of-the-box functionality that can be called from another program or app. If, for example, your app requires face recognition, you can call an API rather than program it yourself. Many of the large technology firms offer APIs for computer vision, speech recognition, natural language processing (NLP) and other cognitive domains on their cloud platforms. Intelligent APIs are pre-trained and pre-configured models for a certain task and serve as gateways to AI applications. This can be illustrated with the Visual Recognition API from IBM. When this API receives an image of a car, it recognizes the car, and perhaps other objects in the provided image. The recognized objects are returned to the user with a confidence score for each class of objects.
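To make this concrete, such a vision API typically returns a JSON document that pairs each detected class with a confidence score. The response shape below is a hypothetical illustration (not the exact schema of IBM's or any other vendor's API); the sketch simply picks the class the service is most confident about:

```python
import json

# Hypothetical response from a visual recognition API: each detected
# class carries a confidence score between 0 and 1.
response_text = """
{
  "images": [{
    "classifiers": [{
      "classes": [
        {"class": "car",    "score": 0.94},
        {"class": "road",   "score": 0.71},
        {"class": "person", "score": 0.33}
      ]
    }]
  }]
}
"""

def top_class(response_json: str) -> tuple[str, float]:
    """Return the (class, score) pair the API is most confident about."""
    data = json.loads(response_json)
    classes = data["images"][0]["classifiers"][0]["classes"]
    best = max(classes, key=lambda c: c["score"])
    return best["class"], best["score"]

label, score = top_class(response_text)
print(label, score)  # car 0.94
```

An application would decide what to do next based on this score, for example only acting on classes above a chosen confidence threshold.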

Furthermore, APIs do not need to be used as standalone services. They can serve as building blocks for combined intelligent applications. For example, building a speech-translation assistant requires a speech-to-text API, a translation API and a text-to-speech API to return the translation in the target language. This modular characteristic of APIs makes them useful for a wide range of AI applications.
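The speech-translator pipeline above amounts to plain function composition. In the sketch below, the three functions are stand-ins for real cloud API calls; all names and their toy behavior are hypothetical, chosen only to show how the building blocks chain together:

```python
# Stand-ins for real cloud API calls. Each function name and its toy
# behavior are hypothetical placeholders for illustration only.
def speech_to_text(audio: bytes) -> str:
    # In practice: POST the audio to a speech-to-text API.
    return audio.decode("utf-8")  # pretend the audio "is" its transcript

def translate(text: str, target_lang: str) -> str:
    # In practice: call a translation API.
    toy_dictionary = {("hello", "nl"): "hallo"}
    return toy_dictionary.get((text, target_lang), text)

def text_to_speech(text: str) -> bytes:
    # In practice: call a text-to-speech API and receive audio back.
    return text.encode("utf-8")

def speech_translator(audio: bytes, target_lang: str) -> bytes:
    """Chain the three building blocks into one assistant."""
    return text_to_speech(translate(speech_to_text(audio), target_lang))

print(speech_translator(b"hello", "nl"))  # b'hallo'
```

Because each stage has a narrow input/output contract (audio in, text out, and so on), any one API can be swapped for a competitor's without touching the rest of the pipeline.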

Using cognitive APIs is the easiest and quickest way to start building and integrating an AI application.
One of the business applications that has gained quite a lot of ground is the use of speech recognition in health care. Many physicians work with an electronic health record (EHR) to document patient information; however, this has been said to slow down consultations and to restrict the patient narrative. Using speech recognition, patient documentation can be recorded in a flexible and fast manner, which allows the physician to pay more attention to the patient. This solution is already offered by vendors such as Nuance and M*Modal.

The rapid improvement of speech recognition accuracy offers many opportunities in the near future. Having all our software and hardware voice-controlled might not be as far away as many people think.

4. Open Source

Although APIs are a great way to start building AI applications, they are aimed very specifically at achieving a certain task. When you need to perform a machine learning task that is not available through an API, you can build one yourself. This, however, requires knowledge of complex algorithms and frameworks.

Today, increasingly more AI algorithms and frameworks are available as open source, meaning they are publicly available, often at no license cost. Consequently, developers of AI applications can rely on the knowledge and previous work of a large user base. That makes this trend our fourth accelerator of the AI uptake. An example of open source AI software is TensorFlow from Google, an open source machine learning library containing many different algorithms and frameworks. Open source training sets are also available these days, which allow you to load intelligence directly into your AI application.
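To illustrate what "training with examples" means inside such a library, the sketch below fits a one-parameter linear model by gradient descent in pure Python. It is a deliberately minimal toy: open source frameworks like TensorFlow implement the same basic idea, but add automatic differentiation, hardware acceleration and far richer model families on top of it.

```python
# Minimal gradient-descent sketch: learn a weight w so that y ≈ w * x.
# Pure Python for illustration; real frameworks scale this idea up.
examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # data follows y = 2x

w = 0.0              # model parameter, starts with no knowledge
learning_rate = 0.05

for _ in range(200):                      # repeated passes over the examples
    for x, y in examples:
        prediction = w * x
        error = prediction - y
        w -= learning_rate * error * x    # nudge w to reduce the error

print(round(w, 3))  # close to 2.0: the model has "learned" from examples
```

This is also why the examples matter as much as the algorithm: with the same open source code but different training data, the learned parameter (and hence the model's behavior) would be different.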

A simple comparison between an API and the underlying open source framework can be illustrated by the following example. Google's Speech API can be used to develop applications that receive audio from the user and convert it to plain text. To do this, Google's API uses techniques such as deep learning and is trained with millions of examples. Next to this API, Google also made the source code for the (deep learning) neural network itself publicly available. However, with the source code alone, no audio can be converted to text: the model first needs to be trained with many examples, but it can then be tailored very specifically to the needs of the user. This can yield the competitive advantage organizations are looking for.

As a final note on open source: AI applications can also use a combination of APIs and open source technologies to rapidly build an application from multiple pre-built modules, which speeds up the development process and uses the best of both worlds. Altogether, open source AI technologies broaden the user base and the set of use cases for AI, and create competition for large commercial AI vendors, which keeps prices low. At the same time, some of the large vendors take open source AI code and package it into commercial applications, adding support, maintenance, training, etc.

5. IoT and standardization

The last important accelerator for AI is the Internet of Things (IoT): all the (mobile) devices, vehicles and sensors that are connected to the internet. Together, these devices generate a massive amount of fast, semi-structured data, which can ‘feed’ and improve AI applications. A few examples: self-driving vehicles, homes with intelligent thermostats, intelligent pacemakers that provide doctors with real-time insights into their patients, or a parking garage that recognizes your car.

As more and more devices are connected, standardization of data flows, data formats and data services is needed so that these devices can interact properly. Standardization in IoT is still in development. Two important building blocks for standardization in the AI field are the JSON data format and REST principles for APIs.

JSON (JavaScript Object Notation) is a data format that is easy for humans to read and write and easy for computers to generate and parse. REST (Representational State Transfer) is an architectural style for web services that allows API-driven services to evolve easily.
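A small example makes the format concrete. The sensor reading below is a hypothetical IoT payload (the device name and fields are made up for illustration); Python's standard json module serializes it for transmission and parses it back, exactly as a REST service and its clients would:

```python
import json

# A hypothetical IoT payload: one reading from a parking-garage sensor.
reading = {
    "device_id": "garage-cam-07",       # made-up identifier
    "timestamp": "2017-06-01T08:30:00Z",
    "event": "car_recognized",
    "confidence": 0.97,
}

payload = json.dumps(reading)    # human-readable text, ready to send
restored = json.loads(payload)   # any REST client can parse it back

print(restored["event"], restored["confidence"])  # car_recognized 0.97
```

Because every major platform ships a JSON parser, the producing device and the consuming AI application do not need to know anything about each other's internals, which is precisely what standardization buys.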

With more data generated by IoT devices and easier development and integration due to standardized formats and principles, AI solutions are emerging more broadly.


Recent technological trends drive the broad adoption of AI. With the cloud as a platform and APIs as building blocks for intelligent applications, AI is available to more people and organizations than ever before. The rise of unstructured data opens opportunities for AI technologies that can give companies a competitive advantage, and AI applications perform better than in the past thanks to the improved availability of training data. Moreover, with open source AI technologies, customized AI solutions can be developed quickly from pre-built modules, using the wisdom of the crowd. Finally, IoT will drive standardization further, making even more data available and allowing better integration of devices.

With the recent technological trends described in this article, we believe that the right time has come for organizations to start developing their first AI use cases.
