Inside the Internet of Things (IoT)
Explore the inner workings of the Internet of Things in this deep dive into some of the technologies that make it possible.
If you’ve ever seen the “check engine” light come on in your car and had the requisite repairs done in a timely way, you’ve benefited from an early-stage manifestation of what today is known as the Internet of Things (IoT). Something about your car’s operation—an action—triggered a sensor,1 which communicated the data to a monitoring device. The significance of these data was determined based on aggregated information and prior analysis. The light came on, which in turn triggered a trip to the garage and necessary repairs.
In 1991 Mark Weiser, then of Xerox PARC, saw beyond these simple applications. Extrapolating trends in technology, he described “ubiquitous computing,” a world in which objects of all kinds could sense, communicate, analyze, and act or react to people and other machines autonomously, in a manner no more intrusive or noteworthy than how we currently turn on a light or open a tap.
One way of capturing the process implicit in Weiser’s model is as an Information Value Loop with discrete but connected stages. An action in the world allows us to create information about that action, which is then communicated and aggregated across time and space, allowing us to analyze those data in the service of modifying future acts.
Although this process is generic, it is perhaps increasingly relevant, for the future Weiser imagined is more and more upon us—not thanks to any one technological advance or even breakthrough but, rather, due to a confluence of improvements to a suite of technologies that collectively have reached levels of performance that enable complete systems relevant to a human-sized world.
As illustrated in figure 2 below, each stage of the value loop is connected to the subsequent stage by a specific set of technologies, defined below.
The business implications of the IoT are explored in an ongoing series of Deloitte reports. These articles examine the IoT’s impact on strategy, customer value, analytics, security, and a wide variety of specific applications. Yet just as a good chef should have some understanding of how the stove works, managers hoping to embed IoT-enabled capabilities in their strategies are well served to gain a general understanding of the technologies themselves.
To that end, this document serves as a technical primer on some of the technologies that currently drive the IoT. Its structure follows that of the technologies that connect the stages of the Information Value Loop: sensors, networks, standards, augmented intelligence, and augmented behavior. Each section in the report provides an overview of the respective technology—including factors that drive adoption as well as challenges that the technology must overcome to achieve widespread adoption. We also present an end-to-end IoT technology architecture that guides the development and deployment of Internet of Things systems. Our intent, in this primer, is not to describe every conceivable aspect of the IoT or its enabling technologies but, rather, to provide managers an easy reference as they explore IoT solutions and plan potential implementations. Our hope is that this report will help demystify the underlying technologies that comprise the IoT value chain and explain how these technologies collectively relate to a larger strategic framework.
Most “things,” from automobiles to Zambonis, the human body included, have long operated “dark,” with their location, position, and functional state unknown or even unknowable. The strategic significance of the IoT is born of the ever-advancing ability to break that constraint, and to create information, without human observation, in all manner of circumstances that were previously invisible. What allows us to create information from action is the use of sensors, a generic term intended to capture the concept of a sensing system comprising sensors, microcontrollers, modem chips, power sources, and other related devices.
A sensor converts a non-electrical input into an electrical signal that can be sent to an electronic circuit. The Institute of Electrical and Electronics Engineers (IEEE) provides a formal definition:
An electronic device that produces electrical, optical, or digital data derived from a physical condition or event. Data produced from sensors is then electronically transformed, by another device, into information (output) that is useful in decision making done by “intelligent” devices or individuals (people).7
The technological complement to a sensor is an actuator, a device that converts an electrical signal into action, often by converting the signal to nonelectrical energy, such as motion. A simple example of an actuator is an electric motor that converts electrical energy into mechanical energy. Sensors and actuators belong to the broader category of transducers: A sensor converts energy of different forms into electrical energy; a transducer is a device that converts one form of energy (electrical or not) into another (electrical or not). For example, a loudspeaker is a transducer because it converts an electrical signal into a magnetic field and, subsequently, into acoustic waves.
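The sensor/actuator distinction above can be sketched in a few lines of code. The class names and the 10 mV/°C transfer function below are hypothetical, chosen only to illustrate a non-electrical input becoming an electrical signal, and a signal becoming physical action:

```python
# Illustrative sketch (hypothetical classes): a sensor converts a physical
# quantity into an electrical signal; an actuator converts a signal back
# into physical action. Both are transducers.

class TemperatureSensor:
    """Maps temperature (deg C) to a voltage, e.g. a linear 10 mV/deg C device."""
    def read(self, temp_c: float) -> float:
        return 0.010 * temp_c  # volts

class FanActuator:
    """Converts an electrical command into mechanical action (fan on/off)."""
    def __init__(self):
        self.running = False
    def drive(self, voltage: float, threshold: float = 0.25) -> None:
        self.running = voltage >= threshold  # spin up above ~25 deg C

sensor = TemperatureSensor()
fan = FanActuator()
fan.drive(sensor.read(30.0))  # 0.30 V exceeds the threshold
print(fan.running)            # True
```

The point of the sketch is only the division of labor: the sensor side creates information from the physical world, while the actuator side turns a signal back into action.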
Different sensors capture different types of information. Accelerometers measure linear acceleration, detecting whether an object is moving and in which direction,8 while gyroscopes measure complex motion in multiple dimensions by tracking an object’s position and rotation. By combining multiple sensors, each serving different purposes, it is possible to build complex value loops that exploit many different types of information. For example:
Sensors are often categorized based on their power sources: active versus passive. Active sensors emit energy of their own and then sense the response of the environment to that energy. Radio Detection and Ranging (RADAR) is an example of active sensing: A RADAR unit emits an electromagnetic signal that bounces off a physical object and is “sensed” by the RADAR system. Passive sensors simply receive energy (in whatever form) that is produced external to the sensing device. A standard camera is embedded with a passive sensor—it receives signals in the form of light and captures them on a storage device.
Passive sensors require less energy, but active sensors can be used in a wider range of environmental conditions. For example, RADAR provides day and night imaging capacity undeterred by clouds and vegetation, while cameras require light provided by an external source.11
Figure 4 provides an illustrative list of 13 types of sensors based on the functions they perform; they could be active or passive per the description above.
Of course, the choice of a specific sensor is primarily a function of the signal to be measured (for example, position versus motion sensors). There are, however, several generic factors that determine the suitability of a sensor for a specific application. These include, but are not limited to, the following:12
Any of these factors can impact the reliability of the data received and therefore the value of the data itself.
There are three primary factors driving the deployment of sensor technology: price, capability, and size. As sensors get less expensive, “smarter,” and smaller, they can be used in a wider range of applications and can generate a wider range of data at a lower cost.13
Even in cases where sensors are sufficiently small, smart, and inexpensive, challenges remain. Among them are power consumption, data security, and interoperability.
Information that sensors create rarely attains its maximum value at the time and place of creation. The signals from sensors often must be communicated to other locations for aggregation and analysis. This typically involves transmitting data over a network.
Sensors and other devices are connected to networks using various networking devices such as hubs, gateways, routers, network bridges, and switches, depending on the application. For example, laptops, tablets, mobile phones, and other devices are often connected to a network, such as Wi-Fi, using a networking device (in this case, a Wi-Fi router).
The first step in the process of transferring data from one machine to another via a network is to uniquely identify each of the machines. The IoT requires a unique name for each of the “things” on the network. Network protocols are a set of rules that define how computers identify each other. Broadly, network protocols can be proprietary or open. Proprietary network protocols allow identification and authorization to machines with specific hardware and software, making customization easier and allowing manufacturers to differentiate their offerings. Open protocols allow interoperability across heterogeneous devices, thus improving scalability.26
Internet Protocol (IP) is an open protocol that provides unique addresses to various Internet-connected devices; currently, there are two versions of IP: IP version 4 (IPv4) and IP version 6 (IPv6). IP was used to address computers before it began to be used to address other devices. IPv4, with its 32-bit address space, supports roughly 4.3 billion unique addresses, nearly all of which have already been allocated. IPv6 has superior scalability, with approximately 3.4 × 10^38 unique addresses from its 128-bit address space. Since the number of devices connected to the Internet was estimated at 26 billion as of 2015 and projected to grow to 50 billion or more by 2020, the adoption of IPv6 has served as a key enabler of the IoT.
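The gulf between the two address spaces is easy to see with Python's standard-library ipaddress module (IPv4 addresses are 32 bits, IPv6 addresses 128 bits):

```python
import ipaddress

# IPv4: 32-bit addresses -> 2**32 (about 4.3 billion) possibilities.
# IPv6: 128-bit addresses -> 2**128 (about 3.4e38) possibilities.
print(2 ** 32)          # 4294967296
print(float(2 ** 128))  # ~3.4e38

# The stdlib ipaddress module parses both address families:
v4 = ipaddress.ip_address("192.168.0.10")
v6 = ipaddress.ip_address("2001:db8::1")
print(v4.version, v6.version)  # 4 6
```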
Network technologies are classified broadly as wired or wireless. With users and devices continuously on the move, wireless networks provide convenience through nearly continuous connectivity, while wired connections remain useful for more reliable, secure, and high-volume network routes.27
The choice of a network technology depends largely on the geographical range to be covered. When data have to be transferred over short distances (for example, inside a room), devices can use wireless personal area network (PAN) technologies such as Bluetooth and ZigBee as well as wired connections through technologies such as Universal Serial Bus (USB). When data have to be transferred over a larger area, such as an office, devices could use local area network (LAN) technologies. Examples of wired LAN technologies include Ethernet and fiber optics; wireless LAN technologies include Wi-Fi. When data are to be transferred over a wider area, beyond buildings and cities, a wide area network (WAN) is set up by connecting a number of local area networks through routers. The Internet is an example of a WAN.
Data transfer rates and energy requirements are two key considerations when selecting a network technology for a given application. Technologies such as 4G (LTE, LTE-A) and 5G are favorable for IoT applications, given their high data transfer rates. Technologies such as Bluetooth Low Energy and Low Power Wi-Fi are well suited for energy-constrained devices.
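As a rough sketch of this selection logic, the following shortlists candidate technologies for an application based on range, data rate, and power needs. The figures in the table are illustrative approximations for this sketch, not specification values:

```python
# Illustrative (approximate, not authoritative) properties of a few
# wireless technologies, used to shortlist candidates for an application.
TECHNOLOGIES = {
    # name: (typical range in metres, peak data rate in Mbps, low power?)
    "Bluetooth Low Energy": (50, 1.0, True),
    "Wi-Fi":               (100, 600.0, False),
    "LTE":                 (10_000, 300.0, False),
    "Weightless":          (5_000, 16.0, True),
}

def shortlist(min_range_m, min_rate_mbps, need_low_power):
    """Return technologies meeting the range, rate, and power requirements."""
    return sorted(
        name for name, (rng, rate, low_power) in TECHNOLOGIES.items()
        if rng >= min_range_m and rate >= min_rate_mbps
        and (low_power or not need_low_power)
    )

# A battery-powered sensor reporting over a few kilometres:
print(shortlist(min_range_m=2000, min_rate_mbps=0.1, need_low_power=True))
```

A real selection exercise weighs many more factors (cost, spectrum licensing, existing infrastructure), but the trade-off structure is the same.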
Below, we discuss select wireless network technologies that could be used for IoT applications. For each of the following technologies, we discuss bandwidth rates, recent advances, and limitations. The technologies discussed below are representative, and the choice of an appropriate technology depends on the application at hand and the features of that technology.
Introduced in 1999, Bluetooth technology is a wireless technology known for its ability to transfer data over short distances in personal area networks.28 Bluetooth Low Energy (BLE) is a recent addition to the Bluetooth family and consumes about half the power of a Bluetooth Classic device, the original version of Bluetooth.29 The energy efficiency of BLE is attributable to the shorter scanning time needed for BLE devices to detect other devices: 0.6 to 1.2 milliseconds (ms), compared to 22.5 ms for Bluetooth Classic.30 In addition, the efficient transfer of data during the transmitting and receiving states enables BLE to deliver higher energy efficiency compared to Bluetooth Classic. Higher energy efficiency comes at the cost of lower data rates: BLE supports 260 kilobits per second (Kbps), while Bluetooth Classic supports up to 2.1 megabits per second (Mbps).31
Existing penetration, coupled with low device costs, positions BLE as a technology well suited for IoT applications. However, interoperability remains the persistent bottleneck here as well: BLE is compatible only with the relatively newer dual-mode Bluetooth devices (so called because they support both BLE and Bluetooth Classic), not with legacy Bluetooth Classic devices.32
While wired Ethernet has been in use since the 1970s, Wi-Fi is a more recent wireless technology that is widely popular and known for its high-speed data transfer rates in personal and local area networks.
Typically, Wi-Fi devices keep latency, or delays in the transmission of data, low by remaining active even when no data are being transmitted. Such Wi-Fi connections are often set up with a dedicated power line or batteries that need to be charged after a couple of hours of use. Higher-cost, lower-power Wi-Fi devices “sleep” when not transmitting data and need just 10 milliseconds to “wake up” when called upon.33 Low Power Wi-Fi with batteries can be used for remote sensing and control applications.
Introduced in 2001, WiMAX is a wireless technology based on the IEEE 802.16 family of standards, developed in cooperation with the European Telecommunications Standards Institute (ETSI). WiMAX 2 is the latest technology in the WiMAX family, offering maximum data speeds of 1 Gbps, compared to 100 Mbps for WiMAX.34
In addition to higher data speeds, WiMAX 2 has better backward compatibility than WiMAX: WiMAX 2 network operators can provide seamless service by using 3G or 2G networks when required. By way of comparison, Long Term Evolution (LTE) and LTE-A, described below, also allow backward compatibility.
Long Term Evolution (LTE), a wireless wide-area network technology, was developed by members of the 3rd Generation Partnership Project (3GPP) and released in 2008. This technology offers data speeds of up to 300 Mbps.35
LTE-Advanced (LTE-A) is a more recent addition to the LTE family that offers still-higher data rates of up to 1 Gbps, compared to 300 Mbps for LTE.36 There is debate among industry practitioners over whether LTE is truly a 4G technology: Many consider LTE a pre-4G technology and LTE-A a true 4G technology.37 Given its high bandwidth and low latency, LTE is touted as the more promising technology for IoT applications; however, the underlying network infrastructure remains under development, as described in the challenges below.
Weightless is a wireless open-standard WAN technology introduced in early 2014. Weightless uses unused bandwidth originally intended for TV broadcast to transfer data; based on the technical process of dynamic spectrum allocation, it can travel longer distances and penetrate through walls.38
Weightless can provide data rates between 2.5 Kbps and 16 Mbps over a wireless range of up to five kilometers, with batteries lasting up to 10 years.39 Weightless devices remain in standby mode, waking every 15 minutes and staying active for 100 milliseconds to sync up and act on any messages; this introduces a degree of latency.40 Given these characteristics, Weightless connections appear better suited for delivering short messages in widespread machine-to-machine communications.
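The arithmetic behind those figures is straightforward: a device active for 100 milliseconds out of every 15 minutes has a tiny duty cycle, which is where the multi-year battery life comes from, at the cost of up to one full wake interval of latency:

```python
# Duty-cycle arithmetic implied by the figures above: a device that wakes
# every 15 minutes and stays active for 100 ms is active only a tiny
# fraction of the time.
wake_interval_s = 15 * 60  # 900 s between wake-ups
active_s = 0.100           # 100 ms active per wake-up

duty_cycle = active_s / wake_interval_s
worst_case_latency_min = wake_interval_s / 60  # a message may wait a full interval

print(f"duty cycle: {duty_cycle:.6%}")                       # ~0.011%
print(f"worst-case latency: {worst_case_latency_min:.0f} minutes")
```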
Networks are able to transfer data at higher speeds, at lower costs, and with lower energy requirements than ever before. Also, with the introduction of IPv6, the number of connected devices is rising rapidly. As a result, we are seeing an increasingly diverse composition of connected devices, from laptops and smartphones to home appliances, vehicles, traffic signals, and wind turbines. Such diversity in the nature of connected devices is driving a wider-scale adoption of an extensive range of network technologies.
Even though network technologies have improved in terms of higher data rates and lower costs, there are challenges associated with interconnections, penetration, security, and power consumption.
The third stage in the Information Value Loop—aggregate—refers to a variety of activities including data handling, processing, and storage. Data collected by sensors in different locations are aggregated so that meaningful conclusions can be drawn. Aggregation increases the value of data by increasing, for example, the scale, scope, and frequency of data available for analysis. Aggregation is achieved through the use of various standards depending on the IoT application at hand. According to the International Organization for Standardization (ISO), “a standard is a document that provides requirements, specifications, guidelines or characteristics that can be used consistently to ensure that materials, products, processes and services are fit for their purpose.”49
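As a minimal sketch of what aggregation means in practice, the following groups hypothetical temperature readings by location using only the Python standard library; the site names and values are invented for illustration:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical sensor readings: (location, hour, temperature in deg C).
readings = [
    ("plant-a", 9, 21.5), ("plant-a", 9, 22.1), ("plant-a", 10, 23.0),
    ("plant-b", 9, 19.8), ("plant-b", 10, 20.4), ("plant-b", 10, 20.0),
]

# Aggregate by location: raw streams become per-site summaries
# that are far more amenable to analysis than individual readings.
by_location = defaultdict(list)
for location, hour, temp in readings:
    by_location[location].append(temp)

summary = {loc: round(mean(temps), 2) for loc, temps in by_location.items()}
print(summary)  # {'plant-a': 22.2, 'plant-b': 20.07}
```

Real aggregation pipelines add scale (many devices), scope (many signal types), and frequency (streaming rather than batch), but the core operation is the same grouping and summarizing.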
Two broad types of standards relevant for the aggregation process are technology standards (including network protocols, communication protocols, and data-aggregation standards) and regulatory standards (related to security and privacy of data, among other issues).
We discuss technology standards in the “Enabling technology standards” discussion later in this section. Regulatory standards, the second type, will play an important role in shaping the IoT landscape: There is a need for clear regulations related to the collection, handling, ownership, use, and sale of data. Within the context of expanding IoT applications, it is worthwhile to consider the US Federal Trade Commission’s privacy and security recommendations, dubbed the Fair Information Practice Principles (FIPPs) and described below.50
It is unclear, as of now, who will design, develop, and implement any regulatory standards specifically tailored to IoT applications. There is discussion about the appropriateness of existing guidelines and whether they are adequate for evolving IoT applications. For example, the US Health Insurance Portability and Accountability Act (HIPAA) governs the protection of medical information collected by doctors, hospitals, and insurance companies.51 However, the act does not extend to information collected through personal wearable devices.52
Having discussed regulatory standards, we turn to technology standards, which comprise three elements: network protocols, communication protocols, and data-aggregation standards. Network protocols define how machines identify each other, while communication protocols provide a set of rules, or a common language, for devices to communicate. Once the devices are “talking” to each other and sharing data, aggregation standards help aggregate and process the data so that those data become usable.
Traditional extract, transform, and load (ETL) tools aggregate and store data in relational databases, in which data are organized by establishing relationships based on a unique identifier. It is easy to enter, store, and query structured data in relational databases using Structured Query Language (SQL). The American National Standards Institute standardized SQL as the querying language for relational databases in 1986.55 SQL provides users a medium to communicate with databases and perform tasks such as data modification and retrieval. As the standard, SQL aids aggregation not just in centralized databases (all data stored in a single location) but also in distributed databases (data stored on several computers with concurrent data modifications).
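A minimal illustration of relational storage and SQL aggregation, using Python's built-in sqlite3 module; the table name and device identifiers are hypothetical:

```python
import sqlite3

# A minimal relational-database sketch: structured sensor readings
# keyed by a device identifier, queried with standard SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (device_id TEXT, temp_c REAL)")
conn.executemany(
    "INSERT INTO readings VALUES (?, ?)",
    [("dev-1", 21.5), ("dev-1", 22.5), ("dev-2", 19.0)],
)

# Aggregate per device with a SQL GROUP BY.
rows = conn.execute(
    "SELECT device_id, AVG(temp_c) FROM readings "
    "GROUP BY device_id ORDER BY device_id"
).fetchall()
print(rows)  # [('dev-1', 22.0), ('dev-2', 19.0)]
conn.close()
```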
As large volumes of data become easily and cost-effectively available, questions arise about the adequacy of traditional ETL tools, which can typically handle data in terabytes (1 terabyte = 10^12 bytes). Big-data ETL tools developed in recent years can handle much higher volumes, on the order of petabytes (1 petabyte = 1,000 terabytes, or 10^15 bytes). In addition to handling large volumes, big-data tools are also considered better suited to handle the variety of incoming data, structured as well as unstructured. Structured data are typically stored in spreadsheets, while unstructured data are collected in the form of images, videos, web pages, emails, blog entries, documents, and so on.
Apache Hadoop is a big-data tool useful especially for unstructured data. Based on the Java programming language, Hadoop, developed by the Apache Software Foundation, is an open-source tool useful for processing large data sets. Hadoop enables parallel processing of large data across clusters of computers wherein each computer offers local aggregation and storage.56 Hadoop comprises two major components: MapReduce and the Hadoop Distributed File System (HDFS). MapReduce enables aggregation and parallel processing of large data sets, while HDFS provides distributed, file-based storage. “Not only SQL” (NoSQL) databases, a broad category of databases that can store unstructured data that relational databases cannot easily accommodate, often run on top of HDFS. Data processed and stored on Hadoop systems can be queried through Hadoop application program interfaces (APIs) that offer an easy user interface to query the data stored on HDFS for analytics applications.
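Hadoop itself is a Java framework, but the shape of the MapReduce pattern (map emits key-value pairs, a shuffle groups them by key, reduce folds each group) can be sketched in a few lines of plain Python:

```python
from collections import defaultdict

# The MapReduce pattern in miniature, using word counting as the
# classic example: map emits (key, value) pairs, the shuffle groups
# values by key, and reduce folds each group into a result.
def map_phase(records):
    for line in records:
        for word in line.split():
            yield (word, 1)

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

lines = ["sensor data sensor", "data everywhere"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts)  # {'sensor': 2, 'data': 2, 'everywhere': 1}
```

In a real Hadoop cluster, the map and reduce phases run in parallel across many machines and the shuffle moves data over the network; the sketch above shows only the data flow.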
Depending on the type of data and processing, different tools could be used. While MapReduce relies on parallel processing, Spark, another big-data tool, supports both parallel processing and in-memory processing.57 Considering storage, HDFS is a file-based system that stores batch data such as quarterly and yearly company financial data, while HBase and Cassandra are event-based storage databases useful for storing streaming (or real-time) data such as stock-performance data.58 Other tools exist with a range of benefits and limitations, and the choice of a tool depends on the application at hand.
At present, the IoT landscape is in a nascent stage, and existing technology standards serve specific solutions and stakeholder requirements. There are many efforts under way to develop standards that can be adopted more widely. Primarily, we find two types of developments: vendors (across the IoT value chain) coming together to an agreement, and standards bodies (for example, IEEE or ETSI) working to develop a standard that vendors follow. Time will tell which one of these two options will prevail. Ultimately, it might be difficult to have one universal standard or “one ring to rule them all” either at the network or communication protocol level or at the data-aggregation level.
In terms of network and communication protocols, a few large players have a meaningful opportunity to drive the standards that IoT players will follow for years to come. As an example, Qualcomm, together with companies such as Sony, Bosch, and Cisco, formed the AllSeen Alliance, which provides the AllJoyn platform described earlier.59 Along similar lines, through the Open Interconnect Consortium, Intel launched the open-source IoTivity platform that facilitates device-to-device connectivity.60 IoTivity offers its members a free license of the code, while AllSeen does not. However, AllSeen-compliant devices are already available, while devices compliant with IoTivity are expected to be available by the second half of 2015.61 The two platforms are comparable but not interoperable, much as iOS and Android are comparable but not interoperable.62
Concurrently, various standards bodies are working to develop standards (for network and communication protocols) that apply within their geographical boundaries and could extend well beyond them to facilitate worldwide IoT communications. As an example, ETSI, which focuses primarily on Europe, is working to develop an end-to-end architecture, the oneM2M platform, that could be used worldwide.63 IEEE, another standards body, is making progress with the IEEE P2413 working group and is coordinating with standards bodies such as ETSI and ISO to develop a global standard by 2016.64
In terms of data aggregation, relational databases and SQL are considered to be the standards for storing and querying structured data. However, we do not yet have a widely used standard for handling unstructured data, even though various big-data tools are available. We discuss this challenge below.
For effective aggregation and use of data for analysis, technical standards are needed to handle unstructured data, along with legal and regulatory standards to maintain data integrity. There are also gaps in the skills needed to leverage the newer big-data tools, while security remains a major concern, given that all the data are aggregated and processed at this stage of the Information Value Loop.
Extracting insight from data requires analysis, the fourth stage in the Information Value Loop. Analysis is driven by cognitive technologies and the accompanying models that facilitate their use.70 We refer to these enablers collectively as “augmented intelligence” to capture the idea that systems cannot truly automate intelligence, a concept that for us includes notions of volition and purpose in a way that excludes machines, but can nevertheless supplement and enhance human intelligence.
In the context of the value loop, analysis is useful only to the extent that it informs action. “Analytics typically involves sifting through mountains of what are often confusing and conflicting data—in search of nuggets of insight that may inform better decisions.”71 As figure 12 illustrates, there are three different ways in which analytics can inform action.72
At the lowest level, descriptive analytics tools augment our intelligence by allowing us to work effectively with much larger or more complex data sets than we could otherwise easily handle. Various data visualization tools such as Tableau and SAS Visual Analytics make large data sets more amenable to human comprehension and enable users to identify insights that would otherwise be lost in the huge heap of data.
Predictive analytics is the beginning of keener insight into what might be happening or could happen, given historical trends. Predictive analytics exploits the large quantity and increasing variety of data to build useful models that can correlate seemingly unrelated variables.73 Predictive models are expected to produce more accurate results through machine learning, a process that refers to computer systems’ ability to improve their performance by exposure to data without the need to follow explicitly programmed instructions. For instance, when presented with an information database about credit-card transactions, a machine-learning system discerns patterns that are predictive of fraud. The more transaction data that the system processes, the better its predictions should become.74 Unfortunately, in many practical applications, even seemingly strong correlations are unreliable guides to effective action. Consequently, predictive analytics in itself still relies on human beings to determine what sorts of interventions are likeliest to work.75
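A toy illustration of that fraud example, with invented transaction amounts: the “model” here is just the centroid of each labeled class, and a new transaction is flagged if it lies nearer the fraud centroid. A real machine-learning system would be far more sophisticated; this shows only the idea of deriving a decision rule from labeled historical data:

```python
from statistics import mean

# Hypothetical labeled historical data: transaction amounts in dollars.
legit_amounts = [12.0, 30.0, 25.0, 18.0]
fraud_amounts = [980.0, 1200.0, 1500.0]

# "Training" is just computing the centroid of each labeled class.
legit_centroid = mean(legit_amounts)  # 21.25
fraud_centroid = mean(fraud_amounts)  # ~1226.67

def looks_fraudulent(amount: float) -> bool:
    """Flag a transaction nearer the fraud centroid than the legit one."""
    return abs(amount - fraud_centroid) < abs(amount - legit_centroid)

print(looks_fraudulent(15.0))    # False
print(looks_fraudulent(1100.0))  # True
```

As more labeled transactions arrive, the centroids shift, which is the (drastically simplified) sense in which the system's predictions improve with exposure to data.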
Finally, prescriptive analytics takes on the challenge of creating more nearly causal models.76 Prescriptive analytics includes optimization techniques that are based on large data sets, business rules (information on constraints), and mathematical models. Prescriptive algorithms can continuously include new data and improve prescriptive accuracy in decision optimizations. Since prescriptive models provide recommendations on the best course of action, the element of human participation becomes more important; the focus shifts from a purely analytics exercise to behavior change management. We discuss this more in the “Augmented behavior” section.
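A miniature sketch of prescriptive analytics as optimization over business rules; the maintenance actions, costs, and downtime figures below are hypothetical:

```python
# Prescriptive analytics in miniature: choose the action with the best
# expected net benefit, subject to a business rule (a budget constraint).
# All figures are hypothetical.
actions = {
    # action: (cost in dollars, expected downtime avoided in hours)
    "do nothing":        (0, 0.0),
    "remote diagnostic": (50, 2.0),
    "send technician":   (400, 9.0),
    "replace unit":      (2000, 10.0),
}
budget = 500
downtime_cost_per_hour = 120

def recommend(actions, budget):
    """Pick the feasible action maximizing downtime savings minus cost."""
    def net_benefit(item):
        cost, hours = item[1]
        return hours * downtime_cost_per_hour - cost
    feasible = {a: v for a, v in actions.items() if v[0] <= budget}
    return max(feasible.items(), key=net_benefit)[0]

print(recommend(actions, budget))  # send technician
```

New data (say, an updated downtime estimate from a predictive model) simply changes the inputs, and the recommendation updates accordingly, which is the sense in which prescriptive algorithms continuously refine decision optimization.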
With advances in cognitive technologies’ ability to process varied forms of information, inputs such as vision and voice have also become usable. Below, we discuss select cognitive technologies that are experiencing increasing adoption and can be deployed for predictive and prescriptive analytics.78
Availability of big data—coupled with growth in advanced analytics tools, proprietary as well as open-source—is driving augmented intelligence. Typical intelligence applications are based on batch processing of data; however, the need for timely insights and prompt action is driving a growing adoption of real-time data analysis tools.
Limitations of augmented intelligence result from the quality of data, human inability to develop a foolproof model, and legacy systems’ limited ability to handle unstructured and real-time data. Even if both the data and model are shipshape, there could be challenges in human implementation of the recommended action; in the next section, on augmented behavior, we discuss the challenges related to human behavior.
In its simplest sense, the concept of “augmented behavior” is the “doing” of some action that results from all the preceding stages of the value loop, from sensing through analysis. Augmented behavior, the last phase in the loop, also restarts it: when configured to do so, an action generates new data.
There is a thin line between augmented intelligence and augmented behavior. For our purpose, augmented intelligence drives informed action, while augmented behavior is an observable action in the real world.
As a practical matter, augmented behavior finds expression in at least three ways:
The enabling technologies for both M2M and M2H interfaces prompt a consideration of the evolution in the role of machines—from simple automation that involves repetitive tasks requiring strength and speed in structured environments to sophisticated applications that require situational awareness and complex decision making in unstructured environments. The shift toward sophisticated automation requires machines to evolve in two ways: improvements in the machine’s cognitive abilities (for example, decision making and judgment) discussed in the previous section and the machine’s execution or actuation abilities (for example, higher precision along with strength and speed). With respect to robots specifically, we present below an overview of how machines have ascended this evolutionary path:
In the late 1940s, a few non-programmable robots were developed; these robots could not be reprogrammed to adjust to changing situations and, as such, merely served as mechanical arms for heavy, repetitive tasks in manufacturing industries.99 In 1954, George Devol developed one of the first programmable robots,100 and in the early 1960s, an increasing number of companies started using programmable robots for industrial automation applications such as warehouse management and machining.101
This period witnessed key developments related to the evolution of adaptive robots.102 As the name suggests, adaptive robots embedded with sensors and sophisticated actuation systems can adapt to a changing environment and can perform tasks with higher precision and complexity compared to earlier robots.103 During this period, robotic machines that could adapt to varying situations were used to identify objects and autonomously take action in applications such as space vehicles, unmanned aerial vehicles, and submarines.104
The development of an open-source robot operating system, now maintained by the Open Source Robotics Foundation, was an important driver enabling the development and testing of various robotic technologies.105 As robots’ intelligence and precision of execution improved, they increasingly began working with human beings on critical tasks such as medical surgery. Following the US Defense Advanced Research Projects Agency’s 2004 competition for developing autonomous military vehicles, many automakers made headway into military and civilian autonomous vehicles.106 Even though the underlying technology is available, legal and social challenges related to the use of autonomous vehicles have yet to be resolved.
With the availability of big data, cloud-based memory and computing, new cognitive technologies, and machine learning, robots in general seem to be getting better at decision making and are gradually approaching autonomy in many actions. We are witnessing the development of machines that have anthropomorphic features and possess human-like skills such as visual perception and speech recognition.107 Machines are automating intelligence work such as writing news articles and doing legal research—tasks that previously could be done only by humans.108
Improved functionality at lower prices is driving higher penetration of industrial robots and increasing the adoption of surgical robots, personal-service robots, and the like. For situations in which a user must take the action, machines are increasingly being designed with basic behavioral-science principles in mind, allowing them to influence human behavior in effective ways.
Other examples abound showing how the IoT can influence human behavior to achieve normative outcomes. The larger point, though, is that the IoT may augment human behavior as much as it augments mechanical behavior. And the interplay between the IoT and human choice will likely only evolve and become more prominent in the years ahead.
There are challenges related to machines’ judgment in unstructured situations and the security of the information informing such judgments. Interoperability is an additional issue when heterogeneous machines must work in tandem in an M2M setup. Beyond the issues related to machine behavior, managing human behavior in the case of M2H interfaces and organizational entities presents its own challenges.
To manage these augmented behavior changes in organizations, decision makers could set aside their biases and give the new technologies a fair chance to contribute to their decision-making processes. At the same time, data scientists and developers could focus on two objectives: continuously improving the statistical tools and algorithms to bring the machine’s decision-making ability closer to reality, and making it easier for business users to comprehend the results through means such as easy-to-use visualization tools. In the current state of affairs, augmented behavior has the potential to grow, with an increasing number of successful use cases over time.
The Information Value Loop can serve as the cornerstone of an organization’s approach to IoT solution development for potential use cases. To transform ideas and concepts discussed earlier in the report into the concrete building blocks of a solution, we posit an end-to-end IoT technology architecture to guide IoT solution development. This architecture links strategy decisions to implementation activities. It can serve as a playbook for establishing the vision for an IoT solution and for converting that vision into tangible reality. The Information Value Loop informs and is present in each phase of this development, whereby ideas are made progressively more specific, and tactical decisions remain consistent with the overall strategic goals. The process of turning ideas into IoT solutions is shown in figure 22.
Our architecture for guiding the development and deployment of IoT systems consists of the following views:
Each of the views is detailed below.
In the business view, the Information Value Loop stages are used to examine the flow of information that guides strategic decisions for the use case at hand. These decisions in turn help define the overall IoT strategy. An example of how value can be realized using the IoT in health monitoring is shown in figure 23.
Prior to the IoT, the patient could wear a heart monitor, but the monitor’s data would usually be communicated to the external world using written records that had to be carried each time. This represented a blockage at the "communicate" step (see arrow "A").
With the introduction of the IoT, data can now be communicated between a patient and the physician using network connections. However, there is still a bottleneck associated with the ability of the smart systems to interface with existing electronic health record (EHR) systems in order to aggregate data. Alleviating this bottleneck is key to IoT applications in the health care industry.
In the “Meet Isabel” scenario of figure 23, the bottleneck associated with data aggregation and use can be addressed by the “integration” layer wherein standards for sensor management, data transfer, storage, and aggregation come together in an integral fashion. In the earlier part of this report, we discuss “standards” as they relate to specific technologies that fall under the integration layer described further in the functional view.
The functional view categorizes the components of an IoT system across the five value loop stages and five functional layers—sensors, network, integration, augmented intelligence, and augmented behavior. It serves as a guide to the functional considerations and technology choices of an IoT solution (see figure 24).
As discussed earlier in the report, sensors create the data that are sent downstream to subsequent layers of the architecture. Network is the connectivity layer that communicates data from the sensors and connects them to the Internet. The integration layer manages the sensor and network elements, and aggregates data from various sources for analysis. The augmented intelligence layer processes data into actionable insights. Finally, augmented behavior encapsulates the actions or changes in human or machine behavior resulting from these insights. The augmented behavior layer includes an edge computing sub-layer defined by local analysis (near the source of data) and action without the need for human intervention. Aligned with these layers and the value loop stages are standards for sensor management and data management and use, as well as security considerations including end-point protection, network security, intrusion prevention, and privacy and data protection.
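As a rough illustration of how data might flow through these five layers, consider the sketch below. The devices, readings, thresholds, and function names are invented for illustration and do not reflect any particular vendor’s system.

```python
# Illustrative sketch of the five functional layers as a data pipeline.
# All names and values here are hypothetical.

def sense():
    """Sensor layer: create raw readings (simulated here)."""
    return [{"device": "thermo-1", "temp_c": 41.7},
            {"device": "thermo-2", "temp_c": 22.3}]

def network(readings):
    """Network layer: communicate readings (here, simply passed along)."""
    return list(readings)

def integrate(readings):
    """Integration layer: aggregate data from multiple sources."""
    return {r["device"]: r["temp_c"] for r in readings}

def augment_intelligence(aggregated, threshold=40.0):
    """Augmented-intelligence layer: turn data into an actionable insight."""
    return [dev for dev, temp in aggregated.items() if temp > threshold]

def augment_behavior(alerts):
    """Augmented-behavior layer: act on the insight."""
    return [f"shut down {dev}" for dev in alerts]

actions = augment_behavior(augment_intelligence(integrate(network(sense()))))
print(actions)  # ['shut down thermo-1']
```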
The usage view sets up the technical solution by describing the user’s journey through all the steps of the use case being implemented. This view includes the key actors, which may be users and/or machines, and the activities involved. The usage view also describes the use case from the point of view of user needs and system capabilities. Figure 25 illustrates a typical IoT use case in a brick-and-mortar store.
The implementation view delves deeper into specific technology choices and the vendor solutions that are used to deploy those choices. It leverages the high-level component view from the functional architecture to frame the specific system implementation.
Our IoT reference architecture describes parameters and benchmark criteria that can be used to identify the best mix of product solutions for an IoT implementation across different layers. Figure 26 shows a representative implementation view for the retail use case example described earlier.
The specifications view captures the final translation of the various viewpoints described above as part of the IoT Reference Architecture into ground-level deployment. It crystallizes the functional requirements and specific technology choices identified earlier into detailed specification definitions that describe how all the selected components must be linked to work together. A sample specifications view is shown in figure 27.
Together, the viewpoints that make up the Deloitte IoT Reference Architecture form an end-to-end blueprint for realizing an IoT system from strategy through implementation.
The Internet of Things is an ecosystem of ever-increasing complexity, and the vocabulary of its language is dynamic. As we stated at the outset, our intent in presenting this primer is not to answer every question that a reader may have about the IoT. No single resource could ever hope to achieve that end for anything as elaborate as the IoT. Rather, in developing this report, our objective was to provide a useful top-down reference to assist readers as they explore IoT-driven solutions. In using this primer, the reader should come away with a better understanding of what the IoT is, and of its constituent elements, within a strategic framework.
At this relatively nascent stage, the IoT ecosystem is fragmented and disorganized. Over time, the IoT ecosystem should undergo a streamlining and organizing process and a “knitting together” of its individual pieces. Because the IoT will play an increasingly important role in how we live and run our businesses, Deloitte is undertaking an IoT-focused eminence campaign. This primer will serve as a foundational resource for the campaign that will include thoughtware examining the IoT both from industry- and issue-specific perspectives.
Deloitte’s Internet of Things practice enables organizations to identify where the IoT can potentially create value in their industry and develop strategies to capture that value, utilizing IoT for operational benefit.
To learn more about Deloitte’s IoT practice, visit http://www2.deloitte.com/us/en/pages/technology-media-and-telecommunications/topics/the-internet-of-things.html.
Read more of our research and thought leadership on the IoT at http://dupress.com/collection/internet-of-things/.
Actuator: a device that complements a sensor in a sensing system. An actuator converts an electrical signal into action, often by converting the signal to non-electrical energy, such as motion. A simple example of an actuator is an electric motor that converts electric energy into mechanical energy.
Analytics: the systematic analysis of often-confusing and conflicting data in search of insight that may inform better decisions.
Application programming interface (API): a set of software commands, functions, and protocols that programmers can use to develop software for a given operating system or website. On the one hand, APIs make it easier for programmers to develop software; on the other, they give users a consistent experience across software built on the same API.
Artificial intelligence: the theory and development of computer systems able to perform tasks that normally require human intelligence. The field of artificial intelligence has produced a number of cognitive technologies such as computer vision, natural-language processing, speech recognition, etc.
Batch processing: the execution of a series of computer programs without the need for human intervention. Traditional analytics software generally works on batch-oriented processing wherein data are aggregated in batches and then processed. This approach, however, does not deliver the low latency required for near-real-time analysis applications.
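A minimal sketch of the batch-oriented idea: readings accumulate until a batch is full and only then are processed, which is why batch processing cannot deliver the low latency of stream processing. The numbers and batch size here are illustrative.

```python
# Illustrative sketch of batch-oriented processing: data are aggregated
# into fixed-size batches and processed only when a batch is complete.

def batch_process(readings, batch_size=3):
    """Yield the average of each full batch of readings."""
    batch = []
    for r in readings:
        batch.append(r)
        if len(batch) == batch_size:
            yield sum(batch) / len(batch)  # process the completed batch
            batch = []

averages = list(batch_process([10, 20, 30, 40, 50, 60]))
print(averages)  # [20.0, 50.0]
```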
Big data: a term popularly used to describe large data sets that cannot be handled efficiently by traditional data management systems. In addition to volume, the concept of big data also refers to the variety of data sets (structured and unstructured) and the velocity, or rate, at which the data arrive.
Cloud computing: an infrastructure of shared resources (such as servers, networks, and software applications and services) that allows users to scale up their data management and processing abilities while keeping costs low. A cloud vendor invests in and maintains the cloud infrastructure; users pay only for the resources and applications they wish to use.
Cognitive technologies: a set of technologies able to perform tasks that only humans used to be able to do. Examples of cognitive technologies include computer vision, natural-language processing, and speech recognition.
Communication protocol: a set of rules that provide a common language for devices to communicate. Different communication protocols are used for device-to-device communication; broadly, they vary in the format in which data packets are transferred. One example is the familiar Hypertext Transfer Protocol (HTTP).
Complex event processing (CEP): an analytics tool that enables processing and analysis of data on a real-time or a near-real-time basis, driving timely decision making and action.
CEP is relevant for the IoT in its ability to recognize patterns in massive data sets at low latency rates. A CEP tool identifies patterns by using a variety of techniques such as filtering, aggregation, and correlation to trigger automated action or flag the need for human intervention.
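As an illustration of the filtering-and-correlation idea (not of any particular CEP product), the toy rule below correlates consecutive events in a stream and triggers an alert when three readings in a row exceed a threshold.

```python
# Toy complex-event-processing rule, purely illustrative: correlate
# consecutive events and flag the point where a pattern completes.

def cep(stream, high=100, run_length=3):
    alerts, run = [], 0
    for i, value in enumerate(stream):
        run = run + 1 if value > high else 0  # correlate consecutive events
        if run == run_length:                 # pattern matched
            alerts.append(i)                  # trigger automated action
            run = 0
    return alerts

print(cep([90, 120, 130, 150, 80, 110, 115, 140]))  # [3, 7]
```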
Computer vision: a type of cognitive technology that refers to the ability of computers to identify objects, scenes, and activities in images. Computer-vision technology uses sequences of imaging-processing operations and other techniques to decompose the task of analyzing images into manageable pieces. Certain techniques, for example, allow for detecting the edges and textures of objects in an image. Classification models may be used to determine if the features identified in an image are likely to represent a kind of object already known to the system.
Data rates: the speed at which data are transferred by a network. Sometimes termed “bandwidth,” data rates are typically measured in bits transferred per second. Network technologies that are currently available can deliver data rates of up to 1 gigabit per second.
Descriptive analytics: a type of analytics that provides insights into past business events and performance. In a fundamental sense, descriptive analytics helps answer the question “What has happened?” Descriptive analytics tools augment human intelligence by allowing us to work effectively with much larger or more complex data sets than we would ordinarily be able to without such tools.
Extraction, Transformation, Loading (ETL) tools: a set of data aggregation tools that aggregate, process, and store data in a format that can be used for analytics applications. Extraction refers to acquiring data from multiple sources and formats and then validating to ensure that only data that meet a specific criterion are included. Transformation includes activities such as splitting, merging, sorting, and transforming the data into a desired format; for example, names can be split into first and last names, addresses can be merged into city and state format, etc. Loading refers to the process of loading the data into a database that can be used for analytics applications.
Gateway: a combination of hardware and software components that connects one network to another.
Hadoop: an open-source tool that is useful for processing large data sets. Hadoop is a project of the Apache Software Foundation and is based on the Java programming language. Hadoop enables parallel processing of large data sets across clusters of computers in which each computer offers local computation and storage.
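Hadoop itself is a large Java framework; the sketch below illustrates only the map-shuffle-reduce pattern it popularized, using a word count in which each chunk could, in principle, be mapped on a different machine.

```python
# Illustrative MapReduce-style word count (not Hadoop itself): map each
# chunk independently, shuffle pairs by key, then reduce per key.
from collections import defaultdict

def map_phase(chunk):
    return [(word, 1) for word in chunk.split()]

def shuffle(pairs):
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    return {key: sum(values) for key, values in grouped.items()}

chunks = ["the quick fox", "the lazy dog the"]
pairs = [p for c in chunks for p in map_phase(c)]  # maps can run in parallel
counts = reduce_phase(shuffle(pairs))
print(counts["the"])  # 3
```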
In-memory processing: the process of storing data in random access memory instead of hard disks; this enables quicker data querying, retrieval, and visualizations.
Internet Protocol (IP): an open network protocol that provides unique addresses to devices connected to the Internet. There are two versions of IP in use: IP version 4 (IPv4) and IP version 6 (IPv6).
Internet transit prices: the price charged by an Internet service provider (ISP) to transfer data on a network. Since no single ISP can cover the worldwide network, the ISPs rely on each other to transfer data using network interconnections through gateways.
IP version 4 (IPv4): the older version of the Internet Protocol (IP); IPv6 is the more recent version. IPv4’s 32-bit address space supports approximately 4.3 billion addresses, most of which have already been allocated. IPv4 allows a group of co-located sensors to be identified geographically but not individually, thus restricting the value that can be generated through the scope of data collected from individual co-located devices.
IP version 6 (IPv6): the more recent version of the Internet Protocol (IP), succeeding IPv4. IPv6 has superior scalability and identifiability compared with IPv4: the IPv6 address space supports approximately 3.4 × 10^38 unique addresses, compared to approximately 4.3 billion under IPv4.
Latency: the time delay in transfer of data from one point in a network to another. Low-latency networks allow for near-real-time data communications.
Local area network (LAN): a network that extends over a limited geographic range, typically up to about 100 meters, such as within a house or office. Devices can connect using wired or wireless LAN technologies. Examples of wired LAN technologies include Ethernet and fiber optics; Wi-Fi is an example of a wireless LAN technology.
Machine learning: the ability of computer systems to improve their performance by exposure to data, without the need to follow explicitly programmed instructions. At its core, machine learning is the process of automatically discovering patterns in data. Once discovered, the pattern can be used to make predictions. For instance, presented with a database of information about credit-card transactions—such as date, time, merchant, merchant location, price, and whether the transaction was legitimate or fraudulent—a machine-learning system recognizes patterns that are predictive of fraud. The more transaction data it processes, the better its predictions are expected to become.
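As a minimal illustration of learning from data rather than from programmed rules, the toy nearest-neighbor classifier below predicts a transaction’s label solely from labeled examples; the features (hour of day, price) and data are invented.

```python
# Toy nearest-neighbor classifier: no fraud rules are programmed; the
# labeled examples alone drive the prediction. Data are hypothetical.

def nearest_neighbor(examples, query):
    """examples: list of ((hour, price), label); returns the label of the
    example closest to the query in feature space."""
    def dist(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    features, label = min(examples, key=lambda e: dist(e[0], query))
    return label

transactions = [((14, 30.0), "legitimate"),
                ((15, 45.0), "legitimate"),
                ((3, 900.0), "fraud"),
                ((4, 850.0), "fraud")]
print(nearest_neighbor(transactions, (2, 870.0)))  # fraud
```

More transaction data would refine the decision boundary, which is the sense in which predictions are expected to improve with exposure to data.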
Machine-to-human (M2H) interfaces: a set of technologies that enable machines to interact with human beings. Some common examples of M2H interfaces include wearables, home automation devices, and autonomous vehicles. Based on the data collected and algorithmic calculations, machines have the potential to convey suggestive actions to individuals who then exercise their discretion to take or not to take the recommended action.
Machine-to-machine (M2M) interfaces: a set of technologies that enable machines to communicate with other machines and drive action. In common vernacular, M2M is often used interchangeably with the IoT. For our purposes, though, the IoT is a broader concept that includes machine-to-machine (M2M) and machine-to-human (M2H) interfaces, as well as support systems that facilitate the management of information in a way that creates value.
Metadata: the data that describe other data. For example, metadata for a document would typically include the author’s name, size of the document, last created or modified date, etc.
Natural-language processing: a type of cognitive technology that refers to computers’ ability to work with text the way humans do, extracting meaning from text or even generating text that is readable. Natural-language processing comprises multiple techniques that may be used together to achieve its goals. Language models, a natural-language processing technique, are used to predict the probability distribution of language expressions—the likelihood that a given string of characters or words is a valid part of a language, for instance. Feature selection may be used to identify the elements of a piece of text that may distinguish one kind of text from another—for instance, a spam email versus a legitimate one.
Network protocol: a set of rules that define how computers identify each other on a network. One example of a network protocol is the Internet Protocol (IP) that offers unique addresses to machines connected to the Internet.
Network: an infrastructure of hardware components and software protocols that allows devices to share data with each other. Networks can be wired (e.g., Ethernet) or wireless (e.g., Wi-Fi).
Parallel processing: the concurrent processing of data on clusters of computers in which each computer offers local computation and storage.
Personal area network (PAN): a network that extends over a small geographic range, typically up to about 10 meters, such as inside a room. Devices can connect using wireless PAN technologies such as Bluetooth and ZigBee as well as wired PAN technologies such as Universal Serial Bus (USB).
Predictive analytics: the computational tools that aim to answer questions related to “what might be happening or could happen, given historical trends?” Predictive analytics exploits the large quantity and the increasing variety of data to build useful models that correlate sometimes seemingly unrelated variables. Predictive models are expected to produce more accurate results through machine learning, a process that refers to computer systems’ ability to improve their performance by exposure to data without the need to follow explicitly programmed instructions.
Prescriptive analytics: the computational tools that endeavor to answer questions related to “What should one do to achieve a desired outcome?” based on data related to what has happened and what could happen. Prescriptive analytics includes optimization techniques that are based on large data sets, business rules (information on constraints), and mathematical models. Prescriptive algorithms can continuously include new data and improve prescriptive accuracy in decision optimizations.
Real-time processing: the processing of data immediately upon receiving the data and/or instruction. There is often the question of what can be considered truly “real time”: ideally, data are valid the second they are generated, but because of practical issues related to latency, the meaning of “real time” varies from application to application.
Relational databases: a type of database that organizes data by establishing relationships based on a unique identifier. Structured data stored in relational databases can be queried using structured query language (SQL).
Sensor: a device that is used to “sense” a physical condition or event. A sensor works by converting a non-electrical input into an electrical signal that can be sent to an electronic circuit. A sensor does not function by itself—it is a part of a larger system that comprises microprocessors, modem chips, power sources, and other related devices.
Speech recognition: a type of cognitive technology that focuses on accurately transcribing human speech. The technology has to handle challenges such as diverse accents, background noise, homophones (e.g., “principle” and “principal”), speed of speaking, etc. Speech-recognition systems use some of the same techniques as natural-language processing systems, as well as others such as acoustic models that describe sounds and their probability of occurring in a certain sequence in a given language.
Structured data: the data stored in predefined formats, such as rows and columns in spreadsheets. Structured data are generally stored in relational databases and can be queried using structured query language (SQL).
Structured query language (SQL): a programming language standardized by the American National Standards Institute as the querying language for relational databases in 1986. SQL provides users a medium to communicate with databases and perform tasks such as data modification and retrieval. SQL aids aggregation, not only in centralized databases (all data stored in a single location) but also in distributed databases (data stored on several computers with concurrent data modifications).
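As a small illustration of SQL querying and aggregation against a relational database, the sketch below uses Python’s built-in sqlite3 module; the table, columns, and data are invented.

```python
# Illustrative SQL usage via Python's built-in sqlite3 module: create a
# table, insert rows, and aggregate with GROUP BY. Data are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (device TEXT, temp_c REAL)")
conn.executemany("INSERT INTO readings VALUES (?, ?)",
                 [("thermo-1", 41.7), ("thermo-2", 22.3), ("thermo-1", 39.0)])

# Aggregation in SQL: average temperature per device.
rows = conn.execute(
    "SELECT device, AVG(temp_c) FROM readings GROUP BY device ORDER BY device"
).fetchall()
print(rows)  # average temperature per device
conn.close()
```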
Transducer: a device that converts one form of energy (electrical or not) into another (electrical or not). For example, a loudspeaker is a transducer because it converts an electrical signal into a magnetic field and, subsequently, into acoustic waves. Transducers refer to a broad category of devices that includes sensors and actuators.
Unstructured data: the data that do not fit into predefined formats. Common sources of unstructured data include images, videos, webpages, emails, blog entries, and Word documents.
Wide area network (WAN): a network that spans a large geographic area, extending beyond individual buildings and cities. A WAN is an internetwork set up by connecting a number of local area networks (LANs) through routers. The Internet is an example of a WAN.