Asset tracking integration
Asset intelligence series
Data has become the holy grail for any business seeking competitive advantage and growth. Over the past decade, the costs of the hardware and computing power needed for data processing have steadily decreased, whilst speed and performance have increased per unit. Combined with a lowered entry barrier due to widespread adoption of the cloud, this has resulted in an explosion of data collection. But what happens next? What more do companies need to extract value from the data?
The digital to digital stage of the physical-digital-physical loop
The first article of our Asset Intelligence Series explained that the data collected through the tracking of assets can add great value to a business by bridging the physical-digital-physical loop. The second article covered the types of asset tracking technology used for data collection in the physical to digital stage. This article will dive into the implementation and integration of the digital to digital stage of the loop.
The digital to digital stage is essential to gain the ability to make better-informed decisions and interventions, supported by data. Only once this stage has been implemented can the required new insights and transparency in the movement of assets be obtained. This means that a complete asset tracking solution is set up and integrated into the existing business, which can seem like a big challenge. To break it up into manageable pieces, the roadmap for integration can be divided into three stages; Pilot solution, Scaling up and Continuous improvement. We will examine each stage to provide insights into the implementation approach.
We will use the example of tracking beds in a hospital to better illustrate the concepts. In each section, the normal text explains the concepts whilst the text in italics explains how this translates to the example.
Defining a solution architecture
Once the decision has been made to start implementing an asset tracking solution, the first step is to identify a use-case and set up a pilot solution that is scalable. This solution should contain the basic elements needed to quickly demonstrate the value and insights that can be extracted from collected data, whilst limiting risk and investment.
As data by itself does not provide immediate value once collected, certain technology components are needed to process the data to the point where value can be extracted and measured. The components can be split logically into three layers; Physical, Gateway and Platform. The flow of data starts in the Physical layer, then moves to the Gateway and finally to the Platform. This is illustrated in Figure 2.
The physical layer contains the assets being tracked. Data collection is achieved using tags attached to moving assets and readers in fixed locations. These determine the location of the tag using the chosen tracking technology.
In the example of tracking hospital beds, the physical layer consists of hospital beds in one department with tags attached to them and readers in fixed locations that receive a signal from the tags.
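To make the data produced in this layer concrete, the sketch below models a single tag observation in Python. All field names, the signal-strength (RSSI) field and the "strongest reader wins" location estimate are illustrative assumptions for the hospital-bed example, not part of any specific tracking standard.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical shape of one raw observation: a fixed reader reports that
# it received a signal from a tag, with a signal strength (RSSI) that
# hints at proximity. Field names are illustrative, not a standard.
@dataclass(frozen=True)
class TagReading:
    tag_id: str      # identifies the tagged asset, e.g. a hospital bed
    reader_id: str   # identifies the fixed reader that saw the tag
    rssi_dbm: int    # received signal strength; closer to 0 = stronger
    seen_at: datetime

def estimate_location(readings: list[TagReading]) -> str:
    """Naive location estimate: the reader with the strongest signal."""
    best = max(readings, key=lambda r: r.rssi_dbm)
    return best.reader_id

now = datetime.now(timezone.utc)
obs = [
    TagReading("bed-017", "reader-ward-A", -72, now),
    TagReading("bed-017", "reader-corridor", -55, now),
]
print(estimate_location(obs))  # "reader-corridor": it saw the strongest signal
```

Real tracking technologies use more sophisticated positioning than a single strongest reading, but the basic unit of data, a tag seen by a reader at a time, looks broadly like this.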
The gateway layer outlines the various means and hardware (receivers) with which data is collected from the readers present in the physical layer. Additionally, it is where pre-processing actions such as protocol conversion and data filtering are performed in preparation for further analysis in the platform layer. The main benefit of a gateway is that it manages and limits data transported to the platform layer, which allows it to work with structured and higher quality data.
When tracking hospital beds in the example, the gateway uses antennas as receivers to collect the data from the readers in the form of data packages.
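The pre-processing described above can be sketched as a small filtering step, assuming readers forward raw packets as simple records. The RSSI threshold, the field names and the "keep the strongest observation per tag" rule are all assumptions for illustration.

```python
# Sketch of gateway-side pre-processing: drop weak, unreliable signals
# and keep only the strongest observation per tag, so the platform layer
# receives less, but higher-quality, data.

RSSI_FLOOR = -80  # illustrative threshold; weaker packets are discarded

def preprocess(packets: list[dict]) -> list[dict]:
    best_per_tag: dict[str, dict] = {}
    for pkt in packets:
        if pkt["rssi"] < RSSI_FLOOR:
            continue  # too weak to be trusted; filter at the edge
        tag = pkt["tag_id"]
        if tag not in best_per_tag or pkt["rssi"] > best_per_tag[tag]["rssi"]:
            best_per_tag[tag] = pkt  # strongest reading wins per tag
    return list(best_per_tag.values())

raw = [
    {"tag_id": "bed-017", "reader_id": "ward-A", "rssi": -72},
    {"tag_id": "bed-017", "reader_id": "corridor", "rssi": -55},
    {"tag_id": "bed-023", "reader_id": "ward-B", "rssi": -91},  # filtered out
]
print(preprocess(raw))  # one deduplicated record for bed-017
```

In practice, the gateway would also perform protocol conversion and batching, but the principle is the same: structure and reduce the data before it leaves the edge.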
The platform layer is where the solution connects and interfaces with the legacy systems of a business, and where data is orchestrated for the purpose of generating insight. A distinction is made between four parts of the layer; IoT Services, Data Preparation & Delivery, Data Storage & Access and Analytics & Visualization.
The platform layer within the example consists of a computer with an IoT application installed, that receives data from the gateway receivers. Using this application users can access and analyze the data to draw meaningful insights, thereby performing the second stage of the physical-digital-physical loop and gaining the ability to make better informed decisions and interventions in the physical world.
Scaling and integrating into the existing business
The pilot solution is successful once it has stabilized and demonstrates the value of asset tracking. The value is measured along performance indicators and objectives that were defined before implementation began. After the success has been validated, the next step is to expand the asset tracking solution and integrate it into the existing business. By building on the pilot solution, in terms of scale and functionality, a more robust solution is created that can provide more value over a broader scope of assets and processes.
For the physical and gateway layers, scaling is done by increasing the capacity of the gateway devices to handle the increased number of tagged assets in the physical layer and by adding edge computing functionalities. In the example, this would mean that if wheelchairs are to be tracked in addition to beds, the capacity of the gateway devices should be increased to handle the newly tagged assets and expanded to perform pre-processing tasks such as data filtering.
When expanding the pilot solution, the four parts of the platform layer will become more influential in solution performance. Therefore, a better understanding of each piece and its contribution to the solution is needed.
IoT Services: This is the main interface to the physical world and can be seen as the distribution center for IoT data. Receiving data from the gateway layer, it uses event and device management rules to aggregate this data. As data is routed, it dictates which information is available to which systems/departments. The service plays a big role in determining the effectiveness of the asset tracking solution.
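The routing role of IoT services can be illustrated with a few simple rules that decide which department's systems receive which events. The rule shapes, event fields and department names below are assumptions for the hospital example, not a real IoT platform API.

```python
# Illustrative event routing: an ordered list of (predicate, destination)
# rules dictates which information is made available to which department.

ROUTING_RULES = [
    (lambda e: e["asset_type"] == "bed" and e["zone"] == "storage", "facilities"),
    (lambda e: e["asset_type"] == "bed", "nursing"),
]

def route(event: dict) -> str:
    for predicate, destination in ROUTING_RULES:
        if predicate(event):
            return destination  # first matching rule wins
    return "unrouted"  # default queue for events no rule claims

print(route({"asset_type": "bed", "zone": "ward-A"}))   # nursing
print(route({"asset_type": "bed", "zone": "storage"}))  # facilities
```

Commercial IoT platforms express such rules through configuration rather than code, but the effect is the same: the routing layer determines who sees what, and therefore how effective the solution is.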
Data Preparation & Delivery: To ensure that high quality data ends up at the intended location, data preparation and delivery plays a key role. The tasks performed can be split into two categories; the first is common data services, which includes routine tasks such as master data management. The second is data acquisition, which focuses on data sourcing, data quality checks and data transformations.
There is also a distinction between the way structured and unstructured data is used in data pipelines, sometimes referred to as stream and batch processing pipelines. In hot and warm data pipelines, a continuous stream of incoming data is processed with the aim of providing live insights based on real-time events. Cold data pipelines use large batches of recorded data to provide insights based on data collected over a longer period of time. In our example this means the difference between knowing where each bed is at any point in time, and using historic location data to analyze the movement of beds in the hospital.
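The same location events can feed both kinds of pipeline, as the sketch below shows: a hot path keeps only the latest known position per bed, while a cold path aggregates a batch of historic events. The event fields and zone names are illustrative assumptions.

```python
from collections import Counter

# A handful of hypothetical location events, ordered by timestamp.
events = [
    {"tag_id": "bed-017", "zone": "ward-A", "ts": 1},
    {"tag_id": "bed-017", "zone": "corridor", "ts": 2},
    {"tag_id": "bed-023", "zone": "ward-B", "ts": 3},
    {"tag_id": "bed-017", "zone": "ward-A", "ts": 4},
]

# Hot path: latest position per bed ("where is bed 17 right now?").
latest = {}
for e in sorted(events, key=lambda e: e["ts"]):
    latest[e["tag_id"]] = e["zone"]

# Cold path: batch aggregation ("which zones see the most bed traffic?").
traffic = Counter(e["zone"] for e in events)

print(latest)                  # {'bed-017': 'ward-A', 'bed-023': 'ward-B'}
print(traffic.most_common(1))  # [('ward-A', 2)]
```

A production stream processor works on unbounded input rather than an in-memory list, but the distinction holds: the hot path discards history to stay current, the cold path keeps history to find patterns.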
Data Storage & Access: Managing data storage and access is important to maximize the value of data. Storage systems should be designed to minimize the risk of data loss and ensure accessibility of essential data. Unstructured data can include objects (e.g. text files or images), key-values or graph data. Structured data usually comes in a relational format or is transformed into a workable structure for use. Unstructured device data is streamed through hot data pipelines to trigger (near) real-time action via business logic or stream analytics, or is stored in raw form for later use. Cold data pipelines typically use larger sets of structured or transformed data for historical analysis, often applying more advanced analytics techniques (e.g. machine learning).
Analytics & Visualization: The value that can be extracted using data analytics and visualization presents itself in the form of insights that drive short-term and long-term decisions to improve something in the physical world. Short-term decisions result from quick analyses using hot data pipelines and have an immediate, measurable effect on performance. Long-term decisions, in contrast, result from larger optimization projects that use cold data pipelines and aim to make large structural improvements. With the changes resulting from both types of decision making, the physical-digital-physical loop is completed.
When tracking hospital beds, an example of a short-term improvement based on real-time data is that employees can easily find any misplaced hospital beds. An example of a structural improvement is a change of the floor layout, if it turns out that hospital beds are structurally misplaced in specific areas.
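The short-term case can be sketched as a simple real-time check: flag beds whose current zone is not an allowed one, so staff can retrieve them quickly. The zone names and the allowed-zone list are assumptions for the hospital example.

```python
# Hypothetical allowed locations for beds; anything else counts as misplaced.
ALLOWED_BED_ZONES = {"ward-A", "ward-B", "storage"}

def misplaced(current_positions: dict[str, str]) -> list[str]:
    """Return the beds whose latest known zone is not an allowed zone."""
    return [bed for bed, zone in current_positions.items()
            if zone not in ALLOWED_BED_ZONES]

positions = {"bed-017": "corridor", "bed-023": "ward-B"}
print(misplaced(positions))  # ['bed-017']
```

The structural counterpart would run the same kind of check over months of historic positions to see which zones beds repeatedly end up in, informing a layout change rather than a retrieval trip.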
Overarching solution components
In addition to expanding and evolving the components from the pilot solution, there are three overarching components that must be considered when scaling up; Infrastructure, Workflow & Orchestration and Security & Privacy.
Infrastructure refers to a solid foundation consisting of secure and stable networks, servers over which data can be transmitted, a reliable and highly available software platform that enables continuous performance, and software-as-a-service to enable agility and cost-efficiency. This can be created through on-premises infrastructure, cloud infrastructure or a combination of both.
Workflow and orchestration refer to services that orchestrate tasks and can make decisions based on implemented business rules. These services manage the flow of data across an asset tracking solution and ensure that the workflow of tasks is set up correctly and continually tweaked to maintain high efficiency. In addition, new deployments of code can be orchestrated from here.
Security & privacy are aimed at reducing the risk of data becoming available to an unintended entity or person. Access control, user authentication and device safety are examples of the means available to ensure confidentiality, which is especially critical when working with sensitive or confidential data, as is done in hospitals.
Continuous improvement of an integrated asset tracking solution
Once implemented and scaled, continuous solution improvement driven by changing requirements, market needs and/or regulatory compliance allows the asset tracking solution to evolve. At the same time, operations and maintenance are important to safeguard future benefits. The many opportunities that asset tracking data can create may lead to a lengthy backlog of new functionalities. To manage this effectively, a clear demand funnel is used with which each request can be assessed and prioritized based on feasibility and business impact.
We will continue this series with an article on the people, privacy and safety aspect of an asset tracking solution. Be on the lookout for this article! In the meantime, if you have questions or would like some help with assessing how to integrate an asset tracking solution in your situation, please do not hesitate to contact us via the contact details below. Or visit Deloitte's Industry 4.0 page.