Today’s ways of working demand deep expertise in narrow skill sets. Staying informed about projects often requires significant specialized training and contextual knowledge, which can burden workers and keep information siloed. This has been especially true for workflows involving a physical component: specialized tasks demanded narrow training in a variety of unique systems, which made it hard to work across disciplines.
One example is computer-aided design (CAD) software. An experienced designer or engineer can view a CAD file and glean much information about the project. But those outside of the design and engineering realm—whether they’re in marketing, finance, supply chain, project management, or any other role that needs to be up to speed on the details of the work—will likely struggle to understand the file, which keeps essential technical details buried.
Spatial computing is one approach that can aid this type of collaboration. As discussed in Tech Trends 2024, spatial computing offers new ways to contextualize business data, engage customers and workers, and interact with digital systems. It more seamlessly blends the physical and digital, creating an immersive technology ecosystem for humans to more naturally interact with the world.1 For example, a visual interaction layer that pulls together contextual data from business software can allow supply chain workers to identify parts that need to be ordered and enable marketers to grasp a product’s overall aesthetics to help them build campaigns. Employees across the organization can make meaning of, and in turn make decisions with, detailed project information presented in ways anyone can understand.
If eye-catching virtual reality (VR) headsets are the first thing that come to mind when you think about spatial computing, you’re not alone. But spatial computing is about more than providing a visual experience via a pair of goggles. It also involves blending standard business sensor data with the Internet of Things, drone, light detection and ranging (LIDAR), image, video, and other three-dimensional data types to create digital representations of business operations that mirror the real world. These models can be rendered across a range of interaction media, whether a traditional two-dimensional screen, lightweight augmented reality glasses, or full-on immersive VR environments.
Spatial computing senses real-world, physical components; uses bridging technology to connect physical and digital inputs; and overlays digital outputs onto a blended interface (figure 1).2
Spatial computing’s current applications are as diverse as they are transformative. Real-time simulations have emerged as the technology’s primary use case. Looking ahead, advancements will continue to drive new and exciting use cases, reshaping industries such as health care, manufacturing, logistics, and entertainment—which is why the market is projected to grow at a rate of 18.2% between 2022 and 2033.3 The journey from the present to the future of human-computer interaction promises to fundamentally alter how we perceive and interact with the digital and physical worlds.
At its heart, spatial computing brings the digital world closer to lived reality. Many business processes have a physical component, particularly in asset-heavy industries, but, too often, information about those processes is abstracted, and the essence (and insight) is lost. Businesses can learn much about their operations from well-organized, structured business data, but adding physical data can help them understand those operations more deeply. That’s where spatial computing comes in.
“This idea of being served the right information at the right time with the right view is the promise of spatial computing,” says David Randle, global head of go-to-market for spatial computing at Amazon Web Services (AWS). “We believe spatial computing enables more natural understanding and awareness of physical and virtual worlds.”4
One of the primary applications unlocked by spatial computing is advanced simulations. Think digital twins, but rather than virtual representations that merely monitor physical assets, these simulations allow organizations to test scenarios and see how various conditions would impact their operations.
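To make the scenario-testing idea concrete, here is a minimal Python sketch of the kind of what-if analysis such a simulation enables. Everything in it (the five-station line, the failure rates, the Monte Carlo loop) is an illustrative assumption, not any vendor’s actual model; a real digital twin would be calibrated against sensor data from the physical operation.

```python
import random

def simulate_shift(stations, failure_rate, hours=8, units_per_hour=10, seed=None):
    """Estimate units produced in one shift, assuming any station failure
    halts the whole line for that hour. Parameters are illustrative."""
    rng = random.Random(seed)
    produced = 0
    for _ in range(hours):
        # The line runs this hour only if no station fails.
        line_up = all(rng.random() > failure_rate for _ in range(stations))
        if line_up:
            produced += units_per_hour
    return produced

def compare_scenarios(failure_rates, runs=1000):
    """Average output over Monte Carlo runs for each candidate failure rate,
    letting planners compare conditions before changing the real line."""
    results = {}
    for rate in failure_rates:
        total = sum(simulate_shift(stations=5, failure_rate=rate, seed=i)
                    for i in range(runs))
        results[rate] = total / runs
    return results
```

Running `compare_scenarios([0.01, 0.05])` would show how sensitive throughput is to per-station reliability; the same skeleton extends to staffing levels, demand spikes, or maintenance schedules.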
Imagine a manufacturing company where designers, engineers, and supply chain teams can seamlessly work from a single 3D model to craft, build, and procure all the parts they need; doctors who can view true-to-life simulations of their patients’ bodies through augmented reality displays; or an oil and gas company that can layer detailed engineering models on top of 2D maps. The possibilities are as vast as our physical world is varied.
The Portuguese soccer club Benfica’s sports data science team uses cameras and computer vision to track players throughout matches and develop full-scale 3D models of every move its players make. The cameras collect 2,000 data points from each player, and AI helps identify specific players, the direction they are facing, and critical factors that feed into their decision-making. The data essentially creates a digital twin of each player, allowing the team to run simulations of how plays would have worked had a player been in a different position. X’s and O’s on a chalkboard are now three-dimensional models that coaches can experiment with.5
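A sketch of what this kind of tracking data and counterfactual replay might look like in code. The field names and functions below are hypothetical, not Benfica’s actual schema or tooling; they only illustrate the pattern of sampled positions feeding a “what if the player stood elsewhere” replay.

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class TrackingPoint:
    """One sampled observation of a player (illustrative fields)."""
    player_id: str
    t: float           # seconds since kickoff
    x: float           # pitch coordinates in meters
    y: float
    facing_deg: float  # direction the player is facing

def distance_covered(points):
    """Total distance a player covered, summed over consecutive samples."""
    pts = sorted(points, key=lambda p: p.t)
    return sum(hypot(b.x - a.x, b.y - a.y) for a, b in zip(pts, pts[1:]))

def replay_with_offset(points, dx, dy):
    """Counterfactual replay: shift one player's positions to ask how a
    play would have unfolded from a different starting position."""
    return [TrackingPoint(p.player_id, p.t, p.x + dx, p.y + dy, p.facing_deg)
            for p in points]
```

A real system would feed thousands of such points per player into a physics- or ML-based match model; the data structure, not the model, is the point here.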
“There’s been a huge evolution in AI pushing these models forward, and now we can use them in decision-making,” says Joao Copeto, chief information and technology officer at Sport Lisboa e Benfica.6
This isn’t only about wins and losses—it’s also about dollars and cents. Benfica has turned player development into a profitable business by leveraging data and AI. Over the past 10 years, the team has generated some of the highest player-transfer fees in Europe. Similar approaches could also pay dividends in warehouse operations, supply chain and logistics, or any other resource planning process.
Advanced simulations are also showing up in medical settings. For instance, virtual patient scenarios can be simulated as a training supplement for nurses or doctors in a more dynamic, self-paced environment than textbooks would allow. This may come with several challenges, such as patient data concerns, integration of AI into existing learning materials, and the question of realism. But AI-based simulations are poised to impact the way we learn.7
Simulations are also starting to impact health care delivery. Fraser Health Authority in Canada has been a pioneer in leveraging simulation models to improve care.8 By creating a first-of-its-kind system-wide digital twin, the public health authority in British Columbia generated powerful visualizations of patient movement through different care settings and simulations to determine the impact of deploying different care models on patient access. Although the work is ongoing, Fraser expects improvement in appropriate, need-based access to care through increased patient awareness of available services.
Enterprise IT teams will likely need to overcome significant hurdles to develop altogether-new spatial computing applications, hurdles they haven’t faced when implementing more conventional software projects. While these applications promise compelling business value, organizations will have to navigate some uncharted waters to deliver them.
For one thing, data isn’t always interoperable between systems, which limits the ability to blend data from different sources. Furthermore, the spaghetti diagrams mapping out the path that data travels in most organizations are circuitous at best, and building the data pipelines to get the correct spatial data into visual systems is a thorny engineering challenge. Ensuring that data is of high quality and faithfully mirrors real-world conditions may be one of the most significant barriers to using spatial computing effectively.9
Randle of AWS says spatial data has not historically been well managed at most organizations, even though it represents some of a business’s most valuable information.
“This information, because it’s quite new and diverse, has few standards around it, and much of it sits in silos; some of it’s in the cloud, most of it’s not,” says Randle. “This data landscape encompassing physical and digital assets is extremely scattered and not well managed. Our customers’ first problem is managing their spatial data.”10
Taking a more systematic approach to ingesting, organizing, and storing this data, in turn, makes it more available to modern AI tools, and that’s where the real learning begins.
We’ve often heard that data is the new oil, but for an American oil and gas company, the metaphor is becoming reality thanks to significant effort in replumbing some of its data pipelines.
The energy company uses drones to conduct 3D scans of equipment in the field and its facilities, and then applies computer vision to the data to ensure its assets operate within predefined tolerances. It’s also creating high-fidelity digital twins of assets based on data pulled from engineering, operational, and enterprise resource planning systems.
The critical piece in each example? Data integration. The energy giant built a spatial storage layer, using application programming interfaces (APIs) to connect to disparate data sources and file types, including machine, drone, business, and image and video data.11
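A minimal sketch of the pattern a spatial storage layer implies: per-source adapters normalize heterogeneous inputs into a common record so they can be queried by asset, regardless of where they originated. The class and field names below are illustrative assumptions, not the company’s actual system; a production version would sit on a real database and connect to sources via APIs.

```python
from dataclasses import dataclass, field

@dataclass
class SpatialRecord:
    """Common envelope for heterogeneous spatial inputs (illustrative schema)."""
    asset_id: str
    source: str      # e.g., "drone_lidar", "erp", "cctv"
    media_type: str  # e.g., "point_cloud", "record", "video"
    location: tuple  # (latitude, longitude)
    payload: dict = field(default_factory=dict)

class SpatialStore:
    """In-memory stand-in for a spatial storage layer."""
    def __init__(self):
        self._records = []

    def ingest(self, raw: dict, adapter) -> SpatialRecord:
        # Each source gets its own adapter that normalizes the raw shape.
        record = adapter(raw)
        self._records.append(record)
        return record

    def by_asset(self, asset_id: str):
        """All records about one asset, regardless of original source."""
        return [r for r in self._records if r.asset_id == asset_id]

def drone_adapter(raw: dict) -> SpatialRecord:
    """Hypothetical adapter for drone LIDAR scans."""
    return SpatialRecord(asset_id=raw["tag"], source="drone_lidar",
                         media_type="point_cloud",
                         location=(raw["lat"], raw["lon"]),
                         payload={"scan_uri": raw["uri"]})
```

The design choice that matters is the adapter boundary: new sources (ERP extracts, video feeds) plug in without changing how downstream visualization or AI tools query the store.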
Few organizations today have invested in this type of systematic approach to ingesting and storing spatial data. Still, it’s a key factor driving spatial computing capabilities and an essential first step for delivering impactful use cases.
In the past, businesses couldn’t merge spatial and business data into one visualization, but that too is changing. As discussed in “What’s next for AI?”, multimodal AI tools can process virtually any input, whether text, image, audio, spatial, or structured data types, and return outputs in multiple formats.12 This capability allows AI to serve as a bridge between different data sources, interpreting and adding context between spatial and business data. AI can reach into disparate data systems and extract relevant insights.
This isn’t to say multimodal AI eliminates all barriers. Organizations still need to manage and govern their data effectively. The old saying “garbage in, garbage out” has never been more apt. Training AI tools on disorganized and unrepresentative data is a recipe for disaster, as AI has the power to scale errors far beyond what we’ve seen with other types of software. Enterprises should focus on implementing open data standards and working with vendors to standardize data types.
But once they’ve addressed these concerns, IT teams can open new doors to exciting applications.
“You can shape this technology in new and creative ways,” says Johan Eerenstein, executive vice president of workforce enablement at Paramount.13
Many of the aforementioned challenges in spatial computing are related to integration. Enterprises struggle to pull disparate data sources into a visualization platform and render that data in a way that provides value to users in their day-to-day work. But AI stands to lower those hurdles soon.
As mentioned above, multimodal AI can take a variety of inputs and make sense of them in one platform, but that could be only the beginning. As AI is integrated into more applications and interaction layers, it allows services to act in concert. As noted in “What’s next for AI?”, this is already giving way to agentic systems that are context-aware and capable of executing functions proactively based on user preferences.
These autonomous agents could soon support the roles of supply chain manager, software developer, financial analyst, and more. What will separate tomorrow’s agents from today’s bots will be their ability to plan ahead and anticipate what the user needs without even having to ask. Based on user preferences and historical actions, they will know how to serve the right content or take the right action at the right time.
When AI agents and spatial computing converge, users won’t have to think about whether their data comes from a spatial system, such as LIDAR or cameras (with the important caveat that AI systems are trained on high-quality, well-managed, interoperable data in the first place), or account for the capabilities of specific applications. With intelligent agents, AI becomes the interface, and all that’s necessary is to express a preference rather than explicitly program or prompt an application. Imagine a bot that automatically alerts financial analysts to changing market conditions, or one that crafts daily reports for the C-suite about changes in the business environment or team morale.
All the many devices we interact with today, be they phone, tablet, computer, or smart speaker, will feel downright cumbersome in a future where all we have to do is gesture toward a preference and let context-aware, AI-powered systems execute our command. Eventually, once these systems have learned our preferences, we may not even need to gesture at all.
The full impact of agentic AI systems on spatial computing may be many years out, but businesses can still work toward reaping the benefits of spatial computing. Building the data pipelines may be one of the heaviest lifts, but once built, they open up myriad use cases. Autonomous asset inspection, smoother supply chains, true-to-life simulations, and immersive virtual environments are just a few ways leading enterprises are making their operations more spatially aware. As AI continues to intersect with spatial systems, we’ll see the emergence of revolutionary new digital frontiers, the contours of which we’re only beginning to map out.