3D opportunity and the digital thread
While additive manufacturing has become a crucial part of the design process, it has not reached critical mass at the enterprise level. The digital thread—a single, seamless strand of data that stretches from the initial design to the finished part—can help scale and enhance AM capabilities.
Additive manufacturing (AM) is paving the way for the next step in the shift from physical object to data management by enabling manufacturing capabilities not possible through conventional means.1 The AM process draws upon a digital design file to deposit material, layer upon layer, to construct 3D-printed parts composed of often-complex geometries.
Despite their promise and potential, digital designs dictating the production of end-use, 3D-printed objects have not yet moved fully into the mainstream. While AM has become a crucial part of the design process through rapid prototyping and has gained traction for highly customized, small-batch parts and within “maker” movements, it has not reached critical mass for applications in end-use parts and products at the enterprise level.2 This is due, in part, to economies of scale: Printing a one-off object during the design phase or in a makerspace is entirely different from large-scale mass production of parts. For AM processes to scale at the industrial level, a series of complex, connected, and data-driven events need to occur.
This series of data-driven events is commonly referred to as the digital thread: a single, seamless strand of data that stretches from the initial design concept to the finished part, constituting the information that enables the design, modeling, production, use, and monitoring of an individual manufactured part.3 This thread enables the flow of data throughout the manufacturing process, including design concept, modeling, build plan monitoring, quality assurance, the build process itself, and post-production monitoring and inspection. The ability to dissect, understand, and apply the potentially massive amounts of data and intense computing demands within the digital thread allows users to enhance and scale their AM capabilities and manage the complexities of AM production.
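To make the idea concrete, the sketch below models the thread as an ordered record of per-stage data for a single part. It is a minimal Python illustration, not a standard schema; the stage names and fields are assumptions drawn from the stages listed above.

```python
from dataclasses import dataclass, field
from typing import Any

# Assumed stage names, taken from the stages described in the text.
STAGES = ["design", "modeling", "build_planning", "build",
          "quality_assurance", "post_production_monitoring"]

@dataclass
class DigitalThread:
    """One seamless record of data for a single part, design to field use."""
    part_id: str
    records: dict[str, list[Any]] = field(default_factory=dict)

    def record(self, stage: str, data: Any) -> None:
        """Append data generated at a given stage of the process."""
        if stage not in STAGES:
            raise ValueError(f"unknown stage: {stage}")
        self.records.setdefault(stage, []).append(data)

# Usage: thread = DigitalThread("part-001")
#        thread.record("design", {"cad_file": "bracket_v3.step"})
```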
Yet, for all its importance, the digital thread is only as useful as it is integrated. Gaps in connectivity or stages within the design and manufacturing process where information remains siloed prevent the manufacturer from gaining full visibility across the process.4 Thus, the right digital infrastructure—one that can store, access, and analyze vast amounts of data and interoperate across multiple different machines and processes—is crucial to building and operating a successful digital thread.5 In this paper, as we describe the importance of the digital thread and its role in scaling AM, we will:
Map the digital thread: Based on a review of the technical literature, we have developed a map of the digital thread that identifies the key stages along the AM design and manufacture process. This map includes the stages that generate AM process information and describes the various technological inputs and infrastructure that must be in place to connect, share, and harness that information.
Examine implementation approaches: To help manufacturers consider their approach to implementing a digital thread, we examine various approaches based on AM strategic objectives that fall within Deloitte’s AM framework (see the sidebar). In this way, manufacturers can begin to understand the steps they must take to build a digital thread that will work for their organization and help scale AM to the appropriate level.
AM’s roots go back nearly three decades. Its importance is derived from its ability to break existing performance trade-offs in two fundamental ways. First, AM reduces the capital required to achieve economies of scale. Second, it increases flexibility and reduces the capital required to achieve scope.
Capital versus scale: Considerations of minimum efficient scale can shape supply chains. AM has the potential to reduce the capital required to reach minimum efficient scale for production, thus lowering the manufacturing barriers to entry for a given location.6
Capital versus scope: Economies of scope influence how and what products can be made. The flexibility of AM facilitates an increase in the variety of products a unit of capital can produce, reducing the costs associated with production changeovers and customization and, thus, the overall amount of required capital.
Changing the capital versus scale relationship has the potential to impact how supply chains are configured, and changing the capital versus scope relationship has the potential to impact product designs. These impacts present companies with choices on how to deploy AM across their businesses.
Companies pursuing AM capabilities choose between divergent paths (figure 1):
Path I: Companies do not seek radical alterations in either supply chains or products, but they may explore AM technologies to improve value delivery for current products within existing supply chains.
Path II: Companies take advantage of scale economics offered by AM as a potential enabler of supply chain transformation for the products they offer.
Path III: Companies take advantage of the scope economics offered by AM technologies to achieve new levels of performance or innovation in the products they offer.
Path IV: Companies alter both supply chains and products in pursuit of new business models.
While most firms leveraging AM tend to follow path I, using it largely for rapid prototyping and to facilitate the design process, the digital thread can enable manufacturers to scale AM to an industrial level.7 As the digital thread allows AM to scale to include mass production of end-use parts, it can enable manufacturers to think more strategically about a shift to paths II, III, or IV. The right technological infrastructure and information management capabilities are crucial to a shift of this nature because the ability to share data throughout the manufacturing process remains essential to moving to a wider use of AM. Additionally, one’s role in the AM process—manufacturer, designer, or supply chain partner, for example—will determine one’s path within the framework and, by extension, help prioritize areas of focus within the digital thread.
The digital thread for additive manufacturing (DTAM) includes a set of interconnected technologies that span and link the entire manufacturing process, end to end: from scan or design to analysis and simulation, through build planning and fabrication, to end use of the part, all connected in a series of feedback and feed-forward loops.8 This integrated system combines data, modeling, analysis, and other tools.9 A successful DTAM includes the phases and enablers described in the sections that follow.
This connected process is well suited to AM’s inherent complexities and reliance on data. The DTAM promises to address many of the challenges hindering wider AM adoption: quality assurance (QA), repeatability, and meticulous levels of process control. It can do so by collecting data from each stage of the design and manufacturing process, validating them, and ensuring that required interactions occur between each stage.11
Figure 2 depicts the DTAM for a single-part design producing n number of part units using AM. This graphic illustrates the process used to bring a design from either a scan or computer model through a series of digital transformations and physical processes into fabricated parts.
Figure 2. The digital thread and additive manufacturing
It is important to note that the DTAM comprises not so much the stages of the manufacturing process itself but rather the connections and interactions between them. Figure 2 describes the information flow between stages in the design and manufacturing process as well as the data that can be collected, analyzed, and communicated at each stage—both for feed-forward and feedback control, and part QA and validation/verification. These data can grow to large magnitudes; this phenomenon will be explored in later sections.
Note that figure 2 represents a single DTAM for a single part. In an enterprise scaling to produce many parts, with many printers in multiple locations, the DTAM quickly multiplies into an interwoven network of DTAMs, colloquially termed the “digital quilt” or “digital tapestry.”
The transformative power of the DTAM is the composability it offers as a result of taking a model-based approach to describe both each step and the connectivity and interoperability of the many systems in AM, both digital and physical. The DTAM is woven using a federated approach that incorporates the software, standards, and processes connected to each stage of AM; these topics are explored in subsequent sections.12
We next outline and explore each of the four phases of the DTAM—scan/design + analyze, build + monitor, test + validate, and deliver + manage—describing the place of each along the digital thread, and the transformations that occur between stages.
The DTAM begins at the design and analysis phase of the engineering life cycle. Following design, the part moves into a build or produce phase, then into a test phase. Once tested and validated, the part moves into a deliver phase. This design > build > test > deliver life cycle is similar to many current engineering product life cycles; however, due to the nature of AM technology, additional considerations around computing, data, feedback, and sensing carry greater weight than in traditional engineering life cycles.
Design and scan to CAD file. The DTAM begins with product inception, design, and analysis (figure 3). Designers’ ideas are translated into a 3D computer model using computer-aided design (CAD) tools. Alternatively, 3D scanners can take an existing physical part or product and create 3D renderings that can later be modified using scanning utilities or directly translated into .STL format (discussed below) for printing. Depending on requirements, this may result in a CAD file. This step also encompasses other design input technologies, such as haptic devices.
Design and scan constitute the initial transformation into the digital realm. This step not only establishes the model-based approach pervasive throughout the DTAM but also marks the beginning of the digital twin: the parallel, digital embodiment of all design, production, quality, and field-use data associated with a unique part (see the sidebar “Digital twinning: An extension of the DTAM”).
QA requirements, which continue throughout the design and manufacture process, also begin at this stage. These requirements vary depending on the part’s intended function and use, ranging from rigorous requirements that necessitate monitoring and testing throughout the entire process to more moderate, audit-based approaches.13
Traditional analysis. Once the CAD file is created or the scan completed, design and analysis iterations may occur. These iterations make use of traditional analysis tools, including finite element analysis (FEA) for determining structural and thermal properties of the part, and computational fluid dynamics (CFD) for determining fluid flow properties. Depending on the intended use of the part, additional analyses for material properties, fatigue life, and product life cycle requirements may also occur—although in some situations, such as scanning an existing part, these design and analysis iterations may not be required.
In mission-critical parts, significant effort is focused on the design and analysis feedback loop, the iterative process by which product designs are subjected to performance testing, evaluated, and revised to improve the quality of their performance. Although this feedback loop looks similar to those used in traditional subtractive manufacturing, with AM this loop can occur differently because AM design processes can more directly use algorithmic design to create innovative shapes impossible to manufacture via subtractive methods.14 Moreover, this process can be highly integrated within the DTAM, as modeling tools are used to refine the CAD model and prepare it for production.
Advanced multi-physics modeling and simulation. Next, the part moves on to AM-specific analysis that may include advanced multi-physics modeling and simulation of the 3D printing process. Multi-physics modeling is at the center of AM-focused research because it can support the creation of high-quality, consistent parts.15 In general, these simulations occur for a particular design, but they may also be associated with a specific, produced unit. This type of modeling overlaps the analyze and build phases, as it informs both the current and future design of the part through the use of continuous improvement information. It also informs the build planning and simulations that drive the 3D printing hardware, described in the following “Build + monitor” section.16
Advanced modeling is currently computationally intense and is thus largely limited to the research and academic communities.17 For example, predicting the near-atomic-scale thermal stresses and lattice structure of the printing process—and how they affect the properties of the part—requires supercomputer-level processing power and can take 40 to 60 hours to complete.18 Improving the accuracy of these simulations while reducing the computing power required remains an area of intense focus.19 Efforts are underway to commoditize and industrialize these models to make them more accessible.
Up until this point in the DTAM, design and analysis revisions have been focused on defining the digital ideal or digital reference. The result of many design and analysis iterations, this ideal model informs the build and monitor process. It serves as a benchmark against which individual unit parts, each with their unique digital twin, are compared.
Build simulation, detailed build plan, and machine data. Results from the advanced multi-physics modeling and simulation occurring in the scan/design + analyze stage inform the build preparation portion of the build + monitor process (figure 4). Here, the digital reference model is translated into a series of models that eventually result in machine instructions to control the printer and produce the 3D-printed part. A series of models and transformations account for support structures and part orientation during the build, ensure that the part is “watertight” (especially if it was generated with scanning tools), and translate the parametric or vector-based CAD model into a format readable by the AM hardware. This format is known as a “2.5D” model because a core component of build preparation is slicing the models into the many 2.5D layers that stack to form the part.
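Slicing is the step readers can picture most concretely: each layer is the intersection of the 3D model with a horizontal plane. The sketch below is a simplified Python illustration, not production slicer code; it intersects mesh triangles with a plane at height z and ignores degenerate cases such as vertices lying exactly on the plane.

```python
import numpy as np

def slice_triangle(tri: np.ndarray, z: float):
    """Return the segment where one triangle crosses the plane at height z,
    or None if it does not cross (degenerate touches are ignored)."""
    points = []
    for i in range(3):
        p, q = tri[i], tri[(i + 1) % 3]
        if (p[2] - z) * (q[2] - z) < 0:          # edge straddles the plane
            t = (z - p[2]) / (q[2] - p[2])       # interpolation factor
            points.append(p + t * (q - p))
    return (points[0], points[1]) if len(points) == 2 else None

def slice_mesh(triangles: list, layer_height: float):
    """Slice a triangle mesh into per-layer lists of unordered segments."""
    z_min = min(t[:, 2].min() for t in triangles)
    z_max = max(t[:, 2].max() for t in triangles)
    layers = []
    # Offset planes by half a layer so they avoid flat faces of the mesh.
    for z in np.arange(z_min + layer_height / 2, z_max, layer_height):
        segments = [s for s in (slice_triangle(t, z) for t in triangles) if s]
        layers.append((z, segments))
    return layers
```

A real slicer would additionally chain these unordered segments into closed contours and derive toolpaths and support structures from them.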
Most AM build preparation phases utilize the .STL file format, which was originally developed for use with stereolithography printing but is not without its challenges.20 Currently, .STL serves as the de facto format for most 3D printers, and the translation of the .STL geometry information to machine data occurs inside the printer or proprietary printer hardware and software systems.21 This process varies based on printer manufacturer, AM printing technology, and level of QA required. For the sake of the DTAM, these steps should be treated as discrete models, even if they occur simultaneously with part fabrication.
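The ASCII variant of the .STL format is simple enough to show in full. The sketch below writes a triangle list in that format; note that .STL captures facet geometry only, with no units, materials, or tolerances, which is one source of the challenges noted above. Writing zero normals is a common shortcut, since many readers recompute them.

```python
def write_ascii_stl(path: str, triangles, name: str = "part") -> None:
    """Write triangles (each three (x, y, z) vertices) as an ASCII .STL file.
    Normals are written as zero vectors; many readers recompute them."""
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for tri in triangles:
            f.write("  facet normal 0 0 0\n")
            f.write("    outer loop\n")
            for x, y, z in tri:
                f.write(f"      vertex {x:.6e} {y:.6e} {z:.6e}\n")
            f.write("    endloop\n")
            f.write("  endfacet\n")
        f.write(f"endsolid {name}\n")
```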
Part fabrication (3D print process). Part fabrication follows the build preparation stages. Fabrication is driven by the machine instructions created during build preparation, as well as real-time corrections to those instructions based on in-situ sensing during the build process. In-situ monitoring of the build spans a continuum of maturity, again depending on the required part quality and the capability of the 3D printing hardware/control system.22 Here, the data collected during fabrication are fed back into models similar to those used to initially create the machine instructions. Reduced-order versions of the multi-physics models may also be used for “on-the-fly” corrections, depending on computing and data requirements. In-situ monitoring results in a higher-quality part build with fewer defects and is also used to refine the part-specific build process for subsequent units. The mechanics and underlying technology to support in-situ monitoring are a significant research area within the AM community.23
In-situ monitoring data are incorporated into a unit part’s digital twin and record anomalies from the digital reference model that may affect the product’s life cycle. These data can grow by orders of magnitude, and thus it is important for organizations to understand their requirements for the DTAM so they can selectively store, reduce, and analyze data created during the build.
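As a simplified illustration of the feedback step, the sketch below applies a proportional correction to laser power based on a measured melt-pool temperature. The setpoint, gain, and power limits are invented for illustration; real controllers are machine- and material-specific and may rely on reduced-order physics models rather than a single gain.

```python
# Illustrative setpoint, gain, and actuator limits; all values assumed.
TARGET_TEMP_K = 1900.0   # assumed melt-pool temperature setpoint (kelvin)
KP = 0.05                # assumed proportional gain (watts per kelvin)

def correct_laser_power(current_power_w: float, measured_temp_k: float,
                        min_w: float = 50.0, max_w: float = 400.0) -> float:
    """One feedback step: nudge laser power toward the temperature setpoint."""
    error = TARGET_TEMP_K - measured_temp_k
    new_power = current_power_w + KP * error
    return max(min_w, min(max_w, new_power))   # clamp to actuator limits
```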
Per-part post-processing and finishing. Following part fabrication, several digital and physical steps must be completed before the part is ready for its end use. For example, certain geometries may require temporary support structures during printing. These support structures will need to be removed in post-processing. Other examples of post-processing may include curing for certain materials or material treatments; machining, such as honing or grinding to produce high-tolerance surface finishes; and surface treatments such as anodizing for creating corrosion resistance.
Part inspection. Following finishing, the fabricated unit part moves to the test + validate phase (figure 5). Several nondestructive evaluation (NDE) technologies exist to evaluate the quality of the part, in conjunction with the data recorded during the build, and are selected based upon design requirements for the part. These technologies include x-ray, liquid penetrant or UV dye, ultrasound, and eddy current, among others (table 1).24 NDE allows the part to be used with the assurance that it will function as designed. NDE testing results are recorded for each part and, in cases of advanced scanning, may constitute a significant amount of data added to the part’s digital twin. In-situ sensing data may provide additional assurance during this phase. In contrast to subtractive processing, real-time testing can be integrated into the AM build process.
Data verification and twinning. Following testing, the part is nearly ready for production use. Data verification and twinning processes collect all of the data produced for an individual unit part and update the digital twin that forms the “body of knowledge” for the particular unit part. This twin contains specific information about the part build, any testing anomalies, and an updated CAD model reflecting the measured dimensions of the part. This information provides the basis for the final stamp of approval from a parts certification perspective and supports field service should any issues arise (see the sidebar “Digital twinning: An extension of the DTAM”).
Table 1. Nondestructive evaluation technologies25
| Type | Description | Application |
| --- | --- | --- |
| Thermal infrared | Measuring the infrared radiation emitted by an object to capture defects using thermal imaging devices | Internal inspection |
| Liquid penetrant | Application of low-viscosity fluid to a part’s surface to detect fissures and voids | External surface finish |
| Ultrasound | High-frequency sound waves transmitted to identify discontinuities in objects | Internal inspection, dense materials |
| Neutron radiographic | An intense beam of low-energy neutrons to penetrate the object and observe faults | Internal inspection |
| Laser | Using laser beams to detect defects as small as a few micrometers in size | External inspection |
| Eddy current | Electromagnetic testing that induces an electric current into a conductive piece and measures the secondary current produced | Internal and external inspection of conductive materials |
Part field service sensing and inspection. As the part moves into field service, connected sensors may be used to feed data points into the digital twin (figure 6). They can also continuously improve the information flow, impacting concurrent production of the same part, future design iterations, and design of new parts. This is made possible via connected technologies inherent in the Internet of Things (IoT), the connected web of devices sharing data about part performance and health.26
A component of the DTAM, the “digital twin” also uses advanced modeling and simulation techniques. But in the case of the digital twin—also known as a digital surrogate—these models are applied to the physical object over its life cycle in the field, rather than through the design + build process. Here, a physical object is fitted with multiple sensors that send data about its activities and status in real time to a highly complex, cloud-based simulation of that object.27 The simulation, or digital twin, then mirrors the life of its physical sibling in real time, down to object-specific anomalies (figure 7).
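In code, the core of a twin is a state record that mirrors sensor readings and flags departures from the reference model. The sketch below is a toy Python illustration; the field names and the 10 percent anomaly threshold are assumptions, not part of any standard.

```python
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    """A minimal stand-in for the cloud-hosted twin of one physical part."""
    part_id: str
    reference: dict            # the "digital ideal" values for this design
    history: list = field(default_factory=list)

    def ingest(self, reading: dict) -> None:
        """Mirror a field-sensor reading and flag departures from the reference."""
        self.history.append(reading)
        for key, ideal in self.reference.items():
            # Assumed threshold: flag readings more than 10% off the ideal.
            if key in reading and abs(reading[key] - ideal) > 0.1 * abs(ideal):
                print(f"[{self.part_id}] anomaly: {key}={reading[key]} "
                      f"(reference {ideal})")
```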
The digital twin represents a leap forward from common approaches to certification, maintenance, and scenario planning, which are based on models that use assumptions rooted in conventional wisdom, engineering judgment, or past approaches.28 With these more traditional approaches, multiple models and databases developed by different engineering teams for the same object are not always fully integrated into a single, holistic model.29 As a result, parts may be designed in less efficient ways or receive maintenance at less optimal intervals.
In the case of aerospace, a digital twin can estimate repair costs and other needs over a period of time based on flight data regarding various stresses and strains sustained across routes and flight conditions. The digital twin gains accuracy with each flight because it is able to collect and model more data, and it can be “flown virtually” through the aircraft’s regular missions to predict future repair needs and remaining lifespan.30 Individual models, coupled with those of other aircraft, can then be projected to estimate the maintenance needs for the fleet as a whole. In another example, GE’s Digital Power Plant applies digital twinning to gas power plants and wind farms to model the current state of physical assets.31
The DTAM sequence we have described generates significant amounts of data during the design, production, and monitoring processes. Implementing a successful DTAM requires more than simply managing data, however; other critical enabling components must be in place to connect, analyze, and act upon the data gathered throughout the design + manufacture process. In this next section, we examine each of these components and processes—divided into architectural considerations and infrastructure considerations—and their role in implementing a successful DTAM.
The most important aspects of the DTAM are found not only in the ability to trace a product from inception to production but also in its capacity to seamlessly link together disparate printers, models, and data into a single, coherent ecosystem. Multiple enablers are necessary for successful implementation and function of a full-scale DTAM: metrics and models; modularity and connectivity; interoperability; and composability. Each of these critical components builds upon the other to form the architectural foundation of the DTAM, supported by technological infrastructure considerations critical to managing and moving data: information management, and data standards and federation (figure 8).
Metrics are a critical underpinning of the DTAM: Without a series of baseline data points and benchmarks to use as a basis for comparison, production and part improvement would be all but impossible.32
Establishing metrics is particularly important in the context of AM because measuring real-world outcomes and comparing them against these targeted ideals may direct focus to specific areas of the DTAM that merit more attention: particular part characteristics where performance is falling short, for example, or the overall performance of a larger AM supply chain. To ensure effectiveness, it is particularly important that metrics be quantifiable and easy to understand.33
Models categorize metrics within a particular process. They can be highly granular, focusing on one specific phase in the DTAM, or represent a larger system or combination of phases that span multiple domains. Each model, however, establishes a baseline set of information inputs, transformations, and outputs, all of which rely heavily on the firm establishment of clear metrics.
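A quantifiable metric can be as simple as the relative deviation of each as-built feature from the digital reference, checked against a tolerance band. The sketch below illustrates this; the 2 percent default tolerance is an assumed value, not an industry standard.

```python
def evaluate_metrics(reference: dict, measured: dict,
                     tolerance: float = 0.02) -> dict:
    """Relative deviation of each as-built feature from the digital reference,
    flagged against a tolerance band (the 2 percent default is assumed)."""
    report = {}
    for feature, ideal in reference.items():
        if feature in measured:
            deviation = (measured[feature] - ideal) / ideal
            report[feature] = {
                "deviation": deviation,
                "within_tolerance": abs(deviation) <= tolerance,
            }
    return report
```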
Modularity is defined as “the design principle of having a complex system composed from smaller subsystems that can be managed independently, yet function together as a whole.”34 In other words, it allows multiple systems and technologies—such as stages in the DTAM—to be connected while still remaining independent of each other. Inherent in modularity is the ability to adapt to different types of AM printing technologies, file and data formats, process parameters, and different physical and environmental conditions. From a systems engineering perspective, modularity is focused on understanding the information inputs and outputs of models for the purpose of integrating them with other phases in various configurations.35
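In software terms, modularity amounts to giving every stage the same contract, so stages can be swapped or recombined without touching their internals. A minimal Python sketch, with the `Stage` interface and dictionary payloads as assumed conventions:

```python
from typing import Protocol

class Stage(Protocol):
    """The contract every DTAM stage module exposes."""
    def run(self, inputs: dict) -> dict: ...

def run_thread(stages: list[Stage], seed: dict) -> dict:
    """Chain independently managed stages into one end-to-end flow."""
    data = seed
    for stage in stages:
        data = stage.run(data)   # each stage's output feeds the next
    return data
```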
Connectivity binds together the DTAM, allowing multiple, federated systems to interact with one another so that information contained within individual models can be shared across the manufacturing process. Connected manufacturing environments are already used extensively, fostered by product life cycle management (PLM) tools. An emerging application of connectivity is the IoT, where connected devices communicate with one another, providing environmental information and sensor feedback.36 AM requires additional considerations beyond those of traditional manufacturing, however; additional steps, analyses, and models are needed to translate geometries and production data into improved build files. Translation between these models can be complex, as traceability can be slowed or stalled as data grow to enormous volumes that need to be retained or transferred. Advanced technologies necessary to scale industrial applications of AM—in-situ monitoring, feed-forward, feedback, and concurrent information flow—require highly advanced connectivity to interpret data produced at a variety of cadences during production.37
As with the IoT, connectivity is an evolving area within AM. Currently, proprietary connected solutions offer specific CAD software and hardware partnerships, but many of these systems do not demonstrate connectivity with software or hardware outside their bundle.38 This is similar to the IoT marketplace, where many smart-home providers offer connected solutions that may not work with those offered by other brands.39 This challenge must be addressed for the DTAM to function effectively, which can be accomplished with the right set of strategies, approaches, and tools, including requirement gathering, system design documentation, systems integration, and enterprise data management.
Interoperability is the application of connectivity: the assurance that data will be accessible, readable, and usable throughout each stage of the manufacturing process, no matter their format, so that they can move between and through models, across phases and processes.40 Interoperability is made possible through understanding data and information systems, and through federated data standards and formats (to be discussed later).
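One common software pattern for this is a reader registry that hides each on-disk format behind a single access function, so downstream stages never care how upstream data were serialized. A minimal sketch; the two registered formats are chosen only for illustration:

```python
import csv
import json
import pathlib

def _read_csv(path: pathlib.Path) -> list[dict]:
    with path.open(newline="") as f:
        return list(csv.DictReader(f))

# One reader per on-disk format; the formats here are illustrative only.
READERS = {
    ".json": lambda p: json.loads(p.read_text()),
    ".csv": _read_csv,
}

def load_stage_data(path_str: str):
    """Return stage data regardless of how they were serialized."""
    path = pathlib.Path(path_str)
    try:
        return READERS[path.suffix](path)
    except KeyError:
        raise ValueError(f"no reader registered for {path.suffix!r}")
```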
With the breadth and depth of data created throughout the DTAM, it is critical to be able to sort through them all to extract useful information. Composability is the intelligent selection of available information to produce a better part design or process.41 In essence, composability is the ability to weave a digital thread (or threads) utilizing the enablers described above. In simple, single-part AM processes, composability is less crucial, but as organizations look to scale AM into their supply chains or utilize advanced design technologies to improve part design, production grows more complex and incorporates more parts and processes, making composability increasingly essential.
Data standards and federation ensure that each connected stage of the manufacturing process can communicate effectively with the others, even if they speak different languages or use different file formats. They are, arguably, the most important enablers for a successful DTAM deployment. Standards and federation work together to promote supply chain evolution by enabling frequent association between numerous manufacturers, distributors, and designers.
The notion of standards is integral to manufacturing: The modern assembly line was built upon standardization, and supply chains could not have scaled without standardization in common parts and processes. Today, AM standards are still in their nascent form, borrowing from formats originally established in the 1980s. Only through development of modern standards and architectures can disparate technologies cooperate and achieve a larger, more powerful network: the digital quilt.
Federation. We define federation with respect to AM as the ability for multiple technologies and machines to speak the same language, even if they are disparate and have different internal workings.42 Federation is only truly achievable through enhanced data standards and AM file formats that account for more than just part geometry.
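A small illustration of federation: translating vendor-specific log fields into one shared vocabulary before records enter the thread. The printer names and field mappings below are hypothetical.

```python
# Hypothetical vendor-specific field names mapped onto one shared schema.
VENDOR_MAPS = {
    "printer_a": {"pwr": "laser_power_w", "spd": "scan_speed_mm_s"},
    "printer_b": {"laserPower": "laser_power_w", "scanSpeed": "scan_speed_mm_s"},
}

def to_federated(vendor: str, record: dict) -> dict:
    """Translate one machine's log record into the federated vocabulary."""
    mapping = VENDOR_MAPS[vendor]
    return {mapping.get(field, field): value for field, value in record.items()}
```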
Information management encompasses the data technology on which the DTAM runs. The structural backbone underpinning the DTAM, information management comprises multiple facets, described in table 2.
Table 2. Facets of information management underpinning the DTAM
| Information management facet | Function/area of focus |
| --- | --- |
| Information/data intellectual property and cybersecurity | Protecting IP assets and design files against theft or malicious intent |
| Information integrity and metadata management | Validating data for traceability and certification; data and metadata associated with production/process data |
| Infrastructure support and design | Supporting and accommodating large data needs |
| Information transmission and consumption frequency | Enabling real-time access and processing of bulk data |
| Maintenance and upgrades | Ensuring that assets continue to function, and accommodating new and updated technologies and file formats |
| Organizational IT maturity | Accommodating change management needs |
As AM processes continue to grow in complexity, the data inputs and outputs of these systems will demand more robust information management. The management of these data is essential in high-quality parts fabrication, where a tremendous amount of machine control and sensor data is both required and created as a part moves from conception through production.43 This capability becomes even more crucial where in-situ monitoring necessitates real-time control, and as the entire QA process necessitates data archiving. Information management can also protect and validate data, enabling each part to have a digital twin or body of knowledge. Emerging information management technologies are helping to ensure information integrity and traceability—with implications for the DTAM.44
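One such technique is hash-chaining production records so that any later tampering with an earlier entry becomes detectable, which supports both integrity and traceability. A minimal sketch of the general idea, not of any specific product:

```python
import hashlib
import json

def append_record(chain: list, record: dict) -> list:
    """Append a production record whose hash also covers the previous entry,
    so any later tampering with earlier entries becomes detectable."""
    prev_hash = chain[-1]["hash"] if chain else ""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    digest = hashlib.sha256(payload.encode()).hexdigest()
    chain.append({"record": record, "hash": digest})
    return chain
```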
The shape each DTAM takes depends on the scenario at hand: The scale of production, the scope of product variety, and the level of QA needed will each play a role in determining the level of resource investment needed for successful DTAM implementation. Some manufacturers will need to create, store, and process large amounts of production data, while others will need to focus on geographically federating production of parts to create a leaner digital supply chain. Taking a deliberate approach to building and implementing the right DTAM is thus crucial to its successful function. As with any large system deployment, implementing a DTAM is a complex process. Generally speaking, however, initial considerations and planning should focus on information technology, organizational and technical processes, and workforce development and training—or, put more simply, on people, process, and technology.
The DTAM will require significant computing and data storage capacity. Product development can require modeling and simulation, often on high-performance computing platforms, to optimize product design and account for the myriad variables in the AM build process. Furthermore, supply chain growth often requires data warehousing capabilities to capture data associated with each part build—especially in situations with robust QA requirements.45
Organizations should focus on securing the commodity or specialized hardware required for their intended application to accommodate these demands. Additionally, organizations may consider the implementation or expansion of PLM tools to track parts from design to field service. To truly serve the DTAM, these tools must accommodate a federated information environment based on evolving AM data standards that allow for frequent association with multiple parts, materials, processes, printers, locations, and environments.
Evolutions within both the supply chain and product design are two of the most notable ways in which the DTAM can disrupt engineering and manufacturing processes. For its part, product design and development must be adapted in response to the tighter coupling between design and analysis brought about by the DTAM. Further complicating matters, the advanced modeling and simulation tools that partially drive the DTAM may also disrupt current organizational structures, condensing roles and collapsing design processes—leading to confusion and, in some cases, resistance.46
Yet the DTAM brings still further changes, such as those for QA processes, with in-situ monitoring and additional NDE techniques. These new considerations associated with the DTAM shift the onus closer to the design phase and create feedback loops that require both technical and organizational process change.
As new AM technologies enter the market, workforce development should be a central priority for organizations. Change can be difficult, and learning new approaches—particularly those that may upend familiar and well-worn processes—can pose a high barrier.47 Implementation of a DTAM may pose something of a double whammy: adjusting not only to new manufacturing processes with AM but also to entirely new mind-sets. But for the DTAM to function successfully, the workforce must support its development and sustainment. Organizations should develop a roadmap that takes into account recruiting skilled resources, training current talent, assessing the organization’s willingness to adopt AM, and retaining critical staff.
As organizations seek to scale AM beyond one-off parts and rapid prototyping, the DTAM holds the key to linking the stages of the design and manufacturing process. Despite the promise it holds in revolutionizing AM adoption, however, the DTAM brings with it a number of challenges that companies must address as they seek to implement this capability: architectural considerations related to issues such as models and interoperability, and infrastructure needs around information management, federation, and standardization. Federation and standardization are perhaps the greatest challenges of all: the ability to manage and analyze immense data loads while ensuring systems from various DTAM stages can speak to each other.
As organizations seek to understand and implement the DTAM, it is important to take a deliberate approach across the people, process, and technology considerations outlined above.
The digital thread transcends AM and can be considered an essential step for industries looking to scale operations via processes linked together by data and analysis. Using information generated throughout the digital thread, manufacturers can more accurately assess product use, performance, and maintenance cycles, and adjust designs accordingly—ultimately reducing waste, optimizing product design, and improving functions. This is perhaps nowhere more relevant than with AM, where data can be crucial not only for production control and process monitoring but also for scaling production to truly realize its value at the enterprise level.
Deloitte Consulting LLP’s Supply Chain and Manufacturing Operations practice helps companies understand and address opportunities to apply advanced manufacturing technologies to impact their businesses’ performance, innovation, and growth. Our insights into additive manufacturing allow us to help organizations reassess their people, process, technology, and innovation strategies in light of this emerging set of technologies. Contact the authors for more information, or read more about our alliance with 3D Systems and our 3D Printing Discovery Center on www.deloitte.com.