Moving towards the ideal data strategy in manufacturing
As with chess, the right strategy is crucial to reaching your goal when digitalizing manufacturing operations. Find out how to implement a coherent data strategy using semantics and digital twins in this blog post.
Data is available in abundance in most manufacturing companies. But how can companies generate as much added value as possible from this ever-growing volume? While process data used to be of primary interest, the focus is now shifting to the product. Thanks to IoT capabilities, the product itself provides companies with additional data, even beyond factory boundaries. More and more useful information for increasing efficiency and improving quality comes from numerous sources – and is distributed in different data formats throughout the company. The biggest adversaries on the digitalization journey are now silo formation, data heterogeneity and incompatibility, and traceability gaps. These need to be tackled as early as possible with the right strategy.
The most important move: establishing data homogeneity
A data strategy is a structured and goal-oriented framework for the use of data as a business asset. Currently, many companies are already able to make good moves in certain areas – in terms of individual digitalization projects – but do not have the entire chessboard in view. Take, for example, the newly available option of obtaining field data from a product. This data flows back to customer service and is processed there to determine the causes of defects faster. The same data could theoretically also provide important insights for production quality or the development of subsequent products. However, most companies lack uniform semantics or the ability to assign the data to previous stages of the product lifecycle. To make this possible, the opening move should be to homogenize the data – both vertically and horizontally across the entire lifecycle. A small sketch of what that can look like follows below.
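To make the idea of homogenization more concrete, here is a minimal sketch in Python. The schema, field names, lifecycle stages, and source formats are purely illustrative assumptions, not a specific product or standard: two records from different sources are mapped onto one common schema and linked to a product instance and a lifecycle stage, so that different user groups can evaluate them together.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical common schema for a single data record; field names and
# lifecycle stages are illustrative, not taken from any specific standard.
@dataclass
class FieldDataRecord:
    product_serial: str      # links the record to one product instance
    lifecycle_stage: str     # e.g. "development", "production", "field"
    measurement: str
    value: float
    unit: str
    recorded_at: datetime

def from_service_export(row: dict) -> FieldDataRecord:
    """Map a (hypothetical) customer-service export row onto the common schema."""
    return FieldDataRecord(
        product_serial=row["serialNo"],
        lifecycle_stage="field",
        measurement=row["errorCode"],
        value=float(row["count"]),
        unit="occurrences",
        recorded_at=datetime.fromisoformat(row["timestamp"]),
    )

def from_plc_log(row: dict) -> FieldDataRecord:
    """Map a (hypothetical) shop-floor machine log entry onto the same schema."""
    return FieldDataRecord(
        product_serial=row["sn"],
        lifecycle_stage="production",
        measurement=row["signal"],
        value=row["val"],
        unit=row["unit"],
        recorded_at=datetime.fromtimestamp(row["ts"], tz=timezone.utc),
    )

# Once both sources speak the same semantics, service, quality management
# and development can query them together by product serial number.
records = [
    from_service_export({"serialNo": "A-1001", "errorCode": "E42",
                         "count": "3", "timestamp": "2024-05-02T10:15:00+00:00"}),
    from_plc_log({"sn": "A-1001", "signal": "torque", "val": 12.7,
                  "unit": "Nm", "ts": 1714471200}),
]
for r in sorted(records, key=lambda r: r.recorded_at):
    print(r.product_serial, r.lifecycle_stage, r.measurement, r.value, r.unit)
```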
Technology stacks as enablers
Uniform semantics make data universally usable regardless of its source and form the basis of a coherent data strategy. Practical implementation requires different chess pieces, or in other words: a technology stack with a software toolbox to standardize data and make it usable for all user groups. The digital twin approach has proven its worth here, creating digital (data) representatives of individual instances, such as a single product. All collected data can be accessed easily via these digital twins and is bundled into so-called aspects: information groups that contain, for example, error and status data, master data, or historical information. In our example of the IoT-enabled product, the field data is now available to all user groups and applications. In addition, all user groups – from customer service and quality management to product development – can access all other aspects. Redundancies and traceability gaps become a thing of the past.
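The following minimal sketch illustrates the aspect idea described above, again with illustrative names rather than the API of a specific platform: one digital twin per product instance bundles its data into named aspects that every user group reads in the same way.

```python
from dataclasses import dataclass, field
from typing import Any

# Illustrative sketch of a digital twin with aspects (information groups).
# Aspect names and contents are assumptions for demonstration purposes.
@dataclass
class Aspect:
    name: str                          # e.g. "master_data", "status", "history"
    data: dict[str, Any] = field(default_factory=dict)

@dataclass
class DigitalTwin:
    product_serial: str
    aspects: dict[str, Aspect] = field(default_factory=dict)

    def add_aspect(self, aspect: Aspect) -> None:
        self.aspects[aspect.name] = aspect

    def get(self, aspect_name: str) -> Aspect:
        """Every user group -- service, quality, development -- reads the same aspect."""
        return self.aspects[aspect_name]

# One twin per product instance; all departments access the same bundles.
twin = DigitalTwin("A-1001")
twin.add_aspect(Aspect("master_data", {"type": "pump", "variant": "X2"}))
twin.add_aspect(Aspect("status", {"last_error": "E42", "operating_hours": 1830}))
twin.add_aspect(Aspect("history", {"produced": "2023-11-07", "firmware": ["1.0", "1.2"]}))

# Customer service and product development see the same error information:
print(twin.get("status").data["last_error"])     # -> E42
print(twin.get("master_data").data["variant"])   # -> X2
```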
Implementation during operation
Setting up value-adding software and services based on a technology stack sounds good in theory. In practice, however, corporate IT is a heterogeneous structure that has grown over decades and cannot be rebuilt from scratch. The good news first: it doesn't have to be. A lighthouse project that starts with data homogenization is sufficient for the first move. This opens up the chessboard for further moves, i.e. the integration of additional data sources, products, or company divisions. In our new white paper, we use three successful practical examples to illustrate how this can be achieved. It also describes all the technologies required to implement a coherent and future-proof data strategy. Your chess pieces are already in place; now it's time to plan your moves to success. The first one is easy: download our white paper!