January 18, 2024 · 8 minute read


Introduction

Many manufacturers assume that data analysis challenges stem from large data volumes. Given the amount of data necessary to follow a complex manufacturing process, this is a reasonable assumption. 

The reality is that data complexity is the largest challenge that manufacturers face when it comes to data analysis. Today’s manufacturing datasets include more variables than humans (and even most technologies) can process. This makes it difficult to consistently stabilize—or even better, optimize—processes at scale. 

Engineering teams often drive process optimizations using their intrinsic knowledge of physical processes, the assistance of tools like Excel or Power BI, and data from other IT/OT systems. However, this approach is no longer enough for manufacturers to drive a sustainable impact on profits.

Fortunately, some solutions automate data contextualization to diagnose and prescribe better alternatives. This automation will be the future of operational success. See why. 

Can time series data be used to determine root causes?

For years, engineers have successfully used historical data to support production trend analyses, quota forecasting, and anomaly detection. Industrial engineers record time series data to capture a sequence of equally spaced data points collected over time.
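
To make that concrete, here is a minimal sketch (assuming Python with pandas and a made-up oven sensor) of what such an equally spaced series looks like:

```python
import pandas as pd
import numpy as np

# Hypothetical example: one temperature reading per minute from a single oven sensor.
# The equally spaced timestamps are what make this a time series.
rng = np.random.default_rng(seed=0)
index = pd.date_range("2024-01-18 08:00", periods=240, freq="1min")
temperature = 180 + rng.normal(0, 1.5, size=len(index))  # synthetic readings, degrees C

line_data = pd.DataFrame({"oven_temp_c": temperature}, index=index)
print(line_data.head())
```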

Time series data can then be used for a variety of tasks, ranging from predictions to determining plausible root causes. This data can also be used for identifying deviations from historical patterns or triggering specific actions, such as automated alerts when performance drifts arise. Using time series analysis, you can improve product quality, design more sustainable operations, and, ultimately, identify cost-saving opportunities.
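
One simple way to detect such deviations, sketched below rather than reflecting any particular product's implementation, is to flag readings that fall outside a rolling statistical band computed from recent history (continuing the synthetic `line_data` series above):

```python
# Sketch: flag deviations from recent history with a rolling band, then use
# the flagged rows to drive an alert. `line_data` is the series built above.
window = 60  # one hour of 1-minute samples

rolling_mean = line_data["oven_temp_c"].rolling(window).mean()
rolling_std = line_data["oven_temp_c"].rolling(window).std()

# Alert whenever a reading sits more than 3 standard deviations from the rolling mean.
drifting = (line_data["oven_temp_c"] - rolling_mean).abs() > 3 * rolling_std
alerts = line_data[drifting]
print(f"{len(alerts)} readings outside the expected band")
```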

While time series data can’t ensure that you will find root causes, time series analysis can identify probable causes. Time series data captures a linear view of time-stamped production data. As a result, time series data usually lacks critical information necessary for determining true root causes. For example, dynamic time delays between variables, also known as lag, are missing from typical time series data. 
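
To illustrate the lag problem, the rough sketch below (with hypothetical column names) scans a range of fixed shifts and keeps the one at which an upstream input correlates most strongly with a downstream output. A raw time series table does not carry this delay information itself, and in a real process the lag is rarely a single fixed value:

```python
import pandas as pd

def estimate_lag(df: pd.DataFrame, input_col: str, output_col: str, max_lag: int = 120) -> int:
    """Return the fixed shift (in rows) of the input that correlates most
    strongly with the output. Assumes equally spaced rows; a crude stand-in
    for the dynamic, material-dependent lag of a real process."""
    best_lag, best_corr = 0, 0.0
    for lag in range(max_lag + 1):
        corr = df[input_col].shift(lag).corr(df[output_col])
        if pd.notna(corr) and abs(corr) > abs(best_corr):
            best_lag, best_corr = lag, corr
    return best_lag

# e.g. estimate_lag(process_df, "inlet_pressure", "sheet_thickness")
```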

Even if lag is considered in a time series model, this approach has limitations. Building, experimenting with, and maintaining time series models is both costly and demanding. Companies with limited resources may lack the budget to hire a team of data scientists to build and maintain time series models, and they cannot afford to take production offline for experimentation. Furthermore, building a usable time series model typically relies on a large volume of historical data, which is not always available at every manufacturing company.

Determining root cause from time series data requires a combination of analytical skills, domain knowledge, and sometimes collaboration with experts in the field. For example, you may run a study on a machine failure, a series of defects, or a lowered productivity score. You use statistical methods to identify relationships between the time series data and a potential root cause. If you find something promising, you can put it into production and track changes over time.
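
A heavily simplified version of such a study might look like the following sketch, where `process_df` and the variable names are assumptions; a statistical ranking like this points to probable causes that domain experts still need to validate:

```python
import pandas as pd

# Hypothetical study: rank candidate process variables by how strongly they
# correlate with a defect indicator. `process_df` is an assumed, time-stamped
# table of process variables plus a 0/1 `defect_flag` column.
candidate_vars = ["oven_temp_c", "line_speed", "moisture_pct"]

correlations = (
    process_df[candidate_vars]
    .corrwith(process_df["defect_flag"])
    .abs()
    .sort_values(ascending=False)
)
print(correlations)  # strongest statistical association first; still only a probable cause
```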

But even in this example, you are forced to react to what has already happened, compounding any loss you have already experienced. With predictions from time series analysis, you must wait for events to unfold before you can compare model predictions with actual results and see whether those predictions were correct.

Time series data provides valuable insights to many different industries and can help engineering teams make major strides toward their production goals. In fact, Braincube offers time series digital twins that can drastically improve an operation’s bottom line.

However, on its own, time series data is not designed to comprehensively connect how different inputs impact downstream outputs. Instead, time series data is best for informing predictions and detecting anomalies. 

Understanding how each variable is interrelated—both inputs and downstream outputs—is critical for the production improvements that truly move the needle. This is especially true in continuous and continuous-batch manufacturing, where process steps are disconnected.

What is the difference between time series data and contextualized data? 

Time series data lacks contextual information that is generated during production. To get a full picture of your production process, you need a data set that reflects true production conditions as accurately as possible. In other words: you need a contextualized data set.

One way to contextualize a time series data set is by accounting for residency time. Adding residency time to a data set means incorporating the different lengths of time that material spends in a particular state or phase when performing an analysis. This can be relevant for tracking and optimizing data workflows because it adds precision to your analyses, making it possible to pinpoint solutions rather than getting estimates or averages. 

Take, for instance, process optimizations. Optimizations are most effective when the data closely matches the physical reality of the process. Accounting for residency time therefore gives a more accurate view of the production environment than time series data alone, because it ties your data to what is actually happening on the line.

For example, let’s say materials move through a 10,000-gallon tank during production. Accounting for residency time means recognizing, when performing an analysis, that fluid entering at the top of the tank takes 30 minutes to reach the bottom: it does not flow out the instant it flows in.

If you perform an analysis without accounting for contextual information, such as residency time, you will have an inaccurate analysis that will likely lead to subpar results. Understanding—and accounting for—how long material remains in different states can help generate more accurate analyses, making it possible to identify bottlenecks and improve overall efficiency. 
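
Continuing with the tank example, a minimal sketch of that adjustment (assuming pandas and hypothetical `inlet_df` and `outlet_df` tables sampled once per minute) is to shift the inlet readings forward by the 30-minute residency time before pairing them with outlet measurements:

```python
import pandas as pd

# Hypothetical 1-minute data: `inlet_df` holds conditions at the top of the tank,
# `outlet_df` holds quality measurements at the bottom.
residency = pd.Timedelta(minutes=30)

# Shift the inlet timestamps forward by the residency time so each outlet sample
# is paired with the material that actually produced it.
aligned_inlet = inlet_df.shift(freq=residency)

contextualized = outlet_df.join(aligned_inlet, how="inner")
print(contextualized.head())  # inlet and outlet rows now describe the same material
```

In practice, residency time varies with flow rate and fill level, so a fixed 30-minute shift is itself a simplification; capturing that dynamic context automatically is precisely what contextualization tooling is for.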

Residency time allows engineers to make more precise analyses than time series data alone. In turn, these analyses enable teams to change the specific inputs that affect downstream outputs. Braincube’s approach leverages mathematical probability rather than predictions alone. To illustrate, residency time allows engineers to adjust and control the optimal flow rate, temperature, and other factors to achieve desired outcomes in manufacturing processes.

In simple terms, time series data helps engineers find patterns and make predictions. By adding contextual information like residency time (often by using digital twins), engineers can prescribe specific actions or solutions to improve processes or address inefficiencies.

Conclusion

Time series data and predictive analytics do not inherently suggest actionable steps to address or optimize the scenarios they identify. This type of data cannot generate prescriptive insights because it primarily deals in patterns, correlations, and predictions rather than prescribing specific actions or solutions to improve processes or address inefficiencies. Most time series data sets also lack contextual information, like residency time, and context is critical when trying to understand how something was produced or why something went wrong.

If engineering teams are to adequately prescribe continuous optimizations, they will need both time series data (for trend forecasting and anomaly detection) and structured data (for contextual understanding and historical analysis). Integrating these two data types into analyses can facilitate a holistic approach to decision-making in manufacturing, allowing for both predictive and prescriptive capabilities to optimize processes effectively.
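
As a rough illustration of what that integration can look like (the tables and column names below are assumptions), time-stamped sensor readings might be joined with structured batch records before analysis:

```python
import pandas as pd

# Hypothetical inputs: `sensor_df` holds time-stamped readings tagged with a batch_id;
# `batch_df` holds structured context per batch (recipe, raw-material lot, line setup).
enriched = sensor_df.merge(batch_df, on="batch_id", how="left")

# With context attached, quality can be compared per recipe rather than plant-wide,
# which is the kind of view prescriptive analysis builds on.
per_recipe_defects = enriched.groupby("recipe")["defect_flag"].mean()
print(per_recipe_defects.sort_values())
```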

Utilizing technologies that provide you with lagged, structured data—like Braincube Digital Twins—drastically reduces the time to value for data analysis. With a holistic data set that is continuously updated with real-time data for analyses, engineers can uncover dependencies, correlations, or causal relationships between various factors. In turn, they can solve complex problems quickly and with more precision. 

Want to see how Braincube’s Digital Twins can help you overcome KPI plateaus?
