The most important terms
Operational technology (OT) refers to systems that process operational data (e.g., from sensors or actuators) to regulate process values such as speed, temperature, pressure, or flow. OT is typically found in factories to control assets (e.g., machines, valves, motors, conveyors) and is based on very diverse communication technologies. Generally, these technologies cannot be interpreted by IT (e.g., software applications) without major integration effort.
Different types of resources are required to operate a plant (e.g., machines, sensors, equipment, systems, processes). We call them assets. Once created in i-flow, an asset represents a specific factory resource (e.g., a milling machine in factory X at line Y, or a temperature sensor in factory X at test bench Y). An asset is defined by a connection to one or more factory resources and one or more data models. Assets can typically be classified into different categories. A category might be a set of machines, sensors, etc. that serve a common purpose in factories (e.g., milling machines for milling processes, temperature sensors for measuring process temperatures). Assets of the same category are typically based on the same data model(s).
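To make the structure tangible, here is a minimal sketch in plain Python of how an asset could be represented as a connection plus one or more data models. The class names, field names, and endpoint are illustrative assumptions for this example and do not reflect the i-flow API.

```python
from dataclasses import dataclass, field

@dataclass
class DataModel:
    """Shared structure for assets of one category (illustrative)."""
    name: str
    fields: dict  # e.g. {"temperature_c": "float", "timestamp": "str"}

@dataclass
class Asset:
    """A specific factory resource: a connection plus one or more data models."""
    name: str                     # e.g. "Milling machine, factory X, line Y"
    category: str                 # e.g. "milling_machine"
    connection: str               # e.g. an OPC UA endpoint (assumed)
    data_models: list = field(default_factory=list)

# Assets of the same category typically share the same data model(s).
temperature_model = DataModel("temperature", {"temperature_c": "float", "timestamp": "str"})
sensor = Asset(
    name="Temperature sensor, factory X, test bench Y",
    category="temperature_sensor",
    connection="opc.tcp://10.0.0.12:4840",
    data_models=[temperature_model],
)
print(sensor)
```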
A data model is an abstract archetype for describing data and properties of real objects (e.g., machines) and their relationships to each other. Here, a data model defines the structure of the data (e.g., how data can be received or displayed by a software application). A data model in i-flow is usually derived from a defined business objective. Imagine, for example, that you want to improve your machine efficiency and, for this purpose, calculate the key figure “OEE” (Overall Equipment Effectiveness) for your machine. Before connecting to the machine, you can model the data and its structure in the way it is expected by your analysis or OEE application. You can then reuse this data model to connect any other machine to your OEE use case. Data models are usually defined by an expert for the corresponding use case – in other words, someone who knows what data the target system expects and how it must be structured.
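As an illustration, the sketch below shows in plain Python what a data model expected by an OEE application could look like, together with the standard OEE calculation (availability x performance x quality). The field names are assumptions for this example, not a prescribed i-flow schema.

```python
from dataclasses import dataclass

@dataclass
class OeeInput:
    """Structure an OEE application could expect per machine and shift (illustrative)."""
    machine_id: str
    planned_time_min: float
    runtime_min: float
    ideal_cycle_time_s: float
    total_count: int
    good_count: int

def oee(d: OeeInput) -> float:
    """OEE = availability x performance x quality (standard definition)."""
    availability = d.runtime_min / d.planned_time_min
    performance = (d.ideal_cycle_time_s * d.total_count) / (d.runtime_min * 60)
    quality = d.good_count / d.total_count
    return availability * performance * quality

# Any machine mapped onto this structure can feed the same OEE use case.
print(round(oee(OeeInput("mill-01", 480, 420, 30, 700, 680)), 3))
```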
In terms of data, context is defined as the set of circumstances surrounding the data. Only when data is provided with context can it be fully understood and correctly interpreted. This is necessary to use data correctly in a software application (e.g., an analysis tool) and to derive correct decisions from it. There are three basic rules: always provide data with context; context should always be true and relevant; and context must be interpretable by third parties. As simple as this sounds in theory, it is difficult to implement in practice. This is especially true for raw data from factory equipment (e.g., machines) combined with a dynamic business environment (e.g., staff turnover, a changing IT/OT landscape). Contact us to learn how i-flow automates adding relevant context to your data.
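To show the difference context makes, here is a small illustrative sketch in plain Python contrasting a bare raw value with a contextualized record that a third party could interpret. All field names and values are assumptions for this example.

```python
import json
from datetime import datetime, timezone

# A raw value from a machine is hard to interpret on its own ...
raw_value = 73.4

# ... while a contextualized record can be understood and reused by third parties.
contextualized = {
    "value": 73.4,
    "unit": "degC",                      # context must be interpretable
    "signal": "spindle_temperature",
    "asset": "milling_machine_07",
    "factory": "plant_x",
    "line": "line_y",
    "timestamp": datetime.now(timezone.utc).isoformat(),
}
print(json.dumps(contextualized, indent=2))
```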
Interoperability is the ability of heterogeneous systems to seamlessly interact, exchange data, and provide it to users in an efficient and usable manner without the need for costly customization. Interoperability accelerates digital transformation. Achieving it is particularly challenging in heterogeneous factory environments consisting of operational technology (e.g., machines, sensors) and information technology (e.g., MES, cloud applications). There are three levels of interoperability: data interoperability (data can be understood and exchanged by systems), semantic interoperability (relationships between systems and data can be understood – even in dynamic environments), and network interoperability (communication across security zones and hybrid edge-to-cloud architectures). Contact us to learn how i-flow can help you achieve interoperability across all layers.
Discoverability of data is the degree to which information and data sets can be found and interpreted in order to integrate them into target systems (e.g., analyses). Discovery processes involve searching, understanding, preparing, and integrating data into a target system to ultimately achieve a specific business goal (e.g., reducing machine downtime). The effort required to identify data from operational technology (e.g. machines) can be reduced to a minimum if perfectly contextualized and clean data is available. Contact us to learn more.
Data flexibility describes the extent to which data can be easily adapted to changing business requirements without incurring high data processing overhead. Flexibility allows a solution or application to grow, shrink, or adapt to meet changing requirements while minimizing the need for customization. This is especially helpful in dynamic environments (e.g., integrating new machines, rolling out use cases to additional machines, applying new use cases to existing data).
Data aggregation describes the process of combining different data, usually from multiple sources (e.g., databases), to feed a target application (e.g., for data analysis). This is a very important step in increasing the meaningfulness of the insights gained from the data, and depends heavily on the quality of the source data.
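As a simple illustration, the plain-Python sketch below combines hypothetical machine counts with quality results from a second source into one record per production order. The data sets and field names are invented for this example.

```python
# Two sources describing the same production orders (invented data).
machine_counts = [
    {"order": "A-100", "produced": 500},
    {"order": "A-101", "produced": 350},
]
quality_results = [
    {"order": "A-100", "rejected": 12},
    {"order": "A-101", "rejected": 4},
]

# Index the second source by order number, then merge per order.
rejects_by_order = {q["order"]: q["rejected"] for q in quality_results}
aggregated = [
    {
        **m,
        "rejected": rejects_by_order.get(m["order"], 0),
        "yield": (m["produced"] - rejects_by_order.get(m["order"], 0)) / m["produced"],
    }
    for m in machine_counts
]
print(aggregated)
```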
Data cleansing describes the process of transforming poor-quality data into good-quality data for use in IT (e.g., for analysis). Poor-quality data is data that is damaged, incorrect, not in the correct format, duplicated, or incomplete. Cleansing is particularly time-consuming when several data sets of poor quality have to be combined. Data cleansing is critical because your data-driven decisions can only be as good as the analysis of the data, which in turn can only be as good as the data itself: bad data in, bad analysis and bad decisions out (“garbage in, garbage out”). Today, this process is highly manual and demanding, especially in environments with a lot of operational technology and factory equipment that generates vast amounts of raw, uncontextualized data. The result: the vast majority of today’s factory data is of poor quality. So how about minimizing your data cleansing efforts by automatically cleansing and harmonizing your factory data? Contact us and find out how i-flow can support you.
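The sketch below illustrates a few typical cleansing steps in plain Python: dropping incomplete and duplicate readings and normalizing date and decimal formats. The input records and field names are made up for this example and do not represent an i-flow feature.

```python
# Hypothetical raw readings: duplicates, missing values, mixed date and decimal formats.
raw = [
    {"ts": "2024-05-01T08:00:00", "temp": "71.5"},
    {"ts": "2024-05-01T08:00:00", "temp": "71.5"},   # duplicate
    {"ts": "2024-05-01T08:01:00", "temp": None},      # incomplete
    {"ts": "01.05.2024 08:02:00", "temp": "72,1"},    # dd.mm.yyyy date, comma decimal
]

def clean(records):
    seen, out = set(), []
    for r in records:
        if r["temp"] is None:
            continue                                   # drop incomplete rows
        ts = r["ts"].replace(" ", "T")
        if "." in ts.split("T")[0]:                    # normalize dd.mm.yyyy dates
            d, t = ts.split("T")
            day, month, year = d.split(".")
            ts = f"{year}-{month}-{day}T{t}"
        temp = float(str(r["temp"]).replace(",", "."))  # normalize decimal separator
        key = (ts, temp)
        if key in seen:
            continue                                   # drop duplicates
        seen.add(key)
        out.append({"ts": ts, "temp_c": temp})
    return out

print(clean(raw))
```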
Data transformation is the process of converting data from its raw original form into data that can be used in a target system (e.g. database, application). This includes, among other things, changing the format, assigning data types, and manipulating values.
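As a simple illustration, the following plain-Python sketch transforms a hypothetical raw, semicolon-separated machine reading into a structured record by changing the format, assigning data types, and manipulating values. The input format and field names are assumptions for this example.

```python
# A hypothetical raw reading as it might leave a machine controller.
raw_line = "MILL01;2024-05-01 08:00:00;71,5;RUNNING"

def transform(line: str) -> dict:
    machine, ts, temp, state = line.split(";")
    return {
        "machine_id": machine.lower(),                    # change the format
        "timestamp": ts.replace(" ", "T") + "Z",          # ISO 8601, UTC assumed
        "temperature_c": float(temp.replace(",", ".")),   # assign a data type
        "is_running": state == "RUNNING",                 # manipulate the value
    }

print(transform(raw_line))
```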
Metadata is “data about data”. With metadata, you can describe the source of the data (e.g., machine, database), its structure, its data types, or data access permissions. Metadata can be added manually or automatically. Overall, metadata enables data to be found, interpreted, and processed accordingly. We would be happy to send you information on how i-flow automatically links your factory data with relevant metadata. Contact us.
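For illustration, here is a small plain-Python sketch of a data record carrying metadata about its source, structure, data types, and access permissions. The keys and values are assumptions for this example, not an i-flow schema.

```python
# A record paired with metadata describing where it comes from and how to read it.
record = {
    "data": {"temperature_c": 71.5},
    "metadata": {
        "source": "opc.tcp://10.0.0.12:4840",    # origin of the data (assumed endpoint)
        "asset": "milling_machine_07",
        "schema": {"temperature_c": "float"},     # structure and data types
        "unit": {"temperature_c": "degC"},
        "access": ["maintenance", "quality"],     # access permissions
        "collected_by": "edge-gateway-03",        # added automatically, for example
    },
}
print(record["metadata"]["source"])
```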
A tag is a keyword, label or marker that adds information to assigned data or records. For example, raw data can be tagged with keywords to describe the origin of the data it contains (e.g., from which factory, production line, and machine). Tags are a subset of metadata and help make data or records easy to find and interpret. In i-flow, users can select tags from a list of keywords predefined by the administrator (controlled vocabulary, such as factory ID).
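The following plain-Python sketch illustrates tagging against a controlled vocabulary, as described above. The vocabulary and the record are invented for this example and do not reflect how i-flow implements tags internally.

```python
# A controlled vocabulary as an administrator might predefine it (invented).
ALLOWED_TAGS = {"factory:plant_x", "line:line_y", "machine:mill-01", "shift:early"}

def tag_record(record: dict, tags: set) -> dict:
    """Attach tags to a record, rejecting keywords outside the vocabulary."""
    unknown = tags - ALLOWED_TAGS
    if unknown:
        raise ValueError(f"Tags not in controlled vocabulary: {unknown}")
    return {**record, "tags": sorted(tags)}

print(tag_record({"temperature_c": 71.5}, {"factory:plant_x", "machine:mill-01"}))
```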
Data enrichment is the process of refining existing information by adding additional information (e.g., due to missing or incomplete data, new requirements). This usually involves enriching existing data with data from other sources. In factories, for example, raw data from machines (process data) can be enriched with information about the processed workpiece (product data) or the production order from separate IT applications or databases.
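As an illustration, this plain-Python sketch enriches hypothetical machine process data with order information looked up in a separate source. The data and field names are assumptions for this example.

```python
# Raw process data from a machine (invented).
process_data = {"machine_id": "mill-01", "order": "A-100", "spindle_temp_c": 71.5}

# Product and order information as it might come from a separate IT system (invented).
order_data = {
    "A-100": {"product": "gear_shaft", "material": "42CrMo4", "lot": "L-2024-118"},
}

# Enrich the process data with the matching order information.
enriched = {**process_data, **order_data.get(process_data["order"], {})}
print(enriched)
```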
Data harmonization is a data-processing step that ensures all factory information (data) can be understood, interpreted, and used consistently across all business locations, systems, and processes.
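To make this concrete, the plain-Python sketch below maps two hypothetical site-specific record formats onto one shared structure and unit. The field names and the unit conversion are assumptions for this example.

```python
# The same measurement reported differently at two sites (invented formats).
site_a = {"Temp_F": 160.7, "MachineNo": "7"}
site_b = {"temperatur_c": 71.5, "maschine": "mill-01"}

def harmonize_site_a(r: dict) -> dict:
    """Map site A's naming and Fahrenheit values onto the shared schema."""
    return {
        "machine_id": f"mill-{int(r['MachineNo']):02d}",
        "temperature_c": round((r["Temp_F"] - 32) * 5 / 9, 1),
    }

def harmonize_site_b(r: dict) -> dict:
    """Site B already uses Celsius; only the field names are mapped."""
    return {"machine_id": r["maschine"], "temperature_c": r["temperatur_c"]}

# Both sites now deliver the same structure and unit to every target system.
print(harmonize_site_a(site_a))
print(harmonize_site_b(site_b))
```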
Any questions? Speak to us now
I look forward to speaking with you.
Daniel Goldeband
Director of i-flow