Glossary

Our glossary defines the terms that are relevant in the context of i-flow.

Operation technology (OT) comprises systems that process operational data (e.g. from sensors or actuators) to regulate process values such as speed, temperature, pressure or flow. OT can typically be found in factories to control assets (e.g., machines, valves, engines, conveyors) and is based on very diverse communication technologies. Generally, these technologies cannot be interpreted by IT (e.g., software applications) without major integration efforts.

Different types of resources are required to operate a plant (e.g. machines, sensors, equipment, systems, processes). We call them assets. Once created in i-flow, an asset represents a specific factory resource (e.g. a milling machine in factory X at line Y, a temperature sensor in factory X at test bench Y). An asset is defined by a connection to one or more factory resources and one or more data models. Assets can typically be classified into different categories. A category might be a set of machines, sensors etc. that share a common purpose in factories (e.g. milling machines for milling processes, temperature sensors to measure process temperatures). Assets of the same category are typically based on the same data model(s).
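
As a rough illustration, an asset could be thought of as a structure like the following Python sketch; the class and field names are hypothetical and do not reflect i-flow's actual data structures.

```python
from dataclasses import dataclass, field

# Illustrative sketch only -- class and field names are hypothetical,
# not i-flow's actual data structures.

@dataclass
class Connection:
    protocol: str   # e.g. "OPC UA", "MQTT"
    endpoint: str   # address of the factory resource

@dataclass
class Asset:
    name: str                      # e.g. "Milling machine, factory X, line Y"
    category: str                  # e.g. "milling machine"
    connections: list[Connection]  # one or more factory resources
    data_models: list[str] = field(default_factory=list)  # attached data models

asset = Asset(
    name="Milling machine, factory X, line Y",
    category="milling machine",
    connections=[Connection(protocol="OPC UA", endpoint="opc.tcp://10.0.0.12:4840")],
    data_models=["OEE"],
)
```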

A data model is an abstract archetype that describes the data and properties of real-world assets (e.g., machines) and how they relate to one another. In doing so, a data model determines the structure of the data (e.g., how data can be received or is exposed by a software application). A data model within i-flow is typically derived from a defined business goal. Imagine you want to improve your machine efficiency and therefore calculate the KPI “OEE” (Overall Equipment Effectiveness) of your machine. Before connecting to the machine, you can model the data and its structure that is expected by your analytics or OEE application. You can re-use this data model whenever you want to add another machine to your OEE use case. Data models are typically defined by an expert in the corresponding use case (e.g., OEE) – someone who knows what data is expected by the target system and how it must be structured.
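
For illustration, OEE is commonly calculated as OEE = Availability × Performance × Quality. The sketch below shows what a simple data model and OEE calculation for this use case could look like in Python; the class and field names are illustrative and do not represent i-flow's modelling syntax.

```python
from dataclasses import dataclass

# Hypothetical data model for an OEE use case -- fields are illustrative,
# not i-flow's actual modelling syntax.
@dataclass
class OeeRecord:
    machine_id: str
    planned_time_min: float    # planned production time
    runtime_min: float         # actual runtime
    ideal_cycle_time_s: float  # ideal time per part
    total_parts: int
    good_parts: int

def oee(r: OeeRecord) -> float:
    """OEE = Availability x Performance x Quality."""
    availability = r.runtime_min / r.planned_time_min
    performance = (r.ideal_cycle_time_s * r.total_parts) / (r.runtime_min * 60)
    quality = r.good_parts / r.total_parts
    return availability * performance * quality
```

Every machine added to the OEE use case can then deliver its data in exactly this structure, which is what makes the data model re-usable.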

With regard to data, context is the set of circumstances that surrounds the data. Only when data is provided with context can it be fully understood and interpreted correctly, and thus be used in a software application (e.g., an analytics tool) to derive correct decisions. Therefore, make sure you follow these 3 rules: always provide context, always provide context that is true and relevant, and always provide context that can be interpreted. As simple as it sounds, it is hard to do. This is especially true for raw data from operation technology (e.g., machines) and for dynamic global business environments (staff fluctuation, changing IT and OT). Contact us and learn how i-flow can support you in establishing context so humans and machines can arrive at the most accurate understanding of data.
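
For illustration, compare a raw OT value with the same value provided with context; the field names below are illustrative, not a fixed i-flow schema.

```python
# A raw OT value with no context -- hard to interpret on its own:
raw = 87.4

# The same value with context -- source, meaning, unit and time are explicit.
contextualized = {
    "value": 87.4,
    "unit": "degC",
    "signal": "spindle_temperature",
    "asset": "milling-machine-07",
    "factory": "factory-X",
    "line": "line-Y",
    "timestamp": "2024-05-13T09:41:27Z",
}
```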

Interoperability is the ability of heterogeneous systems to interact seamlessly so that data can be exchanged and made available to users in an efficient and usable manner, without the need for costly adaptations. While interoperability accelerates digital transformation, it is especially challenging to achieve in factory environments with systems from operation technology (e.g., machines, sensors) and information technology (e.g., MES, cloud applications). There are 3 layers of interoperability: data interoperability (data can be understood and exchanged by systems), semantic interoperability (the interrelationships between systems and data can be understood – even in very dynamic environments) and network interoperability (cost-effective communication between security zones and hybrid edge-to-cloud architectures). Please contact us and learn how i-flow can support you in establishing interoperability across all layers.

With regard to data, discoverability is the degree to which information and datasets can be found and interpreted accurately to integrate into target systems (e.g., analytics). Discovery processes include searching, understanding, preparing and integrating data to achieve a certain business goal (e.g., reduce machine downtimes) by sending the data to a target system or applying self-service analytics. The effort of discovering data from operation technology (e.g., machines) is reduced to a minimum when accessing perfectly contextualized and clean data. Please contact us to learn more.

Data flexibility describes the degree to which data can easily be adapted to changing business requirements without costly data processing efforts. Flexibility allows a solution or application to grow, shrink or change in order to meet changing requirements while minimizing adaptation needs. This is especially helpful in dynamic environments (e.g., adding operation technology, adapting to changing machine parameters, applying new use cases to existing data).

Data aggregation describes the process of combining data, typically from multiple sources (e.g., databases), to feed an application (e.g., for data analysis). This is a very important step to ensure the accuracy of insights derived from data and is highly dependent on the quality of the source data.
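
As a minimal illustration (the sources and field names are hypothetical):

```python
# Aggregating machine counts from two hypothetical sources into one record
# for an analytics application.
counts_line_1 = [{"machine": "M1", "parts": 120}, {"machine": "M2", "parts": 98}]
counts_line_2 = [{"machine": "M3", "parts": 143}]

aggregated = {
    "factory": "factory-X",
    "total_parts": sum(r["parts"] for r in counts_line_1 + counts_line_2),
    "machines": [r["machine"] for r in counts_line_1 + counts_line_2],
}
# {'factory': 'factory-X', 'total_parts': 361, 'machines': ['M1', 'M2', 'M3']}
```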

Data cleansing or data cleaning describes the process of turning poor quality data into good quality data for use in IT (e.g., analytics). Poor quality data is data that is corrupt, incorrect, in the wrong format, duplicated or incomplete. Cleansing becomes especially challenging when combining multiple data sets that are of poor quality. Data cleaning is crucial, because your data-driven decisions can only be as good as the analysis of the data, which in turn can only be as good as the data you are using. Garbage data in, garbage analysis and decisions out. Nowadays, this process is highly manual and challenging, especially in environments with lots of operation technology (OT) creating large amounts of uncontextualized raw data. As a result, most OT data is of poor quality today. But what if you could reduce your data cleansing efforts to a minimum because data from your operations is clean by default? Contact us and learn how i-flow can support you.
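
A minimal cleansing sketch in Python, assuming hypothetical rows with duplicates, missing values and inconsistent formats:

```python
# Deduplicate, drop incomplete rows and normalize the temperature format.
# Field names and rules are illustrative only.
raw_rows = [
    {"machine": "M1", "temp": "87,4"},   # wrong decimal separator
    {"machine": "M1", "temp": "87,4"},   # duplicate
    {"machine": "M2", "temp": None},     # incomplete
    {"machine": "M3", "temp": "90.1"},
]

seen = set()
clean_rows = []
for row in raw_rows:
    key = (row["machine"], row["temp"])
    if row["temp"] is None or key in seen:
        continue                          # drop incomplete rows and duplicates
    seen.add(key)
    clean_rows.append({"machine": row["machine"],
                       "temp": float(row["temp"].replace(",", "."))})
# [{'machine': 'M1', 'temp': 87.4}, {'machine': 'M3', 'temp': 90.1}]
```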

Data transformation is the process of turning data from its “raw” form into data that can be used in a target system (e.g., database, application). This includes changes of format or data type, data mapping, value manipulation and so on.
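
For illustration, a minimal transformation in Python (the field names and conversions are hypothetical):

```python
# Map raw OT fields onto the structure a target system expects,
# including type and unit conversion. Names are illustrative only.
raw = {"TEMP_C_x10": "874", "ts": 1715593287}            # raw form from the source

transformed = {
    "temperature_celsius": int(raw["TEMP_C_x10"]) / 10,  # value manipulation + type change
    "timestamp": raw["ts"],                              # field mapping / renaming
}
# {'temperature_celsius': 87.4, 'timestamp': 1715593287}
```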

Metadata is “data about data”. It can describe the resource the data originates from (e.g., machine, database), its structure, data types or data access permissions. Metadata can be added manually or automatically. Altogether, metadata allows data to be found, interpreted and processed accordingly. We would be happy to send you information on how i-flow automatically links your factory data with relevant metadata. Please contact us.
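
As a simple illustration (the keys below are illustrative, not an i-flow schema):

```python
# The data itself plus metadata describing its origin, structure and access.
payload = {"spindle_temperature": 87.4}

metadata = {
    "source": "milling-machine-07",        # resource the data originates from
    "data_type": "float",
    "unit": "degC",
    "access": ["maintenance", "analytics"],  # access permissions
}
```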

A tag is a keyword, label or marker that adds information to the data or set(s) of data that it is assigned to. For example, raw data can be tagged with keywords to describe the origin of the data it contains (e.g., from which factory, production line and machine). Tags are a subset of metadata and help to make data or set(s) of data easy to find and interpret. Within i-flow, users can choose tags from a list of keywords (controlled vocabulary e.g., factory ID) that has been pre-defined by the administrator.
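
As a minimal illustration of a controlled vocabulary (the vocabulary and tag keys are hypothetical):

```python
# Tags chosen from a pre-defined controlled vocabulary.
controlled_vocabulary = {"factory": ["factory-X", "factory-Y"],
                         "line": ["line-1", "line-2"]}

tags = {"factory": "factory-X", "line": "line-1"}

# A simple check that only allowed keywords are used:
assert all(value in controlled_vocabulary[key] for key, value in tags.items())
```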

Data enrichment is the process of augmenting existing information by adding further information (e.g., because data is missing or incomplete, or because of new business requirements). Typically, this includes enriching existing data with data from other sources. In factories, for example, raw data from machines might be enriched with information about the processed part or the production order from IT applications or databases.
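
A minimal enrichment sketch, assuming a hypothetical order lookup from an IT system (e.g., an ERP or MES):

```python
# Raw machine data is augmented with order information from an IT source.
machine_data = {"machine": "M1", "parts": 120, "timestamp": "2024-05-13T09:41:27Z"}
order_lookup = {"M1": {"production_order": "PO-4711", "part_number": "A-123"}}

enriched = {**machine_data, **order_lookup[machine_data["machine"]]}
# {'machine': 'M1', 'parts': 120, 'timestamp': '2024-05-13T09:41:27Z',
#  'production_order': 'PO-4711', 'part_number': 'A-123'}
```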