ZVEI: Software-Defined Industry and Agentic AI bring Industry 4.0 to its full potential
Frankfurt, December 10, 2025 – Software-defined industry (SDI) and agent-based artificial intelligence (Agentic AI) are creating a new production paradigm.
Processes are controlled by software and AI, enabling plants to operate more flexibly, efficiently, resiliently, and autonomously.
“The next decade of productivity growth will come from IT or AI solutions,” says Gunther Koschnick, Division Manager at ZVEI.
“Therefore, thinking in terms of software is the logical next step to increasing autonomy in systems. This is the only way production in Germany and Europe can remain competitive, resilient, and sustainable in the future: We must actively shape this transformation through investments in SDI projects, the development of interoperable standards, and the further training of skilled workers.”
Traditional systems are now reaching their limits: long setup times, rigid structures, and a shortage of skilled workers. Agentic AI addresses these challenges. Autonomous software agents orchestrate production steps, prepare decisions, and react dynamically to disruptions or new product requirements.
“In today’s rapidly evolving world, we need production systems that can autonomously adapt to situations, reconfigure, and optimize,” Koschnick continued.
The advantage of SDI is that it connects the shop floor with IT: production modules are flexibly combined via digitally described capabilities, and digital twins enable simulation and rapid changes beyond production. SDI and Agentic AI thus unlock efficiency gains, enable new data-driven business models, and strengthen the resilience of global value chains. However, this requires interoperable standards, secure data spaces, and an innovation-friendly regulatory environment.
“Globally applicable, secure data standards are a key element,” emphasizes Koschnick.
Strategic compass for the path to SDI
With its white paper “From Automation to Action – Using Agentic AI for Software-Defined Industry,” the ZVEI (German Electrical and Electronic Manufacturers’ Association) offers guidance for this transformation. It contains clear recommendations for companies as well as demands on policymakers.
Source: ZVEI
EMR Analysis
More information on ZVEI: See the full profile on EMR Executive Services
More information on Wolfgang Weber (Chief Executive Officer and Chairman of the Executive Board, ZVEI): See the full profile on EMR Executive Services
More information on Dr. Gunther Kegel (President, ZVEI until May 20, 2026 + Chairman of the Management Board, Pepperl+Fuchs GmbH + Chief Executive Officer, Pepperl+Fuchs SE): See the full profile on EMR Executive Services
More information on Gunther Koschnick (Managing Director, Industry Division, ZVEI): See the full profile on EMR Executive Services
EMR Additional Notes:
- Software-Defined Industry (SDI):
- Software-Defined Industry (SDI), an acronym more commonly expanded as Software-Defined Infrastructure, is a technology approach in which IT resources (compute, storage, networking) are pooled, virtualized, and centrally managed by software. This enables automation, agility, and flexibility beyond what traditional hardware allows: data centers become self-aware and self-healing and can provision resources rapidly, much like an “app store,” boosting efficiency for businesses. The approach breaks the hardware-software link, using general-purpose hardware for diverse IT/OT needs from cloud to edge, and streamlines operations with AI integration for predictive maintenance and security.
- AI – Artificial Intelligence:
- Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems.
- As the hype around AI has accelerated, vendors have been scrambling to promote how their products and services use AI. Often what they refer to as AI is simply one component of AI, such as machine learning. AI requires a foundation of specialized hardware and software for writing and training machine learning algorithms. No one programming language is synonymous with AI, but several, including Python, R and Java, are popular.
- In general, AI systems work by ingesting large amounts of labeled training data, analyzing the data for correlations and patterns, and using these patterns to make predictions about future states. In this way, a chatbot that is fed examples of text chats can learn to produce lifelike exchanges with people, or an image recognition tool can learn to identify and describe objects in images by reviewing millions of examples.
- AI programming focuses on three cognitive skills: learning, reasoning and self-correction.
- The 4 types of artificial intelligence:
- Type 1: Reactive machines. These AI systems have no memory and are task specific. An example is Deep Blue, the IBM chess program that beat Garry Kasparov in the 1990s. Deep Blue can identify pieces on the chessboard and make predictions, but because it has no memory, it cannot use past experiences to inform future ones.
- Type 2: Limited memory. These AI systems have memory, so they can use past experiences to inform future decisions. Some of the decision-making functions in self-driving cars are designed this way.
- Type 3: Theory of mind. Theory of mind is a psychology term. When applied to AI, it means that the system would have the social intelligence to understand emotions. This type of AI will be able to infer human intentions and predict behavior, a necessary skill for AI systems to become integral members of human teams.
- Type 4: Self-awareness. In this category, AI systems have a sense of self, which gives them consciousness. Machines with self-awareness understand their own current state. This type of AI does not yet exist.
- Machine Learning (ML):
- Developed to mimic human intelligence, ML lets machines learn independently by ingesting vast amounts of data, applying statistical formulas, and detecting patterns.
- ML allows software applications to become more accurate at predicting outcomes without being explicitly programmed to do so.
- ML algorithms use historical data as input to predict new output values.
- Recommendation engines are a common use case for ML. Other uses include fraud detection, spam filtering, business process automation (BPA) and predictive maintenance.
- Classical ML is often categorized by how an algorithm learns to become more accurate in its predictions. There are four basic approaches: supervised learning, unsupervised learning, semi-supervised learning and reinforcement learning.
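As a minimal sketch of the supervised-learning idea described above, the snippet below trains a model on labeled "historical" data and uses it to predict outcomes for unseen data. The library (scikit-learn), the synthetic dataset, and the model choice are illustrative assumptions, not taken from the ZVEI paper.

```python
# Minimal supervised-learning sketch: historical labeled data in, predictions out.
# Assumes scikit-learn is installed; the data here is synthetic and illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic "historical" data: feature vectors with known labels.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)                      # learn patterns from labeled history
print("accuracy on unseen data:", model.score(X_test, y_test))
```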
- Deep Learning (DL):
- A subset of machine learning, Deep Learning enabled much smarter results than were originally possible with ML. Face recognition is a good example.
- DL makes use of layers of information processing, each gradually learning more and more complex representations of data. The early layers may learn about colors, the next ones about shapes, the following about combinations of those shapes, and finally actual objects. DL demonstrated a breakthrough in object recognition.
- DL is currently the most sophisticated AI architecture we have developed.
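The layered idea described above can be sketched as a small stack of network layers, each transforming the previous layer's output into a more abstract representation. PyTorch and the layer sizes are assumptions chosen purely for illustration.

```python
# Sketch of deep learning's layered processing: each layer builds a progressively
# more abstract representation of its input. Layer sizes are illustrative only.
import torch
from torch import nn

model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),   # early layers: low-level features (e.g. edges, colors)
    nn.Linear(256, 64),  nn.ReLU(),   # middle layers: combinations of those features
    nn.Linear(64, 10),                # final layer: task-specific output (e.g. 10 object classes)
)

x = torch.randn(1, 784)               # a dummy flattened 28x28 image
print(model(x).shape)                 # torch.Size([1, 10])
```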
- Generative AI (GenAI):
- Generative AI technology generates outputs based on some kind of input, often a prompt supplied by a person. Some GenAI tools work in a single medium, for example turning text inputs into text outputs. With the public release of ChatGPT in late November 2022, the world at large was introduced to an AI app capable of creating text that sounded more authentic and less artificial than any previous generation of computer-crafted text.
- Small Language Models (SLM) and Large Language Models (LLM):
- Small Language Models (SLMs) are artificial intelligence (AI) models capable of processing, understanding and generating natural language content. As their name implies, SLMs are smaller in scale and scope than large language models (LLMs).
- LLM means Large Language Models — a type of machine learning/deep learning model that can perform a variety of natural language processing (NLP) and analysis tasks, including translating, classifying, and generating text; answering questions in a conversational manner; and identifying data patterns.
- For example, virtual assistants like Siri, Alexa, or Google Assistant use LLMs to process natural language queries and provide useful information or execute tasks such as setting reminders or controlling smart home devices.
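A minimal sketch of using a (small) language model for text generation is shown below. The Hugging Face transformers library and the distilgpt2 model name are assumptions for illustration; any locally available causal language model would be used the same way.

```python
# Sketch: prompt a small language model and print its generated continuation.
# Library and model choice are illustrative assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")
prompt = "Software-defined industry means that"
result = generator(prompt, max_new_tokens=30, num_return_sequences=1)
print(result[0]["generated_text"])
```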
- Computer Vision (CV) / Vision AI & Machine Vision (MV):
- Field of AI that enables computers to interpret and act on visual data (images, videos). It works by using deep learning models trained on large datasets to recognize patterns, objects, and context.
- The most well-known case of this today is Google’s Translate, which can take an image of anything — from menus to signboards — and convert it into text that the program then translates into the user’s native language.
- Machine Vision (MV):
- Specific application for industrial settings, relying on cameras to analyze tasks in manufacturing, quality control, and worker safety. The key difference is that CV is a broader field for extracting information from various visual inputs, while MV is more focused on specific industrial tasks.
- Machine Vision is the ability of a computer to see; it employs one or more video cameras, analog-to-digital conversion and digital signal processing. The resulting data goes to a computer or robot controller. Machine Vision is similar in complexity to Voice Recognition.
- MV uses the latest AI technologies to give industrial equipment the ability to see and analyze tasks in smart manufacturing, quality control, and worker safety.
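The snippet below sketches a machine-vision style inspection step: an image from an (assumed) inspection camera is analyzed and flagged if it looks defective. OpenCV, the file name "part.png", and the edge-density threshold are hypothetical choices for illustration only.

```python
# Minimal machine-vision sketch: flag a part if too many edge pixels suggest
# a surface defect. "part.png" and the 5% threshold are hypothetical.
import cv2

image = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)    # frame from an inspection camera
edges = cv2.Canny(image, threshold1=100, threshold2=200) # detect edges in the image

edge_ratio = cv2.countNonZero(edges) / edges.size
print("defect suspected" if edge_ratio > 0.05 else "part OK")
```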
- Multimodal Intelligence and Agents:
- Subset of artificial intelligence that integrates information from various modalities, such as text, images, audio, and video, to build more accurate and comprehensive AI models.
- Multimodal capabilities allow AI to interact with users in a more natural and intuitive way. The AI can see, hear, and speak, which means that users can provide input and receive responses in a variety of ways.
- An AI agent is a computational entity designed to act independently. It performs specific tasks autonomously by making decisions based on its environment, inputs, and a predefined goal. What separates an AI agent from an AI model is the ability to act. There are many different kinds of agents, such as reactive agents and proactive agents, and agents can operate in fixed or dynamic environments. More sophisticated applications involve agents that handle data in various formats, known as multimodal agents, and the deployment of multiple agents to tackle complex problems.
- Agentic AI:
- Agentic AI is an artificial intelligence system that can accomplish a specific goal with limited supervision. It consists of AI agents—machine learning models that mimic human decision-making to solve problems in real time. In a multiagent system, each agent performs a specific subtask required to reach the goal and their efforts are coordinated through AI orchestration.
- Unlike traditional AI models, which operate within predefined constraints and require human intervention, agentic AI exhibits autonomy, goal-driven behavior and adaptability. The term “agentic” refers to these models’ agency, or, their capacity to act independently and purposefully.
- Agentic AI builds on generative AI (gen AI) techniques by using large language models (LLMs) to function in dynamic environments. While generative models focus on creating content based on learned patterns, agentic AI extends this capability by applying generative outputs toward specific goals.
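The core agentic loop of perceiving, deciding, and acting toward a goal can be sketched as below. All names (the thermostat agent, temperature targets) are hypothetical illustrations, not part of the ZVEI white paper; a production-grade agent would typically delegate the "decide" step to an LLM or planning component.

```python
# Toy sketch of the agentic loop: perceive -> decide -> act until the goal is met.
class ThermostatAgent:
    def __init__(self, target_temp: float):
        self.target_temp = target_temp           # the goal the agent pursues

    def decide(self, observed_temp: float) -> str:
        """Autonomous decision based on the current observation and the goal."""
        if observed_temp < self.target_temp - 0.5:
            return "heat"
        if observed_temp > self.target_temp + 0.5:
            return "cool"
        return "hold"

def run(agent: ThermostatAgent, temps):
    for t in temps:                              # perceive the environment
        action = agent.decide(t)                 # decide without human intervention
        print(f"observed {t:.1f} C -> action: {action}")   # act

run(ThermostatAgent(target_temp=21.0), [18.2, 20.8, 22.9])
```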
- Edge AI Technology:
- Edge artificial intelligence refers to the deployment of AI algorithms and AI models directly on local edge devices such as sensors or Internet of Things (IoT) devices, which enables real-time data processing and analysis without constant reliance on cloud infrastructure.
- Simply stated, edge AI, or “AI on the edge”, refers to the combination of edge computing and artificial intelligence to execute machine learning tasks directly on interconnected edge devices. Edge computing allows for data to be stored close to the device location, and AI algorithms enable the data to be processed right on the network edge, with or without an internet connection. This facilitates the processing of data within milliseconds, providing real-time feedback.
- Self-driving cars, wearable devices, security cameras, and smart home appliances are among the technologies that leverage edge AI capabilities to promptly deliver real-time information to users when it is most essential.
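The edge-AI pattern, inference running locally on streaming sensor data instead of in the cloud, is sketched below. The "model" is a stand-in threshold rule and the sensor is simulated; on a real device the model might be a quantized neural network fed by an actual sensor driver.

```python
# Sketch of edge AI: decisions are made on the device, within milliseconds,
# without sending raw data to the cloud. Model and sensor are stand-ins.
import random
import time

def local_model(vibration: float) -> bool:
    """Stand-in for an on-device ML model: flag anomalous vibration."""
    return vibration > 0.8

def read_sensor() -> float:
    return random.random()        # placeholder for a real sensor driver

for _ in range(5):
    value = read_sensor()
    if local_model(value):        # decision made locally, at the network edge
        print(f"anomaly detected locally (vibration={value:.2f})")
    time.sleep(0.1)
```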
- High-Density AI:
- High-density AI refers to the concentration of AI computing power and storage within a compact physical space, often found in specialized data centers. This approach allows for increased computational capacity, faster training times, and the ability to handle complex simulations that would be impossible with traditional infrastructure.
- Explainable AI (XAI) and Human-Centered Explainable AI (HCXAI):
- Explainable AI (XAI) refers to methods for making AI model decisions understandable to humans, focusing on how the AI works, whereas Human-Centered Explainable AI (HCXAI) goes further by contextualizing those explanations to a user’s specific task and understanding needs. While XAI aims for technical transparency of the model, HCXAI emphasizes the human context, user relevance, and the broader implications of explanations, including fairness, trust, and ethical considerations.
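One common XAI technique, permutation importance, is sketched below: it estimates how much each input feature contributes to a model's predictions, which helps make the decision logic understandable. The scikit-learn dataset and model here are illustrative assumptions.

```python
# XAI sketch: rank input features by how much shuffling them hurts the model.
# Dataset and model are illustrative; the technique is permutation importance.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

data = load_breast_cancer()
model = GradientBoostingClassifier(random_state=0).fit(data.data, data.target)

result = permutation_importance(model, data.data, data.target,
                                n_repeats=5, random_state=0)
top = sorted(zip(data.feature_names, result.importances_mean),
             key=lambda pair: -pair[1])[:5]
for name, score in top:
    print(f"{name}: {score:.3f}")     # the five most influential features
```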
- Physical AI & Embodied AI:
- Physical AI refers to a branch of artificial intelligence that enables machines to perceive, understand, and interact with the physical world by directly processing data from a variety of sensors and actuators.
- Physical AI provides the overarching framework for creating autonomous systems that act intelligently in real-world settings. Embodied AI, as a subset, focuses on the sensory, decision-making, and interaction capabilities that enable these systems to function effectively in dynamic and unpredictable environments.
- Federated Learning and Reinforcement Learning:
- Federated Learning is a machine-learning technique where data stays where it is, and only the learned model updates are shared. “Training AI without sharing your data”.
- Reinforcement Learning is a type of AI where an agent learns by interacting with an environment and receiving rewards or penalties. “Learning by trial and error”.
- Federated Learning (FL) and Reinforcement Learning (RL) can be combined into a field called Federated Reinforcement Learning (FRL), where multiple agents learn collaboratively without sharing their raw data. In this approach, each agent trains its own RL policy locally and shares model updates, like parameters or gradients, with a central server. The server aggregates these updates to create a more robust, global model. FRL is used in applications like optimizing resource management in communication networks and enhancing the performance of autonomous systems by learning from diverse, distributed experiences while protecting privacy.
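The core federated-learning step, server-side averaging of locally trained models (federated averaging), can be sketched as below. The "models" are plain weight vectors and the local update rule is a stand-in for real training; only parameters, never raw data, leave the clients.

```python
# Sketch of federated averaging: clients train locally, the server averages
# the resulting parameters. Weights, data, and update rule are illustrative.
import numpy as np

def local_update(global_weights: np.ndarray, local_data: np.ndarray) -> np.ndarray:
    """Stand-in for local training: nudge weights toward the local data mean."""
    return global_weights + 0.1 * (local_data.mean(axis=0) - global_weights)

global_weights = np.zeros(3)
client_datasets = [np.random.rand(20, 3) for _ in range(4)]   # raw data stays on clients

for _ in range(5):
    client_weights = [local_update(global_weights, d) for d in client_datasets]
    global_weights = np.mean(client_weights, axis=0)          # server-side aggregation

print("aggregated global weights:", global_weights)
```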
- Hardware vs. Software vs. Firmware:
- Hardware is physical: It’s “real,” sometimes breaks, and eventually wears out.
- Since hardware is part of the “real” world, it all eventually wears out. Being a physical thing, it’s also possible to break it, drown it, overheat it, and otherwise expose it to the elements.
- Here are some examples of hardware:
- Smartphone
- Tablet
- Laptop
- Desktop computer
- Printer
- Flash drive
- Router
- Software is virtual: It can be copied, changed, and destroyed.
- Software is everything about your computer that isn’t hardware.
- Here are some examples of software:
- Operating systems like Windows 11 or iOS
- Web browsers
- Antivirus tools
- Adobe Photoshop
- Mobile apps
- Firmware is virtual: It’s software specifically designed for a piece of hardware.
- While not as common a term as hardware or software, firmware is everywhere—on your smartphone, your PC’s motherboard, your camera, your headphones, and even your TV remote control.
- Firmware is just a special kind of software that serves a very narrow purpose for a piece of hardware. While you might install and uninstall software on your computer or smartphone on a regular basis, you might only rarely, if ever, update the firmware on a device, and you would typically only do so if asked by the manufacturer, usually to fix a problem.
- Industry 4.0:
- Industry 4.0 has been defined as “a name for the current trend of automation and data exchange in manufacturing technologies, including cyber-physical systems, the Internet of things, cloud computing and cognitive computing and creating the smart factory”.
- Industry 4.0 aims at transforming the manufacturing and engineering sectors by introducing factories where cyber-physical systems communicate over the Internet of Things, assisting people and machinery to execute their tasks within the shortest time possible.
- Industry 4.0 technology helps you manage and optimize all aspects of your manufacturing processes and supply chain. It gives you access to the real-time data and insights you need to make smarter, faster decisions about your business, which can ultimately boost the efficiency and profitability of your entire operation.
- The Fourth Industrial Revolution (4IR) is a term coined in 2016 by Klaus Schwab, Founder and Executive Chairman of the World Economic Forum (WEF).
- 4 Industrial Revolutions:
- First Industrial Revolution: Coal in 1765.
- Second Industrial Revolution: Gas in 1870.
- Third Industrial Revolution: Electronics and Nuclear in 1969.
- Fourth Industrial Revolution: Internet and Renewable Energy in 2000.

- Industry 5.0:
- The Fifth Industrial Revolution, or 5IR, encompasses the notion of harmonious human–machine collaborations, with a specific focus on the well-being of the multiple stakeholders (i.e., society, companies, employees, customers)
- The term Industry 5.0 refers to people working alongside robots and smart machines. It’s about robots helping humans work better and faster by leveraging advanced technologies like the Internet of Things (IoT) and big data. It adds a personal human touch to the Industry 4.0 pillars of automation and efficiency.
- Industry 5.0 takes a sharp turn and directs attention to the human element. It also ‘reflects a shift from a focus on economic value to a focus on societal value, and a shift in focus from welfare to wellbeing’ (Forbes). Compared to Industry 4.0, Industry 5.0 is …
- Dedicated to both customer and employee experience
- Acknowledging social and economic challenges
- Putting great attention on human well-being and sustainability
- Providing ‘a vision of industry that aims beyond efficiency and productivity as the sole goals’ (European Commission)
- It complements the existing “Industry 4.0” approach by specifically putting research and innovation at the service of the transition to a sustainable, human-centric and resilient industry.
- Information Technology (IT) & Operational Technology (OT):
- Information Technology (IT):
- Refers to anything related to computer technology, including hardware and software. Your email, for example, falls under the IT umbrella. IT forms the technological backbone of most organizations and companies by managing data, communications, and business processes. These devices and programs have little autonomy and are updated frequently.
- Operational Technology (OT):
- Refers to the hardware and software used to change, monitor, or control physical devices, processes, and events within a company or organization. This form of technology is most commonly used in industrial settings, where these systems are engineered for safety, reliability, and precision control. An example of OT includes SCADA (Supervisory Control and Data Acquisition).
- => The main difference between OT and IT devices: OT devices control the physical world, while IT systems manage data.
- Digital Twin:
- Digital Twin is most commonly defined as a software representation of a physical asset, system or process designed to detect, prevent, predict, and optimize through real time analytics to deliver business value.
- A digital twin is a virtual representation of an object or system that spans its lifecycle, is updated from real-time data, and uses simulation, machine learning and reasoning to help decision-making.
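A minimal digital-twin sketch is shown below: a software object mirrors a physical asset from live telemetry and can be used to simulate "what if" scenarios offline. The class, field names, and the simple simulation rule are hypothetical, for illustration only.

```python
# Minimal digital-twin sketch: keep a virtual pump in sync with real-time data
# and use it for a rough what-if simulation. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class PumpTwin:
    speed_rpm: float = 0.0
    temperature_c: float = 20.0

    def update_from_telemetry(self, telemetry: dict) -> None:
        """Keep the twin in sync with real-time data from the physical asset."""
        self.speed_rpm = telemetry.get("speed_rpm", self.speed_rpm)
        self.temperature_c = telemetry.get("temperature_c", self.temperature_c)

    def simulate_speed_change(self, new_rpm: float) -> float:
        """Rough what-if simulation: estimate temperature at a different speed."""
        return self.temperature_c + 0.01 * (new_rpm - self.speed_rpm)

twin = PumpTwin()
twin.update_from_telemetry({"speed_rpm": 1450.0, "temperature_c": 63.5})
print("predicted temperature at 1800 rpm:", twin.simulate_speed_change(1800.0))
```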

