ABB and LandingAI unleash the power of generative AI for robotic vision
- Strategic investment secures ABB’s use of LandingAI’s vision AI capabilities, such as LandingLens™, for robot AI vision applications
- Pre-trained models, smart data workflows and no-code tools reduce training time by up to 80% and accelerate deployment in fast-moving sectors including logistics, healthcare, and food & beverage
- First-of-its-kind collaboration marks a major step towards ABB Robotics’ vision for Autonomous Versatile Robotics – AVR™
ABB Robotics has invested in California-based LandingAI to accelerate the transformation of vision AI, making it faster, more intuitive, and accessible to a broader range of users. This first-of-its-kind collaboration will integrate LandingAI’s vision AI capabilities, such as LandingLens™, into ABB Robotics’ own software suite, marking another milestone in ABB’s journey towards truly autonomous and versatile robots.
“This announcement is the latest in our decade-long journey to innovate and commercialize AI, benefitting our customers by enhancing robot versatility and autonomy to expand the use of robots beyond traditional manufacturing,” said Sami Atiya, President of ABB Robotics & Discrete Automation. “The demand for AI in robotics is driven by the need for greater flexibility, faster commissioning cycles and a shortage of the specialist skills needed to program and operate robots. Our collaboration with LandingAI will mean installation and deployment time is done in hours instead of weeks, allowing more businesses to automate smarter, faster and more efficiently.”

As part of the collaboration, ABB has made a venture capital investment through ABB Robotics Ventures, the strategic venture capital unit of ABB Robotics, which drives collaboration and investment in innovative early-stage companies shaping the future of robotics and automation. Financial details of the investment were not disclosed.
LandingAI’s LandingLens is a vision AI platform that enables the rapid training of vision AI systems to recognize and respond to objects, patterns or defects with no complex programming or AI expertise required.
Through this collaboration, ABB Robotics will reduce robot vision AI training and deployment time by up to 80 percent. Once deployed, system integrators and end users can retrain the AI for new scenarios on their own, unlocking a new level of versatility. This is a critical step in scaling robot adoption in dynamic environments beyond traditional manufacturing, especially in fast-moving sectors such as logistics, healthcare, and food and beverage. ABB is already piloting LandingAI’s technology and actively working to integrate it into existing vision AI applications, including item picking, sorting, depalletizing and quality inspection.
“AI is advancing quickly, creating many opportunities, but also requiring us to keep learning and adapting new skills,” said Dan Maloney, Chief Executive Officer of LandingAI. “By combining LandingAI’s vision AI capabilities with ABB’s robots and software, we can make automation more accessible. This makes it easier for businesses to deploy and scale intelligent robotic systems that are practical and useful.”
ABB Robotics is the only robotics company offering a fully integrated AI training tool within its software suite. It will be available alongside ABB’s powerful simulation and programming tool RobotStudio®, which features digital twin capabilities to further simplify commissioning.
Source: ABB
EMR Analysis
More information on ABB: See full profile on EMR Executive Services
More information on Morten Wierod (Chief Executive Officer and Member of the Group Executive Committee, ABB): See full profile on EMR Executive Services
More information on Timo Ihamuotila (Chief Financial Officer and Member of the Executive Committee, ABB): See full profile on EMR Executive Services
More information on Robotics & Discrete Automation Business Area by ABB: See full profile on EMR Executive Services
More information on Sami Atiya (President, Robotics & Discrete Automation Business Area and Member of the Executive Committee, ABB): See full profile on EMR Executive Services
More information on RobotStudio® by ABB: https://www.abb.com/global/en/areas/robotics/products/software/robotstudio-suite + Visualize your ideas and reduce commissioning time.
RobotStudio® is the world’s most popular offline programming and simulation tool for robotic applications. Based on the best-in-class virtual controller technology, RobotStudio suite gives you full confidence that what you see on your screen matches how the robot will move in real life. Enabling you to build, test and refine your robot installation in a virtual environment, this unique technology speeds up commissioning time and productivity by a magnitude.
More information on ABB Ventures by ABB: https://global.abb/group/en/technology/ventures + Strategic venture capital for breakthrough industrial technology startups
The ABB Group through its business-led venture capital investment framework, ABB Ventures, looks for breakthrough technology companies aligned with ABB’s goal to write the future of industrial electrification and automation. Since its formation in 2009, ABB Ventures has deployed around $450 million into 70 startups spanning a range of sectors including robotics, industrial IoT, AI/machine learning, energy transition, cybersecurity, sustainability, electric mobility, smart buildings, and distributed energy.
More information on Dr. Kurt Kaltenegger (Group Vice President, Head of ABB Technology Ventures, ABB): See full profile on EMR Executive Services
More information on ABB Robotics & Discrete Automation (RA) Ventures by ABB Ventures by ABB: https://global.abb/group/en/technology/ventures/robotics-and-discrete-automation-ventures
More information on Claudio E. P. Jordán (Investment Principal, Robotics & Discrete Automation, ABB Robotics & Discrete Automation (RA) Ventures, ABB Ventures, ABB): See full profile on EMR Executive Services
More information on LandingAI (Strategic Investment by ABB): https://landing.ai/ + LandingAI helps companies across all industries take advantage of their vast set of vision data to build, deploy and scale Visual AI solutions.
LandingAI™ delivers cutting-edge agentic visual AI technologies that empower customers to unlock the value of visual data. With LandingAI’s solutions, companies realize the value of AI and move AI projects from proof-of-concept to production.
LandingAI’s flagship product, LandingLens™, enables users to build, iterate, and deploy Visual AI solutions quickly and easily.
LandingAI is a pioneer in agentic vision AI technologies, including Agentic Document Extraction and Agentic Object Detection, which enhance the ability to process and understand visual data at scale, making sophisticated Visual AI tools more accessible and efficient.
Founded by Andrew Ng, co-founder of Coursera, founding lead of Google Brain, and former chief scientist at Baidu, LandingAI is uniquely positioned to lead the development of Visual AI that benefits all.
More information on Andrew Ng (Executive Chairman and Founder, LandingAI): https://landing.ai/about-us + https://www.linkedin.com/in/andrewyng/
More information on Dan Maloney (Chief Executive Officer, LandingAI): https://landing.ai/about-us + https://www.linkedin.com/in/danielwilliammaloney/
EMR Additional Notes:
- AI – Artificial Intelligence:
- Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems.
- As the hype around AI has accelerated, vendors have been scrambling to promote how their products and services use AI. Often what they refer to as AI is simply one component of AI, such as machine learning. AI requires a foundation of specialized hardware and software for writing and training machine learning algorithms. No one programming language is synonymous with AI, but several, including Python, R and Java, are popular.
- In general, AI systems work by ingesting large amounts of labeled training data, analyzing the data for correlations and patterns, and using these patterns to make predictions about future states. In this way, a chatbot that is fed examples of text chats can learn to produce lifelike exchanges with people, or an image recognition tool can learn to identify and describe objects in images by reviewing millions of examples.
- AI programming focuses on three cognitive skills: learning, reasoning and self-correction.
- The four types of artificial intelligence:
- Type 1: Reactive machines. These AI systems have no memory and are task specific. An example is Deep Blue, the IBM chess program that beat Garry Kasparov in the 1990s. Deep Blue can identify pieces on the chessboard and make predictions, but because it has no memory, it cannot use past experiences to inform future ones.
- Type 2: Limited memory. These AI systems have memory, so they can use past experiences to inform future decisions. Some of the decision-making functions in self-driving cars are designed this way.
- Type 3: Theory of mind. Theory of mind is a psychology term. When applied to AI, it means that the system would have the social intelligence to understand emotions. This type of AI will be able to infer human intentions and predict behavior, a necessary skill for AI systems to become integral members of human teams.
- Type 4: Self-awareness. In this category, AI systems have a sense of self, which gives them consciousness. Machines with self-awareness understand their own current state. This type of AI does not yet exist.
- Machine Learning (ML):
- Developed to mimic human intelligence, ML lets machines learn independently by ingesting vast amounts of data, applying statistical formulas, and detecting patterns.
- ML allows software applications to become more accurate at predicting outcomes without being explicitly programmed to do so.
- ML algorithms use historical data as input to predict new output values.
- Recommendation engines are a common use case for ML. Other uses include fraud detection, spam filtering, business process automation (BPA) and predictive maintenance.
- Classical ML is often categorized by how an algorithm learns to become more accurate in its predictions. There are four basic approaches: supervised learning, unsupervised learning, semi-supervised learning and reinforcement learning.
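The core loop described above — using historical data as input to predict new output values — can be sketched with a minimal supervised-learning example. The data and the least-squares fit below are purely illustrative and not tied to any ABB or LandingAI product:

```python
# Minimal supervised learning: fit a line y = w*x + b to historical
# (input, output) pairs, then predict an output for an unseen input.
# Pure Python, no ML library required.

def fit_linear(xs, ys):
    """Least-squares fit of y = w*x + b to historical (x, y) pairs."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance(x, y) divided by variance(x)
    w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - w * mean_x
    return w, b

# Hypothetical "historical data": machine load (x) vs. energy use (y)
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]

w, b = fit_linear(xs, ys)
prediction = w * 5.0 + b   # predict the output for a new, unseen input
```

Real ML systems use far richer models and far more data, but the pattern is the same: learn parameters from labeled history, then apply them to new inputs.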
- Deep Learning (DL):
- A subset of machine learning, Deep Learning enables much smarter results than were originally possible with ML; face recognition is a good example.
- DL makes use of layers of information processing, each gradually learning more and more complex representations of data. The early layers may learn about colors, the next ones about shapes, the following about combinations of those shapes, and finally actual objects. DL demonstrated a breakthrough in object recognition.
- DL is currently the most sophisticated AI architecture we have developed.
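The layer-by-layer idea above can be illustrated with a toy forward pass through three stacked layers. The weights are hand-picked stand-ins for values a real network would learn from data; the layer names are only a caricature of the colors-to-shapes-to-objects hierarchy:

```python
# Illustrative deep network: each layer transforms its input into a
# higher-level representation. Weights are fixed toy values here; in
# practice they are learned from labeled training data.

def relu(values):
    """Standard rectified-linear activation: clamp negatives to zero."""
    return [max(0.0, v) for v in values]

def layer(inputs, weights, biases):
    """One fully connected layer: out_j = relu(sum_i w[j][i]*x[i] + b[j])."""
    return relu([
        sum(w * x for w, x in zip(row, inputs)) + b
        for row, b in zip(weights, biases)
    ])

# Three stacked layers: raw pixels -> "edges" -> "shapes" -> object score
pixels = [0.2, 0.8, 0.5]
h1 = layer(pixels, [[1.0, -1.0, 0.5], [0.5, 0.5, -1.0]], [0.0, 0.1])
h2 = layer(h1, [[1.0, 1.0], [-1.0, 0.5]], [0.0, 0.0])
score = layer(h2, [[2.0, 1.0]], [-0.1])[0]  # final "object present" score
```

Training consists of adjusting those weight matrices so the final score matches labeled examples; the stacking itself is what lets each layer build on the representations of the one below.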
- Generative AI (GenAI):
- Generative AI technology generates outputs based on some kind of input – often a prompt supplied by a person. Some GenAI tools work in one medium, such as turning text inputs into text outputs, for example. With the public release of ChatGPT in late November 2022, the world at large was introduced to an AI app capable of creating text that sounded more authentic and less artificial than any previous generation of computer-crafted text.
- Small Language Models (SLM) and Large Language Models (LLM):
- Small Language Models (SLMs) are artificial intelligence (AI) models capable of processing, understanding and generating natural language content. As their name implies, SLMs are smaller in scale and scope than large language models (LLMs).
- LLM means Large Language Models — a type of machine learning/deep learning model that can perform a variety of natural language processing (NLP) and analysis tasks, including translating, classifying, and generating text; answering questions in a conversational manner; and identifying data patterns.
- For example, virtual assistants like Siri, Alexa, or Google Assistant use LLMs to process natural language queries and provide useful information or execute tasks such as setting reminders or controlling smart home devices.
- Computer Vision (CV) / Vision AI & Machine Vision (MV):
- Field of AI that enables computers to interpret and act on visual data (images, videos). It works by using deep learning models trained on large datasets to recognize patterns, objects, and context.
- One of the most well-known consumer examples is Google Translate, which can take an image of anything — from menus to signboards — and convert it into text that the program then translates into the user’s native language.
- Machine Vision (MV):
- Specific application for industrial settings, relying on cameras to analyze tasks in manufacturing, quality control, and worker safety. The key difference is that CV is a broader field for extracting information from various visual inputs, while MV is more focused on specific industrial tasks.
- Machine Vision is the ability of a computer to see; it employs one or more video cameras, analog-to-digital conversion and digital signal processing. The resulting data goes to a computer or robot controller. Machine Vision is similar in complexity to Voice Recognition.
- MV uses the latest AI technologies to give industrial equipment the ability to see and analyze tasks in smart manufacturing, quality control, and worker safety.
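A toy quality-control check makes the machine-vision idea concrete. The hand-set brightness threshold below stands in for the decision rule that a trained vision AI model would instead learn from labeled example images; the data is invented for illustration:

```python
# Toy machine-vision check: flag "defect" pixels in a tiny grayscale
# image (intensities 0-255) that fall below a brightness threshold,
# then decide pass/fail for quality control. A trained vision model
# learns this decision from labeled examples instead of a fixed rule.

def inspect(image, threshold=50, max_defects=2):
    """Return (defect_count, passed) for a 2D list of pixel intensities."""
    defects = sum(
        1 for row in image for pixel in row if pixel < threshold
    )
    return defects, defects <= max_defects

good_part = [[200, 210], [190, 205]]   # uniformly bright: no defects
bad_part = [[200, 30], [25, 10]]       # three dark spots: reject
```

Platforms like LandingLens exist precisely because thresholds like this break down on real imagery; labeling a few examples and letting the model learn the boundary is what removes the need for hand-coded rules.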
- Multimodal Intelligence and Agents:
- Subset of artificial intelligence that integrates information from various modalities, such as text, images, audio, and video, to build more accurate and comprehensive AI models.
- Multimodal capabilities allow AI to interact with users in a more natural and intuitive way. It can see, hear and speak, which means that users can provide input and receive responses in a variety of ways.
- An AI agent is a computational entity designed to act independently. It performs specific tasks autonomously by making decisions based on its environment, inputs, and a predefined goal. What separates an AI agent from an AI model is the ability to act. There are many different kinds of agents such as reactive agents and proactive agents. Agents can also act in fixed and dynamic environments. Additionally, more sophisticated applications of agents involve utilizing agents to handle data in various formats, known as multimodal agents and deploying multiple agents to tackle complex problems.
- Agentic AI:
- Agentic AI is an artificial intelligence system that can accomplish a specific goal with limited supervision. It consists of AI agents—machine learning models that mimic human decision-making to solve problems in real time. In a multiagent system, each agent performs a specific subtask required to reach the goal and their efforts are coordinated through AI orchestration.
- Unlike traditional AI models, which operate within predefined constraints and require human intervention, agentic AI exhibits autonomy, goal-driven behavior and adaptability. The term “agentic” refers to these models’ agency, or, their capacity to act independently and purposefully.
- Agentic AI builds on generative AI (gen AI) techniques by using large language models (LLMs) to function in dynamic environments. While generative models focus on creating content based on learned patterns, agentic AI extends this capability by applying generative outputs toward specific goals.
- Edge AI Technology:
- Edge artificial intelligence refers to the deployment of AI algorithms and AI models directly on local edge devices such as sensors or Internet of Things (IoT) devices, which enables real-time data processing and analysis without constant reliance on cloud infrastructure.
- Simply stated, edge AI, or “AI on the edge”, refers to the combination of edge computing and artificial intelligence to execute machine learning tasks directly on interconnected edge devices. Edge computing allows for data to be stored close to the device location, and AI algorithms enable the data to be processed right on the network edge, with or without an internet connection. This facilitates the processing of data within milliseconds, providing real-time feedback.
- Self-driving cars, wearable devices, security cameras, and smart home appliances are among the technologies that leverage edge AI capabilities to promptly deliver users with real-time information when it is most essential.
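The defining property of edge AI — every decision made on the device, with no cloud round trip — can be sketched as a local inference loop. The rule-based "model" below is a deliberately trivial stand-in for a compact trained model deployed to the edge:

```python
# Sketch of edge-style inference: a tiny "model" evaluated directly on
# the device, so each sensor reading is classified locally with no
# network call. A real deployment would run a compact trained model
# (e.g. a quantized neural network) in place of this hand-set rule.

def edge_classify(reading, limit=75.0):
    """Classify one sensor reading on-device: 'alert' or 'normal'."""
    return "alert" if reading > limit else "normal"

def process_stream(readings):
    """Process readings as they arrive; all decisions stay local."""
    return [edge_classify(r) for r in readings]

decisions = process_stream([62.0, 80.5, 71.2, 90.0])
```

Because nothing leaves the device, latency is bounded by local compute rather than network conditions, which is what makes millisecond-scale feedback possible.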
- High-Density AI:
- High-density AI refers to the concentration of AI computing power and storage within a compact physical space, often found in specialized data centers. This approach allows for increased computational capacity, faster training times, and the ability to handle complex simulations that would be impossible with traditional infrastructure.
- Autonomous Versatile Robotics:
- Autonomous Versatile Robotics refers to a new generation of independent, flexible, and intelligent mobile robots capable of planning and executing a wide range of tasks in real time without human intervention. Enabled by advances in generative AI, these robots go beyond pre-programmed routines to adapt to changing environments, switch between tasks seamlessly, and operate with increased efficiency and productivity.
- System Integrator:
- A systems integrator is an individual or business that builds computing systems for clients by combining hardware, software, networking and storage products from multiple vendors. Using a systems integrator, a company can align cheaper, preconfigured components and commercial off-the-shelf software to meet key business goals, as opposed to more expensive, customized implementations that may require original programming or manufacturing unique components.
- Hiring a systems integrator to combine various subsystems into an integrated offering can also simplify contracting and vendor management for the customer, who would otherwise need to purchase each subsystem separately and work with multiple vendors. Systems integration is, thus, both a procurement method and a technical activity.
- Commissioning:
- Commissioning ensures the system not only works but also works efficiently and effectively to meet its intended purpose. It is a quality assurance process that ensures a newly installed system is designed, installed, tested, and maintained to operate according to the owner’s requirements.
- It goes beyond a simple installation. Commissioning is a formal, documented process that involves several key steps:
- Pre-Installation.
- Installation Verification.
- Functional Performance Testing.
- Documentation & Training.