Graybar – Putting people first: the human-centered future of AI in construction


By Edward Fenton, Vice President, AI and Digital Transformation, Graybar


In an era where artificial intelligence (AI) is reshaping industries at record speed, it’s easy to get swept up in the hype and forget to ask the most important question: What do customers want? At Graybar, our goal isn’t to force new behaviors or push customers into digital channels they didn’t ask for. Instead, we’re focused on putting AI to work behind the scenes, where it can automate processes and empower our people to provide faster, smarter service without compromising the human touch that has defined our customer and manufacturer relationships for more than 150 years.

One of Graybar’s guiding principles in applying AI is this: The customer should never feel the burden of our innovation. If a customer is used to calling in their order or sending an email to our rep, we want them to keep doing that. We’ll take care of the rest. AI helps us route, respond, and fulfill requests faster and more accurately without asking our customers to change a thing on their end.

This philosophy ensures that innovation enhances the customer experience rather than complicating it.

Like any tool, AI is only valuable if it solves a real problem. Based on our experience at Graybar, below are the five most important things contractors, suppliers, and industry participants should keep in mind when considering AI solutions.

1. Start With a Real Business Case

If you can’t tie your AI investment to a specific business outcome, don’t do it. Too many companies chase shiny objects. At Graybar, we ask one question before deploying any AI initiative: Is this solving a real problem for us and our customers? If not, it is not worth doing.

2. Think Small to Win Big

Avoid multi-year, mega-projects with long roadmaps and uncertain payoffs. Instead, break your AI strategy into small, focused projects. This agile, test-and-learn approach is how real progress happens in today’s quickly evolving tech landscape.

3. Always Listen to Your Customer

Don’t assume you know what your customers want. Ask them. Some of the worst digital investments happen when companies solve for internal convenience rather than customer needs. If your customers are happy placing orders by phone, don’t force them to a website. Use AI to make the phone process smarter, not obsolete.

4. Be Ready to Learn and Pivot Fast

AI technology is evolving rapidly. Some experts say core AI capabilities double every 70 days, meaning today’s trending solution might be outdated in two months. At Graybar, we avoid long-term technology contracts in favor of usage-based or month-to-month billing models. That flexibility gives us the freedom to move with the market.

5. People Still Matter Most

AI can accelerate decisions and automate tasks, but it can’t build trust. That has been the role of our people for 150 years, and it always will be. Our approach to AI is simple: Use it to empower people, not replace them. That applies to our employees and our customers alike.

After all, AI isn’t about replacing people; it’s about equipping them with smarter tools. At Graybar, we see technology as a force for efficiency, not a substitute for relationships. That’s why our AI strategy starts with one fundamental belief: Keep the human interaction where it matters and automate the rest.

If we do our job right, you won’t even notice the AI behind the scenes. You’ll just notice that working with Graybar feels as easy and efficient as it always has.


Source: Graybar

EMR Analysis

More information on Graybar: See the full profile on EMR Executive Services

More information on Kathleen M. Mazzarella (Chairman, President and Chief Executive Officer, Graybar): See the full profile on EMR Executive Services


More information on Edward Fenton (Vice President, AI and Digital Transformation, Graybar): See the full profile on EMR Executive Services

EMR Additional Notes:

  • AI – Artificial Intelligence:
    • Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems.
    • As the hype around AI has accelerated, vendors have been scrambling to promote how their products and services use AI. Often what they refer to as AI is simply one component of AI, such as machine learning. AI requires a foundation of specialized hardware and software for writing and training machine learning algorithms. No one programming language is synonymous with AI, but several, including Python, R and Java, are popular.
    • In general, AI systems work by ingesting large amounts of labeled training data, analyzing the data for correlations and patterns, and using these patterns to make predictions about future states. In this way, a chatbot that is fed examples of text chats can learn to produce lifelike exchanges with people, or an image recognition tool can learn to identify and describe objects in images by reviewing millions of examples.
    • AI programming focuses on three cognitive skills: learning, reasoning and self-correction.
    • The four types of artificial intelligence:
      • Type 1: Reactive machines. These AI systems have no memory and are task specific. An example is Deep Blue, the IBM chess program that beat Garry Kasparov in the 1990s. Deep Blue can identify pieces on the chessboard and make predictions, but because it has no memory, it cannot use past experiences to inform future ones.
      • Type 2: Limited memory. These AI systems have memory, so they can use past experiences to inform future decisions. Some of the decision-making functions in self-driving cars are designed this way.
      • Type 3: Theory of mind. Theory of mind is a psychology term. When applied to AI, it means that the system would have the social intelligence to understand emotions. This type of AI will be able to infer human intentions and predict behavior, a necessary skill for AI systems to become integral members of human teams.
      • Type 4: Self-awareness. In this category, AI systems have a sense of self, which gives them consciousness. Machines with self-awareness understand their own current state. This type of AI does not yet exist.
    • Machine Learning (ML):
      • Developed to mimic human intelligence, ML lets machines learn independently by ingesting vast amounts of data, applying statistical formulas, and detecting patterns.
      • ML allows software applications to become more accurate at predicting outcomes without being explicitly programmed to do so.
      • ML algorithms use historical data as input to predict new output values.
      • Recommendation engines are a common use case for ML. Other uses include fraud detection, spam filtering, business process automation (BPA) and predictive maintenance.
      • Classical ML is often categorized by how an algorithm learns to become more accurate in its predictions. There are four basic approaches: supervised learning, unsupervised learning, semi-supervised learning and reinforcement learning. The sketch below illustrates the supervised case.
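
For illustration, here is a minimal supervised-learning sketch in Python. scikit-learn, the toy order features, and the "shipped same day" label are illustrative assumptions of ours, not anything named in these notes: historical records go in, and a learned model predicts outcomes for new inputs.

```python
# Minimal supervised-learning sketch: historical data in, predictions out.
# The features and label below are hypothetical, for illustration only.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical historical records: [order_value, line_items, is_repeat_customer]
X = [[120.0, 3, 1], [15.0, 1, 0], [640.0, 9, 1], [40.0, 2, 0],
     [300.0, 5, 1], [22.0, 1, 1], [500.0, 7, 0], [80.0, 2, 0]]
y = [1, 0, 1, 0, 1, 0, 1, 0]  # label learned from history, e.g. "shipped same day"

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # learn patterns from labeled data
print(model.predict(X_test))  # predict output values for unseen inputs
```
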
    • Deep Learning (DL):
      • A subset of machine learning, Deep Learning enabled much smarter results than were originally possible with ML. Face recognition is a good example.
      • DL makes use of layers of information processing, each gradually learning more and more complex representations of data. The early layers may learn about colors, the next ones about shapes, the following about combinations of those shapes, and finally actual objects. DL demonstrated a breakthrough in object recognition.
      • DL is currently the most sophisticated AI architecture we have developed.
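
A sketch of those layers in PyTorch (the framework choice is ours; the notes name none). Each stacked layer builds on the previous one's representation, roughly mirroring the colors-to-shapes-to-objects progression described above:

```python
# Deep-learning sketch: stacked layers, each learning a richer representation.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # early layer: colors, edges
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # deeper layer: shapes
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 10),                            # final layer: object classes
)

fake_image = torch.randn(1, 3, 64, 64)  # one random 64x64 RGB "image"
print(model(fake_image).shape)          # torch.Size([1, 10]): one score per class
```
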
    • Generative AI (GenAI):
      • Generative AI technology generates outputs based on some kind of input – often a prompt supplied by a person. Some GenAI tools work in a single medium, such as turning text inputs into text outputs. With the public release of ChatGPT in late November 2022, the world at large was introduced to an AI app capable of creating text that sounded more authentic and less artificial than any previous generation of computer-crafted text.
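
A minimal generative-AI sketch: a text prompt in, generated text out. The Hugging Face transformers library and the small gpt2 checkpoint are stand-ins we chose for illustration; production systems use far larger models:

```python
# Generative-AI sketch: the model continues a prompt supplied by a person.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # small illustrative model
prompt = "Artificial intelligence in distribution will"
result = generator(prompt, max_new_tokens=40)
print(result[0]["generated_text"])
```
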
    • Small Language Models (SLM) and Large Language Models (LLM):
      • Small Language Models (SLMs) are artificial intelligence (AI) models capable of processing, understanding and generating natural language content. As their name implies, SLMs are smaller in scale and scope than large language models (LLMs).
      • LLM stands for Large Language Model: a type of machine learning/deep learning model that can perform a variety of natural language processing (NLP) and analysis tasks, including translating, classifying, and generating text; answering questions in a conversational manner; and identifying data patterns.
      • For example, virtual assistants like Siri, Alexa, or Google Assistant use LLMs to process natural language queries and provide useful information or execute tasks such as setting reminders or controlling smart home devices.
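
A companion sketch for the SLM side: flan-t5-small, a compact model with well under a hundred million parameters, handling a natural-language task in the same pattern a virtual assistant applies at much larger scale. The model and task are illustrative choices of ours:

```python
# Small-language-model sketch: a compact model processes a natural-language query.
from transformers import pipeline

assistant = pipeline("text2text-generation", model="google/flan-t5-small")
query = "Translate to German: Where is the nearest warehouse?"
print(assistant(query)[0]["generated_text"])
```
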
    • Computer Vision (CV) / Vision AI & Machine Vision (MV):
      • Field of AI that enables computers to interpret and act on visual data (images, videos). It works by using deep learning models trained on large datasets to recognize patterns, objects, and context.
      • The most well-known case of this today is Google Translate, which can take an image of anything — from menus to signboards — and convert it into text that the program then translates into the user’s native language.
      • Machine Vision (MV):
        • Specific application for industrial settings, relying on cameras to analyze tasks in manufacturing, quality control, and worker safety. The key difference is that CV is a broader field for extracting information from various visual inputs, while MV is more focused on specific industrial tasks.
        • Machine Vision is the ability of a computer to see; it employs one or more video cameras, analog-to-digital conversion and digital signal processing. The resulting data goes to a computer or robot controller. Machine Vision is similar in complexity to Voice Recognition.
        • MV uses the latest AI technologies to give industrial equipment the ability to see and analyze tasks in smart manufacturing, quality control, and worker safety.
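
A toy machine-vision sketch of the quality-control idea: a synthetic camera frame is thresholded and checked for a dark defect blob. The frame, threshold, and pass/fail cutoff are all made-up illustrative values:

```python
# Machine-vision sketch: threshold a frame and count defect pixels.
import numpy as np
import cv2

frame = np.full((100, 100), 200, dtype=np.uint8)  # bright, uniform "good" part
frame[40:55, 40:55] = 30                          # dark square: simulated defect

_, mask = cv2.threshold(frame, 100, 255, cv2.THRESH_BINARY_INV)  # dark pixels -> white
defect_pixels = cv2.countNonZero(mask)
print("FAIL" if defect_pixels > 50 else "PASS", f"({defect_pixels} defect pixels)")
```
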
    • Multimodal Intelligence and Agents:
      • Subset of artificial intelligence that integrates information from various modalities, such as text, images, audio, and video, to build more accurate and comprehensive AI models.
      • Multimodal capabilities allow AI to interact with users in a more natural and intuitive way. It can see, hear and speak, which means users can provide input and receive responses in a variety of ways.
      • An AI agent is a computational entity designed to act independently. It performs specific tasks autonomously by making decisions based on its environment, inputs, and a predefined goal. What separates an AI agent from an AI model is the ability to act. There are many different kinds of agents, such as reactive agents and proactive agents, and agents can act in fixed or dynamic environments. More sophisticated applications involve agents that handle data in various formats (multimodal agents) and multiple agents deployed together to tackle complex problems. A minimal reactive agent is sketched below.
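
The sketch below shows the simplest case, a reactive agent: each observation maps directly to an action, with no memory or plan. The thermostat scenario and thresholds are hypothetical:

```python
# Reactive-agent sketch: observation in, action out, no memory of the past.
def reactive_thermostat(temperature_c: float) -> str:
    if temperature_c < 19.0:
        return "heat_on"
    if temperature_c > 24.0:
        return "cool_on"
    return "idle"

for reading in [17.5, 21.0, 26.2]:
    print(reading, "->", reactive_thermostat(reading))
```
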
    • Agentic AI:
      • Agentic AI is an artificial intelligence system that can accomplish a specific goal with limited supervision. It consists of AI agents—machine learning models that mimic human decision-making to solve problems in real time. In a multiagent system, each agent performs a specific subtask required to reach the goal and their efforts are coordinated through AI orchestration.
      • Unlike traditional AI models, which operate within predefined constraints and require human intervention, agentic AI exhibits autonomy, goal-driven behavior and adaptability. The term “agentic” refers to these models’ agency: their capacity to act independently and purposefully.
      • Agentic AI builds on generative AI (gen AI) techniques by using large language models (LLMs) to function in dynamic environments. While generative models focus on creating content based on learned patterns, agentic AI extends this capability by applying generative outputs toward specific goals.
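
A sketch of the goal-driven loop that distinguishes agentic AI: observe, decide, act, repeat until the goal is met. The decide() rule below stands in for the LLM reasoning step a real system would use; the inventory scenario is hypothetical and reflects no real orchestration framework:

```python
# Agentic-AI sketch: a loop that acts autonomously toward a predefined goal.
def decide(stock: int, goal: int) -> str:
    # A real agentic system would let an LLM choose the next action;
    # a simple rule stands in for that reasoning step here.
    return "reorder" if stock < goal else "done"

def act(action: str, stock: int) -> int:
    if action == "reorder":
        print(f"stock={stock}: placing replenishment order")  # e.g. a tool call
        return stock + 25
    return stock

stock, goal = 10, 50
while (action := decide(stock, goal)) != "done":
    stock = act(action, stock)
print(f"goal reached: stock={stock}")
```
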
    • Edge AI Technology:
      • Edge artificial intelligence refers to the deployment of AI algorithms and AI models directly on local edge devices such as sensors or Internet of Things (IoT) devices, which enables real-time data processing and analysis without constant reliance on cloud infrastructure.
      • Simply stated, edge AI, or “AI on the edge”, refers to the combination of edge computing and artificial intelligence to execute machine learning tasks directly on interconnected edge devices. Edge computing allows for data to be stored close to the device location, and AI algorithms enable the data to be processed right on the network edge, with or without an internet connection. This facilitates the processing of data within milliseconds, providing real-time feedback.
      • Self-driving cars, wearable devices, security cameras, and smart home appliances are among the technologies that leverage edge AI to deliver real-time information to users when it is most essential.
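
A sketch of the on-device idea: a tiny pre-trained model scores a sensor reading locally, in well under a millisecond, with no cloud round-trip. The weights and reading are made up for illustration:

```python
# Edge-AI sketch: inference runs on the device, next to the sensor.
import time
import numpy as np

weights = np.array([0.8, -0.5, 1.2])  # hypothetical trained parameters
bias = -0.3

def on_device_inference(reading: np.ndarray) -> float:
    return 1.0 / (1.0 + np.exp(-(reading @ weights + bias)))  # logistic score

reading = np.array([0.9, 0.1, 0.7])
start = time.perf_counter()
score = on_device_inference(reading)
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"anomaly score {score:.3f} in {elapsed_ms:.3f} ms, no network required")
```
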
    • High-Density AI: 
      • High-density AI refers to the concentration of AI computing power and storage within a compact physical space, often found in specialized data centers. This approach allows for increased computational capacity, faster training times, and the ability to handle complex simulations that would be impossible with traditional infrastructure.
    • Explainable AI (XAI) and Human-Centered Explainable AI (HCXAI): 
      • Explainable AI (XAI) refers to methods for making AI model decisions understandable to humans, focusing on how the AI works, whereas Human-Centered Explainable AI (HCXAI) goes further by contextualizing those explanations to a user’s specific task and understanding needs. While XAI aims for technical transparency of the model, HCXAI centers on the human context: user relevance and the broader implications of explanations, including fairness, trust, and ethical considerations.
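
One common XAI technique, sketched here with scikit-learn’s permutation importance: shuffling one feature at a time and measuring how much predictions degrade reveals which inputs the model actually relies on. The data are synthetic, with feature 0 made dominant by construction:

```python
# XAI sketch: permutation importance explains which features drive the model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)  # feature 0 dominates by design

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in zip(["feature_0", "feature_1", "feature_2"], result.importances_mean):
    print(f"{name}: importance {score:.3f}")
```
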