Schneider Electric – Schneider Electric launches new data center solutions to meet challenges of high-density AI and accelerated compute applications

Schneider Electric

  • Innovative prefabricated data center architecture provides critical IT infrastructure for high-density computing clusters
  • New rack PDUs and rack systems are built for increased size and weight support, and feature direct-to-chip liquid cooling
  • Schneider Electric launches new Open Compute Project (OCP) inspired rack system to support NVIDIA MGX architecture

 

Schneider Electric, the leader in the digital transformation of energy management and automation, today announced new data center solutions specifically engineered to meet the intensive demands of next-generation AI cluster architectures. Evolving its EcoStruxure™ Data Center Solutions portfolio, Schneider Electric introduced a Prefabricated Modular EcoStruxure Pod Data Center solution that consolidates infrastructure for liquid cooling, high-power busway and high-density NetShelter Racks. In addition, EcoStruxure Rack Solutions incorporate detailed rack configurations and frameworks designed to accelerate High Performance Computing (HPC) and AI data center deployments. The new EcoStruxure Pod Data Center and EcoStruxure Rack Solutions are now available globally.

Organizations are deploying AI clusters and grappling with extreme rack power densities, which are projected to reach 1 MW and beyond. Schneider Electric’s new lineup equips customers with integrated, data-validated, and easily scaled white space solutions that address new challenges in pod and rack design, power distribution and thermal management.

“The sheer power and density required for AI clusters create bottlenecks that demand a new approach to data center architecture,” said Himamshu Prasad, Senior Vice President of EcoStruxure IT, Transactional & Edge and Energy Storage Center of Excellence at Schneider Electric. “Customers need integrated infrastructure solutions that not only handle extreme thermal loads and dynamic power profiles but also deploy rapidly, scale predictably, and operate efficiently and sustainably. Our innovative next-generation EcoStruxure solutions that support NVIDIA technology address these critical requirements head on.”

 

New Product Overview  

  • Prefabricated Modular EcoStruxure Pod Data Center: Prefabricated, scalable pod architecture enables operators to deploy high-density racks, supporting pods up to 1MW and beyond, at scale. Engineered-to-order, the new pod infrastructure offers flexibility and supports liquid cooling, power busway, complex cabling, as well as hot aisle containment, InRow and rear door heat exchanger cooling architectures. The Prefabricated Modular EcoStruxure Pod Data Center is now shipping pre-designed and pre-assembled with all components for rapid deployment to support high-density workloads.
  • EcoStruxure Rack Solutions: These reliable, high-density rack systems adapt to EIA, ORV3 and NVIDIA MGX modular design standards approved by leading IT chip and server manufacturers. Configurations accommodate a wide array of power and cooling distribution schemes and employ Motivair by Schneider Electric in-rack liquid cooling, as well as new and expanded rack and power distribution products, including:
    • NetShelter SX Advanced Enclosure: This new line features taller, deeper, and stronger racks to support increased weight, cabling and infrastructure. NetShelter SX Advanced features a reinforced shipload rating and is safeguarded with shock packaging, ensuring secure transport of AI servers and liquid cooling systems.
    • NetShelter Rack PDU Advanced: These power distribution units have been updated to support the high-current power needs of AI servers. Designed for efficient rack layouts, the NetShelter Rack PDU Advanced offers compact vertical and horizontal models with higher counts of dedicated circuits. Intelligent operational features now enabled by Schneider Electric’s Network Management Card enhance security and provide seamless integration to EcoStruxure IT.
    • NetShelter Open Architecture: This Open Compute Project (OCP) inspired rack architecture is available as a configure-to-order solution and includes open rack standards, power shelf and in-rack busbar. As part of this, a new Schneider Electric rack system has also been developed to support the NVIDIA GB200 NVL72 system that utilizes the NVIDIA MGX architecture in its rack design, integrating Schneider Electric into NVIDIA’s HGX and MGX ecosystems for the first time.

“Schneider Electric’s innovative solutions provide the reliable, scalable infrastructure our customers need to accelerate their AI initiatives,” said Vladimir Troy, vice president of data center engineering, operations, enterprise software and cloud services at NVIDIA. “Together, we’re addressing the rapidly growing demands of AI factories — from kilowatt- to megawatt-scale racks — and delivering future-proof solutions that maximize scalability, density and efficiency.”

 

The new solutions and suite of engineered data center reference designs equip data center operators and Schneider Electric’s partner ecosystem with the infrastructure and information needed to deploy powerful AI clusters faster and more reliably while addressing common barriers to adoption, including: 

  • Reliable power and cooling for AI workloads
  • Deployment complexity and risk
  • Speed to market and supply chain resilience
  • Skills gap in managing advanced infrastructure

These enhanced EcoStruxure offerings add to Schneider Electric’s robust line of fully integrated, end-to-end AI infrastructure solutions — spanning advanced hardware, intelligent software, services such as EcoCare™ and EcoConsult for Data Centers, and strategic industry partnerships with key IT players. Schneider Electric is the partner of choice for building efficient, resilient, scalable, and AI-optimized data centers. 

 

EMR Analysis

More information on Schneider Electric: See the full profile on EMR Executive Services

More information on Olivier Blum (Chief Executive Officer, Schneider Electric): See the full profile on EMR Executive Services

More information on EcoStruxure™ by Schneider Electric: https://www.se.com/ww/en/work/campaign/innovation/overview.jsp + EcoStruxure is Schneider Electric’s IoT-enabled, plug-and-play, open, interoperable architecture and platform, in Homes, Buildings, Data Centers, Infrastructure and Industries. Innovation at Every Level from Connected Products to Edge Control, and Apps, Analytics and Services.

  • 45,000+ Developers and system integrators
  • 650,000+ Service providers and partners
  • 480,000 Sites deployed

More information on EcoStruxure™ Data Center Solutions by Schneider Electric: https://www.se.com/ww/en/work/products/master-ranges/ecostruxure-data-center-solutions/ + EcoStruxure Data Center Solutions bring together power, cooling, racks, and management systems to support deployment of IT equipment in all environments from small Edge applications to large Cloud data centers. 

More information on EcoStruxure™ Pod and Rack Infrastructure by Schneider Electric: https://www.se.com/ww/en/work/solutions/data-centers-and-networks/pod/ + https://cloud.go.se.com/WW_202505_AIDCAIWhitespacePODInfrastructure_CNT_NONE_EDC_SP_ALLSEG_AWENG_NA + Rack-ready system to deploy IT at scale in increments of 8-to-12 racks. Its backbone is a freestanding structure that supports necessary IT infrastructure, providing air containment for all rack types. 

EcoStruxure™ Pod Data Center streamlines AI cluster deployment with pre-engineered, ready-to-integrate architecture. EcoStruxure™ Rack Solutions deliver precision optimization—right down to the chip—ensuring faster deployment, greater efficiency, and seamless scalability.

More information on Himamshu Prasad (Senior Vice President, EcoStruxure IT, Transactional & Edge and Energy Storage Center of Excellence, Schneider Electric): See the full profile on EMR Executive Services

More information on NetShelter Racks and Enclosures by APC by Schneider Electric: https://www.apc.com/us/en/product-range/61821-netshelter-sx-enclosures/#products + high-performance IT Rack for data centers, server rooms & wiring closets.

More information on EcoStruxure™ IT by Schneider Electric: https://www.se.com/ww/en/work/solutions/for-business/data-centers-and-networks/dcim-software/what-is-ecostruxure-it.jsp + Our vendor-neutral data center infrastructure management (DCIM) solution enables resilient, secure, and sustainable IT data centers. It offers business continuity with secure monitoring, management, planning, and modeling from a single IT rack to hyper-scale IT, on-premises, in the cloud, and at the edge. Software services help implement the value of EcoStruxure IT solutions to drive business growth and achieve desired results.

More information on EcoCare™ by Schneider Electric: https://www.se.com/us/en/work/services/assets-and-systems-services/ecocare/ + EcoCare, a member-based service plan, helps you identify issues before they become a problem. Keep operations running at peak performance with 24/7 proactive monitoring and alarming, asset management for increased equipment uptime, and expert priority support (both onsite and remote) for fast response.

More information on EcoConsult by Schneider Electric: https://www.se.com/ww/en/work/services/assets-and-systems-services/ecoconsult/ + Consulting services for electrical and automation systems. Get hold of Schneider Electric consultants who can audit, evaluate, and map your electrical and automation assets and systems. With the best-in-class software and digital technologies we optimize and digitize your assets and systems, so you don’t have to.

 

More information on Motivair Corporation by Schneider Electric: https://www.motivaircorp.com/ + Our focus is cooling industries. Our Business is Cooling Yours.™

Motivair Corporation is a leading global provider of advanced liquid cooling solutions designed to meet the greatest thermal challenges of modern computing technology. As a trusted partner of leading silicon manufacturers and server OEMs, Motivair delivers cooling technology that enables breakthroughs in artificial intelligence and high-performance computing, as well as increased performance and reliability for colocation and hyperscale data centers. Motivair provides customers with a comprehensive end-to-end portfolio from a single source, offering products, systems, and services that support innovators in business, technology, and science.

Headquartered in Buffalo, NY, Motivair was founded in 1988 and currently has over 150 employees. Leveraging its strong engineering competency and deep domain expertise, Motivair has a world class range of offers including Coolant Distribution Units (CDUs), Rear Door Heat Exchangers (RDHx), Cold Plates and Heat Dissipation Units (HDUs), alongside Chillers for thermal management. Motivair provides its customers with a top-tier portfolio to meet the thermal challenges of modern computing technology.

More information on Rich Whitmore (President & Chief Executive Officer, Motivair Corporation, Schneider Electric): See the full profile on EMR Executive Services

 

 

More information on NVIDIA: https://www.nvidia.com/en-us/ + NVIDIA pioneered accelerated computing to tackle challenges no one else can solve. Our work in AI and digital twins is transforming the world’s largest industries and profoundly impacting society.

Founded in 1993, NVIDIA is the world leader in accelerated computing. Our invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined computer graphics, revolutionized accelerated computing, ignited the era of modern AI, and is fueling industrial digitalization across markets. NVIDIA is now a full-stack computing infrastructure company with data-center-scale offerings that are reshaping industry.

More information on Jensen Huang (Chief Executive Officer, NVIDIA): https://www.nvidia.com/en-us/about-nvidia/board-of-directors/jensen-huang/ + https://www.linkedin.com/in/jenhsunhuang/ 

More information on Vladimir Troy (Vice President, Data Center Engineering, Operations, Enterprise Software and Cloud Services, NVIDIA): https://www.linkedin.com/in/vtroy/ 

More information on MGX™ by NVIDIA: https://www.nvidia.com/en-us/data-center/products/mgx/ + Bring accelerated computing into any data center with modular server designs. 

With MGX, OEM and ODM partners can build tailored solutions for different use cases while saving development resources and reducing time to market. The modular reference architecture allows for different configurations of GPUs, CPUs, and DPUs—including NVIDIA Grace™, x86, or other Arm® CPU servers—to accelerate diverse enterprise data center workloads.

 

 

More information on OCP (Open Compute Project): https://www.opencompute.org/ + The Open Compute Project Foundation (OCP) was initiated in 2011 with a mission to apply the benefits of open source and open collaboration to hardware and rapidly increase the pace of innovation in, near and around the data center.

The Open Compute Project (OCP) is a collaborative community focused on redesigning hardware technology to efficiently support the growing demands on compute infrastructure.

OCP’s collaboration model is being applied beyond the data center, helping to advance the telecom industry & EDGE infrastructure.

EMR Additional Notes:

  • AI – Artificial Intelligence:
    • Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems.
    • As the hype around AI has accelerated, vendors have been scrambling to promote how their products and services use AI. Often what they refer to as AI is simply one component of AI, such as machine learning. AI requires a foundation of specialized hardware and software for writing and training machine learning algorithms. No one programming language is synonymous with AI, but a few, including Python, R and Java, are popular.
    • In general, AI systems work by ingesting large amounts of labeled training data, analyzing the data for correlations and patterns, and using these patterns to make predictions about future states. In this way, a chatbot that is fed examples of text chats can learn to produce lifelike exchanges with people, or an image recognition tool can learn to identify and describe objects in images by reviewing millions of examples.
    • AI programming focuses on three cognitive skills: learning, reasoning and self-correction.
    • The four types of artificial intelligence:
      • Type 1: Reactive machines. These AI systems have no memory and are task specific. An example is Deep Blue, the IBM chess program that beat Garry Kasparov in the 1990s. Deep Blue can identify pieces on the chessboard and make predictions, but because it has no memory, it cannot use past experiences to inform future ones.
      • Type 2: Limited memory. These AI systems have memory, so they can use past experiences to inform future decisions. Some of the decision-making functions in self-driving cars are designed this way.
      • Type 3: Theory of mind. Theory of mind is a psychology term. When applied to AI, it means that the system would have the social intelligence to understand emotions. This type of AI will be able to infer human intentions and predict behavior, a necessary skill for AI systems to become integral members of human teams.
      • Type 4: Self-awareness. In this category, AI systems have a sense of self, which gives them consciousness. Machines with self-awareness understand their own current state. This type of AI does not yet exist.
    • Machine Learning (ML):
      • Developed to mimic human intelligence, ML lets machines learn independently by ingesting vast amounts of data, applying statistical formulas, and detecting patterns.
      • ML allows software applications to become more accurate at predicting outcomes without being explicitly programmed to do so.
      • ML algorithms use historical data as input to predict new output values.
      • Recommendation engines are a common use case for ML. Other uses include fraud detection, spam filtering, business process automation (BPA) and predictive maintenance.
      • Classical ML is often categorized by how an algorithm learns to become more accurate in its predictions. There are four basic approaches: supervised learning, unsupervised learning, semi-supervised learning and reinforcement learning (a short supervised-learning sketch appears below).
    • Deep Learning (DL):
      • Subset of machine learning, Deep Learning enabled much smarter results than were originally possible with ML. Face recognition is a good example.
      • DL makes use of layers of information processing, each gradually learning more and more complex representations of data. The early layers may learn about colors, the next ones about shapes, the following about combinations of those shapes, and finally actual objects. DL demonstrated a breakthrough in object recognition.
      • DL is currently the most sophisticated AI architecture we have developed.
    • Computer Vision (CV):
      • Computer vision is a field of artificial intelligence that enables computers and systems to derive meaningful information from digital images, videos and other visual inputs — and take actions or make recommendations based on that information.
      • The most well-known case of this today is Google’s Translate, which can take an image of anything — from menus to signboards — and convert it into text that the program then translates into the user’s native language.
    • Machine Vision (MV):
      • Machine Vision is the ability of a computer to see; it employs one or more video cameras, analog-to-digital conversion and digital signal processing. The resulting data goes to a computer or robot controller. Machine Vision is similar in complexity to Voice Recognition.
      • MV uses the latest AI technologies to give industrial equipment the ability to see and analyze tasks in smart manufacturing, quality control, and worker safety.
      • Computer Vision systems can gain valuable information from images, videos, and other visuals, whereas Machine Vision systems rely on the image captured by the system’s camera. Another difference is that Computer Vision systems are commonly used to extract and use as much data as possible about an object, while Machine Vision systems typically extract only the data needed for a specific task.
    • Generative AI (GenAI):
      • Generative AI technology generates outputs based on some kind of input – often a prompt supplied by a person. Some GenAI tools work in one medium, such as turning text inputs into text outputs, for example. With the public release of ChatGPT in late November 2022, the world at large was introduced to an AI app capable of creating text that sounded more authentic and less artificial than any previous generation of computer-crafted text.
[Figures: examples of successful generative AI applications; the evolution of artificial intelligence; types of AI apps; AI vs. machine learning vs. deep learning]
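To make the machine-learning note above concrete, here is a minimal, hypothetical supervised-learning sketch in Python (an illustration only; the data, names and simple linear model are assumptions, not Schneider Electric or NVIDIA code). It fits a model to historical input/output pairs and then predicts a new output value, which is the basic learn-then-predict pattern behind use cases such as predictive maintenance.

    # Minimal supervised-learning sketch: fit y = a*x + b to historical data
    # with ordinary least squares, then predict a new output value.
    # All numbers below are made up for illustration.

    def fit_linear(xs, ys):
        n = len(xs)
        mean_x = sum(xs) / n
        mean_y = sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        var = sum((x - mean_x) ** 2 for x in xs)
        slope = cov / var                     # learned from the data
        intercept = mean_y - slope * mean_x   # learned from the data
        return slope, intercept

    # Hypothetical historical data: rack load (kW) vs. coolant temperature rise (C)
    loads_kw = [10, 20, 40, 80, 120]
    temp_rise = [1.1, 2.0, 4.2, 8.1, 12.3]

    a, b = fit_linear(loads_kw, temp_rise)
    print(f"Predicted temperature rise at 100 kW: {a * 100 + b:.1f} C")

Deep learning replaces this single learned formula with many stacked layers of learned representations, but the same learn-from-history, predict-on-new-inputs loop applies.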
  • Edge AI Technology:
    • Edge artificial intelligence refers to the deployment of AI algorithms and AI models directly on local edge devices such as sensors or Internet of Things (IoT) devices, which enables real-time data processing and analysis without constant reliance on cloud infrastructure.
    • Simply stated, edge AI, or “AI on the edge”, refers to the combination of edge computing and artificial intelligence to execute machine learning tasks directly on interconnected edge devices. Edge computing allows for data to be stored close to the device location, and AI algorithms enable the data to be processed right on the network edge, with or without an internet connection. This facilitates the processing of data within milliseconds, providing real-time feedback.
    • Self-driving cars, wearable devices, security cameras, and smart home appliances are among the technologies that leverage edge AI capabilities to promptly deliver users with real-time information when it is most essential.
  • Multimodal Intelligence and Agents:
    • Subset of artificial intelligence that integrates information from various modalities, such as text, images, audio, and video, to build more accurate and comprehensive AI models.
    • Multimodal capabilities allows to interact with users in a more natural and intuitive way. It can see, hear and speak, which means that users can provide input and receive responses in a variety of ways.
    • An AI agent is a computational entity designed to act independently. It performs specific tasks autonomously by making decisions based on its environment, inputs, and a predefined goal. What separates an AI agent from an AI model is the ability to act. There are many different kinds of agents such as reactive agents and proactive agents. Agents can also act in fixed and dynamic environments. Additionally, more sophisticated applications of agents involve utilizing agents to handle data in various formats, known as multimodal agents and deploying multiple agents to tackle complex problems.
  • Small Language Models (SLM) and Large Language Models (LLM):
    • Small language models (SLMs) are artificial intelligence (AI) models capable of processing, understanding and generating natural language content. As their name implies, SLMs are smaller in scale and scope than large language models (LLMs).
    • LLM means large language model—a type of machine learning/deep learning model that can perform a variety of natural language processing (NLP) and analysis tasks, including translating, classifying, and generating text; answering questions in a conversational manner; and identifying data patterns.
    • For example, virtual assistants like Siri, Alexa, or Google Assistant use LLMs to process natural language queries and provide useful information or execute tasks such as setting reminders or controlling smart home devices.
  • Agentic AI:
    • Agentic AI is an artificial intelligence system that can accomplish a specific goal with limited supervision. It consists of AI agents—machine learning models that mimic human decision-making to solve problems in real time. In a multiagent system, each agent performs a specific subtask required to reach the goal and their efforts are coordinated through AI orchestration.
    • Unlike traditional AI models, which operate within predefined constraints and require human intervention, agentic AI exhibits autonomy, goal-driven behavior and adaptability. The term “agentic” refers to these models’ agency, or, their capacity to act independently and purposefully.
    • Agentic AI builds on generative AI (gen AI) techniques by using large language models (LLMs) to function in dynamic environments. While generative models focus on creating content based on learned patterns, agentic AI extends this capability by applying generative outputs toward specific goals.
  • High-Density AI: 
    • High-density AI refers to the concentration of AI computing power and storage within a compact physical space, often found in specialized data centers. This approach allows for increased computational capacity, faster training times, and the ability to handle complex simulations that would be impossible with traditional infrastructure. 

 

 

  • Compute Applications:
    • Refers to the use of computing resources to perform various tasks, from basic calculations to complex simulations and data analysis. They encompass a wide range of applications across diverse fields, including scientific research, business operations, and everyday technologies. In essence, compute applications utilize processing power, memory, storage, and networking to execute software programs and algorithms. 

 

 

  • Information Technology (IT) & Operational Technology (OT):
    • Information technology (IT) refers to anything related to computer technology, including hardware and software. Your email, for example, falls under the IT umbrella. This form of technology is less common in industrial settings, but often constitutes the technological backbone of most organizations and companies. These devices and programs have little autonomy and are updated frequently.
    • Operational technology (OT) refers to the hardware and software used to change, monitor, or control physical devices, processes, and events within a company or organization. This form of technology is most commonly used in industrial settings, and the devices this technology refers to typically have more autonomy than information technology devices or programs. Examples of OT include SCADA (Supervisory Control and Data Acquisition).
    • => The main difference between OT and IT devices is that OT devices control the physical world, while IT systems manage data.

 

 

  • Power Distribution Units (PDU):
    • A power distribution unit (PDU) is a device for controlling electrical power in a data center. The most basic PDUs are large power strips without surge protection. They are designed to provide standard electrical outlets for data center equipment and have no monitoring or remote access capabilities.
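As a simple illustration of the kind of load planning a rack PDU is sized for, here is a hedged, hypothetical capacity check in Python; the voltage, current rating and 80% continuous-load margin used below are generic assumptions, not NetShelter Rack PDU Advanced specifications.

    # Hypothetical rack PDU capacity check: does the rack's IT load fit within
    # the PDU's rating once a 20% continuous-load margin is applied?
    # Voltage, current rating and margin are illustrative assumptions only.

    def pdu_usable_kw(voltage_v, current_a, derating=0.8):
        return voltage_v * current_a * derating / 1000.0

    def pdu_fits(load_kw, voltage_v, current_a):
        return load_kw <= pdu_usable_kw(voltage_v, current_a)

    # Example: 8.5 kW of servers on a hypothetical 230 V / 32 A single-phase PDU.
    print(pdu_usable_kw(230, 32))   # 5.888 kW usable
    print(pdu_fits(8.5, 230, 32))   # False -> a higher-capacity or three-phase PDU is needed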

 

 

  • Busbar – Busway – Bus Plugs:
    • A busbar is a rigid piece of copper or aluminum, bolted or housed inside switchgear, panel boards, and busway enclosures, used to carry large amounts of current and distribute AC power to rows of circuit breakers (a rough busway-sizing sketch follows this list).
    • Quite often, busbars have no insulation—they’re protected by a separate enclosure.
    • Busbars are the backbones for most power applications, providing the critical interfaces between the power module and the outside world.
    • They are also used to connect high voltage equipment at electrical switchyards, and low voltage equipment in battery banks.
    • Bus plugs are large electrical power connections that contact bus duct or busway conductors to serve connected electrical loads — thereby supplying localized power to industrial equipment.
    • A typical bus plug consists of:
      • Copper conductor plates.
      • Plug-in or bolt-in clamps that physically contact the busway.
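To give a feel for the currents involved, here is a rough, hypothetical busway-sizing sketch in Python using the standard three-phase relationship I = P / (√3 × V × PF); the 1 MW pod load, 415 V supply and unity power factor are assumptions for illustration, not Schneider Electric busway ratings.

    # Rough three-phase sizing sketch: line current I = P / (sqrt(3) * V_line * PF).
    # The 1 MW pod load, 415 V line voltage and unity power factor are
    # illustrative assumptions, not product ratings.

    import math

    def required_current_amps(load_watts, line_voltage_v=415.0, power_factor=1.0):
        return load_watts / (math.sqrt(3) * line_voltage_v * power_factor)

    print(round(required_current_amps(1_000_000)))   # roughly 1391 A for a 1 MW pod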

 

[Figure: Understanding Busway: Benefits and Busway Options]

 

 

  • AI Factories and AI Gigafactories:
    • AI Factories leverage the supercomputing capacity of the EuroHPC Joint Undertaking to develop trustworthy cutting-edge generative AI models.
    • AI Factories are dynamic ecosystems that foster innovation, collaboration, and development in the field of artificial intelligence (AI). They bring together computing power, data, and talent to create cutting-edge AI models and applications. They foster collaboration across Europe, linking supercomputing centres, universities, small and medium sized enterprises (SMEs), industry, and financial actors. AI Factories serve as hubs driving advancements in AI applications across various sectors such as health, manufacturing, climate, finance, space, and more.
    • Through 2025-2026, at least 15 AI Factories and several Antennas (associated to AI-optimised supercomputers in existing AI Factories) are expected to be operational, enabling the pan-EU AI ecosystem and promoting growth by prioritising access for AI startups and SMEs. In this context, at least 9 new AI optimised supercomputers will be procured and deployed across the EU. This will more than triple the current EuroHPC AI computing capacity.
    • AI Gigafactories are large-scale facilities dedicated to the development and training of next-generation AI models containing trillions of parameters. To achieve this, AI Gigafactories will bring together computing power – over 100,000 advanced AI processors – and a strong emphasis on power capacity, reliable supply chains, advanced networking, energy efficiency, and AI-driven automation.
    • In addition, to propel Europe to the forefront of AI development, the InvestAI Facility will comprise a new European fund of €20 billion to create up to 5 AI Gigafactories.
    • The European High Performance Computing Joint Undertaking (EuroHPC JU) is a joint initiative between the EU, European countries and private partners to develop a world-class supercomputing ecosystem in Europe.
[Figure: map of AI Factories across Europe]

 

 

  • HPC (High-Performance Computing):
    • Practice of aggregating computing resources to gain performance greater than that of a single workstation, server, or computer. HPC can take the form of custom-built supercomputers or groups of individual computers called clusters.
  • Cloud Computing:
    • Cloud computing is a general term for anything that involves delivering hosted services over the internet. … Cloud computing is a technology that uses the internet to store and manage data on remote servers, with that data then accessed via the internet.
    • Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Large clouds often have functions distributed over multiple locations, each location being a data center.
  • Edge Computing:
    • Edge computing is a form of computing that is done on site or near a particular data source, minimizing the need for data to be processed in a remote data center.
    • Edge computing can enable more effective city traffic management. Examples of this include optimising bus frequency given fluctuations in demand, managing the opening and closing of extra lanes, and, in future, managing autonomous car flows.
    • An edge device is any piece of hardware that controls data flow at the boundary between two networks. Edge devices fulfill a variety of roles, depending on what type of device they are, but they essentially serve as network entry — or exit — points.
    • There are five main types of edge computing devices: IoT sensors, smart cameras, uCPE equipment, servers and processors. IoT sensors, smart cameras and uCPE equipment will reside on the customer premises, whereas servers and processors will reside in an edge computing data centre.
    • In service-based industries such as the finance and e-commerce sector, edge computing devices also have roles to play. In this case, a smart phone, laptop, or tablet becomes the edge computing device.
    • Edge Devices:
      • Edge devices encompass a broad range of device types, including sensors, actuators and other endpoints, as well as IoT gateways. Within a local area network (LAN), switches in the access layer — that is, those connecting end-user devices to the aggregation layer — are sometimes called edge switches.
  • Data Centers:
    • A data center is a facility that centralizes an organization’s shared IT operations and equipment for the purposes of storing, processing, and disseminating data and applications. Because they house an organization’s most critical and proprietary assets, data centers are vital to the continuity of daily operations.
  • Hyperscale Data Centers:
    • The clue is in the name: hyperscale data centers are massive facilities built by companies with vast data processing and storage needs. These firms may derive their income directly from the applications or websites the equipment supports, or sell technology management services to third parties.
  • White Space and Grey Space in Data Centers:
    • White space in a data center refers to the area where IT equipment is placed, whereas grey space is the area where back-end infrastructure is located.
    • White Space includes housing of: servers, storage, network gear, racks, air conditioning units, power distribution system.
    • Grey Space includes space for: switchgear, UPS, transformers, chillers, generators.

 

 

  • Kilowatt (kW):
    • A kilowatt is simply a measure of how much power an electric appliance consumes—it’s 1,000 watts to be exact. You can quickly convert watts (W) to kilowatts (kW) by dividing the wattage by 1,000: 1,000 W ÷ 1,000 = 1 kW (a short conversion sketch follows these notes).
  • Megawatt (MW):
    • One megawatt equals one million watts, or 1,000 kilowatts, roughly enough electricity to meet the instantaneous demand of 750 homes.
  • Gigawatt (GW):
    • A gigawatt (GW) is a unit of power, and it is equal to one billion watts.
    • According to the Department of Energy, generating one GW of power takes over three million solar panels or 310 utility-scale wind turbines.
  • Terawatt (TW):
    • One terawatt is equal to 1,000,000,000,000 watts.
    • The main use of terawatts is found in the electric power industry.
    • According to the United States Energy Information Administration, America is one of the largest electricity consumers in the world, using about 4,146.2 terawatt-hours per year.
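To tie these units together, here is a short, illustrative conversion sketch in Python; the roughly 750 homes per megawatt figure simply reuses the estimate in the note above.

    # Power-unit conversion sketch: 1 kW = 1,000 W, 1 MW = 1,000 kW,
    # 1 GW = 1,000 MW, 1 TW = 1,000 GW.

    W_PER_KW = 1_000
    KW_PER_MW = 1_000
    MW_PER_GW = 1_000
    GW_PER_TW = 1_000

    def watts_to_kw(watts):
        return watts / W_PER_KW              # divide watts by 1,000

    def mw_to_homes(megawatts, homes_per_mw=750):
        # Rough instantaneous-demand estimate reusing the figure in the note above.
        return megawatts * homes_per_mw

    print(watts_to_kw(1_000))                # 1.0 kW
    print(mw_to_homes(1))                    # about 750 homes per MW
    print(KW_PER_MW * MW_PER_GW)             # 1,000,000 kW in one GW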