Understanding data centres - key terms explained at a glance
Date: 2025
At Taylor Wessing, we advise companies, investors and operators on how to realise their data centre projects successfully and in compliance with legal requirements. Our specialist lawyers offer in-depth legal advice on regulatory and commercial matters, particularly in relation to IT and data infrastructure, sustainability and renewable energies, as well as real estate, construction, environmental and planning law.
With this glossary, we would like to provide you with a compact introduction to the most important terms in the complex world of data centres. Do you have any questions about your data centre project?
We can guide you through all the challenges in the data centre sector. Get in touch with our experts.
An AI data centre is a data centre designed to meet the enormous power, storage and cooling requirements of AI technology. It provides the large-scale computing resources needed to handle the resource-intensive training and deployment of complex machine learning models and algorithms.
AI workloads are computing tasks and processes related to the development, training and execution of AI models. This includes data pre-processing, model training, hyperparameter tuning, inference and model deployment of, for example, machine learning (ML) models, deep learning (DL) models, or natural language processing (NLP) models. This typically involves processing large amounts of data and performing complex calculations. To process large data sets and complex algorithms efficiently, resource-intensive AI workloads require considerable computing power, memory, and storage space.
Public cloud providers usually operate their services from different, geographically distributed data centres. These locations are grouped into availability zones and regions. Availability zones are isolated or physically separate data centres within specific regions. Each availability zone has at least one data centre with its own power supply, cooling and network connection. A region comprises several availability zones that are linked via dedicated networks. These availability zones can be used to operate production applications and databases that offer higher availability, fault tolerance and scalability than would be possible from a single data centre.
A battery energy storage system (BESS) in a data centre is essentially a large battery installation that plays a central role in power supply and security. A BESS supports uninterrupted power supply, load management and cost optimisation, the integration of renewable energies and the provision of grid services.
The black building test is a procedure in which the entire external power supply to a data centre is deliberately shut down. The aim of this test is to check the functionality of emergency power systems, such as UPS systems (uninterruptible power supply) and diesel generators, and to ensure that they can reliably maintain operation in an emergency. This confirms that critical systems remain operational on emergency power even in the event of a complete power failure and that data integrity and availability are guaranteed.
Breakered amp power is a term used in retail colocation leasing to describe the economic model used to bill customers for electricity. In a breakered amp colocation model, the customer is allocated a circuit with a specific breaker size and voltage and pays a fixed price for that power allocation, regardless of how much electricity they actually consume.
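For illustration, the billing logic can be sketched as a short calculation. The figures below (a 230 V / 32 A circuit, an assumed 0.8 derating factor and a purely hypothetical price per amp) are example assumptions, not actual market terms:

```python
# Illustrative sketch only: converting a breakered-amp allocation into usable
# power and a flat monthly charge. The 0.8 derating factor and the price per
# amp are assumptions for illustration, not actual market figures.

def breakered_capacity_kw(volts: float, amps: float, derating: float = 0.8) -> float:
    """Usable IT power of a single-phase breakered circuit in kW."""
    return volts * amps * derating / 1000

def monthly_charge(amps: float, price_per_amp: float) -> float:
    """Flat monthly fee billed on the breaker size, not on actual consumption."""
    return amps * price_per_amp

capacity = breakered_capacity_kw(volts=230, amps=32)   # approx. 5.9 kW usable
fee = monthly_charge(amps=32, price_per_amp=25.0)      # 800.0 regardless of usage
print(f"Usable capacity: {capacity:.1f} kW, monthly charge: {fee:.2f}")
```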
A balance responsible party (BRP) is an energy market participant responsible for balancing electricity generation and consumption within a specific portfolio of feed-in and withdrawal points. This is intended to ensure the stability of the electricity grid. The role of BRP can be assumed by electricity producers, large consumers or energy traders.
A built-to-suit data centre is a customised data centre that is designed and built according to the specific requirements of a customer, usually a large company. Technical specifications, power supply, cooling and security requirements are implemented exactly according to the customer's needs.
Business continuity and recovery (BCR) planning involves strategies and measures that ensure the uninterrupted operation of a data centre and enable rapid recovery in the event of a failure. This includes contingency plans, redundancy systems and regular testing to minimise downtime and protect critical IT services.
In a cabinet, sometimes also referred to as a “rack”, customers can install and connect their own hardware under the best possible security conditions. Further customised security requirements can be accommodated.
To meet the varying security requirements of customers, a colocation data centre offers the option of renting a locked compartment (known as a cage) for the customer's own use. Cages come in various sizes and layouts, often in the form of metal wire enclosures that can accommodate several cabinets. A cage can be used either jointly with other companies or exclusively by one company.
A carrier-neutral data centre, usually a colocation or hyperscale data centre, is operated independently of the telecommunications providers located at the site. This allows connections to be established with various telecommunications carriers and/or colocation providers. By using this type of data centre, companies can outsource costly IT processes.
A contract for difference (CfD) is a financial or hedging contract in the electricity market. The CfD sets a reference (strike) price per MWh. If the market price for electricity is below this strike price, the buyer pays the difference to the producer; if the market price is above it, the producer pays the difference to the buyer. This creates price certainty for both sides, regardless of market fluctuations.
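The settlement mechanism described above can be illustrated with a short, simplified sketch; the strike price, market prices and volume are purely hypothetical example values:

```python
# Simplified two-way CfD settlement as described above.
# A positive result is the amount the buyer pays the producer;
# a negative result is the amount the producer pays the buyer.
# Strike price, market prices and volume are hypothetical values.

def cfd_settlement(strike_price: float, market_price: float, volume_mwh: float) -> float:
    """Settlement amount per period for a two-way contract for difference."""
    return (strike_price - market_price) * volume_mwh

print(cfd_settlement(strike_price=80.0, market_price=65.0, volume_mwh=100))  # 1500.0 -> buyer pays producer
print(cfd_settlement(strike_price=80.0, market_price=95.0, volume_mwh=100))  # -1500.0 -> producer pays buyer
```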
Cloud as a service is a model for hosting cloud services via servers from cloud computing providers. Since data processing does not take place on the company’s servers, it is considered a service. This enables better scalability and offers cost flexibility, as costs are usually only incurred when the service is actually used. Cloud platform services provide companies and developers with a framework for creating their applications and orchestrating their own services via these applications.
Cloud computing is a delivery model that provides access to storage, servers, databases, networks, software, analytics and applications via the internet. These resources are typically offered on demand as part of an as-a-service model with pay-per-use pricing. This allows companies to access virtual computing, network and storage resources without having to worry about maintaining their own hardware.
A cloud data centre is optimised for the provision of cloud services. It enables users to access virtual resources such as computing power, storage and applications via the internet, and therefore to use services such as Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). Customers scale these resources according to their needs and only pay for the resources they actually use. Cloud data centres are divided into public clouds and private clouds.
The cloud infrastructure consists of a variety of hardware and software resources that make up the cloud. Cloud providers operate global data centres consisting of countless IT infrastructure components, including servers, physical storage devices and network devices. They configure the physical devices with different operating systems and install additional software necessary to run applications.
CloudOps or cloud operations (management) refers to the operational management of cloud resources, including servers, data storage, networks and applications, to ensure that they function efficiently and effectively within the cloud environment. Companies whose business operations are based on a cloud need CloudOps to ensure seamless user availability of their applications and adaptability to changing requirements.
The commercial operation date (COD) is the date on which a data centre is commercially commissioned after construction, installation and testing. From this date onwards, the facility can fulfil customer contracts and sell or provide space, power and cooling capacity. The COD therefore marks the transition from the construction and testing phase to the operational and marketing phase.
Cold aisles are a central component of modern air conditioning concepts, in which cool supply air is directed specifically into the front areas of the server racks. They serve to optimise the cooling of the hardware equipment. Separating cold and warm air makes cooling more efficient and reduces energy consumption.
Colocation data centres are data centres shared by customers. Colocation providers offer tenants data centre space, primary infrastructure services and operational support (air conditioning, power supply, network connection) as well as service work, patch services and physical security. Often, several customers rent space to independently install and host their own IT infrastructure (servers, storage and network hardware).
Compute refers to the computing capacities and resources in a data centre that are required for processing data and executing applications. This includes processors (CPUs), random access memory (RAM) and other hardware components that work together to perform calculations and provide IT services. Computing power is determined by processor performance, memory and network capacity and is crucial for applications such as cloud computing and artificial intelligence.
Computing power is the processing power of a system, measured in terms of speed and data processing capacity. It is determined by factors such as the number of processors, clock frequency, memory and parallel processing, and is crucial for the performance of data centres. With the rise of generative artificial intelligence and other data-intensive applications, the demand for high-performance computing environments has increased significantly.
Coolant distribution units (CDUs) are components in liquid-cooled data centres. They distribute the coolant used for direct liquid cooling and thereby enable efficient dissipation of the heat generated by the processors.
In data centres, the hardware used must be cooled to control operating temperatures and prevent overheating.
As server performance increases, so does the amount of heat generated, and air cooling reaches its limits. Water has a much higher cooling capacity than air. Against the backdrop of increasing energy efficiency requirements for data centres, conventional air cooling is increasingly being replaced by more modern and effective direct liquid cooling (DLC). With DLC, heat is dissipated more energy-efficiently using special coolants or liquids that have a higher thermal conductivity than air. For this purpose, heat sinks or cold plates are attached directly to the critical hardware components in order to transport the waste heat to a heat exchanger, through which it is finally dissipated. DLC is becoming increasingly relevant for high-performance computing (HPC) and artificial intelligence (AI) data centres due to the high power density of the servers used there.
Critical infrastructure refers to all essential infrastructure components that are necessary for the safe and reliable operation of a data centre. These include:
Power supply, including UPS systems and emergency generators.
Cooling and air conditioning systems.
Network connectivity.
Security systems, including access control, video surveillance and fire protection systems.
Critical power (also known as IT load or critical load) refers to the maximum electrical power consumption that a data centre can safely and reliably handle. This load includes all essential power consumers such as servers, IT/network equipment, cooling systems and lighting. Accurate determination of the critical load is crucial for the planning and operation of a data centre to prevent overloads and ensure continuous availability of IT services.
A cross-connect is a physical or virtual connection within a data centre between two different parties or networks. This connection enables fast, secure and direct data transfer between servers, networks or Colocation customers without the need to route data traffic via external networks. Cross-connects are often used to minimise latencies and improve security and performance compared to external network connections.
Data centre tiers are a system for uniformly describing certain types of data centre infrastructure. Tier 1 is the simplest infrastructure, while Tier 4 is the most complex and has the most redundant components.
Data centre virtualisation involves creating virtual servers from conventional physical servers; a fully virtualised facility is also referred to as a software-defined data centre. Software-based solutions enable IT resources to be allocated dynamically, costs to be reduced and reliability to be increased.
The data hall is the central area of a data centre that houses IT infrastructure such as server racks, storage and network devices. The data hall is designed for optimised air conditioning, power supply and security systems to ensure efficient and secure operation of IT loads.
DC containment is a system for separating cold and warm air in data centres to maximise cooling efficiency. DC containment reduces energy consumption, improves cooling performance and increases server efficiency.
There are two main types: cold aisle containment, in which the cold aisles are enclosed, and hot aisle containment, in which the hot aisles are enclosed.
Disaster recovery sites are special alternative locations that serve to maintain IT and data processing services in the event of a disaster-related failure of the primary data centre. These facilities are designed to quickly replace the main data centre in the event of natural disasters, technical malfunctions or other serious incidents, for example, and therefore ensure the resilience of companies. This includes restoring the IT infrastructure and data to continue business operations with as little interruption as possible.
Disaster recovery sites typically have redundant systems and network connections and require regular backups and tests to ensure their functionality in an emergency. They are an essential part of comprehensive business continuity planning (BCP / BCR).
Edge data centres are smaller, decentralised data centres located closer to the end user's location to reduce latency and enable faster data processing, especially for time-critical data – e.g. for IoT and streaming.
Enterprise data centres are owned by a company and are usually located on its premises. They are operated for internal use, and the maintenance and management of the IT infrastructure and components are in the hands of the company. An enterprise data centre offers full control over security, performance and scalability, but requires high investment and ongoing operating costs.
FLAP-D is an acronym for Frankfurt, London, Amsterdam, Paris and Dublin. These cities, known as the “FLAP-D markets”, are the leading and most important data centre hubs in Europe.
Flexible connection agreements are contracts that enable data centre users to flexibly adjust their network capacities in response to fluctuating requirements or growth.
The German Data Centre Association (GDA) is one of the leading industry organisations in the field of digital infrastructure and brings together renowned and market-relevant players from all areas of the “data centre ecosystem” under one roof. Founded in Frankfurt am Main in 2018, the association provides a platform for data centre operators and owners, as well as representatives from politics, local authorities, the real estate industry and consulting service providers. The GDA has 130 companies among its members. Taylor Wessing has been an active member since 2023.
Grid connection capacity refers to the maximum electrical power that can be supplied to a data centre from the public power grid and is available for use. This capacity is crucial for the operation of a data centre, as it determines how many servers and other IT infrastructures can be operated simultaneously. High power requirements are typical for data centres, which need considerable amounts of energy to power their extensive server farms. This makes grid connection capacity a critical factor for the smooth operation and scalability of the data centre.
Guarantees of origin (GoOs) are electronic certificates of origin and are considered “birth certificates” for electricity from renewable energy sources. These electronic documents certify the origin and quantity of electricity from renewable energy sources and are used for electricity labelling, without providing any assessment of ecological quality. The issuance of these certificates of origin serves to ensure transparency.
High performance computing refers to the aggregation of computing power. Large amounts of data are processed at very high speeds by combining multiple computers and storage devices. This allows complex and performance-intensive problems to be solved.
Hot aisles are part of a design concept for optimising the cooling performance of data centres. Here, the rear sides of the server racks, from which hot exhaust air flows, are brought together in special aisles (the hot aisles). These hot aisles, from which the exhaust air is specifically removed, are separated from the cold areas (cold aisles), where cool air flows to the front of the server racks. This separation enables efficient cooling and reduces the thermal load on the equipment.
Hyperscale data centres are the largest cloud data centres. They are operated by cloud service providers such as Amazon Web Services (AWS), Google Cloud and Microsoft Azure. They offer enormous scalability and efficiency for cloud services and provide the space, cooling and technical infrastructure to meet the massive demands of data and cloud computing. Hyperscale data centres cover an area of more than 10,000 m² (approx. 107,600 ft²), house at least 5,000 servers and have an average power requirement of more than 25 MW. Risk of confusion: the term “hyperscaler” is often used as a nickname for hyperscale data centres. However, it is just as commonly used to refer to cloud service providers. To avoid confusion, a clear distinction must be made between the type of data centre and companies specialising in hyperscale computing.
Infrastructure-as-a-Service (IaaS) is a cloud service model in which companies can use IT infrastructure such as computing power, storage and networks as needed without operating their own hardware.
An internet exchange point is a physical location where internet infrastructure companies, such as hosting and cloud companies, internet service providers, data centres and colocation providers, connect to each other. These traffic exchange points sit at the edge of the participating networks and enable network providers to exchange traffic directly with other member networks rather than routing it via third-party transit providers. Essentially, an internet exchange point consists of one or more physical locations with network switches that act as a coupling element and forward traffic between the various member networks.
The largest internet exchange points in Europe include DE-CIX in Frankfurt, AMS-IX in Amsterdam and LINX in London.
In addition to DE-CIX in Frankfurt, there are other internet exchange points in Germany:
DE-CIX (Düsseldorf, Hamburg and Munich)
Latency is the time delay in the transmission of data in networks. This delay can be caused by various factors, such as the distance between the devices involved, intermediate steps in the network, or the performance of the software and hardware. In the context of a data centre, low or minimal latency is crucial for the performance and efficiency of many applications, such as data-intensive AI technology or real-time applications such as streaming.
Liquid-to-liquid and liquid-to-air cooling both refer to cooling methods for data centre hardware. In liquid-to-liquid cooling, heat from the processors, for example, is dissipated directly using a liquid (usually water) and transferred to the building’s primary water cooling system or to external heat users. This achieves efficient cooling, whereby heat can be dissipated at a higher temperature, which is particularly beneficial in terms of waste heat utilisation and compliance with regulatory requirements such as the Energy Efficiency Act. Liquid-to-air cooling also uses liquid to dissipate heat; however, this heat is then released into the ambient air via air coolers. Despite being less efficient, this approach offers the advantage of easier implementation in existing data centres.
Long lead items are components, parts or materials that have a long procurement time and must be ordered early. They play a critical role in the planning and implementation of data centres, as their availability can influence the entire progress of the project. Examples include transformers, emergency generators, UPS systems, switchgear, cooling equipment and network devices.
OHSAS 18001:2007 (Occupational Health and Safety Assessment Series) is an international standard that specifies requirements for occupational health and safety management systems. It aims to reduce occupational accidents and illnesses and to promote a safe and healthy working environment. In the context of data centres, OHSAS 18001:2007 plays an essential role in ensuring the safety and health of employees. Data centres are complex technical facilities with high demands on electrical, air conditioning and mechanical systems. These conditions require strict safety precautions and regular inspections to minimise risks. The standard has since been superseded by ISO 45001.
On-site power generation comprises power supply solutions in which a data centre uses its own energy sources on site. These include diesel generators, gas or biofuels, and renewable energies to ensure energy supply independent of the public grid. In the context of data centres, multi-fuel power generation refers to power supply systems that can use multiple energy sources. This is particularly relevant for data centres and their permanently secured and fail-safe power supply.
The Open Compute Project (OCP) is an initiative to develop open, standardised hardware and data centre designs. The aim is to reduce the investment and operating costs, energy consumption and environmental impact of data centres through innovative, fully standardised IT architectures.
A point of presence (PoP) at a data centre is a physical location, a so-called network node, that enables data exchange between different networks. A PoP typically includes devices such as routers, switches and other network components and is crucial for low latencies and high-performance internet connections.
Data centres can be classified into different categories based on their power capacity in megawatts (MW). This classification influences location selection, infrastructure planning and scalability. The largest category, hyperscale data centres, are operated by large cloud providers such as AWS, Google Cloud or Microsoft Azure and offer extremely high scalability and efficiency.
Power density refers to the amount of electrical power provided per unit area. It is a crucial factor in the design and operation of a data centre, as it indicates how much electrical power is required for the servers and other equipment, measured in kilowatts per square metre or per rack.
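As a simple illustration, power density can be calculated per square metre of data hall space or per rack; the load, area and rack figures below are hypothetical example values:

```python
# Illustrative power density calculation; all figures are hypothetical.

it_load_kw = 1_200      # total IT load of the data hall in kW
floor_area_m2 = 600     # data hall floor area in m²
rack_count = 150        # number of installed racks

density_per_m2 = it_load_kw / floor_area_m2    # 2.0 kW/m²
density_per_rack = it_load_kw / rack_count     # 8.0 kW per rack

print(f"{density_per_m2:.1f} kW/m², {density_per_rack:.1f} kW per rack")
```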
A power distribution unit (PDU) in a data centre is a device used to distribute electrical power to servers and other IT equipment. A PDU takes electrical power from the data centre’s main power supply and distributes it safely and efficiently to the various IT components within a rack or a series of racks.
Power over Ethernet (PoE) is a technology that enables electrical power to be transmitted together with data via an Ethernet cable. In a data centre, PoE increases efficiency in the installation and management of network-based devices.
Power purchase agreements (PPAs) are individually negotiated, long-term electricity supply contracts, particularly for renewable energies. These are usually contracts between an electricity producer and an electricity consumer, in this case the data centre operator, in which the consumer undertakes to purchase the electricity generated over a longer period. Long-term agreements to purchase green electricity generated from renewable energies can help to mitigate various risks for both consumers and producers and create business opportunities: price security, security of supply, minimisation of regulatory risks and minimisation of environmental risks.
The power usage effectiveness (PUE) value indicates how efficiently the energy supplied to a data centre is used and is therefore a measure of the energy efficiency of a data centre. A lower PUE value means higher efficiency: the closer the value is to 1.0, the more energy-efficient the data centre is and the better its energy balance. To calculate the value, the total energy supplied to the facility is divided by the energy consumed by the IT equipment.
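The calculation can be illustrated with a minimal example; the energy figures are hypothetical:

```python
# PUE as described above: total facility energy divided by IT equipment energy.
# The kWh figures are hypothetical example values.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness; 1.0 would mean all energy goes to IT equipment."""
    return total_facility_kwh / it_equipment_kwh

print(pue(total_facility_kwh=1_300_000, it_equipment_kwh=1_000_000))  # 1.3
```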
Powered shell data centres are pre-built data centres with a completed exterior, installed energy and cooling infrastructure, and connectivity. The interior is still in its “raw” state and can be designed and equipped by the customer to meet their individual operational requirements. As with turnkey data centres, individual floors/sections or an entire building can be handed over to a customer. This solution enables faster commissioning than a new build.
A private cloud is a cloud service that is available exclusively to a single organisation and excludes any “external access”. By using a private cloud, an organisation can benefit from the advantages of cloud computing without sharing resources with other organisations. Cloud computing services and infrastructure are hosted in the virtual data centre. The company itself is responsible for management, maintenance and operation.
In a locked, private suite in a data centre, the customer defines the protection for their hardware. The suites are separated from each other by walls or steel grilles with privacy screens. Customers can choose the individual furnishing and expansion options that exactly meet their requirements.
A shared data suite is shared by multiple customers, with each user having access to a clearly defined infrastructure. This solution offers a cost-effective alternative with shared resources.
Public cloud is the most common form of cloud computing. An external cloud provider provides multiple customers with virtual hardware, software or other supporting infrastructure, which they can access via an internet connection. The infrastructure is owned and managed by the cloud provider.
A rack is a standardised enclosure or frame used to house and organise IT equipment such as servers, network components, storage systems and other electronic devices. A rack provides a structured way to mount and connect these devices.
Secondary markets include areas that are not classified as FLAP-D markets, but where the total computing capacity is growing rapidly. In Europe, these include Madrid, Berlin, Warsaw and Barcelona.
A service level agreement (SLA) for a data centre is a contractual agreement between the service provider and the customer that specifies the expected and guaranteed performance standards, operating times and service quality in detail.
Software-as-a-service (SaaS) is a cloud service model in which software applications are provided and used via the Internet. Users access the software without having to install or maintain it locally. Maintenance, updates and scaling are handled by the provider.
In a software-defined data centre, the infrastructure components (servers, storage, networks) are software-controlled and managed automatically.
A standardised DC busbar is a busbar used in data centres to distribute direct current power. These busbars are designed to ensure efficient and reliable power distribution while minimising cabling requirements.
A TCS (Technical Control System) and an FWS (Facility Wiring System) are central components in the infrastructure of a data centre. TCS monitors and controls the technical operating conditions of a data centre. This includes systems for monitoring temperature, humidity, power supply and other environmental parameters. FWS refers to the cabling infrastructure within the data centre, including power and network cabling.
An uninterruptible power supply (UPS) serves to ensure the continuous power supply to the IT infrastructure, even in the event of power failures or fluctuations in the power grid.
The operation of servers and other components in a data centre produces large amounts of heat, much of which goes unused and is released into the outside air at additional cost. Waste heat from a data centre is traditionally dissipated by cooling systems, but this significantly increases the data centre's energy requirements. The waste heat from data centres is a low-temperature (low-grade) heat source, but it offers great potential for improving energy efficiency and achieving sustainable energy management.
Whitespace refers to the unused or freely available space within a data centre. This space is not occupied by server racks or other IT infrastructure. Whitespace can be used for future expansions or to install additional equipment.
A workload refers to the type and scope of processing that computing resources perform to complete tasks or achieve results.
By Dr. Nicolai Wiegand, LL.M. (NYU) and Susan Hillert, geb. Lipeyko, Lic. en droit (Toulouse I Capitole)
By Susan Hillert, geb. Lipeyko, Lic. en droit (Toulouse I Capitole) and Antonia Deml