
The era of computing power has arrived, and servers must develop toward the "four highs"


"Computing power" is at once an old term and a new one. Along with the commercialization of cloud computing, it has been a focus of attention for more than a decade. In February this year, with the official launch of the "East Data, West Computing" project, computing power once again became the focus of the industry and even of society as a whole.

As critical computing infrastructure, data centers face ever higher requirements as demand for computing power grows. The Three-Year Action Plan for the Development of New Data Centers (2021-2023) states that new data centers should be characterized by "high technology, high computing power, high energy efficiency, and high security." These "four highs" have become a prerequisite for future data centers.

Servers are the key equipment with which data centers convert watts into bits, and the server market is seeing explosive growth as enterprises' demand for computing power and data centers intensifies. Gartner's latest global server market data for 2022 show strong growth in the first half of the year: shipments reached 6.689 million units, up 11.8% year on year, and revenue reached 56.65 billion dollars, up 24.1% year on year.

Alongside this strong market growth, servers must also meet the "four high" requirements of new data centers and develop toward high technology, high computing power, high energy efficiency, and high security.

High security

As is well known, data centers have high requirements for business continuity, and the losses caused by an outage should not be underestimated.

In 2021, servers in Facebook's data centers went down; Facebook, Instagram, WhatsApp, Messenger, and other sites and applications returned server errors, and 3.5 billion users worldwide were unable to use these social platforms for nearly six hours. The outage caused Facebook's share price to plummet by 6%, and Zuckerberg's personal wealth shrank by nearly 6 billion dollars in a single day.

At best, server downtime causes financial loss; at worst, it threatens enterprise data security. At this stage, there are two main causes of server downtime: network attacks and the operating environment (including software as well as conditions in the server room).

Network attacks: Network attacks are unavoidable. When deploying services, enterprises should choose reliable security solutions and adopt technical measures in advance, such as early warning, situational awareness, dynamic defense, and risk control. In addition, the much-discussed DPU can also help servers resist network attacks at the hardware level.

To secure networks and systems, servers run a large amount of cryptographic processing. This work was previously handled by the CPU; now a DPU can take over part of it, improving CPU efficiency while strengthening the security capability of the server hardware.
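
To make this concrete, here is a minimal sketch in Python, using only the standard library, that measures how much CPU time bulk cryptographic work consumes. It is an illustration of the kind of load a DPU or SmartNIC could take off the CPU, not an example of any particular DPU's API, and it uses SHA-256 hashing as a stand-in for the encryption described above.

```python
# Minimal sketch (not a DPU driver): measure how much CPU time bulk
# cryptographic work consumes, to illustrate the load a DPU could offload.
# SHA-256 hashing from the standard library stands in for encryption work.
import hashlib
import os
import time

payload = os.urandom(1 << 20)         # 1 MiB of random "traffic"
rounds = 200                          # ~200 MiB of data to process

start = time.process_time()
for _ in range(rounds):
    hashlib.sha256(payload).digest()  # crypto work currently done on the CPU
cpu_seconds = time.process_time() - start

mib_per_s = rounds / cpu_seconds
print(f"CPU spent {cpu_seconds:.2f}s hashing {rounds} MiB "
      f"({mib_per_s:.0f} MiB/s) - throughput a DPU/SmartNIC could offload")
```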

Operating environment: A stable operating environment is closely tied to the business continuity and stability of the data center. At the software level, mature products from reliable, established suppliers need little further discussion. At the hardware level, to accommodate servers with higher power draw and higher computing power, the physical layer of the data center is also changing, which can be summed up as "adapting to local conditions and deploying on demand": whether to choose row-level power distribution cabinets or compact busbars, and whether to use air cooling or liquid cooling, enterprises need to decide according to their own needs and the characteristics of the servers they select.

High computing power

With the arrival of the industrial Internet, data centers face ever higher computing power requirements. Even edge data centers need very high computing power to handle complex computing scenarios, and improving computing power has become a focus of attention across the industry.

There are two main ways to improve server computing power: adopting new chips (GPUs, DPUs) and increasing density.

New chips: With Moore's Law gradually losing steam, high computing power no longer comes from simply stacking CPUs. It is foreseeable that GPUs, DPUs, SmartNICs, and similar hardware will be used more and more in servers. As the CPU's "right hand," these accelerators will, as their applications spread, also have a significant impact on the internal architecture of servers.

However, at this stage the industry has no unified view of these accelerators, especially the DPU. Some manufacturers believe the DPU will become the key to freeing the CPU and unlocking more computing power, and even a decisive factor in bringing blockchain technology into real-world use. Others think the DPU is just a concept hyped by vendors and are not optimistic about its application prospects.

In my opinion, "releasing" computing power requires not only collaboration among CPUs, GPUs, and DPUs at the hardware level, but also flexible, intelligent scheduling and allocation of computing power within the server at the software level, using AI and other techniques, so as to unlock its full potential.

Increase density: To meet enterprises' current needs for cloud computing, data centers, and the Internet, high-density servers have emerged. With an optimized architecture, this kind of equipment packs more processors and more I/O expansion capability into a very small physical footprint, greatly reducing customers' space costs and significantly improving performance.

Through designs such as scalability, dual motherboards, and horizontal scale-out, high-density servers further cut space costs and boost performance. At the same time, they share power supplies and fans (ordinary servers mostly use independent power supplies and fans), which reduces overall power consumption, greatly improves the efficiency of the power supply and cooling systems, and allows flexible configurations for different customer needs.
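
A rough back-of-the-envelope sketch of the shared-power argument follows, with assumed (not vendor-measured) efficiency figures: many lightly loaded dedicated power supplies waste more energy in conversion than a shared supply shelf running near its efficiency sweet spot.

```python
# Illustrative sketch (numbers are assumptions, not vendor data): why shared
# power supplies in a high-density chassis can cut wall-power draw compared
# with one lightly loaded PSU per node.

def wall_power(it_load_w: float, psu_efficiency: float) -> float:
    """Power drawn from the wall to deliver it_load_w to the IT gear."""
    return it_load_w / psu_efficiency

nodes = 16
load_per_node_w = 300.0                      # assumed IT load per node

# Dedicated PSUs often run far below rated load, where efficiency drops.
dedicated = nodes * wall_power(load_per_node_w, psu_efficiency=0.88)

# A shared PSU shelf can be sized to run near its efficiency sweet spot.
shared = wall_power(nodes * load_per_node_w, psu_efficiency=0.94)

print(f"dedicated PSUs: {dedicated:.0f} W, shared shelf: {shared:.0f} W, "
      f"saving {dedicated - shared:.0f} W per chassis")
```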

In terms of product strategy, manufacturers differ on how to pursue high density. The rack-scale (whole-cabinet) server is one of the current high-density computing solutions.

The rack-scale server is built on a modular design: its architecture consists of six subsystems, namely cabinet, network, power supply, server nodes, centralized cooling, and centralized management. Within a single cabinet, multiple server nodes share power and fans, which greatly improves the efficiency of the power supply and cooling systems and ultimately makes the system lighter and cheaper, reducing the user's cost of ownership. Moreover, rack-scale servers are prefabricated at the factory, which greatly shortens deployment time. In an Internet era where speed beats everything, the earlier a system is deployed and put into production, the earlier the business goes live and the earlier users start earning returns.

Of course, promoting the rack-scale approach naturally imposes higher requirements on server dimensions, standardization, and compatibility. When it comes to rack-scale servers, Inspur must be mentioned: its product strategy over the past two years clearly centers on the whole cabinet, and it has been trying to push the standardization and open-sourcing of rack-scale designs.

The world thrives on diversity, and IBM holds a different view. IBM believes users care more about the concept of a "cloud in a box" than about the whole cabinet. Ying Kangyong, technical director for IBM Asia Pacific mainframes and LinuxONE, pointed out that users need a server that works like a cloud in a box: one that can host tens of thousands of workloads in a limited space while guaranteeing mutual isolation and transaction service levels. In the author's view, this places higher demands on the server's ability to scale both vertically and horizontally, as well as on the coordination of server software and hardware.

Whether to integrate the server with power supply, cooling, and other equipment, or to focus on improving the performance of the server itself so that one machine can do the work of a hundred, is still being debated in the industry. What is certain is that raising computing power per unit of density has become an indisputable trend: whether the density is physical or virtual, high density is now one of the directions of server development.

High energy efficiency

Against the backdrop of the whole industry jointly pushing toward carbon neutrality, how to improve server energy efficiency has become a focus for both server suppliers and users.

According to the International Energy Agency, data centers worldwide consume about 250 TWh of electricity a year. Data released by the Ministry of Ecology and Environment show that China's data centers consumed 216.6 billion kWh in 2021, accounting for 2.6% of total electricity consumption, with carbon emissions accounting for about 1.14% of the national total. IT equipment accounts for a large share of that consumption.

On the data center's path to carbon neutrality, server energy consumption will be a focus of energy conservation and emission reduction. Whether to fulfill social responsibility or to comply with the national "dual carbon" goals, data centers need energy-efficient servers.

Among the technical means by which servers achieve "high efficiency and low carbon," virtualization is certainly on the list. Since around 2000, virtualization has gradually become one of the key technologies of the data center.

Virtualization allows one physical server to run multiple virtual machines, so that the computing resources of a single server are shared by multiple environments. By balancing server load, it helps the data center make full use of otherwise idle resources, reducing carbon dioxide emissions as well as management and operating costs. At present, hyperconverged infrastructure and virtualization are the most important means of optimizing resources and cutting energy use and emissions at the server level of the data center.
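
As a rough illustration of the consolidation effect described above, the following sketch estimates the energy saved when many lightly used physical servers are consolidated onto a few virtualized hosts. The utilization and power figures are assumptions chosen for illustration, and the linear idle-plus-load power model is a common simplification, not a measurement.

```python
# Rough consolidation sketch (assumed figures): estimate the energy saved when
# lightly used physical servers are consolidated onto fewer virtualized hosts.
# Server power is modeled as idle power plus a load-proportional component.
import math

def server_power_w(util: float, idle_w: float = 120.0, peak_w: float = 400.0) -> float:
    """Approximate power draw of one server at a given CPU utilization (0-1)."""
    return idle_w + util * (peak_w - idle_w)

n_servers = 20
avg_util = 0.10                      # assumed: each box runs at 10% load
target_util = 0.60                   # assumed: consolidated hosts run at 60%

before_w = n_servers * server_power_w(avg_util)

hosts_needed = math.ceil(n_servers * avg_util / target_util)
after_w = hosts_needed * server_power_w(target_util)

print(f"before: {n_servers} hosts, {before_w:.0f} W; "
      f"after: {hosts_needed} hosts, {after_w:.0f} W "
      f"({100 * (1 - after_w / before_w):.0f}% less IT power)")
```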

Given current industry trends, virtualization alone can no longer meet users' needs for advancing the "dual carbon" strategy, and industry users are paying more attention to how server hardware itself can become more efficient.

From the hardware perspective, the main means of improving server efficiency are: increasing the proportion of renewable materials (reducing carbon across the server's whole life cycle); optimizing the internal cooling layout to cut excess airflow and improve heat-dissipation efficiency; and applying GPUs, DPUs, and other new-generation chips to relieve CPU pressure, unlocking the server's full computing potential and improving efficiency.

Beyond software (hyperconverged infrastructure, virtualization, and similar technologies) and hardware improvements, data center energy consumption can also be reduced by changing the "environment" around the server, indirectly improving server energy efficiency.

Adopting liquid cooling is one current means of cutting energy consumption by changing the server's "environment." Liquid cooling, however, imposes new requirements on the server itself, such as the size, impedance, insertion force, and mounting method of server interfaces, and higher requirements on overall dimensions, materials, corrosion resistance, vibration resistance, electromagnetic compatibility, and operation and maintenance. Applying and promoting liquid cooling well therefore also poses new challenges for the server's internal structure and the materials of its electronic components. At present, many server suppliers, including Ningchang, Lenovo, and Inspur, have launched liquid-cooled server businesses, and Inspur has even raised the slogan of going "ALL IN" on liquid cooling.
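
One way to quantify this "environment" effect is Power Usage Effectiveness (PUE), the ratio of total facility energy to IT energy; the metric and the figures below are not cited in this article, so treat them as illustrative assumptions only. The sketch shows how a lower PUE, of the kind liquid cooling aims for, reduces total facility energy for the same IT load.

```python
# Simple PUE arithmetic (illustrative assumptions): how an improvement in
# PUE (total facility energy / IT energy), e.g. from moving air cooling to
# liquid cooling, changes facility energy use for the same IT load.
def facility_energy_kwh(it_energy_kwh: float, pue: float) -> float:
    """Total facility energy for a given IT energy at a given PUE."""
    return it_energy_kwh * pue

it_load_kwh = 1_000_000            # assumed annual IT energy of one data hall
air_cooled = facility_energy_kwh(it_load_kwh, pue=1.5)      # assumed PUE
liquid_cooled = facility_energy_kwh(it_load_kwh, pue=1.15)  # assumed PUE

saved = air_cooled - liquid_cooled
print(f"air-cooled: {air_cooled:,.0f} kWh, liquid-cooled: {liquid_cooled:,.0f} kWh, "
      f"saved: {saved:,.0f} kWh/year")
```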

Reducing server energy consumption, however, has one precondition: it cannot come at the expense of performance. This can be called the shared stance of server suppliers. The server industry has entered the era of "high efficiency and low carbon."

High technology

Beyond liquid cooling, some suppliers are focusing on developing new generations of servers. As enterprise digital transformation advances, the cost of digitization is drawing more and more attention. When selecting products, enterprises are eager to obtain customized computing power and to have more types of servers to choose from according to their own needs.

With cloud computing booming, more and more users are choosing ARM-architecture servers. Recently, Alibaba Cloud and Microsoft Azure, two major public cloud providers, both shared updates on their ARM-processor-based instances. IBM, meanwhile, is focusing on its new generation of Z mainframes: through the IBM Telum processor, it combines AI inference in a unique way with the highly secure, reliable, high-volume transaction processing for which IBM is known.

It is worth noting that IBM recently launched a new generation of Linux servers. According to IBM, compared with x86 servers running the same workload under similar conditions, the new generation of Linux servers can cut enterprises' annual energy consumption by 75%, save 50% of the space, and reduce carbon dioxide emissions by more than 850 metric tons.

Although servers built on other architectures cannot yet compete with x86 servers in market share, these technology shifts give users more choices, allowing them to "buy on demand."

Looking ahead, it is predictable that enterprises' demand for high computing power will reshape the server market. High density, high computing power, high availability, and low carbon will become the four major trends in future server development, and against this backdrop the server industry will usher in a new wave of innovation.