
Full transcript of Jensen Huang's earnings call: so many customers need GPUs that we are under enormous pressure


May 23 news: On Wednesday local time, NVIDIA announced its financial results for the first quarter of fiscal year 2025, which ended April 28, 2024. The report shows that NVIDIA's first-quarter revenue was $26 billion, up 262% year over year and above analysts' average estimate of $24.65 billion; net profit was $14.81 billion, up 628% year over year; earnings per share were $5.98, above analysts' average estimate of $5.59. With both revenue and profit beating market expectations, NVIDIA's share price surged after the report, crossing the $1,000 mark for the first time.

After the results were released, NVIDIA President and CEO Jensen Huang, Executive Vice President and Chief Financial Officer Colette Kress, and other executives attended the subsequent earnings conference call to walk through the key points of the report and answer analysts' questions.

Jensen Huang's remarks on the results

The whole industry is undergoing a major transformation. Before we begin the Q&A session, I want to speak to the importance of this change: a new industrial revolution has begun.

Many companies and countries are partnering with NVIDIA to shift trillions of dollars' worth of traditional data centers to accelerated computing, and to build a new type of data center, the AI factory, to produce an unprecedented commodity: artificial intelligence.

AI will bring dramatic efficiency gains to almost every industry, helping enterprises improve cost-effectiveness and energy efficiency while expanding revenue. Cloud service providers were the pioneers of generative AI. With NVIDIA's technology, these providers accelerate workloads, save money, and reduce power consumption. The tokens generated on the NVIDIA Hopper platform produce revenue for their AI services, while NVIDIA cloud instances attract tenants from our huge developer ecosystem.

Our data center business is growing strongly on the back of rapidly rising demand for generative AI training and inference on the Hopper platform. Training continues to scale as models learn to handle multimodal content such as text, speech, images, video, and 3D, and learn to reason and plan.

Our inference workloads are growing significantly. With generative AI, inference now means generating tokens rapidly and at massive scale, and has become extremely complex. Generative AI is driving a full-stack platform shift in computing that will transform every interaction we have with computers. We are moving from today's information-retrieval model of computing to one that generates answers and skills. AI will increasingly understand context and our true intent, with ever stronger knowledge, reasoning, planning, and task-execution capabilities.

We are fundamentally changing how computers work and what they can do: from general-purpose CPUs to GPU-accelerated computing, from instruction-driven software to models that understand intent, from simple information retrieval to complex skill execution. At the industrial level, we have moved from producing traditional software to generating tokens, that is, manufacturing digital intelligence.

Token generation will drive a multi-year build-out of AI factories. Beyond cloud service providers, generative AI has expanded to consumer internet companies, enterprises, sovereign AI, automotive, and healthcare, spawning multiple multibillion-dollar vertical markets.

The Blackwell platform is in full production and lays the foundation for trillion-parameter-scale generative AI. The combination of the Grace CPU, Blackwell GPUs, NVLink, Quantum InfiniBand, Spectrum Ethernet, and high-speed interconnect technologies, together with our rich ecosystem of software and partners, lets us deliver a more comprehensive AI factory solution to customers than ever before.

Spectrum-X opens a brand-new market for us, bringing large-scale AI into Ethernet-only data centers. NVIDIA NIM, our new software offering, delivers enterprise-optimized generative AI, supported by our broad partner ecosystem, running everywhere from the cloud to on-premises data centers to RTX AI PCs. From Blackwell to Spectrum-X to NIM, we are positioned for the next wave of growth.

The following is the question-and-answer session with analysts:

Bernstein analyst Stacy Rasgon: I'd like to dig into Blackwell. You said it is in production. Does that mean the product is past the sampling stage? If so, how does that affect shipment and delivery timing? And what does it mean for customers when Blackwell actually reaches them?

Jensen Huang: We will be shipping. In fact, we have been in production for some time. Production shipments will start in the second quarter and ramp in the third quarter, and customers should have data centers stood up in the fourth quarter.

Rasgon: So will Blackwell generate revenue this year?

Jensen Huang: Yes. We will see a lot of Blackwell revenue this year.

UBS analyst Timothy Arcuri: I'd like to compare how Blackwell deployments differ from Hopper, especially given the system-level nature of the product and the huge demand for GB200. How is deploying it different from Hopper? I ask because large-scale liquid cooling hasn't been done before, and there are engineering challenges both at the node level and within the data center. Will these complexities extend the transition period? How do you view the process?

Jensen Huang: Yes. Blackwell comes in many configurations. Blackwell is a platform, not just a GPU. The platform supports air cooling and liquid cooling, x86 and Grace, InfiniBand, and now Spectrum-X, as well as the very large NVLink domain I demonstrated at GTC. Some customers will transition gradually within their existing data centers where Hopper is already installed: they can easily move from H100 to H200 to B100. The Blackwell systems are designed to be backward compatible, both electrically and mechanically.

Of course, the software stack that runs on Hopper will run just as well on Blackwell. We have also been priming the entire ecosystem to get it ready for liquid cooling. We have had long, in-depth discussions with the companies in the Blackwell ecosystem, including cloud service providers, data centers, ODMs, system makers, our supply chain, the cooling-technology supply chain, and the data center supply chain. They will not be surprised by the arrival of Blackwell or by the capabilities we intend to deliver with Grace Blackwell 200.

Bank of America Securities analyst Vivek Arya: Thanks for taking my question, Jensen. How do you make sure your products stay highly utilized, and that tight supply, competition, or other factors don't lead to pull-ins of purchases or hoarding? What mechanisms in your system give us confidence that monetization is keeping pace with your very strong shipment growth?

Jensen Huang: That's a very important point, and I'll answer it directly. Demand for GPUs in data centers worldwide is staggering, and we are racing every day to keep up. The reason is that applications like ChatGPT and GPT-4 are moving to multimodal processing, and the ongoing work at Gemini, Anthropic, and all the cloud service providers (CSPs) is consuming every available GPU on the market. On top of that, there are some 15,000 to 20,000 generative AI startups spanning multimedia, digital characters, and all kinds of design tools and productivity applications, including digital biology and video training for autonomous driving. They are scaling aggressively, and their demand for GPUs keeps growing. We really are racing against time. Customers are putting enormous pressure on us; they are eager for us to deliver and stand up their systems as soon as possible.

Beyond that, there is sovereign AI, where countries want to train regional models on their own national data. The pressure to deploy those systems is also intense. So demand today is very high, far beyond our ability to supply.

In the long run, we are fundamentally changing how computers work. This is a major platform shift. It is often compared with other platform shifts in history, but time will show that this one is far more profound, because the modern computer is no longer driven only by instructions; it has become a machine that understands the user's intent. It understands not only how we interact with it but also what we mean and what we need. It can reason iteratively, and it can devise and carry out solutions. So every aspect of the computer is changing, from simple information retrieval to generating context-aware, intelligent answers. This will transform computing architectures worldwide; even the PC platform will be revolutionized. And this is just the beginning. We will keep exploring in the lab and working with startups, large enterprises, and developers around the world to drive this change. Its impact will be extraordinary.

Morgan Stanley analyst Joseph Moore: I understand how strong the demand you just described is; demand for the H200 and Blackwell is enormous. So how do you expect the market to behave as you transition away from Hopper and the H100? Will people wait for these new products, given their outstanding performance, or do you think H100 demand is strong enough to sustain growth?

Jensen Huang: We saw Hopper demand grow throughout the quarter, and we expect demand to outstrip supply for some time as we transition to the H200 and to Blackwell. Everyone is eager to get their infrastructure online, because the sooner it is up, the sooner they save money and make money.

Goldman Sachs analyst Toshiya Hari: I want to ask about competition. Many of your cloud customers have announced new or updated internal chip programs in parallel with their work with you. To what extent do you see them as competitors in the medium to long term? Do you think they will mainly address internal workloads, or could their role be broader?

Jensen Huang: We are different in several ways. First, NVIDIA's accelerated computing architecture lets customers process every step of their pipeline, from unstructured data processing in preparation for training, to structured data processing and SQL-like data-frame processing, to training and inference. As I mentioned earlier, inference has fundamentally changed; it is now generative. It doesn't just recognize the cat, which was hard enough in itself; it has to generate every pixel of a cat. So the generation process is a fundamentally new processing architecture, and that is one reason TensorRT-LLM has been so well received: on the same chip, our architecture tripled performance. That speaks to the depth of our architecture and our software. So from computer vision to image processing, from computer graphics to every form of computing, you can use NVIDIA's technology.

As the world faces rising computing costs and energy inflation, general-purpose computing has hit a wall, and accelerated computing is the truly sustainable way forward. Accelerated computing is how you save on computing cost and energy. The versatility of our platform therefore delivers the lowest total cost of ownership (TCO) for our customers' data centers.

Second, we are in every cloud. For developers looking for a platform to build on, NVIDIA is always a great choice. We are on premises, we are in the cloud, we are in computers of every size and shape; we are practically everywhere. That is our second advantage.

The third advantage has to do with the fact that we build AI factories. It is becoming better understood that AI is not just about a chip. Of course it starts with excellent chips, and we build a lot of chips for our AI factories, but AI is a systems problem. In fact, AI today is not just one large language model but a complex system of many large language models working together. NVIDIA builds this whole system, which lets us optimize all of our chips to work together as a system, run software that operates as a system, and optimize across the entire system.

To put it in simple numbers: if you have a $5 billion infrastructure and you triple its performance, which we routinely do, its value increases by $10 billion. The cost of all the chips is small compared with that gain, so the value is enormous. That is why performance matters so much today. In this era, the highest performance also means the lowest cost, because the infrastructure that carries all these chips is expensive: it takes a great deal of money to build and operate a data center, including people, power, real estate, and everything that goes with it. The highest performance therefore also delivers the lowest total cost of ownership (TCO).
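The performance-to-value arithmetic above can be sketched as a few lines, assuming effective value simply scales with delivered throughput (the $5 billion and 3x figures are from the passage; the scaling assumption is an illustration, not NVIDIA's pricing model):

```python
# Illustrative sketch: an accelerated infrastructure that delivers 3x the
# throughput of a $5B baseline does the work of a $15B one, a $10B gain
# in effective value. Effective value is assumed proportional to throughput.

capex = 5e9      # installed infrastructure cost in USD (from the passage)
speedup = 3.0    # performance multiple from acceleration (from the passage)

effective_value = capex * speedup   # value of the work it can now deliver
gain = effective_value - capex      # increase over the original investment

print(f"effective value: ${effective_value / 1e9:.0f}B, gain: ${gain / 1e9:.0f}B")
# → effective value: $15B, gain: $10B
```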

TD Cowen analyst Matt Ramsay: I have spent my whole career in the data center industry, and I have never seen a new platform launched as fast as NVIDIA's, or with such leaps in performance: 5x in training and 30x in inference. It is a remarkable achievement, but it raises an interesting tension: the previous-generation products your customers spent billions of dollars on may look uncompetitive next to the new ones, depreciating far faster than expected. How do you think about that? As you migrate to a new generation such as Blackwell, you will have a very large installed base. Software compatibility is clearly not an issue, but a great deal of installed product will perform far below the new generation. I'm curious what changes you have observed through this process.

Huang Renxun: Thank you very much for your question. I'm glad to share my views. I want to emphasize three points.

First, it feels very different depending on whether you are at the start of a build-out (5%) or near the end (95%). We are only about 5% in, so you build as fast as you can. When Blackwell arrives it will be a huge leap, and after that we will keep launching new Blackwell-generation products on an annual cadence. We want customers to see our roadmap clearly. They are early in their build-outs, so they have to keep going. A great many new chips are coming; they should keep building and average their way up the performance curve as the improved products arrive. That is the smart approach. They need to make money and save money right away, and time really matters to them.

Let me give you an example of why time is so valuable, why standing up a data center quickly and shortening training time is so critical. The next company to reach the next plateau gets to announce a groundbreaking AI, while the one after it announces something only 0.3% better. So the question is: do you want to be the company delivering breakthroughs, or the one that is marginally ahead? That is why leadership matters in every technology race, and you can see many companies competing in this one. Technology leadership matters enormously; companies need to believe in it and be willing to build on your platform for the long term, knowing the platform will keep getting better. So leadership matters, and training time matters: finishing training three months earlier means starting everything three months earlier. All of that is vital.

That is why we are deploying Hopper systems so urgently, because the next plateau is coming. Your first comment was a good one, and it gets at why we can move so fast: we have the entire stack. We literally build the entire data center, and we can monitor, measure, and optimize everything in it. We know where the bottlenecks are. We are not guessing, and we are not just putting up nice-looking slides. We do want our slides to look good, but what we deliver are systems that run at scale, and we know how they behave at scale because we build them here. The almost miraculous thing we do is build the entire AI infrastructure in-house, then disaggregate it and integrate it into customers' data centers however they choose to buy it. But we know how it will perform, where the bottlenecks are, where we need to work with them to optimize, and where we need to help them improve their infrastructure to reach peak performance. This deep understanding at the scale of the whole data center is fundamentally what sets us apart today. We build every chip from the ground up and know exactly how processing flows through the whole system, so we understand exactly how it will behave and how to get the most out of every generation.

So I am very grateful. These are the three points I want to share.

Evercore ISI analyst Mark Lipacis: You have argued that in every computing era, general-purpose ecosystems tend to dominate, because adapting to different workloads keeps utilization high when compute demand fluctuates. That seems to be your motivation for building the CUDA-based general-purpose GPU ecosystem for accelerated computing. Now, given that the main workloads driving demand are neural network training and inference, this looks on the surface like a narrow set of workloads, so some might argue that customized solutions are a better fit. The key question is: are general-purpose computing frameworks now facing a bigger challenge, or are they flexible enough and evolving fast enough to extend their historical advantage to these workloads?

Jensen Huang: NVIDIA's accelerated computing is versatile, but I wouldn't call it general purpose. For example, we would not be very good at running a spreadsheet. The control loop of operating-system code may be fine for general-purpose computing but is a poor fit for accelerated computing. So while I call our platform versatile, that does not mean it suits every scenario. We accelerate applications across many fields, and although those applications differ deeply, they share common traits: they are highly parallel and highly threaded, and often 5% of the code accounts for 99% of the running time. That is the signature of accelerated computing. The versatility of our platform and our whole-system design have let countless startups grow rapidly on our technology over the past decade. Those companies' architectures are fragile, yet when something new arrives, such as generative AI or diffusion models, our systems can still support them. Especially for large language models that require continuous dialogue and contextual understanding, Grace's memory capability is critical. So as AI advances, we emphasize designing not for a single model but systems that can serve the entire field. We follow the first principles of software: software keeps evolving and keeps getting better and more capable. We firmly believe these models will scale up perhaps a million-fold in the coming years, and our platform's versatility is key to that. If we were too specialized, we would just be making an FPGA or an ASIC, which falls far short of a full computing platform.
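The "5% of the code accounts for 99% of the running time" observation is exactly the setting of Amdahl's-law arithmetic: accelerating only the hot fraction captures most of the available gain. A minimal sketch, where the 100x accelerator factor is an illustrative assumption not taken from the passage:

```python
# Amdahl's-law sketch of accelerated computing: if a fraction of total
# runtime can be offloaded to an accelerator that speeds it up by `accel`,
# the overall speedup is bounded by the part that stays on the CPU.

def amdahl_speedup(hot_fraction: float, accel: float) -> float:
    """Overall speedup when `hot_fraction` of runtime is sped up by `accel`."""
    return 1.0 / ((1.0 - hot_fraction) + hot_fraction / accel)

# 99% of runtime in the accelerable hot path, accelerated 100x (assumed):
print(round(amdahl_speedup(0.99, 100.0), 1))  # → 50.3
```

Note how the residual 1% of serial time caps the gain near 50x even with a 100x accelerator, which is why the talk stresses optimizing the whole system, not just one chip.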

Jefferies analyst Blayne Curtis: I'm very interested in the H20, the product you launched specifically for the Chinese market. Given the current supply constraints, how are you balancing demand for it against supply of your other Hopper products? And can you elaborate on the outlook for the second half, including the possible impact on sales and gross margin?

Jensen Huang: I may not have fully caught your question about the H20 and the allocation of supply across the different Hopper products. What I will say is that we respect every customer and do our best to serve them. Our business in China is lower than it was, largely because of export restrictions on technology and intensified competition in the Chinese market, but rest assured we will continue to do our best for customers there. As for supply, my earlier comments apply to the market as a whole, in particular supply of the H200 and Blackwell toward the end of the year; demand for both is very strong.

Raymond James analyst Srini Pajjuri: I'd like to hear more about the GB200 systems you mentioned. Demand for them looks very strong. Historically, NVIDIA has sold a lot of HGX boards and GPUs, and the systems business has been relatively small. So why do you foresee such strong demand for full systems now? Is it purely total cost of ownership (TCO), or are there other factors, such as architectural advantages?

Jensen Huang: In fact, we sell the GB200 the same way we have always disaggregated our products: we break them into their sensible components and integrate them with computer makers. This year there will be 100 different Blackwell computer-system configurations coming to market, which is unprecedented; at its peak, Hopper had only half that, and it started with far fewer. You will see liquid-cooled versions, air-cooled versions, x86 versions, Grace versions, and so on, offered by our many partners. Nothing has fundamentally changed; the Blackwell platform has simply expanded our offering greatly. CPU integration and much denser compute, together with liquid cooling, will save data centers a great deal in power provisioning and improve energy efficiency, so it is a better solution. It is also more expansive: we supply more of the components of a data center, and everyone wins. Data centers get much higher-performance networking, from the network switches on down. And because we now have NICs and Ethernet, we can bring NVIDIA AI to large customers who only know how to operate Ethernet, because that is the ecosystem they have. So Blackwell is more expansive, we bring much more of the solution to customers, and this generation has far more products.

Truist Securities analyst William Stein: There are already capable data center CPUs on the market, yet your Arm-based Grace CPU offers real advantages that make the technology worth delivering to customers. Are those advantages about cost-effectiveness and power, or about the technical synergy between Grace and Hopper, or Grace and Blackwell? And could similar dynamics emerge on the client side? Even though Intel and AMD both offer excellent x86 products, might NVIDIA have unique advantages for emerging AI workloads that others will find hard to match?

Jensen Huang: You named some very good reasons. For many applications our partnership with our x86 partners is excellent, and we have built many outstanding systems together. But Grace lets us do things that today's system configurations cannot. The memory system between Grace and Hopper is coherent and tightly coupled; it hardly makes sense to think of them as two separate chips, because they behave like one superchip. The interface between them runs at terabytes per second, which is remarkable. Grace uses LPDDR, the first data-center-grade low-power memory, saving a lot of power on every node. And because we can now architect the entire system, we can build one with a very large NVLink domain, which is essential for inference on next-generation large language models.

So you see that GB200 has a 72-GPU NVLink domain, effectively connecting 72 Blackwells into one enormous GPU. That required coupling Grace and Blackwell tightly. There are architectural reasons, software-programming reasons, and system-level reasons why we had to build them, and if we see similar opportunities we will explore them. As you saw at Microsoft's event yesterday, CEO Satya Nadella announced the next-generation Copilot+ PC, which runs very well on our RTX GPUs shipping in laptops, and it also supports Arm beautifully. So it opens the door to system innovation, even on the PC.

Cantor Fitzgerald analyst C.J. Muse: This is more of a long-term question. I know Blackwell hasn't even launched yet, but investors are always forward-looking. As competition between GPUs and custom ASICs intensifies, how do you think about NVIDIA's pace of innovation over the next decade? Over the past decade you delivered remarkable technologies, from CUDA and new precision formats to Grace and your connectivity stack. What challenges will NVIDIA tackle in the next ten years, and, perhaps more importantly, what can you share with us today?

Jensen Huang: What I can proudly tell you about the future is that after Blackwell there will be another chip. We are on a one-year rhythm, so you can also expect new networking technology to come at a very fast pace. We recently launched Spectrum-X for Ethernet, but our Ethernet roadmap goes far beyond that; the potential there is enormous. We have a strong partner ecosystem: Dell, for example, announced it is bringing Spectrum-X to market, and our customers and partners will keep launching new products built on NVIDIA's AI-factory architecture. For companies that want the ultimate performance, we offer the InfiniBand computing fabric, a networking solution refined over many years, while Ethernet as the base network gains stronger computing capability through Spectrum-X.

We are fully committed to all three paths: the NVLink computing fabric for a single computing domain, the InfiniBand computing fabric, and the Ethernet networking computing fabric. We will push all three forward at an astonishing pace. You will soon see new switches, new NICs, new capabilities, and new software stacks running on them, along with a stream of new CPUs, GPUs, NICs, and switches.

Most exciting of all, every one of these products will run CUDA and will be compatible with our entire software stack. So if you invest in our software stack today, you never have to worry about it becoming obsolete; it will only keep getting faster and more capable. And if you adopt our architecture today, as it spreads into more clouds and more data centers, your work will keep running seamlessly.

I believe NVIDIA's pace of innovation will keep raising capability and lowering total cost of ownership (TCO). We are confident that with NVIDIA's architecture we can lead this new era of computing and kick off this new industrial revolution. We are no longer just producing software; we are manufacturing AI tokens at scale.


The international community tries to prevent Israel and Lebanon from starting an all-out war

Global Times International
2024-09-21 06:37:21
 130km long endurance! The first launch of Xiaoniu FX series electric vehicles: from 5499 yuan

130km long endurance! The first launch of Xiaoniu FX series electric vehicles: from 5499 yuan

Fast technology
2024-09-20 21:07:09
 Liu Zhenyun, a talent from Peking University

Liu Zhenyun, a talent from Peking University

The wind blows the heart
2024-09-19 15:10:02
 In September, the Shanghai Auction results were announced, with an average transaction price of 93255 yuan

In September, the Shanghai Auction results were announced, with an average transaction price of 93255 yuan

Interface News
2024-09-21 12:00:16
2024-09-21 15:00:49

Highlights of science and technology

Huawei's new machine costs 20000 yuan per second, and Apple's top machine is hard to find

Headlines

Cui Yongxi, a Guangdong player, has officially landed in the NBA. His father was once famous for dunking at the age of 48

Headlines

Cui Yongxi, a Guangdong player, has officially landed in the NBA. His father was once famous for dunking at the age of 48

Sports News

Cui Yongxi Landing in NBA! Nets signed a two-way contract with Chinese forward

Entertainment Highlights

Huang Xuanguan announces his girlfriend and takes a public photo with her

financial news

Will you sell yourself to Qualcomm? Where Intel will go

Automobile News

The standard configuration of the whole series of Yi San Fang Denza Z9GT sells for 3348-414800 yuan

Original attitude

game
Parenting
mobile phone
education
Open class

The 3D model of "Black Myth" Princess Iron Fan has been extracted! Elegant and beautiful

News of parents and children

The father washed the baby's hair and poured water directly

Mobile News

HMD Skyline mobile phone goes on sale at JD International: 12G+256G, 2999 yuan

Highlights of education

Mathematics in Grade One: Difficulty in Learning Dominant Problems in Primary Schools

Open class

Ten Little Things That Change Your Life

Accessible browsing Enter the caring version
×