In 2018, with the launch of RTX technology and GeForce RTX, the first consumer-grade GPUs built specifically for AI, NVIDIA accelerated the transition to AI computing.
Definition of AI PC: An AI PC is a computer equipped with dedicated AI-acceleration hardware. On RTX GPUs, these dedicated AI accelerators are called Tensor Cores. Tensor Cores dramatically accelerate AI performance in the most demanding work and entertainment applications, and bring PC users new capabilities that previously could only run in the cloud.
How to measure AI PC performance: One way to measure AI performance is in trillions of operations per second (TOPS). Much like an engine's horsepower rating, TOPS gives users a single figure for understanding a PC's AI performance.
Advantages of running AI applications locally on a PC: Running on a GeForce RTX PC is fast, and the user's data stays local. Users can process sensitive data on their own PC without sharing it with a third party or even connecting to the internet.
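As a rough illustration of the TOPS metric described above, it is just peak operations per second scaled down by 10^12. The accelerator figures below are hypothetical, not specifications from this article:

```python
def tops(ops_per_clock_per_core: float, cores: int, clock_hz: float) -> float:
    """Convert a peak-throughput estimate to TOPS (trillions of operations per second)."""
    return ops_per_clock_per_core * cores * clock_hz / 1e12

# Hypothetical accelerator: 512 ops per clock per core, 100 cores, 2.0 GHz clock
print(tops(512, 100, 2.0e9))  # 102.4 TOPS
```

Like a horsepower rating, this collapses many architectural details into one comparable number.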
-
World-class digital human technology
-
NVIDIA ACE microservices bring AI characters to life.
-
NVIDIA RTX - a collection of rendering technologies, such as RTX Global Illumination (RTXGI) and DLSS 3.5, that enable real-time path tracing in games and applications.
-
NVIDIA ACE brings lifelike NPCs to games
Covert Protocol is a new technology demo, jointly developed by Inworld AI and NVIDIA, that pushes the boundaries of character interaction in games. The Inworld AI engine integrates NVIDIA Riva, which provides accurate speech-to-text conversion, and NVIDIA Audio2Face, which delivers realistic facial animation.
The Inworld AI engine takes a multimodal approach to non-player characters (NPCs), integrating cognitive, perceptual, and behavioral systems, and presents stunning RTX-rendered characters in a meticulously crafted environment for immersive storytelling.
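The pipeline described above — speech recognition, then the character engine, then facial animation — can be sketched as a simple data flow. Every function here is an illustrative stub, not the actual Riva, Inworld, or Audio2Face API:

```python
from dataclasses import dataclass

@dataclass
class NpcResponse:
    text: str         # dialogue produced by the character engine
    visemes: list     # mouth-shape keys that would drive facial animation

def speech_to_text(audio: bytes) -> str:
    """Stand-in for an ASR service such as NVIDIA Riva."""
    return audio.decode("utf-8")  # pretend the audio is already transcribed

def character_engine(player_line: str, persona: str) -> str:
    """Stand-in for the Inworld character engine and its LLM."""
    return f"[{persona}] responds to: {player_line}"

def animate_face(dialogue: str) -> list:
    """Stand-in for Audio2Face: map speech to viseme keys."""
    return [word[:2] for word in dialogue.split()]

def npc_turn(audio: bytes, persona: str) -> NpcResponse:
    """One conversational turn: player audio in, NPC dialogue and animation out."""
    text = character_engine(speech_to_text(audio), persona)
    return NpcResponse(text=text, visemes=animate_face(text))

resp = npc_turn(b"Where is the safehouse?", "Bloom")
print(resp.text)
```

The point of the sketch is the staging: each service consumes the previous one's output, which is what lets the components be swapped or deployed as separate microservices.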
-
Ubisoft NEO NPCs use Inworld and NVIDIA ACE technology to explore the possibilities of digital humans in games
NEO NPCs, developed by a multidisciplinary team at Ubisoft's Paris studio, is the result of close collaboration between Ubisoft's creators and its leading generative AI technology partners, NVIDIA and Inworld AI. Inworld's character engine and LLM technology let Ubisoft's narrative team build a complete backstory, knowledge base, and conversational style for each NPC, while NVIDIA Audio2Face, part of the NVIDIA ACE technology suite, provides real-time facial animation.
-
Ubisoft demonstrated NEO NPC capabilities through three standalone technology demos. Each scenario focuses on a different aspect of NPC behavior: environment and context awareness; real-time reactions and animation; and ongoing dialogue, collaboration, and strategic decision-making. These experiments push the boundaries of game design and immersion.
-
Ubisoft's narrative team used Inworld AI technology to create two NEO NPCs, Bloom and Iron. Each has their own backstory, knowledge base, and distinct conversational style, establishing them as unique characters in the demo's universe.
-
Inworld's technology also gives the NEO NPCs knowledge of their surroundings and generates interactive responses through Inworld's LLM, while Audio2Face provides real-time facial expressions and lip sync for both NPCs.
-
DLSS 3.5 sets a new standard for games, enhancing ray tracing with AI
DLSS: Since 2019, more than 500 games and applications have used ray tracing, DLSS, and AI technologies to revolutionize the way people play and create.
At GDC, NVIDIA announced that two new games will support DLSS 3.5 Ray Reconstruction and full ray tracing, greatly improving image quality and performance to deliver the ultimate experience for GeForce players. The highly anticipated Black Myth: Wukong will be released on August 20. NARAKA: BLADEPOINT will add full ray tracing to three maps in its PvP and PvE modes, with support for more maps to follow.
Black Myth: Wukong drew extensive coverage after GDC: the hashtag #Black Myth: Wukong DLSS 3.5 trailer# ranked first in Bilibili search volume, and the trailer surpassed 300,000 views across all platforms within 90 minutes of its GDC debut.
-
Chat with RTX: listen more, see more, say more
-
Chat with RTX (ChatRTX for short) uses retrieval-augmented generation (RAG), NVIDIA TensorRT-LLM software, and NVIDIA RTX acceleration to bring local generative AI capabilities to RTX-powered Windows systems. Users can quickly and easily connect local files as a dataset to an open large language model (such as Mistral or Llama 2) and query it for fast, contextually relevant answers.
Beyond text, Chat with RTX will soon add support for voice, images, and new models. In addition to Google's Gemma, Chat with RTX will also support ChatGLM in a future update.
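Retrieval-augmented generation, as used by Chat with RTX, can be sketched in a few lines: score local documents against the query, then prepend the best matches to the prompt before it reaches the LLM. The toy keyword scorer below stands in for a real embedding-based retriever, and the sample notes are invented for illustration:

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Prepend retrieved local context so the model answers from the user's own files."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

notes = [
    "The quarterly report is due on Friday.",
    "Team lunch is scheduled for Tuesday.",
    "The report covers Q3 revenue and expenses.",
]
print(build_prompt("when is the report due", notes))
```

Because retrieval and generation both happen on the local machine, the user's files never leave the PC — the privacy property the article highlights.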
-
China's leading video-editing software, Scissor Image, takes the lead in using the RTX AI platform to accelerate product features
-
NVIDIA is working with the professional edition of Scissor Image, a popular video-editing application in China, to explore and advance generative AI on the PC. NVIDIA has already helped launch Scissor Image's AI WordArt feature with TensorRT, and is using the RTX AI platform to accelerate the application of generative AI across Scissor Image's product features.
-
A product manager at Scissor Image said: "Scissor Image and NVIDIA have long been close partners, and the professional edition of Scissor Image achieves higher performance with RTX GPU acceleration. We expect the powerful performance of RTX AI PCs, and their ability to accelerate and optimize AI model processing, to help Scissor Image users create more efficiently and intelligently."
-
RTX AI tools for developers: AI solutions that drive innovation
NIM microservices -- prebuilt AI "containers" that use industry-standard APIs and include the necessary software components, streamlining the overall experience and helping cut deployment time from weeks to minutes.
NVIDIA AI Workbench -- now generally available, it helps developers quickly create, test, and customize pretrained generative AI models and LLMs, taking advantage of PC-class performance and video memory footprint.
TensorRT acceleration -- helps developers take full advantage of the Tensor Cores in RTX GPUs through accelerated libraries. TensorRT acceleration is now available for text-based applications through TensorRT-LLM for Windows, and developers can also access a TensorRT-LLM wrapper for the OpenAI Chat API.
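Because the TensorRT-LLM wrapper exposes the OpenAI Chat API shape, a client only needs to build the standard chat-completions payload. The model name and endpoint address below are placeholders for illustration, not documented values:

```python
import json

def chat_request(model: str, user_message: str,
                 system: str = "You are a helpful assistant.") -> str:
    """Build an OpenAI-style /v1/chat/completions request body as JSON."""
    body = {
        "model": model,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.7,
    }
    return json.dumps(body)

# This body would be POSTed to a local OpenAI-compatible endpoint,
# e.g. http://localhost:8000/v1/chat/completions (placeholder address).
payload = chat_request("local-llama2", "Summarize my meeting notes.")
print(payload)
```

Keeping to the OpenAI request shape is what lets existing tooling switch between a cloud model and a local TensorRT-LLM backend by changing only the endpoint URL.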