Graphics processing technology has evolved to deliver unique benefits in the world of computing. Designed for parallel processing, the GPU (graphics processing unit) is used in a wide range of applications, including graphics and video rendering. Although GPUs are best known for their capabilities in gaming, they are becoming more popular for use in creative production and AI. Over time, however, CPUs (central processing units) and the software libraries that run on them have also evolved to become much more capable for deep learning tasks. For example, through extensive software optimizations and the addition of dedicated AI hardware, such as Intel® Deep Learning Boost (Intel® DL Boost) in the latest Intel® Xeon® Scalable processors, CPU-based systems have seen significant improvements in deep learning performance.

CPUs shine in many applications, such as deep learning on high-definition images, 3D data, and non-image data like language, text, and time series. For complex models and demanding deep learning applications (e.g., 2D image detection), CPUs can also support much larger memory capacities than even the best GPUs can today. Besides the obvious cost advantage, there are other reasons to challenge the thinking that AI = GPU.
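Whether a given Xeon system exposes the DL Boost instructions can be checked from software. The following is a minimal sketch, assuming a Linux host, that looks for the AVX-512 VNNI flag (the instruction set behind Intel DL Boost) in /proc/cpuinfo; the function names are illustrative, not part of any Intel tool.

```python
# Minimal sketch (assumption: Linux, where CPU feature flags are listed
# in /proc/cpuinfo). "avx512_vnni" is the kernel's flag for the
# Vector Neural Network Instructions used by Intel DL Boost.

def has_dl_boost(cpuinfo_text: str) -> bool:
    """Return True if the AVX-512 VNNI feature flag appears in cpuinfo text."""
    return "avx512_vnni" in cpuinfo_text

def read_cpuinfo(path: str = "/proc/cpuinfo") -> str:
    """Read the kernel's CPU description (Linux only)."""
    with open(path) as f:
        return f.read()

# Usage: has_dl_boost(read_cpuinfo())
```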
AI is all about GPUs, or is it?
For example, CPU-based systems are generally simpler and more robust, which makes them a good fit for edge environments. GPUs have higher power consumption and cooling requirements, while CPUs are available in a variety of proven standard systems that are easy to deploy in the data center and at the edge. Additionally, Intel's commitment to developing on the open oneAPI standard helps ensure maximum code reuse across stacks and architectures, and tools like OpenVINO simplify deep learning inference deployment with hundreds of pre-trained models for CPU-based systems. The development of AI solutions on the AI Test Drive, such as the Fujitsu Sentiment Analyzer, shows how maintaining a common workflow reduces time and cost, allowing more experimentation and better accuracy.

This is how to scale AI everywhere: partnering with a broad, open software ecosystem. Today, CPUs let you build the AI you want, where you want it, on the x86 architecture you know. Let us test your solution together on the AI Test Drive and find out what fits your individual needs.
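To give a flavor of how simple CPU-targeted inference deployment can be, here is a minimal sketch using the OpenVINO Python API (2023+ releases); the model path and input batch are placeholders, not a specific pre-trained model from the catalog.

```python
# Minimal sketch of OpenVINO CPU inference (assumption: OpenVINO 2023+
# installed, and an IR model on disk; "model.xml" is a placeholder path).

def run_cpu_inference(model_xml, input_batch):
    """Compile an OpenVINO IR model for the CPU plugin and run one inference."""
    import openvino as ov  # deferred import so the sketch stays self-contained

    core = ov.Core()
    model = core.read_model(model_xml)           # load the IR (.xml + .bin)
    compiled = core.compile_model(model, "CPU")  # target the CPU device plugin
    return compiled(input_batch)                 # run inference, return outputs
```

The same code targets other devices by changing the device string, which is the kind of code reuse across architectures that the oneAPI approach aims for.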