GPUs or CPUs? It’s an important IT performance question today. Do you know the right answer? Before you can choose the right processing unit (or units), you have to understand what each one does and how they differ, which is where we will begin.
Our Logical Friend, the CPU
A Central Processing Unit (CPU) is a general-purpose processor that does all the logical work of computation. In principle, it can perform any computation, though not necessarily in an optimal fashion. CPUs generally run at a higher clock frequency than Graphics Processing Units (GPUs) and are designed for general-purpose use. Built around a few cores optimized for sequential processing, CPUs excel at serial tasks, branching operations and file operations. They have traditionally been easier to program and more versatile for everyday work, such as opening a file or calculating a sum.
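To make that concrete, here is a minimal, purely illustrative sketch of the kind of branchy, sequential work a CPU handles well: open a file, make a decision on every line, and keep a running sum (the file name and contents are made up for the example).

```python
# Illustrative only: branchy, serial CPU-style work -- open a file,
# decide line by line, accumulate a running total.
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "values.txt")  # throwaway example file
with open(path, "w") as f:
    f.write("3\nskip\n4\n")

total = 0
with open(path) as f:
    for line in f:
        line = line.strip()
        if line.isdigit():      # branching: a decision per line
            total += int(line)  # serial: each step depends on the last

print(total)  # 7
```

Each iteration depends on the previous total, so there is no parallelism to exploit; this is exactly the shape of work where a CPU's few fast cores shine.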
Our Special Purpose Friend, the GPU
Next we have the GPU, a special-purpose processor optimized for the calculations commonly required in computer graphics, particularly Single Instruction, Multiple Data (SIMD) operations. GPUs have a massively parallel architecture consisting of thousands of smaller, more efficient cores designed to handle many tasks simultaneously. This allows each core to run at a lower frequency with a smaller number of registers. GPUs are paired with lots of memory and generally offer high memory bandwidth to keep those thousands of small cores fed. They are purpose-built to solve floating-point math problems. Parallel floating-point units and the instruction pipeline that feeds these specialized cores are why GPUs are extremely good at parallel operations.
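The SIMD idea can be sketched in a few lines of plain Python. This is not GPU code, just a conceptual illustration: the same instruction (a multiply-add) is applied to every element of the data, which is exactly the pattern a GPU executes across thousands of cores at once. The function names and values are our own for the example.

```python
# Conceptual sketch of SIMD: one instruction, many data elements.
# Not a real GPU API -- plain Python used only to show the pattern.

def scalar_saxpy(a, xs, ys):
    """CPU-style serial loop: one element at a time."""
    result = []
    for x, y in zip(xs, ys):
        result.append(a * x + y)
    return result

def simd_style_saxpy(a, xs, ys):
    """SIMD-style view: every 'lane' conceptually runs the same
    instruction (a*x + y) on its own element at the same time."""
    return [a * x + y for x, y in zip(xs, ys)]

xs = [1.0, 2.0, 3.0, 4.0]
ys = [10.0, 20.0, 30.0, 40.0]
print(scalar_saxpy(2.0, xs, ys))      # [12.0, 24.0, 36.0, 48.0]
print(simd_style_saxpy(2.0, xs, ys))  # same answer, parallel in concept
```

Because no element depends on any other, the whole array can be processed simultaneously; that independence is what makes a problem a good fit for a GPU.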
Which Offers Optimal Performance?
Now that we understand what these two processing units are and how they differ, we come to the purpose of this blog: Does the CPU or the GPU provide better performance? Is there a business case for using one and not the other? The fact is, both offer advantages for different problems, and neither is a clear winner in all use cases.
GPUs historically have very high “arithmetic intensity” thanks to architectural specializations originally designed to speed up the texture operations essential to advanced 3D graphics. The general-purpose GPU computing movement has exploited this high arithmetic intensity as another resource for improving computational performance. Those same specializations, as I mentioned before, enhance certain forms of computation while reducing GPUs’ ability to do general-purpose work. The specializations that boost a GPU’s arithmetic intensity make it great at classes of problems that have a very regular memory layout or can be expressed naturally as streams. Other classes of problems, however, are not as amenable to this special class of hardware. All of which is to say: GPUs offer a lot, but you still need both special-purpose and general-purpose hardware to deliver performance across a broader class of problems. Why? Because raw speed isn’t everything.
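“Arithmetic intensity” has a simple working definition: floating-point operations performed per byte of memory traffic. A quick back-of-the-envelope calculation shows why it matters; the operation counts below are standard textbook figures for a multiply-add over 64-bit floats, not measured hardware numbers.

```python
# Back-of-the-envelope arithmetic intensity: FLOPs per byte moved.
# Figures are illustrative textbook counts, not hardware measurements.

def arithmetic_intensity(flops, bytes_moved):
    """Floating-point operations per byte of memory traffic."""
    return flops / bytes_moved

# Example: y = a*x + y over 64-bit floats does 2 FLOPs per element
# (one multiply, one add) while reading x and y and writing y,
# i.e. 3 * 8 = 24 bytes of traffic per element.
n = 1_000_000
flops = 2 * n
bytes_moved = 3 * 8 * n
print(arithmetic_intensity(flops, bytes_moved))  # ~0.083 FLOPs per byte
```

An intensity this low means memory bandwidth, not raw math throughput, sets the speed limit, which is one reason GPUs ship with such high-bandwidth memory.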
When GPUs Are Slower than CPUs
First of all, having some computations run 20 times faster can be misleading. Just because parts of an analysis can be sped up doesn’t mean the entire analysis will be faster. In fact, the entire analysis could even be slower than using a CPU alone if the CPU can compute the other parts of the analysis faster.
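This is the classic Amdahl’s law effect, and it is easy to put numbers on. The sketch below (with illustrative values, not benchmarks from any real workload) shows how little a 20x speedup on part of a job helps the whole job.

```python
# Amdahl's law: overall speedup when only a fraction p of the runtime
# is accelerated by a factor s. Values below are illustrative.

def amdahl_speedup(p, s):
    """Overall speedup if fraction p of the work is sped up s-fold."""
    return 1.0 / ((1.0 - p) + p / s)

# If the GPU makes 50% of an analysis 20x faster, the whole analysis
# is only about 1.9x faster -- the unaccelerated half now dominates.
print(round(amdahl_speedup(0.5, 20), 2))  # 1.9
```

Even an infinitely fast GPU could never push that workload past 2x overall, which is exactly the trap the paragraph above describes.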
Additionally, it would take a significant amount of software development to run even fairly common code on a GPU. Some code requires modification, while other code may not be able to run on the GPU at all, and many software vendors aren’t convinced the effort will deliver a return on investment. Simply assigning all parallel code to the GPU and all serial code to the CPU is also a rather simplistic split given the nature of the environments we have today. These environments will only become more complicated while providing more choices for optimizing performance, so the real opportunity is to pursue gains in both parallel and serial computing for greater performance across the entire system. For now, the optimal arrangement is to have the GPU doing graphics (and other highly parallel) calculations while the CPU handles non-graphical calculations at the same time. Parallel processing has become ingrained at the computing level, and overlooking it would mean leaving performance gains on the table.
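That “both at the same time” model can be sketched in a toy form. Here Python’s standard `concurrent.futures` stands in for a real GPU offload API (which it is not): the data-parallel task is submitted to run in the background while the serial task proceeds on the main thread, and both results come back together.

```python
# Toy sketch of the combined model: "GPU-style" parallel work runs
# at the same time as serial CPU work. ThreadPoolExecutor stands in
# for a real GPU offload API here -- this is purely illustrative.
from concurrent.futures import ThreadPoolExecutor

def parallel_task(data):
    """Stands in for work offloaded to the GPU (data-parallel math)."""
    return sum(x * x for x in data)

def serial_task(n):
    """Stands in for branchy, sequential work kept on the CPU."""
    total = 0
    for i in range(n):
        total += i
    return total

with ThreadPoolExecutor() as pool:
    offloaded = pool.submit(parallel_task, range(1000))  # "GPU" work
    cpu_result = serial_task(1000)                       # CPU work overlaps
    gpu_result = offloaded.result()                      # join the results

print(cpu_result, gpu_result)  # 499500 332833500
```

The point is the shape of the program, not the numbers: each processor does the kind of work it is built for, concurrently, and the system combines the results.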
A Combined Force
The best performance today comes from a combined force. While GPUs have a lot to offer, you still need the traditional CPU after all. CPUs are where the vast majority of engineering and office software runs and where the primary software development skill set resides. The CPU also offers all-around performance that is at least good enough to keep it essential for the foreseeable future. The way forward (for now, in a fast-changing tech landscape) is to leverage both and watch for further convergence or advances in these two bedrocks of technology.
To learn more about IT performance, contact IDS.