That's Not What APUs Are For!

On Episode 41 of Hardware Addicts, the crew described APUs as a CPU with really well-integrated graphics, and the description stopped about there. I'd say that stops short of what an APU is actually supposed to be.

Yes, as a shallow description, an APU is a CPU/GPU combo that shares the same die and bus, but the reason for that is where the name comes from: Accelerated Processing Unit. The point of combining the GPU and CPU on the same bus and integrating them into the same die is to facilitate heterogeneous computing. The idea behind an APU is that vectorized, parallel work can be offloaded to the GPU with roughly the same latency as handing computation to another core. In ordinary GPGPU computation, data and instructions have to cross the PCIe bus before the GPU can touch them, and the results have to cross it again to get back to the CPU and out to I/O.

AMD was pushing GPGPU hard when it bought ATI, as an attempt to get around the fact that it was falling behind Intel; AMD was probably already aware at the time that Family 15h (Bulldozer) was not going to hit performance expectations. I remember being excited about this and started teaching myself OpenCL to prepare for that future. Unfortunately, Nvidia did a pretty good job at the time of selling CUDA to academics as much easier to use than OpenCL, successfully locking OpenCL out of much of the industry. This despite the fact that CUDA ties you to a single vendor, isn't actually easier for anything beyond the most basic of programs, and most developers use intermediate languages anyway.

Despite all the above, there are still plenty of situations where APUs really shine. Over the last decade or so, GPU computation has quietly carved out a few niches in things like GLSL compute shaders. While it hurts to think about what could have been compared to what we see today, an APU's strength is still what it always was: GPU compute. I just wish more people understood that.
