Debunking Hype
I am usually very excited to read about new graphical techniques and new hardware approaches, even if they are presently impractical for real use. I was very interested in Intel's Larrabee project, even though I didn't expect to see usable results for years.
However, sometimes articles get published which are nothing but snake oil to raise stock prices. The uninformed reader doesn't know the difference, and these articles are usually written in such a way that they sound authoritative and knowledgeable. It's unfair to consumers, it's unfair to stockholders, and it hurts the industry, because customers become unable to differentiate between legitimate claims and marketing nonsense. This one is so over-the-top, I have to say something.
In an attempt to stay relevant in real-time graphics, Intel, the company that single-handedly destroyed the PC gaming market with their integrated graphics chips, is touting anti-aliasing on the CPU.
There's a nice explanation with diagrams that make this sound like an exciting new technique Intel engineers came up with. The algorithm looks for edges and attempts to smooth them out:
It's so advanced that I wrote this exact same algorithm back in 2007, just for fun. Below are my images from it, followed by a rough sketch of the technique.
Original:
Processed:
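To be concrete about how simple the idea is: you scan the finished frame for high-contrast edges and blend across them. Here is a minimal sketch of that kind of filter. It is not Intel's implementation or my 2007 code; the names (Image, smoothEdges) and the 0.1 threshold are placeholders I picked for illustration.

```cpp
// Minimal CPU post-process anti-aliasing sketch: find strong luminance edges
// in the finished frame and blend across them. Illustration only.
#include <cstdint>
#include <cmath>
#include <vector>

struct Image {
    int width = 0, height = 0;
    std::vector<uint8_t> rgb; // width * height * 3, row-major
};

static float luma(const Image& img, int x, int y) {
    const uint8_t* p = &img.rgb[(y * img.width + x) * 3];
    return (0.299f * p[0] + 0.587f * p[1] + 0.114f * p[2]) / 255.0f;
}

// Blend each pixel that sits on a strong luminance edge with its
// four neighbors; everything else is copied through untouched.
Image smoothEdges(const Image& src, float threshold = 0.1f) {
    Image dst = src;
    for (int y = 1; y < src.height - 1; ++y) {
        for (int x = 1; x < src.width - 1; ++x) {
            float dx = std::fabs(luma(src, x + 1, y) - luma(src, x - 1, y));
            float dy = std::fabs(luma(src, x, y + 1) - luma(src, x, y - 1));
            if (dx < threshold && dy < threshold) continue; // not an edge
            for (int c = 0; c < 3; ++c) {
                int sum = src.rgb[(y * src.width + x) * 3 + c] * 2
                        + src.rgb[(y * src.width + (x - 1)) * 3 + c]
                        + src.rgb[(y * src.width + (x + 1)) * 3 + c]
                        + src.rgb[((y - 1) * src.width + x) * 3 + c]
                        + src.rgb[((y + 1) * src.width + x) * 3 + c];
                dst.rgb[(y * src.width + x) * 3 + c] = uint8_t(sum / 6);
            }
        }
    }
    return dst;
}
```

Real morphological AA implementations classify the shapes of the edges they find and blend with computed coverage weights instead of a fixed box blur, but the principle is exactly this: a post-process filter over the final image, which is precisely the kind of work a GPU fragment shader eats for breakfast.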
The reason this is dishonest is that you would never do this in real time on the CPU. It may be possible, but you can always perform anti-aliasing on the GPU an order of magnitude faster, whether the device is a PC, a netbook, or a cell phone. I don't think Sebastian Anthony has any clue what he is writing about, nor should he be expected to, since he isn't a graphics programmer.
Furthermore, swapping images between the GPU and the CPU requires the CPU to wait for the GPU to "catch up" to the current instructions. You can see they completely gloss over this important aspect of the graphics pipeline:
This means that a pipeline can be created where the graphics hardware churns out standard frames, and the CPU handles post-processing. Post-processed frames are handed back to the GPU for any finishing touches (like overlaying the UI), and then they’re sent to the display.
Normally, graphics are a one-way street from the CPU, to the GPU, to the monitor. The CPU throws instructions at the GPU and says "get this done ASAP". The GPU renders as fast as it can, but there is a delay of a few milliseconds between when the CPU says to do something and when the GPU actually does it. Sending data back to the CPU forces the CPU to wait and sync with what the GPU is doing, causing a delay significant enough that you NEVER do this in a real-time renderer. This is why occlusion queries have a short delay when used to hide occluded objects; the CPU doesn't get the results of the query until a few frames later. If I made the CPU wait for the results before proceeding, the savings from hiding occluded geometry would be completely negligible compared to the enormous slowdown you would experience!
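To make the occlusion query point concrete, here is a sketch of the usual non-blocking pattern, assuming OpenGL. The GL calls are real; drawBoundingBox, drawMesh, and the surrounding structure are placeholders I made up for illustration. The result is only read once GL_QUERY_RESULT_AVAILABLE says the GPU has finished, which is typically a frame or more later. Reading a whole rendered frame back for CPU post-processing has the same problem, except the stall is far worse, because the CPU has to wait for the entire frame to be rendered and transferred before it can even start working.

```cpp
// Sketch of why you never block on GPU results in a real-time renderer,
// using a standard OpenGL occlusion query as the example.
#include <GL/glew.h>

void drawBoundingBox(); // placeholders, provided elsewhere
void drawMesh();

GLuint query = 0;
bool queryInFlight = false;
bool objectVisible = true; // assume visible until the GPU says otherwise

void drawObjectWithOcclusionQuery() {
    if (query == 0) glGenQueries(1, &query);

    // Collect an earlier frame's result, but only if it is already available.
    // Asking for GL_QUERY_RESULT before the GPU has finished would force the
    // CPU to sit idle until the GPU catches up.
    if (queryInFlight) {
        GLint available = 0;
        glGetQueryObjectiv(query, GL_QUERY_RESULT_AVAILABLE, &available);
        if (available) {
            GLuint samples = 0;
            glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samples);
            objectVisible = (samples > 0);
            queryInFlight = false;
        }
    }

    // Issue a new query around a cheap bounding-box draw.
    if (!queryInFlight) {
        glBeginQuery(GL_SAMPLES_PASSED, query);
        drawBoundingBox();
        glEndQuery(GL_SAMPLES_PASSED);
        queryInFlight = true;
    }

    // The real mesh is only drawn if an earlier query said it was visible.
    if (objectVisible) drawMesh();
}
```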
What Intel is suggesting would be like if you went to the post office to mail a letter, and you weren't allowed to leave the building until the person you were sending it to received your letter and wrote back. They're making these claims with full knowledge of how ridiculous they are, and counting on the public's ignorance to let it slide by unchallenged.
So no, Sebastian, this is not going to "take a little wind out of AMD’s heterogeneous computing sails". Please check with me next time before you reprint Intel's claims about anything related to graphics. If any Intel executives would like to discuss this with me over lunch (your treat) so I can explain how to turn the graphics division of your company around, I live near your main headquarters.