GPU vs. FPGA
by Luke Miller on 06-09-2013 at 9:00 pm

I just don’t understand it. My kids love surprises, but I have yet to find management that does; go figure, but boy, during a review they can really spring them on ya! What surprises me is the absurdity of my title, GPU vs. FPGA. FPGAs are not GPUs and vice versa, but nonetheless there is a push to make a fit where nature does not allow. I liken the FPGA to the Ferrari and the GPU to Big Foot. Remember that? Going to the Aud in your town and watching that monster truck crush them cars? I never regained my hearing. Vinny Boombots is still saying the suit will close any day now.

The GPU back in the day was just a Graphics Processing Unit. Today, thanks to CUDA and OpenCL, it can be programmed as a massively parallel processor. Why is the word “parallel” always preceded by “massively” nowadays? Anyways, we see the benchmarks: take an NVIDIA Fermi and all its cores, unroll your 262k-point FFT, and get er done in 9 µs. Not really, we forgot the memory overhead, which is roughly another 60 ms. An old Virtex-5 does the same FFT in 2.6 ms. The FPGA used about 15 watts and the GPU roughly 130 watts. Not that I’m a green fella and all that, but for heavy DSP processing where perhaps these things have a SWaP requirement, the GPU is a tough cooling challenge and awfully hungry.
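
If you want to see that memory overhead for yourself, here is a minimal sketch (not the author’s benchmark; the exact numbers depend entirely on your card, driver, and PCIe setup) that times a 262,144-point complex FFT with cuFFT and separates the host-to-device copy, the FFT itself, and the copy back:

// Minimal sketch: time a 262k-point complex FFT on the GPU with cuFFT,
// splitting out the PCIe transfer cost from the kernel itself.
// Build with: nvcc fft_timing.cu -lcufft
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>
#include <cufft.h>

int main() {
    const int N = 262144;                               // 262k-point FFT, as in the article
    std::vector<cufftComplex> h_sig(N, cufftComplex{1.0f, 0.0f});

    cufftComplex* d_sig = nullptr;
    cudaMalloc(reinterpret_cast<void**>(&d_sig), N * sizeof(cufftComplex));

    cufftHandle plan;
    cufftPlan1d(&plan, N, CUFFT_C2C, 1);                // plan once, outside the timed region

    cudaEvent_t t0, t1, t2, t3;
    cudaEventCreate(&t0); cudaEventCreate(&t1);
    cudaEventCreate(&t2); cudaEventCreate(&t3);

    // Host-to-device copy: this is the "memory overhead" the 9 µs number leaves out.
    cudaEventRecord(t0);
    cudaMemcpy(d_sig, h_sig.data(), N * sizeof(cufftComplex), cudaMemcpyHostToDevice);
    cudaEventRecord(t1);

    // The FFT itself: typically a small fraction of the end-to-end time.
    cufftExecC2C(plan, d_sig, d_sig, CUFFT_FORWARD);
    cudaEventRecord(t2);

    // Device-to-host copy of the result.
    cudaMemcpy(h_sig.data(), d_sig, N * sizeof(cufftComplex), cudaMemcpyDeviceToHost);
    cudaEventRecord(t3);
    cudaEventSynchronize(t3);

    float ms_h2d = 0, ms_fft = 0, ms_d2h = 0;
    cudaEventElapsedTime(&ms_h2d, t0, t1);
    cudaEventElapsedTime(&ms_fft, t1, t2);
    cudaEventElapsedTime(&ms_d2h, t2, t3);
    printf("H2D copy: %.3f ms, FFT: %.3f ms, D2H copy: %.3f ms\n", ms_h2d, ms_fft, ms_d2h);

    cufftDestroy(plan);
    cudaFree(d_sig);
    cudaEventDestroy(t0); cudaEventDestroy(t1);
    cudaEventDestroy(t2); cudaEventDestroy(t3);
    return 0;
}

On a typical discrete card the two copies dominate the FFT kernel, which is exactly the gap between a headline kernel time and the end-to-end latency you actually live with.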

Who’s in control? That’s like telling someone to settle down when they are spun up. Try it; you’ll be in for some laughs. Unless you are using Intel’s Sandy Bridge (and I’m not sure you can run GPGPU code there, but let’s just say you could), other GPUs need to be controlled by a CPU. More watts and overhead. Now the one-die CPU/GPU is a great idea, but an FPGA performing DSP processing does not need a CPU per se to control it. Even if it did, the Xilinx Zynq is a nice fit, with dual ARMs to handle the out-of-band calculations right inside the same FPGA. What about I/O? The FPGA can support just about anything you can think of, and for real-time data processing we are not using TCP/IP or even UDP; think low latency here.

The takeaway is that they really are two different beasts, and it is complicated to force a fit where one does not belong. What has happened, though, is that thanks to CUDA the open community has allowed the littlest of nerds to play with GPU processing, and the community has come to this conclusion: it is cool. And it really is, but when you have requirements to meet and processing that needs low latency and the answer at the same time every time (deterministic), the FPGA will be your choice. Now perhaps you have a non-real-time system and need to play back lots and lots of data for hardware-in-the-loop acceleration; then the GPU may be your answer. My point is: get your head out of the buzzword bingo and sift through all the marketing propaganda. Make the right decision, design the best system for your customer, and make your stockholders happy… Have fun…
