
  • Embedded FPGAs create new IP category

    FPGAs are the new superstar in the world of Machine Learning and Cloud Computing, and with new methods of implementing them in SOCs there will be even more growth ahead. FPGAs started out as a cost-effective way to implement logic without having to spin an ASIC or gate array. With the advent of the web and high-performance networking hardware, discrete FPGAs evolved into heavy-duty workhorses. The market has also matured and shaken out, leaving two large gorillas and a number of smaller players. However, the growth of AI and the quest for dramatically improved cloud server hardware seem to be expanding the role of FPGAs.

    [Image: Achronix Speedster 22i]

    At DAC in Austin I came across Achronix, a relatively new FPGA company that is experiencing a renaissance. I stopped by to speak with Steve Mensor, their VP of Marketing. There was reason enough to talk with him: their recent announcement that their YTD revenues for 2017 are already over $100M. This is largely the result of solid growth in their Speedster 22i line of FPGA chips. Achronix originally implemented this line at the debut of Intel's Custom Foundry on the then state-of-the-art 22nm FinFET node, which gave them the distinction of being Intel Custom Foundry's first customer.

    [Image: Speedcore eFPGAs in SOCs]

    Building on this, Steve was eager to talk about their game-changing IP offering of embedded FPGA cores, aptly named Speedcore eFPGA. These are offered as fully customized embedded FPGA cores that can be integrated right into system-level SOCs. To understand why this is important, we have to look at a recent Microsoft research project called Catapult, whose goal was to significantly boost search engine performance. Microsoft discovered that there was a big advantage in converting a subset of the search engine software into hardware optimized for that specific compute operation. The advantage is amplified when these compute tasks can be made massively parallel, which is exactly the kind of thing FPGAs are good at. Microsoft also studied the same approach for cloud computing with Azure and found performance benefits there too.
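    To make the parallelism concrete, here is a minimal Python sketch, not Microsoft's actual Catapult implementation, of the kind of per-document scoring kernel involved. Because every candidate document is scored independently, an FPGA can unroll the loop into many dedicated hardware pipelines working side by side.

```python
import numpy as np

# Minimal sketch (not Microsoft's Catapult code) of a data-parallel
# search-ranking kernel: every candidate document is scored independently,
# which is why the work maps naturally onto parallel FPGA compute units.

def score_documents(features: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Score a batch of candidate documents for one query.

    features: (num_docs, num_features) matrix of per-document features
    weights:  (num_features,) ranking-model weights
    Each row is scored independently of all the others.
    """
    return features @ weights

# Example: rank 1,000 candidates described by 64 features each
rng = np.random.default_rng(0)
features = rng.integers(0, 16, size=(1000, 64)).astype(np.float64)
weights = rng.random(64)
scores = score_documents(features, weights)
top10 = np.argsort(scores)[-10:][::-1]  # indices of the best-scoring documents
```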

    The next market factor that makes embedded FPGA cores look extremely attractive is neural networks. Both training and recognition require massive computation that can be broken into parallel operations. The recognition phase, such as the one running in an autonomous vehicle, can be implemented largely with integer operations. Once again this aligns nicely with FPGA capabilities. So if FPGAs can boost search engine and AI applications, what are the barriers to implementing them in today's systems?
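    As a rough illustration of what "largely integer operations" means in practice, the Python sketch below shows a generic 8-bit quantized inference layer. The quantization scheme is a common textbook one, not any particular vendor's, but the integer multiply-accumulates it performs are exactly the kind of work an FPGA's DSP and LUT fabric handles well.

```python
import numpy as np

# Generic illustration of integer-only inference: weights and activations are
# stored as signed 8-bit integers, the multiply-accumulates run entirely in
# integer arithmetic, and a single rescale at the end recovers real-valued
# outputs.

def quantize(x: np.ndarray, scale: float) -> np.ndarray:
    """Map float values onto signed 8-bit integers with a fixed scale."""
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

def int8_dense_layer(acts_q, weights_q, act_scale, w_scale):
    """One fully connected layer computed with integer MACs only."""
    acc = acts_q.astype(np.int32) @ weights_q.astype(np.int32)  # 32-bit accumulators
    return acc * (act_scale * w_scale)  # one rescale back to floating point

rng = np.random.default_rng(1)
acts = quantize(rng.standard_normal(256), 0.05)
weights = quantize(rng.standard_normal((256, 128)), 0.02)
outputs = int8_dense_layer(acts, weights, 0.05, 0.02)  # shape (128,)
```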

    If you look at the current marketing materials for Altera and Xilinx, you can see that they dedicate a lot of energy to developing and promoting their IO capabilities. Getting data in and out of an FPGA is a critical function. Examining the floor plan of an FPGA chip, you will see a large area devoted to programmable IOs. Of course, along with the large area and resources used comes higher power consumption.

    [Image: Speedcore eFPGA connections within an SOC]

    Embedding an eFPGA core means that interface lines can be directly connected to the rest of the design. With less area for each signal, wider busses can be implemented. Interfaces can run faster, now that interface SI and timing issues have been reduced with on-chip integration.

    The other benefit, alluded to earlier, is that an eFPGA can be configured to achieve optimal performance. The adjustable parameters include the number of LUTs, embedded memories, and DSP blocks. Customers get GDSII that is ready to stitch into their design, and the tool chain for Speedcore eFPGAs accommodates the custom configurations.
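    To give a feel for what such a fully customized core specification might look like, here is a purely hypothetical Python sketch; the field names and values are invented for illustration and are not Achronix's actual Speedcore tool-chain interface.

```python
# Purely illustrative: a made-up specification for sizing an embedded FPGA
# core to one SOC's needs. These field names are invented for this sketch
# and are not Achronix's actual Speedcore configuration format.
efpga_spec = {
    "logic":     {"lut_count": 40_000},        # programmable logic capacity
    "memory":    {"block_ram_kbits": 2_048},   # embedded memory blocks
    "dsp":       {"multiplier_count": 128},    # DSP / multiplier blocks
    "interface": {"bus_width_bits": 512},      # wide on-die connection to the SOC
}

def summarize(spec: dict) -> str:
    """Return a one-line summary of the requested core resources."""
    return (f"{spec['logic']['lut_count']} LUTs, "
            f"{spec['memory']['block_ram_kbits']} Kb RAM, "
            f"{spec['dsp']['multiplier_count']} multipliers, "
            f"{spec['interface']['bus_width_bits']}-bit bus")

print(summarize(efpga_spec))
```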

    [Image: Speedcore eFPGA internal connections]

    Steve told me that today the largest share of their impressive revenue comes from standalone chips, but by 2020 he expects 50% of their sales to be embedded cores. Another application for FPGAs is use as chiplets in 2.5D designs, but more on that in future writings.

    Steve emphasized that designing FPGAs is pretty tricky. There are power and signal integrity issues that need to be solved because of their massive interconnect. Real improvement only comes with years of experience optimizing and tuning the architecture. Steve suggested that many small improvements over time have added up to much better results in their FPGAs.

    Right now it looks like Achronix is positioned to break away from the pack of smaller FPGA providers and potentially revolutionize the market. With this new approach, FPGAs can be said to have decisively transitioned from their early days as a glue-logic vehicle to a pivotal component of advanced computing and networking applications. For more details on Achronix eFPGA cores, take a look at their website.