
  • Is FPGA Intel's Next Big Thing for IoT?

    I am writing this article in reaction to a Seeking Alpha article titled “Intel Next Big Thing”. Here is an extract from that article:

    The IoT space is growing rapidly with the advent of connected cars, smart homes and a variety of connected devices and appliances. However, before a full-blown ecosystem around these devices is developed, device makers have to deal with power efficiency. The good news is that with the help of low-power FPGAs, the devices can be made power efficient.

    People writing on SA are expected to explain why you should (or should not) buy a company’s stock with respect to that company’s strategy. In this case, the “Next Big Thing” for Intel is the Altera FPGA product line, and the article explains how Intel could generate a high return on its ($37B) investment by developing the FPGA business in the data center and IoT. Data center and IoT are completely different stories, and the IoT ecosystem intersects with the data center only if you consider that the data generated by a multitude of IoT systems will end up in the cloud, in the data center. Let’s see why I think that the flagship FPGAs from Altera (or Xilinx, for that matter), the products priced over $1,000, very high-performance and extensively used in networking, are NOT the best choice for IoT applications, if you agree with the prerequisite “have to deal with power efficiency”.

    First, let me say that I think FPGA is a great technology. FPGAs have brought a benefit of inestimable value to the fast-changing world of networking systems that carry the data we consume for work or entertainment: flexibility. This flexibility has a cost, and I am not talking about IC ASP (10x or 20x higher for the same function implemented in an FPGA), but about power consumption. I have searched the web for a short definition of the FPGA architecture: “Modern SRAM-based FPGAs have the highest densities, but consume a lot of power and need an external non-volatile memory to store the configuration bit-stream”.

    This definition applies to both Altera and Xilinx FPGAs, and we can verify what “a lot of power” means by taking a look at this figure, extracted from an Altera white paper titled “Leveraging HyperFlex Architecture in Stratix 10 Devices to Achieve Maximum Power Reduction”:

    [Figure: Stratix V to Stratix 10 power reduction, optical switch example, from the Altera white paper]

    Moving from a Stratix V to a Stratix 10 device means moving from the 28nm node to the 14nm FinFET node. You don’t expect to integrate transceivers (high-speed SerDes-based interfaces) into IoT devices, so let’s focus on core dynamic power, which decreases by 42% (as expected when you move from 28nm to 14nm), and on static power. In fact, static power has two components. The first is the leakage power that you would have on any bulk or FinFET technology, but the second component is inherent to FPGA technology. This is the power dissipated by the configuration SRAM (remember that the FPGA is an SRAM-based architecture), considered static because you have to keep the SRAM powered continuously to keep the FPGA programmed. The author is very proud of the 10 to 12 Watts of static power, but imagine using such an FPGA for an IoT application! A typical IoT device has to stay always-on, with the logic waking up from time to time, but you have to keep the SRAM alive… at the price of this huge static power.
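    As a rough sketch of the arithmetic above: only the 42% core-dynamic reduction and the 10–12 W static range come from the text; the 28nm core-dynamic baseline below is a placeholder assumption, not a figure from the white paper.

    ```python
    # Illustrative split of the power categories discussed above.
    # Only the 42% core-dynamic reduction and the 10-12 W static range come
    # from the text; the 28 nm core-dynamic baseline is a placeholder.

    core_dynamic_28nm_w = 20.0                               # placeholder baseline
    core_dynamic_14nm_w = core_dynamic_28nm_w * (1 - 0.42)   # 42% reduction cited
    static_w = 11.0                                          # middle of 10-12 W range

    # Even with the logic fully idle, the configuration SRAM must stay
    # powered, so static power sets the floor for an always-on device:
    idle_floor_w = static_w
    print(round(core_dynamic_14nm_w, 2), idle_floor_w)
    ```

    The point of the sketch: process scaling helps the dynamic component, but the SRAM-driven static floor does not go away when the logic idles.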

    As far as I am concerned, I would not consider the Stratix 10 product line for IoT applications; such static power consumption is far too high (by several orders of magnitude) to comply with IoT requirements.
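    To put “several orders of magnitude” in perspective, here is a back-of-the-envelope battery-life comparison. The battery capacity and the ~10 µW deep-sleep figure for a typical IoT microcontroller are illustrative assumptions; only the ~10 W static power comes from the article.

    ```python
    # Back-of-the-envelope comparison: the ~10 W static power of a large
    # SRAM-based FPGA versus an assumed IoT deep-sleep budget of ~10 uW.

    def battery_life_hours(battery_wh: float, avg_power_w: float) -> float:
        """Hours until a battery of the given capacity is drained."""
        return battery_wh / avg_power_w

    AA_PAIR_WH = 2 * 2.0 * 1.5   # two AA cells, ~2000 mAh each at 1.5 V -> ~6 Wh
    fpga_static_w = 10.0         # low end of the 10-12 W static power cited above
    mcu_sleep_w = 10e-6          # ~10 uW deep-sleep draw (assumed typical MCU)

    print(battery_life_hours(AA_PAIR_WH, fpga_static_w))        # 0.6 hours
    print(battery_life_hours(AA_PAIR_WH, mcu_sleep_w) / 8760)   # decades
    ```

    The six-orders-of-magnitude gap between 10 W and 10 µW is what turns a multi-year battery budget into well under an hour.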

    [Figure: Broadwell CPU and Arria 10 GX FPGA multi-chip package]


    To end this article on a positive note, Intel is developing interesting new products, like this multi-chip package (MCP) integrating a Broadwell CPU and an Arria 10 GX FPGA in the same package. Such a product will address data center applications (not IoT), providing flexibility thanks to the FPGA, and should help slightly decrease power consumption. The power consumed by chip-to-chip communication (the transceivers in the above figure) should benefit from the absence of package crossings between the two dies. Let’s say that this is not a revolution, but a move in the right direction to reduce power…

    Frankly speaking, if embedded FPGA (eFPGA) technology development becomes effective and if eFPGA can be used in the data center, it could be the revolution: instead of putting a SoC inside an FPGA, or beside an FPGA as in the above example, integrating just the needed amount of FPGA fabric into a SoC would bring both flexibility and lower power. We will have to wait and see whether eFPGA adoption occurs…

    Eric Esteve from IPNEST

    About Static Power:

    The leakage power issue is so serious that in its 2009 report, the International Technology Roadmap for Semiconductors (ITRS) describes the situation in terms of an existential crisis:

    While power consumption is an urgent challenge, its leakage or static component will become a major industry crisis in the long term, threatening the survival of CMOS technology itself, just as bipolar technology was threatened and eventually disposed of decades ago (14).

    [Figure: SRAM cell]