Machine Learning Optimizes FPGA Timing
by Bernard Murphy on 08-04-2017 at 7:00 am

Machine learning (ML) is the hot new technology of our time, so EDA development teams are eagerly searching for new ways to optimize various facets of design by using ML to distill wisdom from the mountains of data generated in previous designs. Pre-ML, we had little interest in historical data and would mostly look only at localized comparisons with recent runs to decide on what we felt were best-case implementations. Now, prompted by demonstrated ML value in other domains, we are starting to look for hidden intelligence in a broader range of data.


One such direction uses machine-learning methods to find a path to optimization. Plunify does this with their InTime optimizer for FPGA design. The tool operates as a plugin to a variety of standard FPGA design tools but does the clever part in the cloud (private or public, at your choice), where the goal is to provide optimized strategies for synthesis and place-and-route.
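
The article doesn't spell out what an InTime "strategy" looks like internally, but conceptually it is just a named bundle of synthesis and place-and-route options applied to a build. A minimal Python sketch, purely for illustration (the option names and values below are hypothetical placeholders, not InTime's or any vendor's actual settings):

    # Illustrative only: a "strategy" modeled as a named bundle of tool options.
    # Option names and values are hypothetical placeholders.
    from dataclasses import dataclass, field

    @dataclass
    class Strategy:
        name: str
        synthesis_options: dict = field(default_factory=dict)
        par_options: dict = field(default_factory=dict)

    default = Strategy(
        name="default",
        synthesis_options={"effort": "normal", "retiming": False},
        par_options={"placer_effort": "normal", "router_iterations": 30},
    )

    timing_aggressive = Strategy(
        name="timing_aggressive",
        synthesis_options={"effort": "high", "retiming": True},
        par_options={"placer_effort": "high", "router_iterations": 60},
    )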

There is a very limited way to do this today, common in many design tools, in which the tool sweeps parameters around a seed value, looking for an optimum. Perhaps this could be thought of as a primitive form of ML, but InTime takes a much wider view. It builds a database spanning multiple designs and multiple analyses of each design (and even looks across tool versions). Using this data and ML methods, it builds more generalized strategies to optimize PPA than can be achieved with sweep approaches, which are inevitably limited to the current incarnation of a particular design.
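
To make the contrast with a seed sweep concrete, here is a minimal sketch of the kind of learning loop this implies, written in Python with scikit-learn. The features, model choice and data are my own illustration, not a description of InTime's actual algorithms: train a model on past build records (design features plus tool options against the achieved worst slack), then use it to rank candidate option sets for a new design rather than blindly sweeping around a seed.

    # Illustrative sketch only: learn from past build results across designs,
    # then rank candidate option sets for a new design. Feature names, model
    # choice and data are hypothetical, not Plunify's implementation.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Each record: [lut_utilization, logic_levels, clock_mhz, synth_effort, placer_effort]
    # Target: worst slack (ns) achieved by that run (higher is better).
    X_history = np.array([
        [0.45,  9, 200, 0, 0],
        [0.45,  9, 200, 1, 1],
        [0.70, 12, 150, 0, 1],
        [0.70, 12, 150, 1, 0],
        [0.30,  7, 250, 1, 1],
    ])
    y_slack = np.array([-0.35, 0.05, -0.80, -0.55, 0.12])

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_history, y_slack)

    # New design: keep its features fixed, score every candidate option
    # combination and pick the ones predicted to close timing.
    design_features = [0.60, 11, 180]
    candidates = [[s, p] for s in (0, 1) for p in (0, 1)]
    scored = sorted(
        candidates,
        key=lambda opts: model.predict([design_features + opts])[0],
        reverse=True,
    )
    print("Best predicted option sets:", scored[:2])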


Naturally, in this approach there is a learning phase and an inference/deployment phase (which Plunify calls Last Mile). InTime provides standard recipes for these phases, a sample of which is shown above. In the early phases of tool adoption you are still building a training database but you still have to meet design deadlines, so recipes give you a running start through default and other strategies. As design experience and the training database grow, the learning continues to improve and the inferred strategies improve correspondingly.
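
A rough sketch of how such a two-phase flow might be wired together (the names and structure are my own illustration, not Plunify's API): start from canned recipes while the results database is thin, then switch to model-ranked strategies once there is enough history to learn from.

    # Illustrative two-phase flow: recipe-driven builds while the results
    # database is small, model-ranked strategies once enough history has
    # accumulated. All names here are hypothetical.
    MIN_HISTORY_FOR_LEARNING = 50  # arbitrary threshold for this sketch

    def choose_strategy(results_db, design_features, recipes, model):
        if len(results_db) < MIN_HISTORY_FOR_LEARNING:
            # Learning phase: fall back to a standard recipe so deadlines
            # can still be met while training data accumulates.
            return recipes["default"]
        # Inference phase: rank candidate strategies with the trained model
        # and return the one predicted to give the best timing.
        return max(recipes.values(),
                   key=lambda s: model.predict_slack(design_features, s))

    def run_build_and_record(tool, strategy, results_db, design_features):
        result = tool.build(strategy)  # synthesis + place-and-route
        results_db.append((design_features, strategy, result.worst_slack))
        return result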

Plunify note that one of the most immediate advantages of their technology is that you can get to better results without needing to make design changes. The ML-based strategies are all about optimizing tool options to deliver improved results. That's not to say you might not also want or need to make design changes to improve further. But it's good to know that the tool flow will then infer the best implementations based on that knowledge base of learned tool-setup strategies. And why spend extra time on design tuning if the tool flow can get you to an acceptable solution without additional investment?

An obvious question is, if ML can improve the implementation flow, why not also work on inferring design optimizations? Plunify are taking some steps in this direction with an early-release product using ML to suggest changes in RTL code that would mitigate timing and performance problems. Learning is sensitive not just to RTL issues but also to tool vendor differences, so learned optimizations may differ between tools.
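
Purely as an illustration of what RTL-level suggestions could look like, here is a toy rule-based stand-in (a real ML system would learn these mappings from data; the features, thresholds and rules below are invented for this sketch and do not represent Plunify's product):

    # Toy illustration only: map simple features of failing timing paths to
    # candidate RTL mitigations. Thresholds and rules are invented.
    from dataclasses import dataclass

    @dataclass
    class TimingPath:
        name: str
        logic_levels: int
        max_fanout: int
        slack_ns: float

    def suggest_mitigation(path: TimingPath) -> str:
        if path.slack_ns >= 0:
            return "no change needed"
        if path.logic_levels > 10:
            return "consider pipelining: insert a register stage to split the logic"
        if path.max_fanout > 64:
            return "consider replicating the driver or registering high-fanout nets"
        return "consider retiming or tightening placement constraints"

    paths = [
        TimingPath("ctrl/fsm_next", logic_levels=14, max_fanout=12,  slack_ns=-0.42),
        TimingPath("dsp/mac_out",   logic_levels=6,  max_fanout=200, slack_ns=-0.10),
    ]
    for p in paths:
        print(p.name, "->", suggest_mitigation(p))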

Plunify is based in Singapore with offices in the US (Los Altos) and Chengdu (China). Crunchbase shows they raised a $2M Series A round last year, following earlier undisclosed seed rounds (they were founded in 2010). I have also been told that Lucio Lanza is an investor and Rick Carlson is an advisor. You can learn more about Plunify HERE.
