Plunify was founded in 2009 and started work on a self-funded prototype. In 2011 it received an investment from SPRING, the Singapore government's investment fund, and by 2012 it had a beta version of its EDAExtend Cloud.
The basic idea was cloud-based optimization of FPGA designs. You fire up a couple of dozen servers in the cloud, upload your design, and the tool closes timing much better than you can yourself, using a mixture of parallel processing (trying lots of different options at once) and learning algorithms (once run #1 is done, the tool can select better options for run #2, and so on until everything converges). The technology worked pretty well, but there was one big problem that almost everyone who has tried cloud-based EDA has run into: companies are not prepared to put their crown-jewel designs in the cloud, due to security fears and sometimes company policy. So although the technology worked well, it was basically impossible to sell.
But it turned out customers loved the technology, just not the way it was delivered. So Plunify pivoted and produced inTime: essentially the same technology, but running on the customer's own server farm instead of in the cloud.
So what does it do? FPGA designs (well, all designs) want to reduce area, reduce power and meet timing. The traditional approach when timing is not met is to use one or more items from this menu:
- Alter the RTL
- If the negative slack is small, try fiddling with placement seeds
- Tweak a few synthesis/P&R settings
Plunify's product, inTime, runs a different loop:
- Generate strategies based on its database (a strategy is a set of synthesis/P&R settings)
- Implement all the strategies in parallel on the server farm (maybe 20-30 servers)
- Use machine learning to analyze the results
- Update the database with new knowledge
- Go back to step 1 until either timing closure is reached or the tool determines that it is impossible and gives up
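The loop above can be sketched in a few lines of Python. Everything here is hypothetical: the setting names, the scoring, and the fake "P&R run" (which scores a strategy by how far it is from a hidden golden configuration) are stand-ins for real synthesis/P&R tools and for inTime's actual learning; only the overall shape of the loop follows the description.

```python
import random
from concurrent.futures import ThreadPoolExecutor

# Hypothetical settings space; not the actual synthesis/P&R options inTime tunes.
SETTINGS = {
    "synth_effort": ["low", "medium", "high"],
    "placer_seed": list(range(8)),
    "retiming": [False, True],
}

# Hidden "golden" strategy, standing in for what real P&R runs would reveal.
GOLDEN = {"synth_effort": "high", "placer_seed": 3, "retiming": True}

def random_strategy(rng):
    """A strategy is one concrete choice for every setting."""
    return {name: rng.choice(values) for name, values in SETTINGS.items()}

def run_strategy(strategy):
    """Stand-in for a full synthesis + P&R run. Returns worst negative slack
    in ns (0.0 means timing met): here, 1 ns per setting that differs from
    the golden strategy. A real run takes hours, not microseconds."""
    return float(sum(strategy[k] != GOLDEN[k] for k in SETTINGS))

def close_timing(max_rounds=10, per_round=24, seed=1):
    """Generate strategies, run them in parallel, keep the best, repeat."""
    rng = random.Random(seed)
    best_wns, best_strategy = float("inf"), None
    for _ in range(max_rounds):
        strategies = [random_strategy(rng) for _ in range(per_round)]
        with ThreadPoolExecutor() as pool:  # the "server farm"
            wns_results = list(pool.map(run_strategy, strategies))
        for wns, strategy in zip(wns_results, strategies):
            if wns < best_wns:
                best_wns, best_strategy = wns, strategy
        if best_wns == 0.0:
            break  # zero negative slack: closure reached
    return best_wns, best_strategy
```

In the real product, step 3 (machine learning on the results) replaces the naive "keep the best" selection here, and the database of prior results seeds the next round rather than starting from random strategies.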
In essence it is doing a highly optimized search of the whole space of settings (which might be of the order of 10^100 choices, so brute force has no chance) until it finds a strategy that meets timing (zero negative slack). To make this clearer, the diagram below shows inTime running a large number of strategies. The orange bars have a lot of negative slack. The green bars still have negative slack but are the best solutions, the ones that inTime will start from in round 2, trying strategies nearby. In fact, like simulated annealing, it sometimes tries strategies far away to avoid getting trapped in local minima.
In round 2 there are still lots of horrible results with large negative slack, but there are also now 7 solutions that pass with zero negative slack.
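That round-to-round refinement can be sketched as follows: mostly try neighbors of the best strategies seen so far, but occasionally jump to a completely random point to avoid local minima. This is a hypothetical illustration of the idea (reusing an illustrative settings space), not inTime's actual machine-learning selection.

```python
import random

# Illustrative settings space; not inTime's real option set.
SETTINGS = {
    "synth_effort": ["low", "medium", "high"],
    "placer_seed": list(range(8)),
    "retiming": [False, True],
}

def neighbor(strategy, rng):
    """Change exactly one setting of a known-good strategy."""
    s = dict(strategy)
    name = rng.choice(list(SETTINGS))
    s[name] = rng.choice([v for v in SETTINGS[name] if v != s[name]])
    return s

def next_round(scored, per_round=24, jump_prob=0.2, rng=None):
    """Build the next batch of strategies from (slack, strategy) results:
    mostly neighbors of the best results, sometimes a far-away random
    strategy, in the spirit of simulated annealing."""
    rng = rng or random.Random(0)
    best = [s for _, s in sorted(scored, key=lambda p: p[0])[:5]]
    batch = []
    for _ in range(per_round):
        if rng.random() < jump_prob:
            batch.append({n: rng.choice(v) for n, v in SETTINGS.items()})
        else:
            batch.append(neighbor(rng.choice(best), rng))
    return batch
```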
The results are impressive. Here is an example of Huawei doing a design with 98% utilization. To put that in perspective, many companies have a rule that if utilization exceeds 60% then go to the next bigger array. Doing a design at 98% utilization is almost insane. But in 6 rounds, with a total elapsed time of 10 hours, inTime succeeded in getting closure.
Once a design has reached closure, even if the RTL is then changed (by up to 15-20%), inTime will start from the good strategies from last time and achieve closure much more quickly than from a standing start.
One of the benefits of inTime is that instead of spending 4-6 weeks fixing timing manually (and perhaps failing), it will find a solution within a few days of first seeing the design.
Of course it doesn't always succeed; it is possible to simply demand more than the FPGA can deliver. But when it fails, it reveals which paths were most often critical across its runs, giving the designer guidance on where it might make sense to consider changing the RTL.
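That failure-mode guidance amounts to counting how often each path shows up as critical across the runs. A minimal sketch, with hypothetical path names and per-run critical-path lists standing in for real timing reports:

```python
from collections import Counter

# Critical paths reported by each failing run (hypothetical names).
runs = [
    ["u_core/alu/add_0", "u_core/mul/p_7"],
    ["u_core/alu/add_0", "u_io/sync_2"],
    ["u_core/alu/add_0", "u_core/mul/p_7"],
]

counts = Counter(path for run in runs for path in run)

# Most frequently critical paths first; these are the best RTL-change candidates.
for path, n in counts.most_common():
    print(f"{path}: critical in {n}/{len(runs)} runs")
```

A path that is critical in every run (here `u_core/alu/add_0`, in 3/3 runs) is a structural bottleneck that no settings strategy can fix, which is exactly the signal to go back to the RTL.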
A second benefit of inTime is that if you take a design that is at, say, 60% utilization and drop it into the next smaller (and cheaper) array, the utilization may now be 90%. But if inTime can still close timing, you get a huge saving from being able to use the smaller array. Similarly, it might be possible to fit a design into an array of the same size but a lower speed grade, again saving on cost.
There is a webinar on Plunify's inTime titled Got FPGA Timing Closure Problems? It is on Tuesday March 10th at 10am Pacific Time. The presenters are Harn Hua Ng, Plunify's CEO, and Tim Davis, the president of Aspen Logic, an FPGA design and consulting services group. The webinar will be moderated by Dan Ganousis.
Plunify has a Silicon Valley presence with an office in Los Altos. Their website is here. You can register for the webinar here.