I posted this before in a thread about computer power, but I think it's better to start a new thread. Since the 90 nm node, leakage power has been taking a greater share of total power. I think every node has gotten worse, and some of the talk of leakage reduction at 28 nm and below seems to be wishful thinking or marketing.

Here is what I am after: data that speaks to the real, practical improvement in speed and dynamic power seen on a realistic circuit, one that takes into account the (it seems to me) increased RC delays between gates. For example, say you synthesize something simple like a 32x32 multiplier in a few process nodes. What really matters to me is the simple summary of area/performance/dynamic_power/static_power. Not to get too fussy, but performance should be measured at the slow corner, and leakage at the high-temperature corner.
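To be concrete about the summary format, here is a minimal Python sketch of the per-node table I'd want. The field names and normalization are my own invention; every number would have to come from actual synthesis reports, nothing here supplies real data:

```python
# Sketch of the per-node summary I'm after. Results would come from
# synthesizing the same RTL (e.g. a 32x32 multiplier) at each node:
# performance at the slow corner, leakage at the high-temp corner.

from dataclasses import dataclass

@dataclass
class NodeResult:
    node: str        # e.g. "65nm LP"
    area_um2: float  # post-synthesis cell area
    fmax_mhz: float  # max clock frequency at the slow corner
    p_dyn_mw: float  # dynamic power at fmax
    p_leak_mw: float # static power at the high-temp corner

def summarize(results):
    """Print each node normalized to the first entry, plus leak/dyn ratio."""
    base = results[0]
    print(f"{'node':<10}{'area':>7}{'speed':>7}{'P_dyn':>7}{'P_leak':>8}{'lk/dyn':>8}")
    for r in results:
        print(f"{r.node:<10}"
              f"{r.area_um2 / base.area_um2:>7.2f}"
              f"{r.fmax_mhz / base.fmax_mhz:>7.2f}"
              f"{r.p_dyn_mw / base.p_dyn_mw:>7.2f}"
              f"{r.p_leak_mw / base.p_leak_mw:>8.2f}"
              f"{r.p_leak_mw / r.p_dyn_mw:>8.3f}")
```

Feeding that the same multiplier synthesized at each node would show directly the two trends I'm guessing at below.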
As a first pass, that is what matters to me. I am guessing that:
a) From 65 nm to 40 nm to 28 nm, the real speed improvement has been modest.
b) The ratio of leakage power to dynamic power (at maximum-performance speed) has increased; a first-order sketch of what I mean by that ratio follows below.
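To put b) in first-order terms, here is a minimal Python sketch using the standard textbook CMOS power equations. The parameter values are illustrative placeholders of my own, not measurements from any real node:

```python
# First-order CMOS power model behind guess b). All numbers below are
# illustrative placeholders, NOT data from any real process node.

def dynamic_power(alpha, c_total_f, vdd, f_hz):
    """P_dyn = alpha * C * Vdd^2 * f (switching power only)."""
    return alpha * c_total_f * vdd**2 * f_hz

def leakage_power(i_off_a, vdd):
    """P_leak = I_off * Vdd. I_off grows roughly exponentially as Vt is
    lowered to hold speed, and also with temperature (hence measuring
    the static number at the high-temp corner)."""
    return i_off_a * vdd

# Hypothetical inputs, just to show the mechanics of the ratio: Vdd and
# switched C shrink slowly from node to node while total off-current
# rises, so leak/dyn climbs even when fmax improves.
p_dyn = dynamic_power(alpha=0.15, c_total_f=1e-9, vdd=1.0, f_hz=500e6)
p_leak = leakage_power(i_off_a=5e-3, vdd=1.0)
print(f"P_dyn = {p_dyn*1e3:.1f} mW, P_leak = {p_leak*1e3:.1f} mW, "
      f"ratio = {p_leak/p_dyn:.2f}")
```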
Building really large chips (FPGA or otherwise) that have a lot of transistors sitting around leaking has become more painful. Leakage current is notoriously difficult to control, so it can be a yield-limiting constraint in many designs now.
Can anyone point me to some graphs that show this kind of data?
It seems that things got tougher after 130 nm. By the way, any comparison has to compare LP to LP, or G-like to G-like, to be useful.
There must be some good data out there that people can share.