Data movement consumes two orders of magnitude more energy than a floating-point operation; hence, data movement is becoming the primary bottleneck to scaling the performance of modern processors within a fixed power budget. Accelerators for deep neural networks have huge memory footprints, so data-encoding techniques are useful for these AI accelerators as well.


We present a survey of encoding techniques for reducing data-movement energy on instruction, data-address, multiplexed, and data buses, covering both precise and approximate encoding techniques proposed between 1995 and 2018.
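As a flavor of the precise techniques in this period, the sketch below implements bus-invert coding (Stan and Burleson, 1995), one of the earliest and best-known bus-encoding schemes: if driving the next word on the bus would toggle more than half of the lines, the encoder sends the word's complement instead and asserts an extra invert line, reducing total switching activity. This is an illustrative sketch, not code from the survey; the bus width and helper names are our own choices.

```python
# Sketch of bus-invert coding for an 8-bit bus (illustrative only;
# BUS_WIDTH and function names are assumptions, not from the survey).

BUS_WIDTH = 8
MASK = (1 << BUS_WIDTH) - 1

def hamming(a: int, b: int) -> int:
    """Number of bus lines that toggle when the bus goes from a to b."""
    return bin(a ^ b).count("1")

def bus_invert_encode(words):
    """Yield (encoded_word, invert_bit) pairs; the bus starts at 0."""
    prev = 0
    for w in words:
        if hamming(prev, w) > BUS_WIDTH // 2:
            w = ~w & MASK          # send the complement instead
            yield w, 1             # assert the invert line
        else:
            yield w, 0
        prev = w                   # the encoded value drives the bus

def transitions(values):
    """Total line toggles for a sequence driven on a bus starting at 0."""
    prev, total = 0, 0
    for v in values:
        total += hamming(prev, v)
        prev = v
    return total
```

For a worst-case stream like `[0x00, 0xFF, 0x00, 0xFF]`, the raw bus toggles all 8 lines on every transfer, while the encoded stream keeps the data lines constant at the cost of a few toggles on the invert line, which is the trade-off precise encoding schemes exploit.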

The paper is available here; it has been accepted in the Journal of Systems Architecture.