The main problem with new hardware approaches is getting the old software onto them. Unless there's a compelling cost advantage, most people aren't that interested in learning how to program from scratch again.
1% error is not a small number in the computing world. FVM, FEM, and FDTD all need heavy matrix operations. With 1% error, they may diverge...
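To make that concern concrete, here is a minimal sketch (my own illustration, not anything from the article): a plain Jacobi iteration for Ax = b, run once exactly and once with ~1% relative noise injected into every matrix-vector product, as a crude stand-in for imprecise hardware. On this well-conditioned system the noisy run doesn't blow up, but it stalls at a residual floor set by the noise instead of converging; on stiffer systems, the same perturbation can tip the iteration into outright divergence.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50

# Strongly diagonally dominant system, so exact Jacobi is guaranteed to converge.
A = rng.uniform(-1.0, 1.0, (n, n))
np.fill_diagonal(A, 2.0 * np.abs(A).sum(axis=1) + 1.0)
b = rng.uniform(-1.0, 1.0, n)

D = np.diag(A)          # diagonal entries
R = A - np.diag(D)      # off-diagonal part

def jacobi(noise=0.0, iters=500):
    """Jacobi iteration; 'noise' is the relative error per matrix-vector product."""
    x = np.zeros(n)
    for _ in range(iters):
        y = R @ x
        if noise:
            # Model an imprecise multiply-accumulate: each component of the
            # product is off by up to 'noise' (relative).
            y = y * (1.0 + noise * rng.uniform(-1.0, 1.0, n))
        x = (b - y) / D
    return np.linalg.norm(A @ x - b)   # final residual norm

res_exact = jacobi(noise=0.0)
res_noisy = jacobi(noise=0.01)   # ~1% error per operation
print(res_exact, res_noisy)      # noisy residual is orders of magnitude worse
```

The point of the sketch: the noise doesn't average away, because each iteration re-injects it, so the achievable accuracy is capped near the hardware's error level no matter how long you iterate.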
In the IoT era, this imprecise hardware may come out to compete with MCUs. As simguru pointed out, forcing coders to learn error control could be the most difficult hurdle for them to clear.
They have used it for a computer vision application, and it works well.
As for accuracy, first: it's not 1%, because they're using an error-correction algorithm. Second, there are many places where this can be used: computer vision, neural networks, and probably others.
Third, I think this might be analog at its core; that's how they get such a small area.
Interesting: I built something like this for a specialist application in the 1980s using AMD 2901 bit slices, but as faster processors like the 68020 came along, those became the cheaper solution.