Computing 10,000x more efficiently

The main problem with new hardware approaches is getting the old software onto them. Unless there's a compelling cost advantage, most people aren't that interested in learning how to program from scratch again.
 
1% error is not a small number in the computing world. FVM, FEM, and FDTD all rely on heavy matrix operations, and with 1% error per operation the solution may diverge...

In the IoT era this imprecise hardware may emerge to compete with MCUs. As simguru pointed out, forcing coders to learn error control could be the hardest hurdle for them to clear.
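To make the error-compounding concern concrete, here is a minimal sketch (not from the thread; the system, noise model, and function names are all hypothetical) that injects up to 1% relative error into every arithmetic result of a Jacobi iteration. On a well-conditioned, diagonally dominant toy system the answer merely hovers around the true solution at roughly the noise level; for ill-conditioned matrices the compounded error can dominate or prevent convergence entirely.

```python
import random

def noisy(x, eps):
    """Return x perturbed by a uniform relative error of up to eps,
    mimicking imprecise hardware arithmetic."""
    return x * (1.0 + random.uniform(-eps, eps))

def jacobi(A, b, eps, iters=50):
    """Jacobi iteration where every add, multiply, and divide result
    is passed through the noise model."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        x_new = []
        for i in range(n):
            s = 0.0
            for j in range(n):
                if j != i:
                    s = noisy(s + noisy(A[i][j] * x[j], eps), eps)
            x_new.append(noisy((b[i] - s) / A[i][i], eps))
        x = x_new
    return x

# Diagonally dominant 2x2 system; exact solution is (1/11, 7/11).
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]

exact = jacobi(A, b, eps=0.0)    # converges to the true solution
approx = jacobi(A, b, eps=0.01)  # 1% noise: answer wobbles near the solution
```

With `eps=0.0` the iterates converge geometrically; with `eps=0.01` the residual error stops shrinking once it reaches the noise floor, which is the behavior that makes imprecise hardware dangerous for solvers that assume errors keep contracting.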
 
Interesting - I built something like this for a specialist application in the 1980s using AMD 2901 bit slices, but as faster processors like the 68020 came along, they became the cheaper solution.

But yes, it definitely works.
 