Increased Processing Power Moves to Edge
by Tom Simon on 02-06-2018 at 12:00 pm

Recently there has been a lot of buzz about 5G networks. Aside from the talk about it possibly being nationalized, 5G will be a lot different from its predecessors. Rather than a single data link in a predetermined band, 5G will consist of a web of connections all working together to support existing types of data traffic and many new ones, including automotive, IoT and others. In urban areas, there will be a large number of smaller nodes using GHz bands that do not travel far. 5G will also support IoT and automotive data traffic that requires low latency and packet sizes suited to the data payloads.

[Figure: 5G vision diagram]

Many of the effects of this shift in mobile data architecture are readily understandable, but there are other, more subtle shifts in data communication and processing that are going to affect where compute resources are deployed. We live in the era of cloud computing, exemplified by lightweight edge devices augmented by heavy-duty processing resources in the cloud. Many tasks manifest only as a user interface or have low compute requirements on edge devices, and the heavy lifting is done at data centers.

However, we are about to enter another cycle where the location of processing activity makes a significant migration. IoT and the automotive communications known as V2V (vehicle to vehicle) and V2X (vehicle to everything) demand lower latency and more localized processing. A recent white paper by Achronix discusses these trends and the requirements they will impose on processing devices. The paper, “2018 Ushers in a Renewed Push to the Edge”, provides many specific examples of why edge processing demands will expand significantly.

Coming back to 5G, one of the new capabilities will be millisecond latency. Older networks have much higher latency, and extended backhaul routing can add huge delays to system responsiveness. In the case of moving vehicles dealing with their environment or other vehicles, time is of the essence. V2X is one of the more interesting topics. Roadside beacons can aggregate and communicate information about road surface conditions, traffic, obstacles, and other cars. V2V can be used to enhance safety by broadcasting hazard information that would otherwise be obscured from nearby vehicles.
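As a rough illustration of the kind of payload a roadside beacon or vehicle might broadcast, consider the sketch below. This is a simplified stand-in, not the actual SAE or ETSI V2X message formats; the field names and values are hypothetical.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class HazardAlert:
    """Simplified stand-in for a V2X hazard broadcast (illustrative only)."""
    source_id: str        # beacon or vehicle identifier
    latitude: float
    longitude: float
    hazard_type: str      # e.g. "ice", "stalled_vehicle", "obscured_obstacle"
    timestamp_ms: int     # when the hazard was observed

def broadcast(alert: HazardAlert) -> bytes:
    # Keep payloads small and timestamped so receivers can judge freshness;
    # end-to-end budgets on a 5G link are targeted in the millisecond range.
    return json.dumps(asdict(alert)).encode()

alert = HazardAlert("beacon-17", 47.61, -122.33, "ice", int(time.time() * 1000))
packet = broadcast(alert)
```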


Another harbinger of how computing is moving to the edge, discussed in the Achronix white paper, is Amazon Web Services’ Greengrass offering. Instead of requiring all network traffic to return to AWS for processing, Greengrass lets system designers define IoT-based applications in the cloud and then instantiate them in remote edge processing nodes that can operate without an active connection to AWS. An edge processing unit is used to network IoT devices into a local IoT network. One example is a hospital, where pulse, temperature and other sensors in a patient’s room can be linked together to provide intelligent monitoring.
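To make the hospital example concrete, here is a minimal sketch of the kind of local rule an edge node might evaluate across a room’s linked sensors without any cloud round trip. The sensor names and thresholds are hypothetical, and this is not the Greengrass API.

```python
# Minimal sketch of local, in-room monitoring an edge node might perform.
# Sensor names and thresholds are hypothetical; this is not the Greengrass API.

def check_patient(readings: dict) -> list[str]:
    """Combine readings from several linked sensors into local alerts."""
    alerts = []
    if readings.get("pulse_bpm", 0) > 120:
        alerts.append("elevated pulse")
    if readings.get("temperature_c", 0) > 38.5:
        alerts.append("fever")
    # Correlating sensors locally avoids a cloud round trip for time-critical alerts.
    if "elevated pulse" in alerts and "fever" in alerts:
        alerts.append("notify nursing station immediately")
    return alerts

print(check_patient({"pulse_bpm": 130, "temperature_c": 39.1}))
```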

Greengrass uses Amazon’s flavor of FreeRTOS for the edge processing unit that acts as the hub. When an internet connection is available the edge processor can update the cloud, but it can operate on its own without the need for a cloud connection.
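That store-and-forward behavior can be sketched in a few lines. The `cloud_is_reachable` and `upload` calls here are placeholders for whatever transport the edge hub actually uses, not AWS APIs.

```python
from collections import deque

# Illustrative store-and-forward loop: keep working locally, sync when a
# connection happens to be available. upload() and cloud_is_reachable() are
# hypothetical placeholders, not AWS calls.
pending = deque(maxlen=10_000)   # bound local storage; oldest readings drop first

def record(reading: dict, cloud_is_reachable, upload) -> None:
    pending.append(reading)          # always captured locally first
    if cloud_is_reachable():
        while pending:               # drain the backlog when connectivity returns
            upload(pending.popleft())
```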

The drive to add processing power at the edge raises the question of what is the best hardware design for achieving reliability, power, security and performance goals. We have seen, through Microsoft’s Catapult project, how marrying traditional CPUs and programmable logic can boost server performance. Achronix asserts that the same benefits accrue at the edge. Programmable logic can be uniquely tailored to specific edge processing needs, and FPGA-based packet and data processing can occur in parallel with low overhead for a range of tasks. Security is another consideration: because edge nodes may not reside in physically secure facilities, they need to be fundamentally more secure, and an embedded FPGA fabric can improve security while also reducing power. In addition, a lower part count and reduced board interconnect can lead to better reliability.

Achronix makes a convincing case that, for many applications requiring enhanced edge processing, their embedded eFPGA fabric is a desirable solution. You should download the paper if you are interested in learning about the other motivations for increased edge processing power, and about how effective solutions can be architected.
