
Webinar: Achieving Very High Bandwidth Chip-to-Chip Communication with the Interlaken Interface Protocol

by Eric Esteve on 06-05-2017 at 12:00 pm

Open Silicon will hold this webinar on June 13th at 8 am PDT (5 pm CET) to describe their Interlaken IP core and how to achieve very high bandwidth chip-to-chip (C2C) communication in various networking applications. To be more specific, the Interlaken protocol can be used to support Packet Processing/NPU, Traffic Management, Switch Fabric, Switch Fabric Interface, Framer/Mapper, TCAMs or Serial Memory (INLK-LA). Open Silicon is marketing the Interlaken IP core for ASIC, but the networking industry also loves FPGA technology, which offers fast time-to-market (TTM) and, even more important, the well-known advantage of flexibility, allowing protocol evolution to be supported in the field. The Interlaken protocol also supports FPGA implementation.

There are significant demands for performance and bandwidth in high-speed communications, and pressure to step up the pace of technological advancement. The panelists will outline the challenges that designers of advanced communication applications encounter with things like controller specification, latency, and various SerDes architectures and implementations. They will outline use cases and discuss the key technical advantages that the Interlaken IP core offers, such as 1.2 Tbps high-bandwidth performance and up to 56 Gbps SerDes rates with Forward Error Correction (FEC), as well as its multiple user-data interface options. They will also discuss the architectural advantages of the core, such as its flexibility, configurability and scalability.

I am very honored to have been asked by Open Silicon to moderate this webinar. But such an honor must be earned, and some homework is needed to be well prepared, so I will share with SemiWiki readers some bits of information about Interlaken, so you (the reader) will also be well prepared!

Meeting these requirements from the Interlaken Alliance will ensure interoperability between different implementations (don’t forget that Interlaken is a chip-to-chip communication protocol, so interoperability is key):

  • Supports multiple parallel lanes for data transfer at the physical level
  • Packet-based user interface, with each packet consisting of multiple bursts
  • Simple control words to delineate packets and bursts
  • Protocol independence from the number of SerDes lanes and SerDes rates
  • Ability to communicate per-channel backpressure
  • Performance scales with the number of lanes
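
To make the packet/burst structure above concrete, here is a minimal, hypothetical Python sketch of how a packet might be segmented into bursts, each delimited by a control word. This is not Open Silicon's implementation: real Interlaken works on 8-byte word granularity with 64b/67b encoding, and the burst size and control-word fields below are illustrative only.

```python
# Hypothetical sketch of Interlaken-style burst segmentation.
# BURST_MAX and the control-word fields are illustrative, not
# taken from the Interlaken specification.

BURST_MAX = 256  # bytes per burst (configurable in real controllers)

def segment_packet(payload: bytes, channel: int):
    """Split one packet into bursts, each preceded by a control word."""
    bursts = []
    for offset in range(0, len(payload), BURST_MAX):
        chunk = payload[offset:offset + BURST_MAX]
        control_word = {
            "sop": offset == 0,                          # start-of-packet
            "eop": offset + BURST_MAX >= len(payload),   # end-of-packet
            "channel": channel,                          # logical channel
        }
        bursts.append((control_word, chunk))
    return bursts

bursts = segment_packet(b"x" * 600, channel=3)
# a 600-byte packet becomes 3 bursts of 256, 256 and 88 bytes
```

The point of the control words is exactly what the feature list says: the receiver can reassemble packets and route bursts per channel without any assumption about the number of SerDes lanes underneath.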

I think that most of the points in this list are clear enough, with the probable exception of per-channel backpressure. If you are (like me) not familiar with this concept, you will have to dig to understand the meaning and implications of per-channel backpressure. Don’t worry, I did it, and found this definition:
In queueing theory, a discipline within the mathematical theory of probability, the backpressure routing algorithm is a method for directing traffic around a queueing network that achieves maximum network throughput, which is established using concepts of Lyapunov drift. Backpressure routing considers the situation where each job can visit multiple service nodes in the network. It is an extension of max-weight scheduling, where each job visits only a single service node.

To make it simple:

  • Max-weight routing: each job –> a single service node
  • Per-channel backpressure: each job –> multiple service nodes

Using the second achieves maximum network throughput.
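
At the link level, what the Interlaken controller actually carries is per-channel flow-control status, so a congested channel can be paused without stalling the whole link. The following Python sketch of that idea is purely illustrative (the class names, queue capacity and scheduling pass are my own assumptions, not the IP core's behavior):

```python
# Hypothetical sketch of per-channel backpressure: the receiver
# reports a go/no-go status per logical channel, and the sender
# keeps transmitting on uncongested channels instead of stalling
# the entire link. All names and thresholds are illustrative.

class Receiver:
    def __init__(self, n_channels: int, capacity: int = 4):
        self.queues = [[] for _ in range(n_channels)]
        self.capacity = capacity

    def flow_status(self):
        """Per-channel 'OK to send' flags (True = not backpressured)."""
        return [len(q) < self.capacity for q in self.queues]

    def accept(self, channel: int, burst):
        self.queues[channel].append(burst)

def transmit(sender_queues, rx: Receiver):
    """One scheduling pass: send one burst on each open channel."""
    sent = []
    status = rx.flow_status()
    for ch, q in enumerate(sender_queues):
        if q and status[ch]:
            rx.accept(ch, q.pop(0))
            sent.append(ch)
    return sent
```

If channel 0's receive queue is full, a transmit pass skips channel 0 but still delivers traffic on channel 1: congestion on one channel does not block the others, which is the whole point of per-channel (rather than link-level) backpressure.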

To continue with the definitions: when looking at the features supported by Open Silicon’s Interlaken IP core, I found this one: “Supports Interlaken Look Aside protocol”. Look Aside?

We can find the meaning of this look-aside function, extensively used in packet-based processing, from some examples of look-aside devices:

  • Search engines, which receive small portions of a packet header
  • Policing engines, which receive small portions of a packet header, or a simple command set
  • Value-add memories, which may perform mathematical operations or linked-list traversals in addition to reads and writes
  • Queuing and scheduling engines, which dictate the packet transmission order to a packet buffer device

The basic idea is to process only a small portion of a packet, aside from the main data path. Looking at the chart, you immediately understand the benefit of the look-aside protocol: the smaller the message size (on the x-axis), the higher the message rate.
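
The relationship the chart shows is simple arithmetic: at a fixed link bandwidth, the message rate is the bandwidth divided by the message size. A quick back-of-the-envelope calculation (the 100 Gbps link rate below is my own illustrative assumption, not a figure from the webinar) shows why look-aside's small header snippets sustain far higher message rates than full packets:

```python
# Message rate vs. message size at fixed bandwidth:
#   rate [messages/s] = bandwidth [bit/s] / (size [bytes] * 8)
# The 100 Gbps link is an illustrative assumption.

LINK_BPS = 100e9  # assume a 100 Gbps Interlaken link

def message_rate(message_bytes: float) -> float:
    """Messages per second the link can carry at this message size."""
    return LINK_BPS / (message_bytes * 8)

lookaside_rate = message_rate(16)     # small look-aside request
full_packet_rate = message_rate(1500) # full Ethernet-size packet
# 16-byte messages: ~781 M messages/s; 1500-byte packets: ~8.3 M/s
```

Nearly two orders of magnitude more lookups per second for the same link, which is exactly what devices like TCAMs and search engines need.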

I am sure that you will learn a lot more about Interlaken if you attend this webinar from Open Silicon. Even if the Interlaken protocol is based on some complex concepts, don’t forget that the elegance of Interlaken is its simplicity and high flexibility, as the controller can interface with any SerDes at rates between 3.125 Gbps and 56 Gbps to support very high bandwidth chip-to-chip communication.

To register for the webinar, click here

Eric Esteve
from IPnest
