Slinging Stones at the Data Center Semi Goliaths
by Maury Wood on 12-18-2015 at 7:00 am

For those not aware, there is quite a battle brewing in the data center wired communication segment (which most wireless data traffic ultimately traverses). A primary impetus driving the competitive positioning is the recent commercial availability of single-lane 25 Gbps serdes (serializer/deserializer) channels in 28 nm CMOS from several suppliers.

Most mainstream data centers today use 1 Gbps Ethernet over copper (1000Base-T) to interconnect server nodes to top-of-rack (ToR) aggregation or leaf switches, with 10 GbE over copper (10GBase-T) an upgrade path that has seen relatively slow industry adoption. For higher-bandwidth uplinks between aggregation and core switches, four 10 GbE lanes on a quad SFP (QSFP) optical link provide 40 Gbps of connectivity, consuming four switch ports on both ends. Using the same approach on a larger optical module, ten 10 GbE lanes provide 100 Gbps of uplink connectivity to the data center core spine switches.

By contrast, under the 25 Gigabit Ethernet Consortium specification driven by Broadcom, Cisco, Dell, Mellanox and others, 50 GbE connectivity requires only two ports and 100 GbE connectivity only four, enabling very attractive total port ownership costs (including reduced cable costs), likely sufficient to further stunt 10 GbE uptake. What is striking about this paradigm shift is that Broadcom, the 800-pound gorilla of the Ethernet chip world with $8.4B in 2014 revenue, is seeing a fresh challenge to its switch dominance from much smaller companies, while its traditional 1 GbE chip competitors, Marvell and Realtek, are relative “no shows” in the race to field 25 and 50 GbE switch ports using 25 Gbps serdes technology.
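To make the port arithmetic concrete, here is a minimal Python sketch that tallies, for each uplink configuration cited above, how many serdes lanes (and hence switch ports per end) it consumes; the script and its names are purely illustrative rather than taken from any vendor tool.

# (uplink label, target rate in Gbps, per-lane serdes rate in Gbps)
configs = [
    ("40 GbE uplink",  40, 10),   # four 10 GbE lanes on QSFP
    ("100 GbE uplink", 100, 10),  # ten 10 GbE lanes
    ("50 GbE uplink",  50, 25),   # two 25 Gbps serdes lanes
    ("100 GbE uplink", 100, 25),  # four 25 Gbps serdes lanes
]

for label, target, lane_rate in configs:
    lanes = target // lane_rate   # each lane occupies one switch port on each end
    print(f"{label}: {lanes} x {lane_rate} Gbps lanes -> {lanes * lane_rate} Gbps, {lanes} ports per end")

Cutting the per-uplink port count from ten to four (for 100 GbE) is where the cable and total-cost-of-ownership savings claimed by the consortium come from.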

Cavium, one of the new tigers in data center semiconductors, is an impressive example. Cavium acquired Xpliant for less than $100M last year and introduced the CNX880XX family to the market quickly thereafter. Maximum Xpliant Ethernet bandwidth is 3.2 Tbps across 128 ports (using 25 Gbps serdes), the same as Broadcom’s flagship Tomahawk switch chip. Cavium’s programmable Xpliant Packet Architecture is claimed to be friendlier to Software Defined Networking specifications such as OpenFlow.

Cavium is also offering a very competitive ARMv8 server processor family, the ThunderX, with up to 48 custom cores, dual-socket coherency, and many other Xeon-class server processor features. Intel announced the 14 nm Xeon D SoC processor family at least in part as a competitive response to the ThunderX and Applied Micro’s X-Gene multi-core ARMv8 server processor. Cavium also markets impressive embedded processor, security processor and network processor product portfolios. All this innovation comes from a company with $420M in annualized run-rate revenue.

This theme of undaunted ferocity repeats with Mellanox (current annualized revenue of less than $800M), best known for its very low port latency (down to 90 nsec) InfiniBand adapters and switches for both processor and storage connectivity. InfiniBand is more prevalent than Ethernet in high-performance computing applications such as supercomputers and high-frequency trading machines. Mellanox’s latest Enhanced Data Rate (EDR) products also use 25 Gbps serdes, with as many as 144 port instantiations per chip, the same as Broadcom’s latest BCM88770 “FE3600” Ethernet switch fabric device, providing 3.6 Tbps of packet bandwidth.
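The headline switch-chip bandwidths cited here fall out of the same per-lane arithmetic: aggregate bandwidth is simply the 25 Gbps serdes rate multiplied by the port count. A minimal Python sketch of that calculation, using only the figures quoted above:

# Aggregate switch bandwidth in Tbps = number of ports x per-port serdes rate (Gbps) / 1000
def aggregate_tbps(ports, gbps_per_port=25):
    return ports * gbps_per_port / 1000

print(aggregate_tbps(128))  # 3.2 Tbps -- Xpliant CNX880XX / Broadcom Tomahawk class
print(aggregate_tbps(144))  # 3.6 Tbps -- Mellanox EDR fabric / Broadcom BCM88770 "FE3600"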

Mellanox is gaining share in the data center Ethernet segment as well, with its eighth-generation, 3.2 Tbps-class Spectrum switch chip. To put this into perspective, mighty Intel’s highest-performance Ethernet switch chip, the FM6764, offers 640 Gbps of port bandwidth with 400 nsec cut-through latency. Intel reported $56B in 2014 revenue, making even the merged Broadcom+Avago operation seem small by comparison. Intel recently announced its Omni-Path Architecture, a new data center routing fabric designed to address scalability and other limitations associated with InfiniBand, also featuring 25 Gbps serdes. While this announcement raises the stakes for Mellanox, most markets generally want to see at least two competitive alternatives, and realistically it will take the Omni-Path ecosystem several years to become firmly established.

Cloud computing data center equipment is one of the fastest-growing semiconductor application segments today and for the foreseeable future. Despite the ongoing and unprecedented level of consolidation in the microchip industry, it is exciting to see small, dynamic companies like Cavium and Mellanox present fresh challenges to established goliaths such as Broadcom and Intel.
