Breaking the SoC lab walls

by Don Dingee on 05-11-2015 at 7:00 am

There used to be this thing called the “computer lab”, with glowing rows of terminals connected to a mainframe or minicomputer. Computers required a lot of care and feeding, with massive cooling and power requirements. Microprocessors and personal computers appeared in the 1970s, with much smaller and less expensive machines placing isolated dots of compute capability on the desks of the fortunate ones. When people say “computer lab” now, they usually mean the 100-level computing course at the local community college.

During the rise of the engineering workstation in the 1980s and 1990s, the mantra was Sun’s slogan: “the network is the computer.” Ethernet provided an easy way to connect computers quickly, and distributed file systems such as NFS and interprocess communication via sockets or RPCs made distributed application development possible. Compute power scaled easily by adding workstations on the corporate network, and they could be placed anywhere cabling could be run.

Embedded systems took full advantage, becoming Ethernet-enabled for distributed applications. In the first dot-com and open source boom, vendors with big, expensive hardware based on VME, CompactPCI, AdvancedTCA, and other system standards were creating “virtual labs”. These were controlled environments, a fixed hardware and software configuration kept in a padded cell on the corporate network at headquarters, designed so customers anywhere could log in remotely to benchmark code and evaluate the platform.

SoCs emerged with integrated connectivity, powering a wave of mobile devices and shrinking many embedded platforms to tiny boards. Today, Wi-Fi brings Ethernet everywhere, tablets and laptops bring computers anywhere, and software developers bring code from all over the globe. SoCs now feature multicore processing, dedicated acceleration units, and bottleneck-free interconnect to a wider variety of peripherals.

With all these advances, why is FPGA-based prototyping still stuck on a relatively big, expensive machine in a “SoC lab” with limited access?

Granted, one cutting-edge large FPGA with high-speed SERDES interfaces is not cheap. Placing several of those large FPGAs in a prototyping system with the proper pin multiplexing and the right partitioning software to chop up SoC designs effectively is an art form, practiced by few. FPGA-based prototyping systems also bring enhanced debugging, many with deep trace features.
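To see why partitioning and pin multiplexing are hard, consider the basic arithmetic: a design cut across FPGAs turns every boundary-crossing signal into an inter-chip wire, and when cut signals outnumber physical I/O pins, each pin must be time-multiplexed. Production partitioners use far more sophisticated algorithms (multilevel min-cut and the like); this is only a toy sketch of the balancing and pin-budget math, with all block names and numbers hypothetical:

```python
# Toy sketch (not any vendor's actual tooling): split a netlist across two
# FPGAs, balancing capacity, then count signals crossing the cut. Cut
# signals beyond the available I/O pins must be time-multiplexed.

def greedy_bipartition(blocks, nets):
    """blocks: dict of name -> gate count; nets: list of (blockA, blockB) wires."""
    a, b = set(), set()
    size_a = size_b = 0
    # Assign largest blocks first to whichever side has more room (balance).
    for name in sorted(blocks, key=blocks.get, reverse=True):
        if size_a <= size_b:
            a.add(name); size_a += blocks[name]
        else:
            b.add(name); size_b += blocks[name]
    # A net is "cut" when its endpoints land on different FPGAs.
    cut = sum(1 for x, y in nets if (x in a) != (y in a))
    return a, b, cut

def mux_ratio(cut_signals, io_pins):
    """Signals each physical pin must carry, i.e. ceil(cut / pins)."""
    return -(-cut_signals // io_pins)

# Hypothetical SoC blocks (gate counts) and inter-block wires.
blocks = {"cpu": 900, "gpu": 800, "dsp": 400, "ddr_ctrl": 300, "usb": 100}
nets = [("cpu", "gpu"), ("cpu", "ddr_ctrl"), ("gpu", "ddr_ctrl"),
        ("cpu", "usb"), ("dsp", "ddr_ctrl")]

a, b, cut = greedy_bipartition(blocks, nets)
print(sorted(a), sorted(b), cut)   # 3 nets cross the cut in this example
print(mux_ratio(cut, 2))           # with only 2 pins, each carries 2 signals
```

A real tool optimizes both objectives at once, since a poor cut inflates the mux ratio and the multiplexing clock ends up gating prototype performance.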

Until now, people using FPGA-based prototyping systems have gotten by with the SoC lab approach. Teams of SoC designers were relatively small, located close enough to walk down to the lab to engage with their masterpiece. IP blocks might be developed and debugged first, then passed into the integrated system for a concentrated, full-up effort.

All that is changing as SoC complexity continues to increase, third-party IP from a variety of sources becomes more prevalent, and software co-verification with advanced operating systems requires expertise from a community scattered across the globe. It is no longer enough to scale the FPGA-based prototyping hardware – the entire approach has to change.

S2C has outlined the look of a state-of-the-art FPGA-based prototyping platform, and a vision for adding cloud capability, in a new white paper. There is the usual discussion of performance, extensibility, platform-aware synthesis that can partition and pin-multiplex, and debug. Data illustrating a task-level breakdown for today’s SoC design – upwards of $300M by some estimates for large designs on advanced 16/14nm nodes – shows software and system-level co-verification to be the largest efforts.

A prototyping system with cloud-based access to management features can transform those efforts. Mixed-level prototyping is now common, with some blocks complete, some under development in RTL, and some in progress using behavioral models. Physically distributed teams are also common, some working on SoC IP, some on operating systems or drivers.

Cloud-enabled FPGA-based prototyping platforms raise an exciting new set of possibilities for breaking the SoC lab walls. Perhaps a corporate SoC design team coordinates debug efforts with an off-site third-party IP supplier, or brings together software developers from multiple locations to view debug results as they unfold during a verification run. With complexity going up and co-location going down, the approach S2C is suggesting has a great deal of merit.

Download the entire white paper, along with other S2C white papers, under this title:

FPGA Prototyping of System-on-Chip Designs
