Can I Trust my Hardware Root of Trust?
by Bernard Murphy on 02-28-2019 at 7:00 am

Hardware Roots of Trust (HRoTs) have become a popular mechanism to provide a foundational level of security in a cell-phone or IoT device or indeed any device that might appear to a hacker to be a juicy target. The concept is simple. In order to offer credible levels of security, any level in the stack has to be able to trust that levels below it have not been compromised. The bottom of the stack becomes the root of trust. Given the sophistication of modern threats, most of us believe that has to be the hardware itself.

(Figure: Radix-S user flow)

HRoTs aim to handle all security functions within a tightly-managed enclave. These typically include functions such as secure key generation and storage, authentication, encryption and decryption, and perhaps secure memory partitioning. The goal is to put all the security eggs in one basket and watch that basket very carefully, rather than to scatter security features around a design and then be unsure what sneaky tricks an attacker might use to get around them.

So HRoTs are a good thing. And there are quite a few vendors who have excellent HRoT IP and will be happy to have you adopt their solution. Problem solved? Not exactly. You are going to integrate an HRoT into your larger design and, sadly, there are a number of ways this can go wrong. First, HRoTs are configurable, because one fixed design can't fit all possible needs, and anything you can configure you can configure incorrectly. Second, you can make mistakes in hardware connections. Don't laugh; some of these can be very subtle. Third, and most challenging, vulnerabilities at this level are not just in hardware or just in software; they can be in a combination of the two. (For those familiar with the domain, think of timing-channel attacks on caches.)

Now that you know mistakes can happen, how are you going to find them? For the first two classes of problem, you might argue that a combination of hardware simulation and formal verification could do the trick. Maybe, but the problem is that you first need to know what you're looking for. And the debate is academic anyway, because as soon as you need to cover hardware-plus-software exploits, testing complexity explodes. Exploits can run over many instruction cycles and may use cache access times and other factors to accomplish their objectives. Mapping tests for any of this into standard hardware verification formats would be painful and is clearly impractical at the scale necessary to provide the comprehensive security coverage you need.

That said, you also clearly don't want to have to set up a whole new verification infrastructure to solve these problems. What you'd like is a mechanism that works with your existing verification platforms, particularly emulation, since you'll want to run software on your hardware platform. A good way to accomplish this would be through a class of assertions which can capture these security-level checks and then be compiled in some manner into the existing verification infrastructure.

Tortuga Logic has created a nice approach to accomplish exactly this. I should say first that my explanation here approaches their technology bottom-up rather than as a top-down presentation, but we hardware verification types may find it easier to understand that way. And this is just an example; talk to Tortuga for the full range of capabilities.

Any security check really comes down to proving there is no path through which something privileged (such as an encryption key) can flow to some unprivileged location (such as a USB interface). What is a little different from pure hardware approaches is that these "things" can be logical (data in memory locations) or physical (hardware). Tortuga has a format to describe this kind of assertion and compile it into your standard verification environment.
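As a purely illustrative sketch (this is a generic no-flow notation of my own, not necessarily Tortuga's actual syntax), such a requirement might be stated as simply as:

    aes_core.key_reg =/=> usb_ctrl.tx_data

meaning that no information derived from the key register may ever reach the USB transmit data, through any sequence of logic, registers or software activity.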

A set of these assertions together represents a threat model for the design; Tortuga Logic calls the language for these assertions "Sentinel". The threat model, together with the RTL, is compiled into a set of SVA assertions which you can run in any of your verification platforms, from formal to emulation (or even FPGA prototypes, I would guess, since what they generate is an RTL model which runs together with your design).
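To make that concrete for hardware verification types, here is a rough sketch of the sort of check such a flow might boil down to: a shadow "taint" bit follows the privileged value through the design, and an ordinary SVA assertion fires if tainted data is ever driven onto the unprivileged interface. To be clear, the module, signal names and taint logic below are my own hand-written illustration of the general information-flow-tracking idea, not Tortuga's Sentinel input or their generated output.

// Illustrative only: hand-written taint tracking for one privileged value.
// All names (key_load, usb_tx_valid, ...) are hypothetical.
module key_flow_monitor (
  input logic clk,
  input logic rst_n,
  input logic key_load,        // privileged source: key register written
  input logic key_consumed,    // key provably retired inside the enclave
  input logic usb_tx_valid,    // unprivileged sink: USB transmit strobe
  input logic usb_from_crypto  // datapath currently routes crypto output to USB
);
  // Shadow "taint" bit: set when the key enters the datapath,
  // cleared only when it is retired inside the enclave.
  logic key_tainted;

  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n)            key_tainted <= 1'b0;
    else if (key_load)     key_tainted <= 1'b1;
    else if (key_consumed) key_tainted <= 1'b0;
  end

  // The security property: key-derived (tainted) data must never reach USB.
  // Because this is plain RTL plus a standard SVA assertion, it runs on
  // whatever already runs your design: formal, simulation or emulation.
  assert property (@(posedge clk) disable iff (!rst_n)
    (usb_tx_valid && usb_from_crypto) |-> !key_tainted)
    else $error("key-derived data reached the USB interface");
endmodule

The real value of automating this is that genuine information-flow tracking has to follow the data bit-by-bit through every path in the design, which is exactly the part you would not want to hand-craft.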

So far, so good, but many users aren't necessarily security experts; they invested in an HRoT because they wanted to avoid becoming experts. Doesn't this verification requirement drag them back into needing to learn more about security? Tortuga just released a new update of their software, Radix-S, which aims to put more of this verification around HRoTs on auto-pilot. Jason Oberg (the CEO) told me they have helped draft guidelines and provide more features and guidance to set up the threat model and the flow for security novices. He tells me that even the experts like this flow, because for them it adds automation; they know what they want but they don't want to have to hand-craft it every time.

All of which is great, but for me the real deal-closer is the testbench part of the story. Normally, new simulation-based technologies can do great things only if you develop special-purpose testbenches to drive them. They sound good, but you have to do even more (and often rather specialized) test development work to tap the promise, which greatly limits their appeal. Not so with the Tortuga technology. The key differentiator in their approach is how they do the analysis: they look at information transfer through logic rather than at logic states, and they can do this based on your existing test suites. No need to develop new testbenches; just run what you already have together with their generated security-checking RTL. (Jason added, reasonably, that they do expect your testbenches to deliver reasonable coverage.)
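One way to picture that "no new testbench" point, assuming the generated checker is plain RTL along the lines of the earlier sketch: SystemVerilog's bind construct can attach such a monitor inside the existing design hierarchy without editing the testbench at all, so the regression you already run simply picks it up. The hierarchy and signal names here are, again, hypothetical.

// Illustrative only: attach the monitor to the DUT without touching the testbench.
// soc_top, u_hrot and u_usb are hypothetical design hierarchy names.
bind soc_top key_flow_monitor u_key_flow_monitor (
  .clk             (clk),
  .rst_n           (rst_n),
  .key_load        (u_hrot.key_reg_we),
  .key_consumed    (u_hrot.key_retired),
  .usb_tx_valid    (u_usb.tx_valid),
  .usb_from_crypto (u_usb.tx_src_is_crypto)
);

Your existing tests then exercise the design exactly as before; the security check just rides along and flags any run in which key-derived data escapes.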

So you can run advanced hardware/software security checks on your existing verification infrastructure, using your existing test suites. It's difficult to imagine why you wouldn't want to try that if you are even a little bit worried about security. Jason presented a workshop session on this topic Monday afternoon at DVCon. If you weren't able to attend, you can learn more about Radix-S HERE.
