A Security Idea for EDA / Embedded Design
by Bernard Murphy on 08-25-2015 at 4:00 pm

I’m on a mission to find novel ideas for EDA / embedded design tools. One I have been discussing on and off with a DARPA friend for at least a couple of years is how to grade the security of a hardware design or, more comprehensively, of an embedded system including both the hardware and the software running on it.

This feels like something that would be useful to do. After all, in hardware we can grade testability, in hardware and software we can grade test coverage, and in ISO 26262 there is at least a subjective concept of grading safety risk. Wouldn’t it be nice to know that embedded systems in missiles, personal payment systems, automobiles and medical implants had a similarly objective level of security? Of course these systems have well-defined defenses against known exploits, but intuitively a security metric should be determined not just by defenses against the problems you know but also by some measure of security risk in the problems you don’t know.

Security metrics are not very well-developed even in the software space, but there are interesting papers that could serve as a starting point for an embedded security metric. Manadhata and Wing at CMU developed an attack-surface metric for software: a static measure of a system’s total vulnerability, in which the attack surface captures how many avenues an attacker has for exploiting weaknesses in the system. This can be elaborated in several directions, particularly by using the Common Weakness Enumeration (CWE) maintained by the Mitre Corporation as the set of basic weaknesses around which you build a metric based on accessibility, privilege and so on. An example weakness would be the well-known buffer overflow.
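To make the shape of such a metric concrete, here is a minimal Python sketch in the spirit of the Manadhata-Wing damage-potential-to-effort ratio. The privilege and access weights, and the three entry points, are invented for illustration; this is not the authors’ exact formulation, just the general idea that a resource contributes more to the attack surface when it runs with high privilege and is easy for an attacker to reach.

    from dataclasses import dataclass

    # Hypothetical numeric weights. Higher privilege means more damage
    # potential; more restrictive access rights mean more attacker effort.
    # The scales are illustrative, not taken from the paper.
    PRIVILEGE_WEIGHT = {"root": 5, "user": 3, "unauthenticated": 1}
    ACCESS_WEIGHT = {"remote": 1, "local": 3, "physical": 5}

    @dataclass
    class EntryPoint:
        name: str
        privilege: str   # privilege the resource runs with (damage potential)
        access: str      # access rights needed to reach it (attacker effort)

    def attack_surface(entry_points):
        """Sum of damage-potential / attacker-effort ratios over all
        externally reachable resources: a larger total means a larger
        attack surface."""
        return sum(PRIVILEGE_WEIGHT[e.privilege] / ACCESS_WEIGHT[e.access]
                   for e in entry_points)

    system = [
        EntryPoint("debug UART", privilege="root", access="physical"),
        EntryPoint("web config page", privilege="user", access="remote"),
        EntryPoint("firmware update socket", privilege="root", access="remote"),
    ]
    print(attack_surface(system))  # => 9.0

Note that the metric says nothing about what an attacker would do through any of these entry points; it only scores how exposed they are, which is exactly the attack-surface philosophy.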

What is appealing about a generalized version of the approach, aside from delivering a metric, is that it depends on a quite small set of well-known weaknesses (~200 in the Mitre list). Contrast this with the very wide range of possible attack types. What we know about security today focuses almost exclusively on preventing specific classes of attack (for example, stealing a PIN by somehow gaining secure access), but the total class of possible attacks is almost unbounded. The approach suggested here exploits the fact that every attack starts from one of a relatively small set of weaknesses, and it is not concerned with the objective or mechanics of the attack.

So to build a metric for the security of an embedded system, we first need a list of common weaknesses. The Mitre CWE list is a good starting point, and presumably could be enhanced by a list of common weaknesses in hardware. What these should be will require some debate but may not be as difficult as it sounds. Especially worthy of consideration are weaknesses that can enable denial of service or a reduction in quality of service; it is often easier to see how these could be accomplished than to see how data could be maliciously injected into or stolen from the system. Could you force bridge FIFOs to overflow? Is there a way to cause a cache to flush repeatedly? Can a state machine be pushed into a deadlock state? Can privilege be raised on a data transfer from a non-privileged interface IP?
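As a thought experiment, that checklist could be encoded as a small hardware-weakness catalog and scored the same way a CWE-based metric would be. Every identifier, weakness name and severity below is hypothetical; a real catalog would come out of the kind of debate just described.

    # A hypothetical hardware-weakness catalog in the spirit of Mitre's CWE
    # list, mapping each weakness to an invented severity weight.
    HW_WEAKNESSES = {
        "HW-1: bridge FIFO can be forced to overflow":        3,
        "HW-2: cache can be made to flush repeatedly (QoS)":  2,
        "HW-3: state machine has a reachable deadlock state": 3,
        "HW-4: privilege can be raised on a transfer from a "
               "non-privileged interface IP":                 5,
    }

    def security_score(mitigated):
        """Toy metric: total severity of weaknesses left unmitigated.

        `mitigated` is the set of weakness IDs the design defends against;
        a lower score means a smaller residual weakness surface.
        """
        return sum(sev for name, sev in HW_WEAKNESSES.items()
                   if name.split(":")[0] not in mitigated)

    print(security_score({"HW-1", "HW-3"}))  # => 7 (HW-2 and HW-4 remain)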

You might argue that if a tool could detect these problems, you would fix them anyway. The same should be true for common software weaknesses, and yet they continue to be at the root of almost all software attacks. Apparently, the fact that a problem can be detected does not guarantee it will be corrected. Sometimes this happens through oversight, sometimes because the cost of a fix exceeds its (perceived) value. But what is an acceptable compromise in one context may not be acceptable in another. A security metric along the lines sketched here would give a prospective customer a way to assess whether an embedded design really meets their security expectations.

Stay tuned for more ideas…
