
  • Software Security is Necessary but NOT Sufficient

    As the silicon designs inside the connected devices of the Internet of Things transition from specifications to tapeouts, electronics companies have come to the stark realization that software security is simply not adequate. Securing silicon is now a required, not optional, part of RTL design processes.

    Design-for-Security (DFS) needs to be considered from IC architecture through implementation. Recent comments from industry luminaries indicate that the topic is no longer up for debate. The jury has returned, and its verdict was captured well by Larry Ellison, Executive Chairman of the Board and Chief Technology Officer of Oracle:

    "Silicon security is better than OS security. Then every operating system that runs on that silicon inherits that security. And the last time I checked, even the best hackers have not figured out a way to download changes to your microprocessor. You can't alter the silicon, that's really tricky."1

    We founded Tortuga Logic on the belief that if you secure the system at the silicon level, the software layers above inherit that security. Build upon a trusted foundation and the rest of the system will follow, because you can run untrusted software on a trusted hardware environment.

    Tortuga Logic provides hardware security assessments to top-tier companies in the automotive, defense/aerospace, and IoT markets, among others, helping them identify and resolve security vulnerabilities in their silicon designs. Here are some examples that illustrate why silicon security is no longer optional in today’s RTL design process.

    Resource Isolation

    The notion of resource isolation relates to on-chip access control rules and “data leaking” between different software processes. For example, it is quite common for access control rules in an SoC to be designed into the silicon. The security software will thus have no control over the access control logic, and if there is a logic design error or misconfiguration, the application software could, for example, inappropriately access a crypto key manager. Proving that the access control logic is correct and has no security vulnerabilities is therefore critically important for SoC confidentiality and integrity, even if security software is being utilized.

    Isolation properties enforced purely at the software level (e.g. memory isolation governed by the OS) do not necessarily hold in the silicon. Software processes share a tremendous amount of state at the silicon level: instruction and data caches, I/O controllers, and more. On-chip hardware must guarantee that software cannot leak information through these shared resources. This is an especially pressing concern in cloud-based computing environments.
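    To make the access-control concern concrete, here is a minimal sketch in Python (not Tortuga Logic code; all master and region names are hypothetical) of a hardware access-control table baked into silicon. Because the rules are fixed at design time, software above cannot compensate for a mis-designed rule, which is why the RTL itself must be verified:

    ```python
    # Hypothetical access-control rules fixed at design time:
    # master -> set of memory regions it may read. Immutable after tapeout.
    ACCESS_TABLE = {
        "secure_cpu": {"crypto_key_store", "dram"},
        "app_cpu":    {"dram"},
        "dma_engine": {"dram", "crypto_key_store"},  # design error: DMA should NOT see keys
    }

    def hw_read_allowed(master: str, region: str) -> bool:
        """Model of the silicon access-control check."""
        return region in ACCESS_TABLE.get(master, set())

    # The OS can sandbox app_cpu, but it has no control over the DMA path:
    assert not hw_read_allowed("app_cpu", "crypto_key_store")  # looks safe at the OS level
    assert hw_read_allowed("dma_engine", "crypto_key_store")   # yet keys leak via DMA
    ```

    The second assertion passing is the point: a single misconfigured row in the silicon table opens a path to the crypto key manager that no software layer can close.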

    JTAG Port

    The ubiquitous JTAG port was originally developed to provide boundary-scan testing on printed circuit boards. As systems were integrated onto silicon, the port became an effective tool for debugging embedded systems because of the access it provides to the sub-blocks of an SoC. However useful the port may be for debug and test, the downside is that it creates yet another path for hackers to exploit security vulnerabilities. A thorough discussion of the topic can be found in “Attacks and Defenses for JTAG,” a paper by Kurt Rosenfeld and Ramesh Karri of the Polytechnic Institute of New York University.2 The paper includes a relevant statement from Mohammad Tehranipoor, previously of the University of Connecticut:

    “JTAG is a well-known standard mechanism for in-field test. Although it provides high controllability and observability, it also poses great security challenges.”

    Hackers know that the first instruction executed after CPU reset can be a JTAG debug instruction. A JTAG port can thus reach system resources at boot, before any embedded or application software is running. An attacker able to put the system into debug mode has complete control of the system, with full access to the CPU’s registers, program memory, and other memory in the system. Using the JTAG port, hackers can covertly wreak havoc throughout a system before any embedded processor has even booted. Pretty scary, huh? Even worse, consider that almost all connected devices of the IoT have a JTAG port!

    Embedded processor companies have been promoting ways to provide adequate security for the JTAG port as shown in the figure below courtesy of Andes Technology.3

    [Figure: JTAG port security options, courtesy of Andes Technology]

    Andes suggests that users implement pass code validation, where anyone attempting to access the JTAG port must present the stored pass code. The most secure option Andes cites is storing the pass code on a remote server, but this is untenable for the many products that do not have full-time Internet access. The pass code must then be stored in on-chip non-volatile memory, which only creates yet another crypto key vulnerability.
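    A minimal sketch of the pass-code-gated debug unlock described above, in Python (hypothetical names; this is an illustration of the scheme, not Andes code). Note the trade-off the paragraph identifies: the on-chip stored secret that makes offline unlocking possible is itself a new attack target.

    ```python
    import hashlib
    import hmac

    # Hypothetical on-chip non-volatile memory holding a hash of the pass code.
    # Storing a hash rather than the raw code limits what a memory readout reveals.
    ON_CHIP_NVM = {"jtag_passcode_hash": hashlib.sha256(b"factory-secret").digest()}

    def jtag_unlock(presented: bytes) -> bool:
        """Gate JTAG debug access on a presented pass code.
        hmac.compare_digest avoids a timing side channel in the comparison."""
        digest = hashlib.sha256(presented).digest()
        return hmac.compare_digest(digest, ON_CHIP_NVM["jtag_passcode_hash"])

    assert jtag_unlock(b"factory-secret")   # legitimate debugger unlocks
    assert not jtag_unlock(b"guess")        # attacker without the secret is refused
    ```

    Even with hashing and constant-time comparison, an attacker who can read or glitch the NVM cell is attacking the silicon directly, which is exactly the class of vulnerability DFS verification targets.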

    Key Takeaways:
    These examples are some of the motivations behind Prospect™, Tortuga Logic's proprietary security property verification environment, built for the Tortuga Logic security language, Sentinel™. Using Sentinel properties provided by Tortuga Logic and verifying them in Prospect early in the RTL design process allows RTL designers to verify an SoC against security vulnerabilities.

    A simple example of the power of Tortuga's DFS solution is the extremely efficient way it formally verifies that crypto keys are secure, via a Sentinel keyword called $all_outputs. This keyword indicates that a signal or asset of interest should not flow to any output under any circumstance. The resulting security property applied to a crypto key can be read as “Key cannot flow to all outputs.” The exact Sentinel syntax is: key =/=> $all_outputs
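    The idea behind such a property can be sketched with simple taint tracking in Python (illustrative only: Prospect's actual analysis works on RTL, and this is not Sentinel or Tortuga Logic code). The key is tagged as tainted, taint propagates through any operation that touches it, and the property is violated if a tainted value reaches an output:

    ```python
    class Tainted:
        """A value carrying a taint bit that marks derivation from a secret."""
        def __init__(self, value: int, tainted: bool):
            self.value, self.tainted = value, tainted

        def __xor__(self, other: "Tainted") -> "Tainted":
            # Anything computed from tainted data is itself tainted.
            return Tainted(self.value ^ other.value, self.tainted or other.tainted)

    key        = Tainted(0x5A, tainted=True)   # the asset of interest
    plaintext  = Tainted(0x3C, tainted=False)
    ciphertext = key ^ plaintext               # derived from the key

    def flows_to_outputs(outputs) -> bool:
        """True if the secret reaches any output, violating the property."""
        return any(o.tainted for o in outputs)

    assert flows_to_outputs([ciphertext])      # key information reaches an output
    assert not flows_to_outputs([plaintext])   # plaintext alone carries no taint
    ```

    In a real design, a properly implemented cipher is where the flow is intended to stop; the value of an automated check is in finding the unintended flows, such as a key crossing a debug or test output.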

    [Figure: Tortuga DFS solution example]


    Summary

    The hopeful expectation that electronic products can be secured by software alone has been proven wrong by the security vulnerabilities found in today’s silicon devices. SoC design teams are quickly integrating design-for-security capabilities into their RTL design flows to verify the security properties created for their designs. With the proliferation of connected IoT devices, and with the increasing capabilities of cyber criminals, silicon security can no longer be an afterthought: it must be an integral part of the silicon design process.

    About Jason Oberg

    Dr. Jason Oberg is chief executive officer of Tortuga Logic, overseeing its technology and strategic positioning. He is the founding technologist of Tortuga Logic and brings with him intellectual property that he successfully transferred from the University of California. Dr. Oberg holds a Bachelor of Science in Computer Engineering from the University of California, Santa Barbara, and Master of Science and Ph.D. degrees in Computer Science from the University of California, San Diego.

    References

    1. The Inquirer: http://www.theinquirer.net/inquirer/...ity-in-silicon
    2. Polytechnic University of New York: http://isis.poly.edu/~kurt/papers/de...test_final.pdf
    3. SemiWiki: https://www.semiwiki.com/forum/conte...andes-way.html