
  • WarpStor, the Data Tardis: Small on the Outside, Large on the Inside

    There is a data explosion:
    • IBM says that 90% of all data was created in the last 2 years
    • Smartphone processor development requires 100GB of data per engineer
    • Android testing requires 30GB times the number of tests times the number of testers
    • Biotech simulation, game development and more all require enormous amounts of data

    This is a huge problem. Disk drives are cheap, but reliable enterprise-class storage is expensive; Gigabit Ethernet connections are slow and not scaling; and most tech environments are built on NFS, which is slow and carries high overhead. With hundreds of users on a project, a further challenge is eliminating needless duplication of the same files.

    Methodics is introducing WarpStor to address this problem. It is a content-aware network-attached storage (NAS) optimizer built on top of ProjectIC's abstraction model. It is vendor agnostic, co-existing with storage solutions from IBM, EMC, NetApp and more. It requires no kernel-level patches or other invasive changes, and it integrates seamlessly with existing OS infrastructure.
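The article doesn't spell out what "content-aware" means internally, but the standard technique is content-addressed storage: identical file contents are stored once, keyed by a hash, no matter how many workspaces reference them. Here is a minimal sketch of that idea; the class and method names (`DedupStore`, `put`, `unique_bytes`) are hypothetical illustrations, not WarpStor's actual API.

```python
import hashlib

class DedupStore:
    """Sketch of content-addressed storage: file data is stored once,
    keyed by the hash of its contents, so identical copies across many
    workspaces cost a single block of disk."""

    def __init__(self):
        self._blocks = {}   # content hash -> file data (stored once)
        self._index = {}    # (workspace, path) -> content hash

    def put(self, workspace, path, data):
        digest = hashlib.sha256(data).hexdigest()
        self._blocks.setdefault(digest, data)   # store only if unseen
        self._index[(workspace, path)] = digest

    def get(self, workspace, path):
        return self._blocks[self._index[(workspace, path)]]

    def unique_bytes(self):
        # Physical disk usage: one copy per distinct content, not per user.
        return sum(len(d) for d in self._blocks.values())

store = DedupStore()
rtl = b"module top ... endmodule"
for user in ("alice", "bob", "carol"):
    store.put(user, "top.v", rtl)   # three workspaces, one stored copy
```

Three users each "have" the file, yet `store.unique_bytes()` is the size of one copy, which is the kind of accounting that lets hundreds of near-identical workspaces share one physical footprint.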

    Although this is being announced today, it is actually mature technology. It has been in use at Methodics internally for over a year with great results in their build and regression process. For example, the disk space requirements for the Methodics internal regression suite have been reduced from 300GB to 1GB, with a similar reduction in network I/O and a big reduction in wall-clock time for running the regressions.

    This sort of reduction sounds too good to be true, so how does it work? Each project has an IP master workspace, and a workspace shrink reduces its storage footprint. Changes in a user's workspace are handled by copy-on-write, the same technique most operating systems use for virtual memory. Data that has not been changed is shared; only when a user modifies a file is it copied into their workspace and altered, while other users continue to share the original, unchanged version. As a result, creating a new workspace before any changes have been made is effectively instantaneous. Eventually the changes are (normally) released and become visible to others. So the first workspace requires some disk space and time to populate, but subsequent workspaces consume almost no disk space and take less than a second to create.
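The copy-on-write scheme described above can be sketched in a few lines. This is a toy model, not WarpStor's implementation; the names (`CowWorkspace`, `read`, `write`, `local_size`) are illustrative assumptions. The point is the cost model: a new workspace starts as pure shared references, and disk is consumed only by the files a user actually changes.

```python
class CowWorkspace:
    """Toy copy-on-write workspace: starts by sharing the master's
    files; a file is copied locally only on its first write."""

    def __init__(self, master_files):
        self._shared = master_files   # references only; nothing copied yet
        self._local = {}              # per-workspace copies of changed files

    def read(self, path):
        # Reads prefer the local copy if one exists, else the shared master.
        return self._local.get(path, self._shared.get(path))

    def write(self, path, data):
        # First write triggers the copy; the master stays untouched.
        self._local[path] = data

    def local_size(self):
        # Disk cost of this workspace = only the files it changed.
        return sum(len(d) for d in self._local.values())

master = {"cpu.v": b"module cpu ...", "cache.v": b"module cache ..."}
ws_a = CowWorkspace(master)   # created instantly: no data copied
ws_b = CowWorkspace(master)

ws_a.write("cpu.v", b"module cpu /* edited */ ...")
```

After the edit, workspace A sees its modified `cpu.v` while workspace B still reads the shared master copy and consumes zero additional disk, which is why unchanged workspaces are nearly free to create.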

    WarpStor is seamlessly integrated into ProjectIC. There is no change at all in the conceptual data model, just a major increase in efficiency both in disk space usage and in the network bandwidth required to move it in and out of users' own workspaces.

    In summary, ProjectIC's abstraction model enables smart data management, and WarpStor is seamlessly integrated with it. It can create 100GB+ workspaces in seconds, requiring negligible disk space at create time. Copy-on-write is used for changed files, so disk space requirements are tied to how much of the design actually changes. The result: huge savings in disk space, disk reads/writes, and network file transfers. This provides a true turbo-boost to ProjectIC.

    "Scotty, I need warp speed in 3 minutes or we're all dead." No problem Captain, MethodICs can do it.

    The WarpStor webpage is here.