Visual Quality
by Bernard Murphy on 06-12-2017 at 7:00 am

A few years ago, I started looking at data visualization methods as a way to make sense of large quantities of complex data. This is a technique that has become very popular in big data analytics where it is effectively impossible to see patterns in data in any other way. There are vast numbers of different types of diagram – treemap, network and Sankey are just a few examples – each designed to highlight certain aspects of the data – concentration, connectivity, relative size and other characteristics. Given the right type of diagram, key attributes of mountains of data can become immediately obvious.


I didn’t get beyond an experimental stage in my own work, so I was very happy to see that Rene Donkers, CEO at Fractal, had taken the idea all the way to a production capability for data visualization around library analytics, which he calls error fingerprint visualization.

Library (Liberty) files can get pretty large, covering OCV timing models and power models among many other characteristics, which raises an obvious question: how do you check that all of this is correct? I’m not thinking here about the basics, such as whether each model has the right name and the right pins, or basic consistency checks between these and the tables. Questions around table data are more challenging. Monotonicity and the correct sign of the slope are already covered in the Fractal Crossfire product, but whether values fall within reasonable bounds is no longer a binary question; there is no bright line separating reasonable from unreasonable.
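To make the contrast concrete, a binary table check really is binary. The sketch below is not Crossfire’s code, just an illustration on a made-up cell_rise table: it either passes or fails, with no judgment call involved, which is exactly what the bounds question lacks.

```python
# Illustrative sketch only, not Crossfire's implementation. The table is a
# made-up cell_rise delay table indexed by input slew (rows) and output load
# (columns); delays are expected to be non-decreasing along both axes.

def is_monotonic_non_decreasing(table):
    """Return True if every row and every column is non-decreasing."""
    rows_ok = all(all(row[i] <= row[i + 1] for i in range(len(row) - 1))
                  for row in table)
    cols_ok = all(all(col[i] <= col[i + 1] for i in range(len(col) - 1))
                  for col in zip(*table))
    return rows_ok and cols_ok

cell_rise = [  # delays in ns (hypothetical values)
    [0.10, 0.15, 0.22],
    [0.12, 0.18, 0.26],
    [0.16, 0.24, 0.35],
]

print(is_monotonic_non_decreasing(cell_rise))  # True: a clean pass/fail answer
```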


Crossfire now provides help in analyzing these cases through visualization. For example, for rule 7201, “Range check for cell_rise/cell_fall delay values”, you start by specifying what you consider an acceptable range for these delays, say 0-10ns. Delay values outside this range will be flagged in the normal kind of error listing, but that could amount to a lot of error messages. The trick in this or any other effective visualization is to present that information in a way that makes it easy to reach conclusions about root causes. Here, Crossfire presents all violations in a network diagram, starting from the cell name, with connections to the associated tables and from there to pin names, min and max values and the applicable range limits.
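To give a sense of what sits underneath that picture, here is a minimal sketch of a rule-7201-style range check. The flat (cell, pin, table, value) entries and every name in it are my own assumptions for illustration; this is not Crossfire’s Liberty parser or its internals.

```python
# Hypothetical delay entries as they might come out of a Liberty parser:
# (cell, pin, table, value_ns). Names and values are invented.
entries = [
    ("AND2X1", "A",     "cell_rise", 0.42),
    ("AND2X1", "A",     "cell_fall", 11.8),   # outside 0-10 ns
    ("DFFX2",  "OAMOD", "cell_rise", 14.3),   # outside 0-10 ns
    ("DFFX2",  "OAMOD", "cell_fall", 0.95),
]

LOW_NS, HIGH_NS = 0.0, 10.0  # the user-specified acceptable range

violations = [e for e in entries if not (LOW_NS <= e[3] <= HIGH_NS)]

# The network diagram is essentially drawn from records like these:
# cell -> table -> pin -> offending value, plus the applicable limits.
for cell, pin, table, value in violations:
    print(cell, table, pin, value, (LOW_NS, HIGH_NS))
```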

You can temporarily extend an allowed range through waivers. In the example above, blue lines show violations which fall within waiver limits, whereas red lines show cases falling outside those limits. Waivers provide a way to experiment with more relaxed bounds before committing to those changes.
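A rough sketch of that waiver classification as I read it (the ranges and values below are invented, and this is not Fractal’s API):

```python
# A waiver temporarily widens the acceptable range (here from 0-10 ns to
# 0-12 ns); each flagged value is then colored by whether it falls inside
# the widened range.
WAIVER_RANGE = (0.0, 12.0)

violations_ns = [11.8, 14.3, 10.4]  # values already flagged against 0-10 ns

def classify(value, waiver_range):
    low, high = waiver_range
    return "blue (within waiver)" if low <= value <= high else "red (outside waiver)"

for v in violations_ns:
    print(f"{v:5.1f} ns -> {classify(v, WAIVER_RANGE)}")
```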

What stands out from the diagram above (OK, you need to look closely; try magnifying or look at the white paper link below) is that a lot of errors are associated with the OAMOD pin. You immediately see that you need to drill down into problems with that pin. Maybe this is a design problem, maybe a characterization problem; either way, it’s obvious that addressing this area can resolve a lot of the flagged errors.
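The diagram makes that concentration visible at a glance; the same conclusion could be reached with a simple tally per pin, sketched here with invented pin names and counts.

```python
# Count flagged errors per pin and see where they concentrate.
from collections import Counter

violating_pins = ["OAMOD"] * 37 + ["A"] * 3 + ["CLK"] * 2   # invented data

for pin, count in Counter(violating_pins).most_common():
    print(f"{pin:6s} {count:3d}")
# OAMOD dominates, so that pin is the obvious place to start digging.
```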

This goes to the heart of the value of visualization methods. When looking at failures from any kind of pass/fail analysis (or indeed any binary division of data), it is unlikely that the data is randomly distributed, especially when effort has been made to reduce failures. It is probable that many failures can be attributed to a relatively small number of root-causes.


Similarly, the visualization can help you decide whether the limits you set on values should be adjusted. If values beyond an upper limit grow only modestly, perhaps the upper bound should be raised. If they show signs of rising rapidly, perhaps that signals a design or characterization problem, or maybe an unavoidable characteristic of this cell in this usage, indicating that designers need to be warned not to stray into this area.
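One crude way to pose that question numerically, purely as an illustration with invented values, is to look at how far the flagged values stray past the bound:

```python
UPPER_NS = 10.0
flagged = [10.3, 10.6, 10.9, 11.1]        # creeping just past the limit
# flagged = [10.3, 14.8, 23.5, 41.0]      # blowing up: likely a real problem

excess = [v - UPPER_NS for v in flagged]
print(f"max excess {max(excess):.1f} ns, mean excess {sum(excess) / len(excess):.2f} ns")
# Small, flat excesses suggest the bound is merely too tight; rapidly growing
# excesses point at a design/characterization issue, or a usage to warn against.
```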

I’m a believer in visualization aids to analysis of complex data. We can only do so much with pass-fail metrics presented in lists and spreadsheets. Visualization provides a way to tap skills we already have that can be much more powerful than our limited ability to see patterns through text and number inspection. Until we have deep-learning to handle these problems, perhaps we should put our visual cortex to work. You can learn more about Crossfire error fingerprints HERE.
