Do my tests certify the quality of my products?
by Pawan Fangaria on 05-23-2013 at 9:00 pm

Honestly speaking, there is no firm answer to this question. When customers confront us with it, we talk about coverage reports, but the truth is that a product with a high coverage rate can still fail easily in a customer environment. Of course coverage is important, and to rule out failures caused by a construct that was never tested, we stress heavily on 100% coverage. When I was managing physical design EDA products, I often argued with my test team about flow tests, which go far beyond syntax and semantics and, in a true sense, have no limits. Our toughest problem was that no customer was ready to give us its design for testing purposes. Sometimes, if we were lucky, we could get a portion of it under NDA; otherwise we relied on repeated failure reports from the customer until the tests passed.

I am happy to see this tool called Crossfire from Fractal Technologies. It enables the customer as well as the supplier to work in a collaborative mode and certify the checks required for a design to work. It works at the design level to validate the complete StdCell library, IOs and IPs used in the design, and it applies more than 100 types of checks consistently across different formats at the front-end as well as the back-end, including databases such as Cadence DFII, MilkyWay and OpenAccess. Apart from parsing each format, it has specific checks for cells, terminals, pins, nets and so on, applied uniformly across all cells in the library.
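
To make the idea of format-consistent checks concrete, here is a minimal sketch in Python of a cross-format pin consistency check; the parsed pin sets are invented, and this is my illustration rather than Crossfire's actual code:

# Minimal sketch of a cross-format consistency check: every view of a cell
# should agree on its pin names. The parsed_views dict is a hypothetical
# stand-in for real LEF/Liberty/Verilog parser output.
def check_pin_consistency(cell, parsed_views):
    """parsed_views maps a format name to the set of pin names it declares."""
    reference_format, reference_pins = next(iter(parsed_views.items()))
    errors = []
    for fmt, pins in parsed_views.items():
        missing = reference_pins - pins
        extra = pins - reference_pins
        if missing or extra:
            errors.append(f"{cell}: {fmt} vs {reference_format}: "
                          f"missing={sorted(missing)} extra={sorted(extra)}")
    return errors

# A real check would first classify pins (signal, power, ground), since
# power pins are legitimately absent from some views.
views = {
    "LEF":     {"A", "B", "Y", "VDD", "VSS"},
    "Liberty": {"A", "B", "Y"},
    "Verilog": {"A", "B", "Y"},
}
for err in check_pin_consistency("AND2_X1", views):
    print(err)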

What is interesting, and what adds to the quality of the tests, is a special set of checks that actually look into design quality at the time the design is constructed. Some nice examples of these are –

Layout vs. layout – Identity between polygons is checked by a Boolean mask XOR operation, along with checks that abstract polygons enclose the corresponding layout polygons. Typical errors are polygon mismatches between the compared views.
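
To illustrate the XOR idea (using the open-source shapely library, not Fractal's implementation), two views of a cell are identical exactly when the symmetric difference of their polygons is empty:

# Minimal sketch of the XOR identity check. Two views of the same cell
# are identical iff the symmetric difference of their polygons is empty.
from shapely.geometry import Polygon

def xor_check(poly_a, poly_b, tolerance=0.0):
    diff = poly_a.symmetric_difference(poly_b)  # Boolean mask XOR
    return diff.area <= tolerance, diff.area

layout_view   = Polygon([(0, 0), (2, 0), (2, 1), (0, 1)])
abstract_view = Polygon([(0, 0), (2, 0), (2, 1.1), (0, 1.1)])  # 0.1 taller

ok, leftover = xor_check(layout_view, abstract_view)
print("identical" if ok else f"mismatch, XOR area = {leftover:.3f}")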

LEF cell size – The LEF cell is checked to have the correct size as per the LEF technology.
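
My reading of this check, sketched below, is that the cell width and height must be integer multiples of the site dimensions declared in the technology LEF; the numbers are invented:

# Hypothetical LEF size check: cell dimensions should be integer multiples
# of the placement site dimensions from the technology LEF.
def check_lef_size(cell_w, cell_h, site_w, site_h, eps=1e-6):
    errors = []
    if abs(cell_w / site_w - round(cell_w / site_w)) > eps:
        errors.append(f"width {cell_w} is not a multiple of site width {site_w}")
    if abs(cell_h / site_h - round(cell_h / site_h)) > eps:
        errors.append(f"height {cell_h} is not a multiple of site height {site_h}")
    return errors

# 1.30 is not a multiple of 0.19, so the width is flagged.
print(check_lef_size(cell_w=1.30, cell_h=1.4, site_w=0.19, site_h=1.4))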

Routability – Checks whether signal pins can be routed to the cell boundary. Typical errors are – “pins not on grid” or “wrongly coded offset”.
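
The on-grid part of this check is easy to picture: every pin access point should sit at offset + n × pitch of the routing grid. A small sketch with invented grid numbers:

# Hedged sketch of the "pins not on grid" part of the routability check.
def on_grid(coord, offset, pitch, eps=1e-6):
    n = round((coord - offset) / pitch)
    return abs(coord - (offset + n * pitch)) <= eps

pins = {"A": (0.19, 0.38), "B": (0.20, 0.38)}  # hypothetical pin centers
offset, pitch = 0.0, 0.19                       # hypothetical grid definition
for name, (x, y) in pins.items():
    if not (on_grid(x, offset, pitch) and on_grid(y, offset, pitch)):
        print(f"pin {name} not on grid at ({x}, {y})")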

Abutment – Cells are checked for self-symmetry and for abutment with a reference cell. Typical abutment errors show up as geometries that fail to line up at the shared cell boundary.
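
One condition I would expect such a check to cover (an assumption on my part) is that power rails span identical intervals at the left and right cell edges, so abutted cells form continuous rails:

# Hypothetical abutment check: a rail, given as (y_low, y_high) spans at
# the left and right cell edges, must match on both edges. A real check
# would compare with a tolerance rather than exact equality.
def check_rail_abutment(cell, rails):
    """rails: {rail_name: {'left': (ylo, yhi), 'right': (ylo, yhi)}}"""
    errors = []
    for name, edges in rails.items():
        if edges["left"] != edges["right"]:
            errors.append(f"{cell}: rail {name} left {edges['left']} "
                          f"!= right {edges['right']}")
    return errors

rails = {"VDD": {"left": (1.26, 1.40), "right": (1.26, 1.40)},
         "VSS": {"left": (0.00, 0.14), "right": (0.00, 0.12)}}  # mismatch
print(check_rail_abutment("INV_X1", rails))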

Functional Equivalence – Functional representations in different formats are checked for equivalence. Schematic netlists such as Spice, CDL or schematic views must exhibit the same functionality. Similarly, Verilog, VHDL or any other description must mean the same functionality. Typical functional equivalence errors are – “mismatch between asynchronous and synchronous descriptions”, “short circuit”, “missing functional description”, “Preset and Set yielding different results”, and so on.
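
For small combinational cells, equivalence can be checked by brute-force truth tables, as the sketch below does; production tools use BDD or SAT engines to scale, and the cell functions here are invented:

# Two Boolean functions, recovered from different formats, are equivalent
# iff they agree on every input combination.
from itertools import product

def equivalent(f, g, n_inputs):
    return all(f(*bits) == g(*bits) for bits in product([0, 1], repeat=n_inputs))

liberty_func = lambda a, b: int(not (a and b))   # NAND2 per a .lib 'function'
verilog_func = lambda a, b: int(not a or not b)  # same cell per a Verilog model

print(equivalent(liberty_func, verilog_func, n_inputs=2))  # True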

Timing Characterization: cross format – Checks for consistency of timing arcs across all formats, such as Verilog, VHDL, Liberty, TLF and the like.
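
A sketch of how such a cross-format arc check could work: collect each format's timing arcs as (related pin, output pin) pairs and compare the sets. The arcs below are invented:

# Each format's parsed timing arcs should form the same set.
def compare_arcs(arcs_by_format):
    all_arcs = set().union(*arcs_by_format.values())
    for fmt, arcs in arcs_by_format.items():
        for arc in sorted(all_arcs - arcs):
            print(f"arc {arc} missing in {fmt}")

compare_arcs({
    "Liberty": {("A", "Y"), ("B", "Y")},
    "Verilog": {("A", "Y")},            # specify block misses B -> Y
})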

Timing Characterization: NLDM (Non-Linear Delay Model) – Consists of Index, Table Values, Trend, Property and Attribution checks. Typical characterization errors are – delay decreasing with increasing output load, obsolete default conditions, non-paired setup and hold times, etc.
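
The first error in that list is a simple monotonicity violation, which a trend check can catch directly; the table values below are invented:

# NLDM trend check: for a fixed input slew, cell delay should not
# decrease as output load increases.
def check_delay_trend(loads, delays):
    for (l0, d0), (l1, d1) in zip(zip(loads, delays), zip(loads[1:], delays[1:])):
        if d1 < d0:
            print(f"delay decreases from {d0} to {d1} as load rises {l0} -> {l1}")

loads  = [0.001, 0.004, 0.016, 0.064]   # output capacitance (pF)
delays = [0.021, 0.034, 0.031, 0.155]   # ns; 0.034 -> 0.031 is suspicious
check_delay_trend(loads, delays)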

Timing Characterization: CCS (Composite Current Source) – Consists of Index, Table Values, Reference time, Peak position, Threshold passage and Accuracy checks. Typical CCS characterization errors are violations of these properties in the current waveforms.
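
As a simplified illustration of a peak-position style check (my own reduction, not the tool's algorithm), a driver current waveform should rise to a single peak after the reference time and decay back toward zero:

# Simplified CCS waveform sanity check, with an invented waveform (ns, mA).
def check_ccs_waveform(times, currents, reference_time):
    if times[0] < reference_time:
        print("samples before reference_time")
    peak = max(range(len(currents)), key=lambda i: currents[i])
    rising  = all(a <= b for a, b in zip(currents[:peak], currents[1:peak + 1]))
    falling = all(a >= b for a, b in zip(currents[peak:], currents[peak + 1:]))
    if not (rising and falling):
        print("waveform is not single-peaked")
    if currents[-1] > 0.05 * currents[peak]:
        print("current does not decay back toward zero")

check_ccs_waveform([0.0, 0.1, 0.2, 0.3, 0.4], [0.0, 0.8, 1.0, 0.9, 0.6],
                   reference_time=0.0)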

Timing Characterization: ECSM (Effective Current Source Model) – Consists of Index, Table Values, Threshold passage and Accuracy checks. Typical ECSM characterization errors are violations of these properties in the voltage waveforms.
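
A sketch of a threshold-passage check: the voltage waveform of a clean rising transition should cross each threshold of VDD exactly once, and in order; the waveform is invented:

# Count upward crossings of a voltage threshold in a sampled waveform.
def threshold_crossings(times, volts, vdd, fraction):
    level = fraction * vdd
    return [t for t, (v0, v1) in zip(times[1:], zip(volts, volts[1:]))
            if v0 < level <= v1]

times = [0.0, 0.1, 0.2, 0.3, 0.4]
volts = [0.0, 0.2, 0.5, 0.8, 1.0]   # clean rising edge, vdd = 1.0 V
for frac in (0.2, 0.5, 0.8):
    xs = threshold_crossings(times, volts, vdd=1.0, fraction=frac)
    if len(xs) != 1:
        print(f"{int(frac * 100)}% threshold crossed {len(xs)} times")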

Power Characterization: NLPM (Non-linear Power Model) – Consists of Index, Table Values and Trend checks.
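
In the same spirit, a table-values check for power might flag negative energy entries, which usually deserve review; this is my assumption about the rule, with invented values:

# Hypothetical NLPM table-values check: flag negative entries in an
# internal power (energy) table.
def check_power_table(slews, loads, energy):
    for i, row in enumerate(energy):
        for j, e in enumerate(row):
            if e < 0:
                print(f"negative energy {e} at slew={slews[i]}, load={loads[j]}")

check_power_table(slews=[0.01, 0.05], loads=[0.001, 0.004],
                  energy=[[0.12, 0.18], [0.15, -0.02]])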

A detailed description of all these checks is given in a paper on the Fractal website here.
The site also has example test reports of 45nm library and 45nm PDK library validations – Example1 and Example2.

An important characteristic of Crossfire is that it allows you to create new tests on demand, as per your design and design flow, hence achieving completeness across all types of checks in a true sense. It can also accommodate proprietary formats and databases. The Fractal team provides expert knowledge on validation flows and on integrating customized APIs with Crossfire, which then provides a true environment for complete Quality Control and Assurance.
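
Crossfire's extension API is not described in this article, so purely as an illustration of the plug-in idea, a user-defined check could take a shape like this (all names hypothetical):

# Illustrative plug-in shape for a user-defined check; Crossfire's real
# API is not public here, so everything below is hypothetical.
class CustomCheck:
    name = "my_flow_rule"

    def run(self, cell):
        """Return a list of error strings for one cell."""
        errors = []
        if not cell["name"].endswith(("_X1", "_X2", "_X4")):
            errors.append(f"{cell['name']}: drive-strength suffix missing")
        return errors

for cell in [{"name": "AND2_X1"}, {"name": "AND2"}]:
    for err in CustomCheck().run(cell):
        print(err)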
