- Synchronizer failures can occur at any time before or after the MTBF.
- Most chips in a wafer perform better than they do at the worst-case corner.
- High-volume, safety-critical products should be held to a high standard.
- The worst-case environment for a CDC may be rare in actual use.

An alternative that has received some recent attention is Monte Carlo simulation, in which PVT conditions are varied randomly over the range expected for the product. Instead of an estimate of MTBF, this approach yields an estimate of the probability of a metastability-induced synchronizer failure, assessed over the expected distribution of parameters and conditions. However, in the early stages of design, an impractical level of effort is required to investigate carefully even a few alternatives.
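To make the idea concrete, here is a minimal sketch of such a Monte Carlo estimate. The settling time-constant model, its log-normal spread, and the clock, data, and window parameters are all illustrative assumptions, not values from the article; a real study would draw tau from PVT-dependent circuit simulations.

```python
import math
import random

def mc_failure_rate(n_trials=100_000, t_s=2e-9, t_w=20e-12,
                    f_c=500e6, f_d=100e6, seed=1):
    """Monte Carlo estimate of the mean metastability-failure rate.

    Each trial draws a settling time-constant tau for a random PVT
    condition and evaluates the standard failure-rate expression
    t_w * f_c * f_d * exp(-t_s / tau).  The log-normal tau model is a
    stand-in for a real PVT-dependent circuit simulation.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        # Hypothetical PVT draw: tau varies log-normally around 50 ps.
        tau = 50e-12 * math.exp(rng.gauss(0.0, 0.2))
        total += t_w * f_c * f_d * math.exp(-t_s / tau)
    return total / n_trials  # expected failures per second over PVT
```

Even this toy version hints at the cost problem: every design alternative needs its own large batch of trials, each of which would be a full circuit simulation in practice.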

These thoughts led me to investigate how variability in the transistor threshold voltage affects the metastability settling time-constant. This approach bypasses the extensive burden of Monte Carlo simulation, but still provides insight into the effects of parameter variability.

I chose to study the variation in threshold voltage because it has a major effect on the settling time-constant and is a classic example of a Gaussian distribution. This investigation led me to wonder what happens when the cross-tied transistors’ threshold voltages are at an extreme value of that distribution, in the vicinity of the metastability voltage *V*_{m}. If the simple theoretical model of a strongly inverted transistor holds, the settling time-constant would be infinite; not a good thing for a synchronizer’s MTBF. But that’s theory. Can reality be different?
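Under the Gaussian model, it is easy to ask how likely such an extreme draw is. The sketch below uses the standard normal tail; the mean, standard deviation, and *V*_{m} values are assumed for illustration only.

```python
import math

def prob_vt_above(v_m, mean_vt, sigma_vt):
    """Probability that a Gaussian threshold voltage Vt exceeds v_m,
    using the standard normal tail Q(z) = erfc(z / sqrt(2)) / 2."""
    z = (v_m - mean_vt) / sigma_vt
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Illustrative (assumed) numbers: mean Vt = 0.45 V, sigma = 20 mV,
# metastability voltage Vm = 0.55 V -- a 5-sigma event.
p = prob_vt_above(0.55, 0.45, 0.020)
```

A 5-sigma draw is rare for one flip-flop, but across millions of chips, each with many synchronizers, such outliers are essentially guaranteed to exist somewhere, which is why the question matters.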

*PublicSync* was used to simulate synchronizer behavior. As you can see in the figure above, the settling time-constant grows significantly as the transistor threshold moves toward and above the metastability voltage *V*_{m}, and it does so smoothly, without a singularity at *V*_{m}. These results were obtained by the analysis tool, *MetaACE*, using an automated scan of *V*_{t}. By fitting a curve to the data points it was possible to calculate the probability of a synchronizer failure, Pr(*fail*), given the mean and standard deviation of *V*_{t}. The details of this calculation can be found here.

The table shows the ratio of two normalized calculations of the probability of failure:

- *p*_{wc}(*fail*, *t*_{s}): calculated assuming worst-case (*wc*) conditions for *V*_{t} and with an allowed settling time *t*_{s}
- *p*_{vt}(*fail*, *t*_{s}): calculated assuming a distribution of *V*_{t} and an allowed settling time *t*_{s}
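The two measures can be sketched numerically. The smooth, singularity-free growth of tau toward *V*_{m} is mimicked here by a made-up `tau_of_vt` function; all constants (tau0, *V*_{m}, the corner value, the Gaussian moments) are hypothetical stand-ins, not fitted MetaACE data.

```python
import math

def tau_of_vt(v_t, tau0=50e-12, v_m=0.55, k=0.2):
    """Hypothetical settling time-constant vs. threshold voltage:
    grows smoothly (no singularity) as v_t approaches and passes v_m."""
    return tau0 * (1.0 + math.exp((v_t - v_m) / (k * v_m)))

def p_fail_wc(t_s, v_t_wc):
    """Worst-case measure: evaluate at a single high corner value of Vt."""
    return math.exp(-t_s / tau_of_vt(v_t_wc))

def p_fail_vt(t_s, mean_vt, sigma_vt, n=2001):
    """Varying-threshold measure: average exp(-t_s/tau) over a Gaussian Vt
    by simple numerical integration over +/- 5 sigma."""
    lo, hi = mean_vt - 5 * sigma_vt, mean_vt + 5 * sigma_vt
    dv = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):
        v = lo + i * dv
        w = math.exp(-0.5 * ((v - mean_vt) / sigma_vt) ** 2)
        w /= sigma_vt * math.sqrt(2 * math.pi)
        total += w * math.exp(-t_s / tau_of_vt(v)) * dv
    return total

# Ratio p_wc / p_vt: how much the worst-case corner (here mean + 3 sigma)
# overstates the failure probability relative to the full distribution.
ratio = p_fail_wc(2e-9, 0.51) / p_fail_vt(2e-9, 0.45, 0.020)
```

With these assumed numbers the ratio comes out well above one, reproducing the qualitative point of the table: the corner-based estimate is pessimistic because most parts never sit at the corner.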

This ratio of probabilities shows how failures are overestimated by the worst-case (*wc*) measure as compared with the varying-threshold (*vt*) measure. The wider the distribution of threshold voltage and the longer the allowed settling time (*t*_{s}), the more this discrepancy grows. For a 500 MHz clock, the right-hand column would correspond to a two-stage synchronizer and a latency of 4 ns. For example, for some unsurprising safety-critical product conditions and a standard deviation of 20 mV, the *wc* measure suggests an extra, but unnecessary, synchronizer stage with its accompanying added latency.

So the take-away message for me is: calculate the probability of failure, Pr(*fail*), and not worst-case MTBF. Such a probability-based measure of risk should include the number of units in use, the unit lifetime and the distribution of semiconductor parameters such as transistor threshold voltage. Pr(*fail*) also avoids the misleading tendency to associate the MTBF with a failure-free period. That is a mistake many make, but one that masks the real possibility of failures occurring at any time during a product’s lifetime.
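The fleet-level view can be sketched with the standard constant-rate (Poisson) assumption; the MTBF, unit count, and lifetime below are illustrative numbers, not figures from the article.

```python
import math

def fleet_failure_prob(rate_per_unit, n_units, lifetime_s):
    """Probability of at least one failure across a fleet, assuming
    independent, constant-rate failures:
    Pr(fail) = 1 - exp(-rate * units * lifetime).
    Note there is no failure-free period: Pr(fail) > 0 for any
    positive lifetime, however large the MTBF."""
    return -math.expm1(-rate_per_unit * n_units * lifetime_s)

YEAR = 365.25 * 24 * 3600  # seconds per year

# Example: a per-unit MTBF of 1000 years sounds safe, yet over a
# fleet of one million units and a 10-year lifetime the probability
# of at least one failure is essentially certain.
p_fleet = fleet_failure_prob(1.0 / (1000 * YEAR), 1_000_000, 10 * YEAR)
```

This is exactly why a large MTBF alone is a poor risk measure for high-volume products: the fleet size and lifetime turn a reassuring per-unit number into a near-certain event.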

## Beware of Parameter Variability in Clock Domain Crossings

Jerry Cox 05-12-2015