Welcome to the newest forum on SemiWiki: Silicon Test and Yield Analysis! We’re all glad that you stopped by. Our hope is that this forum can be a place for product engineers, test engineers, yield engineers, failure analysts, and process engineers to come together and ask and answer each other’s questions. Our measure of success will be that members can come here, ask questions, get answers, and do their jobs better as a result. Our belief is that everybody already knows a lot about something that somebody else is currently struggling with. In the words of William Gibson: “The future is already here — it's just not very evenly distributed.”
This leads us to our first question: What does Silicon Test have to do with Yield Analysis?
Test is the ultimate arbiter of yield: if a die on a wafer fails testing on the ATE, it will not be shipped to a customer. That makes test data a natural jumping-off point for yield learning. The least sophisticated and most common test metric used in yield learning is the die’s hardbin. At the other extreme are bleeding-edge electrical fault isolation methodologies like bitmap classification and scan-based Diagnosis Driven Yield Analysis (DDYA). In every case, data about the chip is collected from the ATE and analyzed offline in an effort to understand why the silicon isn’t behaving the way it’s expected to.
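To make the hardbin idea concrete, here’s a minimal sketch in Python (the die data and bin numbers are made up, and the convention that hardbin 1 means “pass” is common but assumed here) of how per-die hardbin results roll up into a wafer yield number and a first-pass Pareto of failing bins:

```python
from collections import Counter

# Hypothetical per-die hardbin results for one wafer.
# Assumption: hardbin 1 is the passing bin (a common, but not
# universal, convention).
hardbins = [1, 1, 3, 1, 5, 1, 1, 3, 1, 1, 7, 1, 1, 3, 1]

counts = Counter(hardbins)
total = len(hardbins)
wafer_yield = 100.0 * counts[1] / total
print(f"Wafer yield: {wafer_yield:.1f}% ({counts[1]}/{total} dies in bin 1)")

# A Pareto of the failing bins is often the first step in yield learning:
for bin_num, count in counts.most_common():
    if bin_num != 1:
        print(f"  hardbin {bin_num}: {count} dies ({100.0 * count / total:.1f}%)")
```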
Regardless of the sophistication of the data collected by the tester, everybody downstream depends on it to do their job. The test engineer uses hardbin data to optimize the test program ordering. The yield engineer uses the results of bitmap classification to track issues. The failure analyst uses the results of scan diagnosis to localize defects. What makes this interesting is that the richer the data coming off the tester, the better everybody can do their respective jobs.
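As a toy illustration of the first of those jobs, here’s a sketch (with hypothetical test names, fail counts, and test times; a real flow would also have to respect test dependencies and setup order) of how a test engineer might use per-test fail data to reorder a stop-on-first-fail program so that cheap, frequently failing tests run first:

```python
# Hypothetical per-test fail counts and test times pulled from ATE datalogs.
fail_counts = {"continuity": 120, "idd_standby": 340,
               "scan_stuck_at": 910, "func_speed": 560}
test_time_ms = {"continuity": 5, "idd_standby": 20,
                "scan_stuck_at": 150, "func_speed": 400}

# In a stop-on-first-fail flow, tests that fail often relative to their
# cost should run first: failing dies bin out early and save tester time.
order = sorted(fail_counts,
               key=lambda t: fail_counts[t] / test_time_ms[t],
               reverse=True)
print("Suggested test order:", order)
# -> ['continuity', 'idd_standby', 'scan_stuck_at', 'func_speed']
```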
So my question is: do you require test data on a daily basis to do your job, and if so, do you believe that richer data can improve your results?