
  • Is there anything in VLSI layout other than “pushing polygons”? (5)

    (Image: Calibre t-shirt, 1998) Being new in Ottawa and trying to build some momentum toward automation in full-custom layout, I was telling industry people that I was interested in working with everybody to move this agenda forward. My Director of Engineering at the time, Peter Gillingham, took me to visit Carleton University in Ottawa. One of his professor friends, Martin Lefebvre, had a PhD student building a silicon compiler for standard cells. Theoretically they knew “everything”, but somebody from the “real world” was needed to say what made sense and what did not, what quality in layout means (!), and which rules are important to follow. David Skoll, who was the architect, the implementer, and the AE for it, became an instant friend. It was very exciting to share my layout knowledge with him, and over the years I continued to see most of their roadmap and advanced alpha demos. Like any other beginning, it was a bunch of scripts and some free tools from all over the map, and David called it Machiavelli.

    In the following years the name evolved into a full-size company that most of you will remember as Cadabra. In my opinion it was a very powerful idea and a solid implementation, but it had limited usage. At that time ASIC flows started to pop up from all over the map, and every design house was buying or getting free libraries from fabs and ASIC vendors instead of developing them in-house. Cadabra tried to implement a two-level cell generation and process migration, but with little success, so the tool died a few years ago under the Synopsys roof. What did I get from this? We learned a lot about the power of “coins” or “basic bricks”, and with Karl's help the MOSAID layout team generated a lot of “basic” cells: from devices to via arrays, coding the decoders, etc. Later, at PMC-Sierra, we had very advanced coins as part of our flow, specifically enhanced for power connections and capacitors on power supplies.

    One of the most time-consuming and error-prone tasks for a layout designer was (and still is) verifying DRC and LVS. MOSAID was using Dracula for verification, but the preparation of data was tedious, as flat verification was not possible. Since DRAM chips have a memory cell repeated, in our case 4-16 million times, we had to create “work structures”. To reduce the number of devices in a block, we created doughnuts, empty in the middle, with only one cell on the boundary for interface checks. On top of the size issues, in DRAM we had three different sets of design rules, CORE, Memory, and Periphery, so we also had to run verification on partial neighbouring regions. We then had to create schematics for each of these “work structures”. How many errors can humans introduce when ripping apart full hierarchies to create these cells in layout as well as in schematic? Any 4M DRAM or above was taking about one week for a single DRC or LVS run, with 3-5 people in layout and circuit design. When all was clean, any ECO would restart the creation and the full verification, with many potential new errors.

    We were very motivated to find a better solution. First, we evaluated the new Dracula (II), which offered two-level hierarchical verification, against Checkmate, the newly advertised Mentor Graphics tool. About the same time, Cadence started to talk about Vampire, and ISS came out with Hercules, advertised as the first hierarchical verification. I went to DAC in 1995 and had advance, private demos of each, but I was not convinced that any of them had the “revolutionary” solution a memory needed. Most of their efforts were directed toward ASICs, which at that time had 1-3 levels of hierarchy but very few devices at any level, as they used standard cells as the stepping stone for hierarchy. Back from DAC, we decided to bring in Hercules for an evaluation, but before doing that I called Mentor, as most of our platforms came from them. We told them that unless they had something “cooking” worth waiting for, we would go with Hercules. Suddenly somebody wanted to talk to us, and we got Michael McSherry, the technical marketing manager. Together with Gregg Shimokura, the coauthor of my book, we flew to Wilsonville, Oregon with our database on a tape.

    After about a week of hiccups, Gregg came back with great news: a one-hour LVS run on a 16M DRAM. We had brought both a clean database and a dirty one, to be sure they actually found the same number of errors we had found through our old Dracula flow. We started to see the light at the end of the tunnel. We agreed to work with them to make the tool “industry ready”, and by 1996 it was officially released. Mentor was so motivated to make this tool successful that they brought MingYong Yu, an experienced AE, from China to learn everything we did and coordinate factory development. He was physically inside MOSAID for his first year in Canada. The biggest thing for a memory design was the reduction in the “potential errors” between our work structures, as Calibre was the first real hierarchical verification. In 1996, after a few successful tapeouts, I wrote a press release with Mentor about Calibre's capabilities. For the following two years I personally helped Sundaram Natarajan, Mentor's California sales guy, explain to customers why this solution was superior to all others offered at that time. Sundaram sold so much Calibre in those two years that he wanted to reward me somehow. I wanted the layout team to feel proud of their achievements, so I convinced him to use this “reward” to subsidize 220 t-shirts (one for each MOSAID employee) with the design attached to this article! Yes, another not-quite-“layout” task…

    We had solved the duration and error issues, but working with a new tool meant that the fabs, specifically the memory ones, did not have a Calibre PDK ready, so we had to invent something else to make our life easier. We called it the Process Independent Setup (PIS). We got a CAD expert from Germany, Britta Krueger, who together with the layout team prepared the specifications of this PIS. What we wanted first was the DRC. We built in layout all the possible design-rule test cases in one chosen process, and we wrote the code to verify them, obviously done by different people. In parallel, Britta and some helpers wrote all the design rules in a parameterized fashion, in which the numbers are actually parameters.

    When a new process comes in, one person takes the manual and inserts the values in the parameters page. When done, they simply compile the parameterized DECK and obtain the real values in the Calibre command file. If a value is “0”, that rule will not exist in the compiled DECK. We knew that with this flow we could cover 80-85% of all design rules for any new process. Whenever a new process brought new design rules, we planned to add the new test structures and their rules to the original PDK and follow the same flow. The intention was to reduce the new-process setup from 100% new to 15% new. After a few months of work, we reduced any new process DECK generation from three weeks to one week when the processes were very different, and to one day when they were close. Then we started to enhance the PIS to cover LVS, device generators, router setup, etc.
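    The compile step described above can be sketched in a few lines. This is a minimal illustration, not the actual PIS implementation: the rule names, parameter names, and SVRF-like rule text are all hypothetical, but the mechanism is the one the article describes, where design rules are written once as parameterized templates, a per-process parameter table supplies the numbers, and a parameter value of 0 drops the rule from the compiled deck entirely.

```python
# Sketch of the PIS compile step (all names hypothetical).
# Each template maps a rule name to (parameter name, rule text with a
# {value} placeholder, in an SVRF-like style).
RULE_TEMPLATES = {
    "M1.W.1": ("m1_min_width", "INTERNAL METAL1 < {value}"),
    "M1.S.1": ("m1_min_space", "EXTERNAL METAL1 < {value}"),
    "V1.EN.1": ("via1_enclosure", "ENCLOSURE VIA1 METAL1 < {value}"),
}

def compile_deck(process_params):
    """Compile the templates into a concrete rule deck for one process.

    A parameter value of 0 (or a missing parameter) means the rule does
    not exist in this process, so it is omitted from the compiled deck.
    """
    deck = []
    for rule_name, (param, template) in sorted(RULE_TEMPLATES.items()):
        value = process_params.get(param, 0)
        if value == 0:
            continue  # rule absent in this process: leave it out
        deck.append(f"{rule_name} {{ {template.format(value=value)} }}")
    return deck

# Hypothetical process: no separate via-enclosure rule, so its value is 0.
params = {"m1_min_width": 0.5, "m1_min_space": 0.45, "via1_enclosure": 0}
for line in compile_deck(params):
    print(line)
```

    With these example values the via-enclosure rule disappears from the output deck, which is exactly the "if a value is 0, the rule does not exist" behaviour described above.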

    More about productivity enhancements next month!

    Also Read Pushing Polygons 1-4