@head: Coverage-Driven Methodology for SoC Development Critical for Success

@deck: Dr. Andreas Dieckmann of Siemens AG presents a case study on adopting Specman "e", formal property and equivalence checking, emulation and prototyping, and the coverage-driven methodology developed by his team for the verification of SoC projects.

@text: Methodologies that deliver highly automated and integrated solutions have become the key to successful verification of complex system-on-chip (SoC) designs. With the many verification tools, techniques, and languages available today, choosing the right path or set of solutions can be daunting.

The verification methodology we have chosen in the Automation and Drives (A&D) group of Siemens AG has evolved over our last few projects and has reached its current coverage-driven form, used in the verification of two related ASIC SoC designs. Each of these designs contained about 4M logic gates and numerous small memories totaling about 1MB of SRAM. These two chips were significantly more complex than previous projects, required a multi-site development team, and fueled our methodology evolution.

<b>A Little Background</b>

Long ago, our team chose VHDL as our RTL design language due to its early standardization and its superior capabilities (user-defined and enumerated types, package and generate statements, library support, etc.) as compared to the original Verilog language.

Our choice of VHDL was also made with verification in mind. Its advanced constructs allowed us to build more sophisticated and more reusable testbenches than was possible with Verilog. Thus &ndash; up until 2001 &ndash; our verification environments were VHDL-centric, with both the RTL design and the majority of the testbench code in VHDL.

In 2001, we took a major step in the evolution of our verification process when we chose the <i><b>e</b></i> language and the Specman Elite testbench automation solution (now available from Cadence). We found many benefits in this approach, including the ability to add randomization to our VHDL testbenches and constrained-random stimulus generation capabilities that made it much easier to exercise our designs thoroughly and find bugs more quickly.
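
To give a flavor of the approach, the sketch below shows a minimal constrained-random bus transaction in <i><b>e</b></i>. The struct name, fields, address window, and weights are hypothetical illustrations rather than our actual environment; the point is that declarative keep constraints and weighted selection let the tool generate legal, biased stimulus automatically.

<'
// Minimal sketch of a constrained-random bus transaction.
// All names, ranges, and weights here are illustrative only.
struct packet_s {
    kind : [READ, WRITE];          // transaction type
    addr : uint(bits: 16);         // target address
    data : uint(bits: 32);         // write payload

    // Keep addresses inside a (hypothetical) peripheral window.
    keep addr in [0x4000..0x7FFF];

    // Soft constraint: default bias toward writes, overridable per run.
    keep soft kind == select {
        30 : READ;
        70 : WRITE;
    };
};
'>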

Over time, our verification plans gradually shifted from test-focused to feature-focused. Figure 1 shows an example of a traditional verification plan that lists the tests to be written for each major functional unit in a chip, and tracks the status of test completion.

Functional Unit / Test Name / Spec Written / Test Written / Test Passed
Bus Interface / read_sequence_a / X / X / X
Bus Interface / read_sequence_b / X / X / X
Bus Interface / write_sequence / X / - / -
Bus Interface / r_w_intermixed / - / - / -
Cache controller / cache_hits / X / X / X
Cache controller / cache_misses / - / - / -
Cache controller / cache_flush / X / - / -
Interrupt FSM / exercise_all_states / X / X / -

Figure 1. The traditional verification plan

The problem with this traditional approach was that it required a precise mapping from functional units to specific tests. That makes sense when the tests are hand-written for specific areas of the design. However, constrained-random stimulus generation may exercise many areas of the design at once and can run as long as the user chooses. So the notion of an individual test is no longer useful.
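
To illustrate the shift, a constrained-random "test" in this style is little more than a generation loop plus a run length and a random seed. The sketch below builds on the hypothetical packet_s struct shown earlier; the field and method names are again illustrative rather than production code.

<'
// Sketch: a generic random driver replaces many hand-written directed tests.
// Builds on the hypothetical packet_s struct shown earlier.
extend sys {
    num_packets : uint;
    keep soft num_packets == 1000;      // run length is a knob, not a new test

    drive_packets() @any is {
        for i from 1 to num_packets do {
            var pkt : packet_s;
            gen pkt;                    // constrained-random generation
            // ... hand pkt to the bus functional model here ...
            wait cycle;
        };
    };

    run() is also {
        start drive_packets();
    };
};
'>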

<b>Coverage Metrics</b>

So how can verification engineers tell what a constrained-random test run is actually exercising? They can't unless they adopt some sort of coverage metric to provide a quantitative measure of verification effectiveness. With such metrics in place, we can say that a test run verified all areas that it covered and, ideally, we can combine the results from all test runs to get an overall view of coverage.

For our designs, we have found that functional coverage points provide the best measure for determining what each test run has accomplished and where we stand in terms of overall verification completeness.
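
In <i><b>e</b></i>, such coverage points are expressed as cover groups sampled on events from the testbench monitors. The fragment below sketches the idea for the hypothetical packet_s transaction from earlier; the event name, items, and ranges are assumptions chosen for illustration.

<'
// Sketch of a functional coverage group (names and ranges are illustrative).
extend packet_s {
    event pkt_done;                     // emitted by a (hypothetical) monitor

    cover pkt_done is {
        item kind;                      // did we see both reads and writes?
        item addr using ranges = {
            range([0x4000..0x5FFF], "low_window");
            range([0x6000..0x7FFF], "high_window");
        };
        cross kind, addr;               // each kind in each address window
    };
};
'>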

<b>Adding the Management Element</b>

As we began the project with the two 4M-gate SoCs, we decided to add Cadence's Incisive Enterprise Manager to further automate our process. In concert with Specman, the testbench automation solution, Enterprise Manager provides a mechanism for capturing features in our design. It also reports functional coverage results against these features in a clean graphical way. Figure 2 shows a screen shot of one such report in HTML format.

Figure 2. A Modern Verification Plan (vPlan)

This combination of powerful software solutions, including testbench automation and verification management, has enabled us to take advantage of a true coverage-driven methodology. Our team has put a great deal of effort into defining detailed, corner-case features in our verification plans and into specifying functional coverage points that track the exercise of these features, helping us to determine when we can "tape out" successfully.

<b>Looking towards Future Development</b>

We have been very pleased with the results of using the coverage-driven verification methodology on our two latest SoC projects. One measure of our success is our consistent discovery of bugs throughout the verification process, as shown by the example in Figure 3.

Figure 3. Defects detected over time

The nature of constrained-random stimulus generation means that we can continually run additional tests, experiment with different seeds to vary the random behavior, or tweak biases to produce a better mix of stimulus (such as the ratio of reads and writes on a bus) as long as we keep finding bugs. Observing the bug-discovery rate and tracking coverage metrics are both important contributors to the tape-out decision.
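
Because default biases can be written as soft constraints, shifting the stimulus mix for one regression run is a small, run-specific override rather than a testbench change; in <i><b>e</b></i>, a soft constraint loaded later takes priority over an earlier one. The fragment below sketches such an override for the hypothetical packet_s transaction from the earlier examples.

<'
// Sketch: a run-specific extension re-weights the read/write mix.
// Loaded on top of the base environment; names are illustrative.
extend packet_s {
    keep soft kind == select {
        80 : READ;                      // stress the read path in this run
        20 : WRITE;
    };
};
'>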

We embraced the coverage-driven methodology and made significant strides in schedule predictability, design quality and reuse. The faster and more thorough bug discovery achieved on the two-ASIC projects satisfied the predictability and quality aspects. We were also able to reuse many components of our verification environment from the module level to the full-chip level, and many will be reusable on future projects as well.

As future projects get more complex, we expect that significantly more verification cycles will be needed. As our regression tests get longer, we will likely need to run on server farms rather than only a few machines, a capability supported by Enterprise Manager. Also, we will probably want to use test-ranking to automatically select subsets of our full regression suites for rapid verification of RTL changes.

<b>Summary</b>

In summary, the last five years or so have been a period of rapid evolution for our verification team at Siemens A&D. We have moved from VHDL-based directed tests into a constrained-random, coverage-driven approach complemented by other technologies such as formal analysis that are all tied together by a comprehensive plan-to-closure methodology. We are confident that our verification methodology will continue to evolve to keep pace with our future project demands.

<i><b>Note:</b> A more complete discussion of this topic can be found in </i>"Metric Driven Design Verification: An Engineer's and Executive's Guide to First Pass Success"<i> by Hamilton Carter and Shankar Hemmady, New York: Springer, 2007 (ISBN 978-0-387-38151-0).</i>

------@author: ------

Dr. Andreas Dieckmann lives with his family in Nürnberg, Germany. In 1995, after obtaining his MA at the University of Erlangen and his PhD in Electronic Engineering at Technical University of Munich, he began working at Siemens AG. Initially he was involved in board and fault simulation. From 1997, Dr. Dieckmann gained expertise in system simulation and verification of ASICs. Since 2001, he has been in charge of coordinating and leading several verification projects employing simulation with VHDL and Specman "e", formal property and equivalence checking, emulation and prototyping. The case study described here is an extension of the coverage driven methodology developed by his team for the verification of SoC projects.