Keywords: DFM, DFY, OPC, DRC, unified data model, lithography

Editorial Feature Tab: Methods & Tools – DFM/DFY Models

@head: True DFM/DFY Solutions Require More Than OPC And DRC

@deck: To make sure that all of the tools in the dataflow have real-time access to design data, lithographic-aware place-and-route engines and statistical-analysis engines need unified data models.

@text: One core problem underlies the vast majority of today’s design-for-manufacturing (DFM) and design-for-yield (DFY) issues: IC features are now smaller than the wavelength of the light used to create them (see Figure 1). This is akin to trying to paint a 1/4-in.-wide line using a 1-in.-diameter paintbrush. Currently, device manufacturers address this dilemma by post-processing a GDSII file with a variety of resolution-enhancement techniques (RETs). Examples of these RETs include optical proximity correction (OPC) and phase-shift masks (PSM). In the case of OPC, for example, the tool modifies the GDSII file by augmenting existing features or adding new ones--known as sub-resolution assist features (SRAFs)--to obtain better printability.
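To make the idea concrete, here is a minimal sketch of rule-based SRAF insertion. All of the rule values, function names, and geometry conventions below are invented for illustration; production OPC tools operate on full GDSII layouts with far richer rule decks:

```python
# A minimal sketch of rule-based SRAF insertion (all rule values invented).
# Each feature is an axis-aligned rectangle (x0, y0, x1, y1) in nanometers.

SRAF_WIDTH = 40      # assumed sub-resolution bar width; too narrow to print
SRAF_OFFSET = 90     # assumed spacing between feature edge and assist bar
MIN_CLEARANCE = 60   # assumed minimum clearance to any neighboring feature

def vertical_srafs(feature, neighbors):
    """Propose assist bars left and right of a feature, skipping any
    bar that would violate clearance to a neighboring feature."""
    x0, y0, x1, y1 = feature
    candidates = [
        (x0 - SRAF_OFFSET - SRAF_WIDTH, y0, x0 - SRAF_OFFSET, y1),  # left bar
        (x1 + SRAF_OFFSET, y0, x1 + SRAF_OFFSET + SRAF_WIDTH, y1),  # right bar
    ]
    def clear(bar):
        bx0, by0, bx1, by1 = bar
        for nx0, ny0, nx1, ny1 in neighbors:
            # expand the neighbor by the clearance and test for overlap
            if (bx0 < nx1 + MIN_CLEARANCE and bx1 > nx0 - MIN_CLEARANCE and
                    by0 < ny1 + MIN_CLEARANCE and by1 > ny0 - MIN_CLEARANCE):
                return False
        return True
    return [bar for bar in candidates if clear(bar)]

# An isolated 100-nm-wide line gets both assist bars; a crowded one may get none.
print(vertical_srafs((0, 0, 100, 1000), neighbors=[]))
```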

In the context of the final GDSII files and photomasks, the problem is that every structure in the design is affected by other structures in close proximity. Say, for example, that two geometric shapes are isolated from one another in the GDSII file and photomask. These shapes will print in a certain way. But if the same shapes are located near each other, each shape interrupts the light used to create the other. The resulting optical “shadowing” will modify both shapes--often in non-intuitive ways.
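The proximity effect can be illustrated with a crude one-dimensional stand-in for an optical model, in which the printed intensity is the drawn pattern blurred by a wavelength-scale kernel. The grid, blur width, and feature sizes below are invented; real lithography simulators use far more sophisticated imaging models:

```python
import numpy as np

# Crude 1-D stand-in for an optical model: the printed intensity is the
# drawn mask pattern blurred by a kernel whose width is set by the
# illumination wavelength. The blur alone is enough to show the
# proximity effect described above.

grid = np.zeros(400)                    # 1 sample = 1 nm (assumed scale)
sigma = 80.0                            # blur width ~ wavelength scale (assumed)
x = np.arange(-200, 200)
kernel = np.exp(-0.5 * (x / sigma) ** 2)
kernel /= kernel.sum()

def print_profile(mask):
    return np.convolve(mask, kernel, mode="same")

isolated = grid.copy(); isolated[100:150] = 1.0   # one 50-nm feature
paired = isolated.copy(); paired[180:230] = 1.0   # neighbor 30 nm away

# The same left-hand feature prints differently once a neighbor appears:
print(print_profile(isolated)[125], print_profile(paired)[125])
```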

To employ an effective DFM/DFY solution, IC designers must address manufacturing and yield problems caused by catastrophic or parametric failures that are either systematic (feature-driven) or statistical (random) in nature (see Figure 2). Unfortunately, existing design flows don’t adequately address these DFM/DFY problems at the 90-nm technology node and below.

A full RTL-to-GDSII design flow is needed in which all of the design and analysis engines are DFM/DFY-aware. In particular, a true DFM/DFY solution should feature lithographic-aware placement and routing engines coupled with statistical analysis engines. Such a solution should include a unified data model that gives all of the tools in the flow immediate and concurrent access to exactly the same data. Such tools range from synthesis to placement and routing, timing, extraction, power, and signal-integrity analysis.

The combination of lithographic-aware placement and routing engines will minimize the need for RETs such as post-layout OPC. These engines also will increase the effectiveness of any OPC that is used. In addition, they can mark the portions of the layout where OPC isn’t required and pass this information to subsequent OPC tools, preventing unnecessary OPC. Because OPC is minimized or anticipated during the design process, its impact on timing and area will be minimal. Correct-by-construction design closure will be achieved.

In the past, the design and manufacturing worlds have been treated as distinct entities. Designers have been shielded from the intricacies of the fabrication process by the “design rules” and “recommended rules” provided by the foundry. In earlier technology nodes, designers could safely assume that the chip could be manufactured if they and their tools rigorously met these rules. Any yield problems were considered the foundry’s responsibility and were addressed by improving the capabilities of the fabrication process or bringing that process under tighter control. In the case of today’s ultra-deep-submicron technologies, however, these rules no longer reflect the underlying physics of the fabrication process. Even if designers meticulously follow all of the rules provided by the foundry, the chips may still suffer unacceptable yields.

Limitations Associated with Design Rules

Design rules are becoming much more complex with every new technology generation. At the 130-nm node, for example, the design rules were relatively few and simple. The rules started to proliferate and become significantly more complicated at the 90-nm node. At the 65-nm node, they have become extremely complicated. For example, even a simple end-of-line rule now has a plethora of parameters (see Figure 3).
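As a hypothetical illustration of that complexity, consider how many parameters even one end-of-line spacing rule can carry. The field names and values below are invented, not any foundry’s actual rule deck:

```python
from dataclasses import dataclass

# A sketch of the parameters a single "simple" end-of-line spacing rule
# can carry at 65 nm. All names and values are illustrative.

@dataclass
class EndOfLineRule:
    eol_width: float        # rule applies only to line ends narrower than this
    eol_space: float        # required spacing ahead of a qualifying line end
    eol_within: float       # lateral window in which the spacing is enforced
    parallel_edge: float    # distance within which a parallel edge triggers...
    parallel_space: float   # ...a tighter spacing requirement
    two_edges: bool         # whether both flanking edges must be present

RULE_65NM = EndOfLineRule(
    eol_width=90.0, eol_space=120.0, eol_within=25.0,
    parallel_edge=100.0, parallel_space=140.0, two_edges=True,
)

def required_space(line_width, has_parallel_edges):
    """Spacing (in nm) required ahead of a line end under this rule."""
    if line_width >= RULE_65NM.eol_width:
        return None  # rule does not apply to wide line ends
    if has_parallel_edges and RULE_65NM.two_edges:
        return RULE_65NM.parallel_space
    return RULE_65NM.eol_space

print(required_space(60.0, has_parallel_edges=True))   # -> 140.0
```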

The end result is that the number and complexity of such design rules are spiraling out of control. In addition, employing these rules consumes huge amounts of memory and requires excessive run times. And as previously noted, these rules don’t even capture the underlying physics of what’s happening.

Current design flows are based on the use of design rules and recommended rules to generate initial GDSII files. Those files are then post-processed via a variety of RETs such as OPC and PSM. The problem is that RET takes place following layout (place and route), which is too late in the design flow. When the input to the RET tools is poor--as it is with existing flows--data sizes and run times explode. As a result, mask-generation costs skyrocket. In some cases, it’s simply not possible to satisfy the amount of RET required (for example, structures that need to be added) using the initial design. A time-consuming physical-design cycle must then modify the design to create room for the RET, which in turn alters the design’s performance characteristics.

Any IC fabrication process includes unavoidable variations that lead to fluctuations in device topology, behavior, and performance. If the performance falls outside the specification, the result is a form of yield failure known as parametric yield loss. Conventional design methodologies address such variations by defining worst-case conditions and then ensuring that the performance will meet the specification under any condition. This worst-case approach faces serious challenges at tighter design nodes. The only real recourse for conventional design flows is to guard-band the design by including excessive safety margins in the specification. But this approach makes it harder to successfully complete the design. It also leaves an unacceptable amount of performance “on the table.”

Summary of Limitations

The limitations associated with traditional DFM/DFY can be summarized by returning to the original matrix of manufacturing and yield problems (see Figure 4). In the systematic-catastrophic category, the problem is that the design rules used by the layout (place-and-route) tools cannot account for complex lithographic interactions and effects. Placing a component or track near another may negatively affect the printing of both structures. Similarly, in the systematic-parametric category, analysis cannot account for lithographic interactions and effects. In this case, placing a component or track near another may affect the properties and timing of both structures.

In the statistical-catastrophic category, recommended rules, such as adding redundant vias, cannot account for lithographic interactions and effects. The result is that adding a particular via may create a lithographically unfriendly situation that negatively impacts yield--the exact opposite of what was intended. Finally, in the statistical-parametric category, the design has to be created using worst-case scenarios because conventional analysis tools cannot account for statistical effects. Performance and yield suffer as a result.

Requirements for True DFM/DFY

A central point has been largely missed by conventional DFM/DFY approaches: In both of these terms, the “D” stands for design. That is, DFM/DFY implies analysis, prevention, correction, and verification during the design phase. It doesn’t imply post-GDSII fixes like OPC.

Ideally, manufacturability and yield considerations should be brought forward all the way into the synthesis stage of the design flow. Conventional synthesis engines perform their selections and optimizations based on the timing, area, and power characteristics of the various cells in the library, coupled with the design constraints provided by the designer. If cell libraries were also characterized in terms of yield, synthesis engines could trade off timing, area, power, and yield to produce optimum performance with better yield.

A number of key technology enablers are required for yield-aware synthesis to achieve its goal. They include accurate yield models and analysis, concurrent optimization, and the use of a unified data model. Each cell should be associated with an accurate yield model in relation to the other cells in the library. This step ensures that the yield estimation during the subsequent design optimization will be reliable. Concurrent optimization should be available to facilitate continuous tradeoffs between competing design objectives. Meanwhile, a unified data model is essential to ensure the real-time availability of the most up-to-date information required by the concurrent optimization algorithms.
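The following toy example shows the flavor of such a tradeoff: a per-cell yield figure is folded into the usual timing/area/power cost function. All cell data, weights, and names are invented; real synthesis engines optimize entire netlists under constraints rather than single cells:

```python
from dataclasses import dataclass

# A toy view of yield-aware cell selection. All cell data and weights
# below are invented for illustration.

@dataclass
class Cell:
    name: str
    delay_ps: float
    area_um2: float
    power_uw: float
    yield_score: float   # 0..1, from a (hypothetical) yield-characterized library

CANDIDATES = [
    Cell("NAND2_X1", delay_ps=32.0, area_um2=1.1, power_uw=0.8, yield_score=0.999),
    Cell("NAND2_X2", delay_ps=24.0, area_um2=1.9, power_uw=1.4, yield_score=0.995),
    Cell("NAND2_X4", delay_ps=19.0, area_um2=3.2, power_uw=2.6, yield_score=0.988),
]

def pick(slack_ps, w_area=1.0, w_power=1.0, w_yield=500.0):
    """Choose the cheapest cell that still meets timing; the yield term
    penalizes lithographically unfriendly cells alongside area and power."""
    feasible = [c for c in CANDIDATES if c.delay_ps <= slack_ps]
    cost = lambda c: (w_area * c.area_um2 + w_power * c.power_uw
                      + w_yield * (1.0 - c.yield_score))
    return min(feasible, key=cost).name

print(pick(slack_ps=30.0))   # tight timing forces a faster, lower-yield cell
print(pick(slack_ps=40.0))   # relaxed timing lets yield and area win
```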

The printability of cells has to be addressed by a more intelligent manufacturing and yield-aware placement engine. Such an engine must have knowledge of the limitations and requirements of the downstream OPC. Embedding a printability (lithographic) analysis capability in the placement engine will allow this engine to recognize patterns that must be avoided. It also will be able to identify locations where extra space must be added for use by downstream OPC. Often, such analysis will need to be performed “on-the-fly.” As a result, its algorithms must be extremely efficient in terms of run time and memory usage. They also should employ as many physics-based models as possible.
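A minimal sketch of the kind of screening such a placer might perform on the fly appears below. The forbidden-pair table and spacing values are invented; a real engine would derive them from fast, physics-based printability models:

```python
# A sketch of on-the-fly pattern screening in a litho-aware placer.
# The forbidden-pair table and spacing values are invented.

# (left_cell, right_cell) pairs whose abutment is assumed to print badly,
# mapped to the extra horizontal space (in placement sites) that fixes them.
FORBIDDEN_ABUTMENTS = {
    ("FILL_X1", "INV_X8"): 2,
    ("NAND2_X4", "NAND2_X4"): 1,
}

def legal_gap(left_cell, right_cell, gap_sites):
    """Accept a candidate placement only if any litho-driven spacing
    requirement between the two neighboring cells is satisfied."""
    needed = FORBIDDEN_ABUTMENTS.get((left_cell, right_cell), 0)
    return gap_sites >= needed

# The placer could also record OPC hints here: a gap that is already legal
# can be marked "no OPC needed" for the downstream mask-preparation tools.
print(legal_gap("NAND2_X4", "NAND2_X4", gap_sites=0))  # False: needs 1 site
print(legal_gap("NAND2_X4", "NAND2_X4", gap_sites=1))  # True
```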

In addition to design-rule-related yield failure, two other major failure mechanisms can take place in the routed portion of a design. One is failure due to defects on the wafer at random locations. The other is caused by printability problems that become more pronounced as design rules shrink further into the submicron realm. As stated earlier, the combination of lithographic-aware placement and routing engines will minimize the need for post-layout OPC. They also will increase the effectiveness of any such OPC required, minimizing its impact on timing and area and enabling correct-by-construction design closure.

Inevitable variations in the manufacturing process and environment also will cause chip performance to vary across a distribution. Parametric yield loss occurs when chip performance drifts out of specification. In order to maximize parametric yield, a new statistical design methodology must be employed. This methodology must include placing the mean of the distribution in the middle of the specification window--a technique called design centering. It also has to keep the spread of the distribution within the window, which is known as design desensitization.
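A short worked example shows why both techniques matter. Assuming, for simplicity, that performance follows a normal distribution, parametric yield is the probability mass inside the specification window (the spec limits and sigma values below are invented):

```python
from math import erf, sqrt

# Parametric yield under a normal-distribution model of performance:
# the probability that the performance lands inside the spec window.

def parametric_yield(mean, sigma, spec_lo, spec_hi):
    cdf = lambda x: 0.5 * (1.0 + erf((x - mean) / (sigma * sqrt(2.0))))
    return cdf(spec_hi) - cdf(spec_lo)

# Spec window: path delay must fall between 0.9 and 1.1 ns (invented numbers).
print(parametric_yield(mean=1.05, sigma=0.05, spec_lo=0.9, spec_hi=1.1))  # off-center: ~0.84
print(parametric_yield(mean=1.00, sigma=0.05, spec_lo=0.9, spec_hi=1.1))  # centering: ~0.95
print(parametric_yield(mean=1.00, sigma=0.03, spec_lo=0.9, spec_hi=1.1))  # + desensitization: ~0.999
```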

New analysis tools will employ statistical methods built on variation-aware parametric models and extraction. For example, statistical static-timing analysis (SSTA) can be used to calculate the statistical timing associated with each path and node in the design. These timings are no longer represented by single values, but by distributions determined by the distributions of the variational parameters. Such parametric models and extractions can account for both intra-die and inter-die variations. Any variation in the process or environment can be directly linked to variations in design performance.
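A Monte Carlo sampling sketch conveys the idea, though production SSTA tools propagate distributions analytically rather than by sampling. The gate delays and the split between inter-die and intra-die variation below are invented:

```python
import random

# Monte Carlo sketch of the idea behind SSTA: each gate delay is a
# distribution, so the path delay is a distribution too. All gate
# parameters are invented for illustration.

random.seed(7)

# (nominal delay in ps, sigma in ps) for each gate on one path
PATH = [(30.0, 2.0), (45.0, 3.5), (25.0, 1.5), (40.0, 3.0)]

def sample_path_delay():
    # inter-die component shifts every gate together; intra-die is per gate
    # (weights 0.6 and 0.8 preserve the total sigma: 0.6^2 + 0.8^2 = 1)
    inter_die = random.gauss(0.0, 1.0)
    return sum(nom + sigma * (0.6 * inter_die + 0.8 * random.gauss(0.0, 1.0))
               for nom, sigma in PATH)

samples = sorted(sample_path_delay() for _ in range(20000))
mean = sum(samples) / len(samples)
p997 = samples[int(0.997 * len(samples))]   # a 3-sigma-like timing number
print(f"mean = {mean:.1f} ps, 99.7th percentile = {p997:.1f} ps")
```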

To summarize the requirements associated with a DFM/DFY design environment that can address the needs of today’s (and future) ultra-deep-submicron technology nodes, simply return to the original matrix of manufacturing and yield problems (see Figure 5). A true DFM/DFY solution, which should feature lithographic-aware placement and routing engines coupled with statistical analysis engines, also requires the use of a unified data model. This model will ensure that all of the tools in the flow have real-time access to the same design data. The combination of lithographic-aware placement and routing engines will lessen the need for post-layout OPC. It also will increase the effectiveness of post-layout OPC while leading to the use of simpler design rules. When coupled with analysis tools that can account for timing variations caused by lithographic and statistical effects, a true DFM/DFY environment can meet the design-for-manufacturing and yield requirements of today’s technology nodes and the technologies of the future.

Behrooz Zahiri is the Senior Director of Business Development and Marketing, Design Implementation Business Unit, at Magma Design Automation. He has been with Magma since June 2003. The author of numerous IC industry articles, Zahiri has 14 years of experience in computer and IC chip design. He has an MS in Electrical Engineering from Stanford University along with a BS in Electrical Engineering and Computer Science from UC Berkeley.

++++++++++++++

Captions:

Figure 1: What you see is not always what you get.

Figure 2: Manufacturing and yield problems fall into four main categories as shown.

Figure 3: At 65 nm, design rules are extremely complicated.

Figure 4: This matrix conveys some of the limitations associated with traditional DFM/DFY approaches.

Figure 5: Here, the basic requirements associated with a true DFM/DFY approach are shown.

© 2005 Magma Design Automation, Inc.