0806EF2-Sonic-IP

Keywords: IP, cores, interface, ASIC, OCPIP

Editorial Tabs: Digital -- Link IP Cores

@head: Link IP Cores with Smart Interconnects for Complex SoCs

@deck: High-level “what-if” analysis tools are gaining popularity as model accuracy improves and third-party co-simulation, verification, and optimization programs become available.

@text: Everyone is familiar with Moore’s Law and its implications: growing system-on-a-chip (SoC) complexity, feature proliferation, falling price points, and the ubiquity and evolution of affordable yet sophisticated gadgets. Unfortunately, the electronic-design-automation (EDA) community hasn’t kept pace. Gate counts have grown faster than designer productivity. The need to design multi-generational devices around heterogeneous multi-core processors has accelerated the evolution and integration of intellectual property (IP) and assorted technologies at each step in the design flow. A series of consortia has evolved to close in on standards and thereby approach the ideal plug-and-play compatibility of heterogeneous processing blocks. Currently, private companies and public consortia lead these efforts.

The Open Core Protocol (OCP) organization probably provides the most flexible and extensible standard. Although successful SoC design is still a long way from Lego Land, smart interconnects and socket-based design, with the simple integration of OCP-compliant cores, have simplified and expedited successful design-completion efforts. For example, Sonics’ SMX provides an IP smart-interconnect generator framework for quickly instantiating and optimizing a superset of OCP 2.0 features within a flow. In doing so, it provides a simple interface to industry-standard synthesis, simulation, and verification EDA tools. This trend has been further eased and accelerated by the growing use of popular processor, DSP, and graphics cores as well as verified and optimized dedicated processors for specific standards-based functions (e.g., H.264 video or AAC music compression or decompression). These processors boast multi-source programming, development, simulation, and verification tools.

Consider the confluence of technologies involved in the functional equivalence of a highly successful mobile-phone SoC (see Figure 1). The different heterogeneous processing and I/O blocks are separated from the connecting fabric by a sophisticated interconnect. That interconnect isolates these blocks from the fabric while enabling sophisticated dataflow services (e.g., dynamic security settings for Digital Rights Management (DRM) requirements, different quality-of-service (QoS) requirements, and the optimization of dataflow efficiencies between DRAM and multiple blocks with different bus widths, speeds, etc.). This functional separation has major implications for initial design, IP integration, and simplified re-use.

Challenges in the Flow

Now consider the tasks and other issues encountered from concept to production silicon (see Figure 2). Books could be written (indeed, many have been) about most of the identified tasks. Some of these tasks could be mapped to architectural exploration while some belong to the back end. In addition, some tasks are more global in implication. From concept to timing closure and tapeout, however, successful functional design is becoming more of an iterative process. Costs and challenges are reducing the number of application-specific-integrated-circuit (ASIC) starts. As a result, the SoC designs that are begun usually anticipate high volumes and significant re-use for multi-generational exploitation.

If the designer knows from the beginning that multiple design generations are planned, optimizing different aspects of the design becomes easier to justify. A stronger and more compelling case is made for the following: analyzing return on investment (ROI) for efforts to improve testbench automation; reducing gate counts or power consumption in separate blocks; improving fault coverage to allow a better night’s rest while waiting a few months for the return of silicon from the fab; improving the flow and correlation between different levels of abstraction to simplify subsequent design exploration; analyzing sensitivity for safe but not excessive margining of important parameters; and more. Recognizing the breadth of this canvas has weighty implications for the distribution of tasks and the confrontation of challenges.

The Joy of Sockets

Often, semiconductor suppliers find themselves at the bottom of the food chain. Yet this basement is clearly occupied by a lot of very creative and bright people. With increased needs for productivity improvements, reliability, and the oft-elusive profitability have come improved algorithms, proliferating standards, and socket-based design approaches, which amplify the advantages of smart interconnects. As mentioned, heterogeneous multi-processors pose difficult dataflow challenges, requiring different burst rates, quality of service, security requirements, etc. Now, however, they can be more simply connected, verified, augmented, and re-used (see Figure 3).

Note that the bus interfaces may be different sizes and the blocks may be masters, slaves, or both. By inserting intelligent agents between the blocks of IP and the internal fabric, the functions are separated and abstracted from the fabric. With standard interconnects like OCP on the boundaries of the processor blocks, Masters (Initiators) and Slaves (Targets) can be effectively and functionally isolated from the fabric in terms of clock boundaries, clock gating for power reduction, etc. Simplified additions or modifications of IP blocks result. Although Figure 3 shows three blocks for clarity, existing SoCs with over 30 sophisticated heterogeneous processing blocks are already shipping in high volumes, and designs with somewhere around 100 blocks are underway for subsequent generations.
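As a rough illustration of this decoupling, the sketch below models socket agents that adapt each block’s native bus width to a shared fabric word size, so that a narrow master and a wide slave never see each other’s interfaces directly. All class names, widths, and methods here are invented for illustration; this is not the OCP or SMX API.

```python
# Illustrative model of socket-based decoupling: each IP block talks to the
# fabric only through an agent that repacks data between the block's native
# bus width and the fabric word size. All names are hypothetical.

FABRIC_WIDTH_BYTES = 4  # assumed fabric word size for this sketch

def pack_words(payload: bytes, width: int) -> list:
    """Split a payload into bus words of the given width, zero-padding the tail."""
    padded = payload + b"\x00" * (-len(payload) % width)
    return [padded[i:i + width] for i in range(0, len(padded), width)]

class SocketAgent:
    """Adapts a block's native bus width to the fabric word size."""
    def __init__(self, native_width: int):
        self.native_width = native_width

    def to_fabric(self, payload: bytes) -> list:
        # The block emits native-width beats; the agent repacks them into
        # fabric-width beats, isolating the block from the fabric.
        return pack_words(payload, FABRIC_WIDTH_BYTES)

    def from_fabric(self, beats: list) -> list:
        # Reassemble fabric beats into the receiving block's native width.
        return pack_words(b"".join(beats), self.native_width)

# A 2-byte-wide master and an 8-byte-wide slave exchange data through
# the 4-byte fabric without either knowing the other's bus width.
master = SocketAgent(native_width=2)
slave = SocketAgent(native_width=8)
fabric_beats = master.to_fabric(b"\x01\x02\x03\x04\x05\x06")
slave_beats = slave.from_fabric(fabric_beats)
```

Because only the agents know the fabric word size, swapping one block for another with a different bus width touches nothing but that block’s agent, which is the point of the socket abstraction.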

Perhaps more important are the implications of this separation for integration, verification, and optimization. Just as I/O functionality like SMX/OCP has previously been added to heterogeneous multi-processor blocks, and to a lesser extent to AMBA AHB or AXI (OCP gaskets do exist for AXI and AHB), such interfaces can be created for other I/O structures. They will then simplify the interconnection circuitry and exploit the multi-threading, non-blocking interconnect advantages of OCP. With a non-blocking crossbar and multi-variant QoS shared-link generators available, one should think about a number of key questions when choosing an interconnect strategy (see the Table).
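To make the “non-blocking” property concrete, the toy arbiter below grants at most one initiator per target per cycle based on a simple priority-style QoS value, while requests aimed at distinct targets proceed in parallel. The arbitration policy and all names are illustrative assumptions, not how SMX or any OCP crossbar actually arbitrates.

```python
# Toy sketch of non-blocking crossbar arbitration with a priority-based
# QoS policy. Policy and names are hypothetical, for illustration only.

def arbitrate(requests):
    """requests: list of (initiator, target, priority) tuples for one cycle.
    Returns a dict mapping each requested target to the winning initiator.
    Requests to distinct targets are granted concurrently (the non-blocking
    property); contention on a single target is resolved by priority."""
    grants = {}
    for initiator, target, prio in requests:
        best = grants.get(target)
        if best is None or prio > best[1]:
            grants[target] = (initiator, prio)
    return {target: winner for target, (winner, _) in grants.items()}

# CPU and DSP contend for DRAM; the GPU's SRAM access is not blocked by them.
cycle = [("cpu", "dram", 2), ("dsp", "dram", 3), ("gpu", "sram", 1)]
granted = arbitrate(cycle)
```

In a shared-bus design, the CPU/DSP contention would stall the GPU as well; here the GPU’s grant is independent, which is exactly the dataflow advantage the article attributes to non-blocking fabrics.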

What IS and What SHOULD BE in “The Package”

In the breadth of offerings accompanying proffered IP, there has been a spectrum of what end users want as well as what IP merchants provide (typically, with some significant distance between the two). However, “[t]o be [easily] re-used, a component needs to be well-defined, correctly encapsulated, validated, and conform to an interchange standard” (CoFluent Design). Because it’s difficult to be all things to all people, there exists a question of balance in terms of what almost every integrator requires and what facets will typically require some customization by each user.

This question is of significantly more than academic interest. Some IP vendors provide SystemC code or some variant with some RTL, hardened blocks, complete test suites, etc. Building testbenches and assuring full verification before tapeout can take far longer than completion of the basic design. For many sub-segments of multimedia designs (e.g., G.72x voice compression, Dolby Digital, H.26x, etc.), there are test suites that should be easily linkable for subsystem operational verification. Obviously, some questions that invite answers include how easily these test suites are linked to normal test flows, how pricey they are, how transferable, etc. Clear paths for these answers and tradeoffs would be a welcome component in the package.

A growing third-party business in IP packaging and verification indicates the challenges that are inherent in this integration. A sharp knife edge of tradeoffs pervades high-volume SoC design efforts—especially those destined for the rapidly evolving consumer design space. Delaying availability precludes market entry when margins are still large and competition is feeble. Rushing to silicon may mean purchasing additional, expensive, non-final mask sets and even longer delays for painful re-spins.

Evolution and Intelligent Design

Extrapolating from the last 50 years, it’s fairly safe to assume that product proliferation and feature enhancements for high-volume consumer, industrial, and automotive electronic systems will continue. Standards bodies also will continue to proliferate—be it from a desire for wider assured interoperability, for a broader market over which to amortize developments and sales, or from a desire to argue long into the night over esoteric points while sipping cold coffee. Although these parties may be partially influenced by short-sighted commercial interests, they typically bring together a spectrum of interested parties that influence the final result in a fashion that’s usually for the best.

Despite the old saying about the camel being a horse designed by a committee, a good standards committee will usually encompass chip designers, software developers, systems users and ODMs, test and verification personnel, and oversight (at least at the membership/participation time and fees signoff level) from senior management. Without standards efforts, the broad proliferation of standards-based audio and H.26x video usage couldn’t have occurred. In addition, the virtuous cycle of higher volumes and lower prices driven by competitive worldwide efforts wouldn’t have enabled affordable HDTVs, DVD players, or sophisticated, multifunction cell phones at tractable price points.

With this proliferation comes the opportunity for SoC developers and third-party vendors to address the design, verification, and optimization for the IP blocks that implement these standards at all stages of the flow. If one knows a priori that an H.264 AVC core will be used for many years and in multiple designs, one might more easily justify developing or purchasing high-level/SystemC models and simulation/co-simulation tools from companies like CoFluent Design, Summit, or CoWare. This also is true for the fractious audio or video test suites from folks like Sarnoff Labs, third-party test software for more complex profiling, pre-/post-silicon debug tools like FS2, etc. Ideally, the synergy of mixing should be easy (e.g., running a video test suite rapidly through a high-level design model during the preliminary exploration phase).

Ideally, IP developers, providers, and third-party implicit collaborators would increase interoperability efforts to simplify “menu” shopping. In doing so, they would put together a complete test, verification, and optimization suite for IP re-use. Standard interfaces like OCP and augmented interfaces like Sonics’ SMX greatly simplify a “Lego Block” approach. But there is also clear motivation to plug in the appropriate software tools for simulation, verification, and optimization at all stages of the design flow, from conceptual investigation through tapeout and beyond. A sophisticated and extensively parameterizable, dataflow-interconnect, IP-generation product like SMX further motivates this ubiquitous proliferation. After all, plugging in these relevant point solutions becomes even further simplified with collaborative standards and interoperability efforts.

Future Trends and Desires

If one were to list the desired characteristics of an SoC (or virtually any complex system), some of the elements would obviously be mutually exclusive (e.g., super-fast, tremendous processing headroom, quite cheap, extremely reliable, admirably efficient, trivially upgradeable, etc.). Creating, evaluating, and optimizing appropriate cost functions for a design is a difficult art. Different levels of abstraction can be created that permit more rapid exploration of the potential design space. The return on these efforts will clearly be greater where multi-generational design is involved. With all of the above factors comprehended, one is reminded that, “Good enough is oft times the enemy of great!” As volumes grow and margins shrink, however, it becomes increasingly critical to approach a more rigorous optimality.

In an ideal world, with infinite time and processing power, every aspect of the design could be optimized. Present and near-term computing power will probably not allow us to flatten a billion-transistor SoC and size each transistor appropriately. Yet an intelligently implemented, socket-based design does greatly simplify grosser tradeoffs and optimization in terms of both analysis and synthesis. If different die regions can easily run asynchronously at different clock rates and clock boundaries are crossed with ease, the designer can run each function as fast as necessary (and no faster). The designer thereby reduces dynamic power consumption and requisite cooling while increasing battery life.
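The power payoff of running each function “as fast as necessary (and no faster)” follows from the familiar dynamic-power relation P = C·V²·f. The back-of-the-envelope sketch below compares two independent clock domains against forcing both blocks onto the fast clock; the capacitance, voltage, and frequency figures are invented purely for illustration.

```python
# Back-of-the-envelope dynamic power estimate, P = C * V^2 * f, for blocks
# in independent clock domains. All numeric values are illustrative only.

def dynamic_power(c_farads, v_volts, f_hertz):
    """Dynamic switching power of one block (watts)."""
    return c_farads * v_volts ** 2 * f_hertz

# One fast domain (e.g., a modem block) and one slower domain (e.g., audio):
fast = dynamic_power(1e-9, 1.2, 400e6)   # 1 nF at 1.2 V, 400 MHz
slow = dynamic_power(1e-9, 1.0, 100e6)   # 1 nF at 1.0 V, 100 MHz

# Versus forcing both blocks onto the fast clock at the higher voltage:
both_fast = 2 * fast
savings = both_fast - (fast + slow)
```

Even with these made-up numbers, the slower domain draws well under a fifth of the fast one, since it benefits from both the frequency and the squared voltage terms; that compounding is why per-domain clocking is worth the boundary-crossing machinery.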

Smart interconnects also simplify the ability to dynamically gate off the clocks or provide the signal for power controllers to cut or reduce voltage to the individual inactive blocks. (For those with a long-term perspective, they also slow the entropic heat death of the universe.) These power sequences are accompanied by the intelligent sequencing of control and data information storage so that the re-awakening of the circuitry—when required—will be quick and robust.
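The sequencing described above can be sketched as a small controller that saves a block’s context before gating its clock and cutting its voltage, then reverses the order on wake-up. The controller class, state names, and step ordering are a hypothetical sketch of the idea, not any vendor’s power-management interface.

```python
# Illustrative power-down/wake sequence for an idle block: context is saved
# before the clock is gated and voltage is cut, and restored in reverse
# order, so re-awakening is quick and robust. All names are hypothetical.

class BlockPowerController:
    def __init__(self):
        self.state = "RUNNING"
        self.saved_context = None
        self.log = []  # records the order of power-sequencing steps

    def sleep(self, context):
        # Order matters: retain state first, then gate the clock,
        # then drop voltage to the now-quiescent block.
        self.saved_context = context
        self.log += ["save_context", "gate_clock", "cut_voltage"]
        self.state = "OFF"

    def wake(self):
        # Reverse order on the way back up, ending with context restore
        # so the block resumes exactly where it left off.
        self.log += ["restore_voltage", "ungate_clock", "restore_context"]
        self.state = "RUNNING"
        return self.saved_context

ctrl = BlockPowerController()
ctrl.sleep({"pc": 0x100})        # block goes idle with its state retained
restored = ctrl.wake()           # block resumes with the same state
```

The invariant worth noting is that the wake sequence is the exact mirror of the sleep sequence; violating that ordering (say, ungating the clock before voltage is stable) is what makes ad-hoc power gating fragile.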

As it turns out, generating a newer version of a multi-generational IC ported to a new process technology did not become a practically push-button effort by 1998, as some prognosticated at the Design Automation Conference (DAC) in 1988. With standards and related verification efforts, however, smart interconnects already help to greatly reduce the number of buttons and keys that need to be pressed to achieve such generation. Simplifying interoperability wisely through an intelligent fabric should accelerate this trend. Hopefully, it will allow productivity to grow much faster than gate count for a change.

As SystemC and related high-level language modeling becomes more pervasive, there will be greater availability of different transaction-level and bus-functional models. We’ll then begin to see a growing availability of third-party tools for co-simulation, verification, and optimization that exploit faster and more variable processing times (traded against detailed model accuracy). In doing so, they allow greater architectural exploration and iterative bi-directional design flows. We are realizing a critical mass of technologies and business imperatives. The broader what-if analyses prior to more detailed, lower-level design efforts are becoming much easier, more automated, and increasingly important.

Jeff Haight is the Director of Technical Marketing at Sonics. His background includes systems, signal processing, and semiconductor design, management, and marketing at LinkaBit, TRW, VLSI, Toshiba, Cadence, and a few startups. Although his plastic logic template gathers dust, he is still an avid technophile. Haight tries in vain to keep up in a broad array of related technical disciplines when he’s not chauffeuring his two teenagers.

++++++++++++

Captions:

Figure 1: This functional block diagram highlights the variety of technologies that must come together to form a successful mobile-phone SoC.

Figure 2: Developing today’s complex SoCs from concept to silicon production requires a full life cycle worth of tasks, which are often set within a global framework.

Figure 3: As the name suggests, “socket-based” designs take advantage of smart interconnects to simplify the connections of complex dataflows between heterogeneous systems.

Table: Here are some important questions that must be addressed concerning the implications of IP integration, verification, and optimization.
