"I'm the verification lead on the biggest SoC our company has ever done. It's taping out today and I've got that familiar sick feeling in my gut. When will the first undiscovered functional bugs start cropping up?
- In next week's ongoing simulation runs? Please, let it just be a metal layer fix.
- In the first hour after silicon comes back? Everyone will blame me as if anyone could have caught something so obvious.
- After the lead customer gets first silicon? Please no, I can't take another beating from my VP.
I'm going down with this chip and there's nothing I can do about it. I hate my job."
If you're a verification engineer, you know what it's like to wake up in a cold sweat with those thoughts running through your head. The worst part is that you know it's not just a bad dream. It is your job to make sure the chips work, and the truth is that they almost never work perfectly the first time, and you will get blamed at some level. But know this: the real kicker is that there is something you can do about it.
Once your pulse slows down and you think clearly about it, the problem is that your big SoC, that ground-breaking, company-saving, giant hulk of a chip, is just too darn big. Or viewed from the opposite perspective, the simulator is just too darn slow to drive enough test sequences through the design and collect enough functional coverage to enable decent verification. You need much faster verification performance. Being 50% faster, or 2x faster, or even 10x faster just isn't enough. You need to get to 20x or 50x or 100x faster to make a dent in this problem, and sleep better at night.
To see how to get that type of performance gain, you need to have a clear understanding of all the tools and approaches available to verify a large chip. Let's step back and look at the big picture. Figure 1 shows the several stages of system development and the major verification methods that are brought to bear in each.
Figure 1: Stages of product development and verification methods
There's a lot going on in this picture, so let's work through it. Each row represents a different stage of product development. At the bottom, row 1 depicts the development and verification of the IP blocks that comprise an SoC. That might mean verifying that your new USB 3.0 IP fully complies with the USB specification. Row 2 represents the integration of the IP blocks into an SoC. This could mean verifying that all the IP blocks in an applications processor play well together given the near infinite number of possible stimulus input and state transitions.
Row 3 depicts the hardware/software integration stage. This might involve booting an OS on an applications processor for a smartphone. Row 4 covers developing software applications and running them on the system. To carry on with the smartphone analogy, this could mean running Angry Birds and verifying that the phone doesn't latch up. As you can see, this covers the whole spectrum of product development.
Let's now work from left to right across the diagram. Each rectangle depicts an action that requires a tool for design, pre-silicon verification, or post-silicon validation. Note that some boxes span horizontal rows. That means the tool in question can do double or triple duty as the product development progresses. Nearly every current verification technique is listed here including virtual prototyping, formal analysis, logic simulation, hardware acceleration, emulation, and FPGA prototyping. This diagram is the universe of verification solutions. If you are somehow going to achieve a big verification performance improvement, the answer is in this picture.
Since the context of this discussion, and source of our anxiety, is SoC verification, let's cut to the chase and look at the sweet spot of SoC-level verification: Verification acceleration (the orange rectangle in Figure 1). This is your lifeboat, the way out of your current no-win situation.
To see why, we have to know what is meant by verification acceleration. This means using a special-purpose computer, a hardware accelerator, to perform logic simulation. Hardware accelerators such as the Cadence® Palladium® XP Verification Computing Platform deliver performance throughput on the order of 10,000x that of logic simulators.
That's incredibly fast performance, but it only applies to the design and any testbench elements that can be compiled into the accelerator. A typical SoC verification environment will use a high-level verification language like SystemVerilog, e, or SystemC™ built upon a standard methodology such as the Universal Verification Methodology (UVM) to drive randomized test sequences into the SoC. Unfortunately, these high-level languages cannot be compiled to run in an accelerator. So, while all the RTL code that comprises an SoC can be accelerated, the testbench must stay behind on the user's workstation. This combined use of a simulator and accelerator is called "simulation acceleration."
The effective performance boost obtained with simulation acceleration is a function of two major components: the performance of the testbench running in the logic simulator on the host workstation, and the performance of the design running on the accelerator. Since the accelerator is running on the order of 10,000x faster than the logic simulator, it contributes a negligible amount to the overall simulation time.
For example, let's assume an SoC and its testbench are running on a logic simulator at a reference speed of 1x. The simulation takes a given number of simulated SoC clock cycles to complete, which we will define to be 100%. Let's further assume that in a medium-sized SoC, the testbench consumes 5% of the overall simulation cycles and the design consumes 95% of the simulation cycles. In this example, the overall simulation acceleration time would be: 5% x 1 + 95% x 1/10,000 = 5.0095% of the original simulation time, or basically the time taken by the testbench alone. That's about a 20x performance boost. On big SoCs, the performance gain could increase to 100x or more since the relative proportion of the cycles spent in the testbench declines.
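The arithmetic above is just Amdahl's law applied to simulation. A minimal Python sketch, using the example's assumed figures (5% testbench share, 10,000x accelerator), makes it easy to play with the numbers:

```python
def accelerated_fraction(tb_frac, accel_factor):
    """Fraction of the original simulation time remaining when the
    design portion (1 - tb_frac) runs on the accelerator while the
    testbench portion (tb_frac) stays on the logic simulator."""
    return tb_frac + (1.0 - tb_frac) / accel_factor

def overall_speedup(tb_frac, accel_factor):
    """Effective simulation-acceleration speedup versus pure simulation."""
    return 1.0 / accelerated_fraction(tb_frac, accel_factor)

# Medium-sized SoC: testbench = 5% of cycles, accelerator 10,000x faster
print(f"{accelerated_fraction(0.05, 10_000):.4%} of original time")  # 5.0095%
print(f"{overall_speedup(0.05, 10_000):.1f}x speedup")               # ~20.0x

# Bigger SoC: testbench share shrinks to 1% of cycles
print(f"{overall_speedup(0.01, 10_000):.0f}x speedup")               # ~99x
```

Note how the accelerator's 10,000x factor barely matters once the design portion is off the critical path; the testbench share alone sets the ceiling, which is why the gain climbs toward 100x as that share shrinks on big SoCs.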
There is an alternative view of boosting verification performance that focuses on optimizing certain aspects of the testbench, particularly the verification IP (VIP) used to model standard interfaces. Some VIP suppliers have suggested they can achieve a significant reduction in the simulation cycles consumed by the VIP to speed the verification of large SoCs. However, even if such gains in VIP performance could be achieved, this will not result in a significant reduction in overall simulation time.
For example, take the case of a small design such as a single IP block, where the testbench consumes a proportionately larger percentage of the total simulation cycles. Let's say the testbench for this IP block consumes 35% of the total simulation cycles. If we further assume that the verification IP is a large portion of the testbench, say half, then the percent of the total simulation cycles consumed by the VIP is 35% x 1/2 = 17.5%. Speeding this up by 4x, for example, would mean that the cycles consumed by the VIP are reduced to 17.5% x 1/4 = 4.375%. This scenario would result in the total simulation time being reduced by only 17.5% - 4.375% = 13.125%. That is the best-case performance improvement, and the benefit approaches zero for even a small SoC-level simulation. So, as shown in the preceding paragraph, the only way to achieve sufficient performance gains for SoC verification is via simulation acceleration.
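The same bookkeeping can be sketched for the VIP-only optimization, using the example's assumed figures (35% testbench share, VIP as half the testbench, 4x VIP speedup):

```python
def vip_only_savings(tb_frac, vip_share_of_tb, vip_speedup):
    """Fraction of total simulation time saved when only the VIP
    portion of the testbench is sped up; all other cycles unchanged."""
    vip_frac = tb_frac * vip_share_of_tb      # VIP share of total cycles
    return vip_frac - vip_frac / vip_speedup  # cycles eliminated overall

# IP-block case: testbench = 35% of cycles, VIP = half the testbench, 4x faster
print(f"{vip_only_savings(0.35, 0.5, 4):.3%} saved")   # 13.125%

# SoC case: testbench share drops to ~5%, so the benefit nearly vanishes
print(f"{vip_only_savings(0.05, 0.5, 4):.3%} saved")   # 1.875%
```

Even under the generous best-case assumptions, the VIP-only approach trims barely an eighth of the runtime, and the benefit shrinks toward zero as the design grows, in contrast with the ~20x gain from simulation acceleration.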
There is a secondary performance consideration for simulation acceleration, and that is the performance of the interface between the logic simulator and the accelerator. Some years ago the industry addressed this link by creating the SCE-MI standard. This standard and its subsequent evolution have provided a highly efficient communication connection between the logic simulator and accelerator, ensuring that the link does not materially limit performance.
Despite the obvious benefit of simulation acceleration and the availability of a standard and efficient link between the simulator and accelerator, broad use of simulation acceleration has been limited by insufficient ability to "feed" the SoC interfaces with stimulus at a rate that keeps up with acceleration performance. To date, accelerated verification IP (AVIP) implementations have attempted to solve this problem by providing synthesizable bus functional models of standard protocols that are compiled into the accelerator along with lightweight "proxies" that couple into the testbench running on the logic simulator. This approach is simple and effective but typically requires substantial reworking of a UVM testbench to incorporate. Since the testbench is already a high-effort development task, spending additional time to change it just when the SoC is coming together under the pressure to reach verification closure is seldom an attractive proposition.
Fortunately, recent breakthroughs in AVIP architecture that enable UVM testbench reuse in acceleration have shown they can unlock the potential of simulation acceleration. The availability of such UVM-compatible AVIP supporting a wide variety of standard protocols will make verification performance gains of 20x, 50x, or even 100x practical, and will make simulation acceleration an increasingly common approach to SoC and subsystem verification. And when you finally have this technology in your hands, you can say goodbye to your bad dreams!
If you want to achieve silicon success, work with Cadence® to choose the right IP solution and capture its full value in your SoC design.