CSE 497A, Spring 2002: Functional Verification, Lectures 2/3
Vijaykrishnan Narayanan

Course Administration
- Instructor: Vijay Narayanan (vijay@cse.psu.edu), 229 Pond Lab; office hours T 10:00-11:00, W 1:00-2:15
- Tool support: Jooheung Lee (joolee@cse.psu.edu)
- TA: TBA
- Laboratory: 101 Pond Lab
- Materials: www.cse.psu.edu/~vijay/verify/
- Texts:
  - J. Bergeron, Writing Testbenches: Functional Verification of HDL Models, Kluwer Academic Publishers
  - Class notes (on the web)
Grading
- Grade breakdown:
  - Midterm exam: 25%
  - Final exam: 40%
  - Homework (~3): 15%
  - Verification projects (~4): 20%
- No late homework/project reports will be accepted
- Grades will be posted on the course home page
  - Written/email requests for changes to grades
  - April 25 deadline to correct scores
Secret of Verification (Verification Mindset)

The Art of Verification
- Two simple questions:
  - Am I driving all possible input scenarios?
  - How will I know when it fails?

Three Simulation Commandments
- Thou shalt stress thine logic harder than it will ever be stressed again
- Thou shalt place checking upon all things
- Thou shalt not move onto a higher platform until the bug rate has dropped off
General Simulation Environment
- Testcase: C/C++, HDL testbenches, Specman e, Synopsys' VERA
- Testcase driver compiler (not always required)
- Environment: data, initialization, run-time requirements
- Design source: VHDL, Verilog
- Model compiler: event simulation compiler, cycle simulation compiler, emulator compiler
- Simulator: event simulator, cycle simulator, emulator
- Output: testcase results

Simulation Roles (diagram)
- Roles: logic designer, environment developer, verification engineer, model builder, project manager
- Activities: run foreground/background simulation; configure, release, and debug the environment; debug fails; view traces; monitor and specify batch simulation; transfer testcases; answer, redirect, and create defects; release models; regress fails; define project goals; verify defect fixes; report project status
Some lingo
- Facilities: a general term for named wires (or signals) and latches. Facilities feed gates (and/or/nand/nor/invert, etc.), which feed other facilities.
- EDA: Electronic Design Automation; the tool vendors.

More lingo
- Behavioral: code written to perform the function of logic on the interface of the design-under-test.
- Macro: 1. A behavioral. 2. A piece of logic.
- Driver: code written to manipulate the inputs of the design-under-test. The driver understands the interface protocols.
- Checker: code written to verify the outputs of the design-under-test. A checker may have some knowledge of what the driver has done. A checker must also verify interface protocol compliance.

Still more lingo
- Snoop/Monitor: code that watches interfaces or internal signals to help the checkers perform correctly. Also used to help drivers be more devious.
- Architecture: design criteria as seen by the customer. The design's architecture is specified in documents (e.g. POPS, Book 4, Infiniband, etc.), and the design must be compliant with this specification.
- Microarchitecture: the design's implementation. Microarchitecture refers to the constructs that are used in the design, such as pipelines, caches, etc.
- Escape: an error that appears on the test floor, having escaped verification.
Typical Verification Diagram
- Checking framework: scoreboard, translate/predict, header/payload checking
- DUT (a bridge chip)
- Stimulus: gen packet, drive packet, post packet
- Coverage data: sequences, conversations, error conditions, device FSMs, transactions, transitions, stimulus types, latency, address sequences, bus packet protocol
Verification Cycle
Create testplan -> Develop environment -> Debug hardware (simulation) -> Regression -> Fabrication -> Hardware debug -> Escape analysis -> (back to the testplan)
Verification Testplan
- Team leaders work with design leaders to create a verification testplan. The testplan includes:
  - Schedule
  - Specific tests and methods by simulation level
  - Required tools
  - Input criteria
  - Completion criteria
  - What is expected to be found with each test/level
  - What's not covered by each test/level

Verification is a process used to demonstrate the functional correctness of a design. Also called logic verification or simulation.

Reconvergence Model
- A conceptual representation of the verification process
- The most important question: what are you verifying?
- (Diagram: a transformation from an origin to an end point, with verification reconverging the two)
What is a testbench?
- A "testbench" usually refers to the code used to create a predetermined input sequence to a design, then optionally observe the response.
  - A generic term, used differently across the industry
  - Always refers to a testcase
  - Most commonly (and appropriately), a testbench refers to code written (in VHDL, Verilog, etc.) at the top level of the hierarchy. The testbench is often simple, but may have some elements of randomness.
- A completely closed system
  - No inputs or outputs
  - Effectively a model of the universe as far as the design is concerned
- The verification challenge: what input patterns to supply to the design under verification, and what output to expect from a properly working design
Example: a multiplexer testbench (a sketch follows below).
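The slide only points at a demo, so below is a minimal sketch of such a testbench, assuming a 2-to-1 multiplexer entity named mux2 with ports a, b, sel, and y (all names invented here, not from the slides). In the spirit of the two questions above, it drives all input permutations and checks every output:

  library ieee;
  use ieee.std_logic_1164.all;
  use ieee.numeric_std.all;

  entity tb_mux2 is
  end entity tb_mux2;                    -- no ports: a completely closed system

  architecture test of tb_mux2 is
    signal a, b, sel, y : std_logic;
  begin
    -- design under verification (entity and port names are assumptions)
    dut : entity work.mux2 port map (a => a, b => b, sel => sel, y => y);

    stimulus : process
      variable v        : std_logic_vector(2 downto 0);
      variable expected : std_logic;
    begin
      for i in 0 to 7 loop               -- drive all input permutations
        v := std_logic_vector(to_unsigned(i, 3));
        a <= v(0);  b <= v(1);  sel <= v(2);
        wait for 10 ns;                  -- let the output settle
        if sel = '0' then                -- predict the output...
          expected := a;
        else
          expected := b;
        end if;
        assert y = expected              -- ...and check it
          report "mux2 output mismatch" severity error;
      end loop;
      report "mux2 test done";
      wait;                              -- stop the simulation
    end process stimulus;
  end architecture test;

Because the input space is tiny, exhaustive stimulus is feasible here; for larger designs this is exactly what becomes impractical (see Perfect Verification below).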
Importance of Verification
- Most books focus on syntax, semantics, and the RTL subset
  - Given the amount of literature on writing synthesizable code vs. writing verification testbenches, one would think that the former is the more daunting task. Experience proves otherwise.
- 70% of design effort goes to verification
  - Properly staffed design teams have dedicated verification engineers
  - Verification engineers usually outnumber designers 2-to-1
- 80% of all written code is in the verification environment

The Line Delete Escape
- Escape: a problem that is found on the test floor and therefore has escaped the verification process
- The Line Delete escape was a problem on the H2 machine
  - S/390 bipolar, 1991
  - The escape shows an example of how a verification engineer needs to think

The Line Delete Escape (pg 2)
- Line delete is a method of circumventing bad cells of a large memory array or cache array
  - An array mapping allows for the removal of defective cells from usable space
The Line Delete Escape (pg 3)
If a line in an array has multiple bad bits (a single bad bit usually goes unnoticed thanks to ECC, error correcting codes), the line can be taken "out of service". In the array pictured on the slide, row 05 has a bad congruence class entry.

The Line Delete Escape (pg 4)
Data enters ECC creation logic prior to storage into the array. When data is read out, the ECC logic corrects single-bit errors, tags uncorrectable errors (UEs), and increments a counter corresponding to the row and congruence class.

The Line Delete Escape (pg 5)
When a preset threshold of UEs is detected from an array cell, the service controller is informed that a line delete operation is needed (a sketch of this mechanism follows below).
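A minimal RTL sketch of the counter-and-threshold piece just described. All names, widths, and the threshold value are invented, and a single counter stands in for the per-row, per-congruence-class counters of the real design (whose logic is not public):

  library ieee;
  use ieee.std_logic_1164.all;
  use ieee.numeric_std.all;

  entity ue_counter is
    generic (THRESHOLD : natural := 4);     -- assumed threshold value
    port (
      clk, reset  : in  std_logic;
      ue_detected : in  std_logic;          -- pulsed by the ECC logic on a UE
      line_delete : out std_logic           -- request to the service controller
    );
  end entity ue_counter;

  architecture rtl of ue_counter is
    signal count : unsigned(3 downto 0);
  begin
    process (clk)
    begin
      if rising_edge(clk) then
        if reset = '1' then
          count       <= (others => '0');
          line_delete <= '0';
        elsif ue_detected = '1' then
          count <= count + 1;
          if count + 1 >= THRESHOLD then    -- threshold reached: ask for a line delete
            line_delete <= '1';
          end if;
        end if;
      end if;
    end process;
  end architecture rtl;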
The Line Delete Escape (pg 6)
The service controller can update the storage controller configuration registers, ordering a line delete to occur. When the configuration registers are written, the line delete controls are engaged and writes to row 5, congruence class 'C' cease. However, because three other cells remain good in this congruence class, the sole repercussion of the line delete is a slight decline in performance.

The Line Delete Escape (pg 7)
How would we test this logic? What must occur in the testcase? What checking must we implement?
Verification is on the critical path.

We want to minimize verification time!
Ways to reduce verification time
- Verification time can be reduced through:
  - Parallelism: add more resources
  - Abstraction: a higher level of abstraction (e.g. C vs. assembly)
    - Beware though: this means a reduction in control
  - Automation: tools to automate standard processes
    - Requires standard processes
    - Not all processes can be automated

Hierarchical Design
System -> Chip -> Unit -> Macro
Allows the design team to break the system down into logical and comprehensible components. Also allows for repeatable components.

Ways to reduce verification time
- Verification time can be reduced through:
  - Parallelism: add more resources
  - Abstraction: a higher level of abstraction (e.g. C vs. assembly)
    - Beware though: this means a reduction in control and additional training
    - Vera and e are examples of verification languages
  - Automation: tools to automate standard processes
    - Requires standard processes
    - Not all processes can be automated
Human Factor in Verification Process
- An individual (or group of individuals) must interpret the specification and transform it into correct function.
- Specification -> Interpretation -> RTL Coding -> Verification

Need for Independent Verification
- The verification engineer should not be an individual who participated in the logic design of the DUT
  - Blinders: if a designer didn't think of a failing scenario when creating the logic, how will he/she create a test for that case?
  - However, a designer should do some verification on his/her design before exposing it to the verification team
- An independent verification engineer needs to understand the intended function and the interface protocols, but not necessarily the implementation

Verification Do's and Don'ts
- DO:
  - Talk to designers about the function and understand the design first, but then
  - Try to think of situations the designer might have missed
  - Focus on exotic scenarios and situations
    - e.g. try to fill all queues when the design is built in a way intended to avoid any buffer-full condition
  - Focus on multiple events at the same time

Verification Do's and Don'ts (continued)
  - Try everything that is not explicitly forbidden
  - Spend time thinking about all the pieces that you need to verify
  - Talk to "other" designers about the signals that interface to your design-under-test
- DON'T:
  - Rely on the designer's word for the input/output specification
  - Allow RIT criteria to bend for the sake of schedule
Ways to reduce human-introduced errors
- Automation: take human intervention out of the process
- Poka-yoke: make human intervention fool-proof
- Redundancy: have two individuals (or groups) check each other's work

Automation
- The obvious way to eliminate human-introduced errors is to take the human out.
  - Good in concept
  - Reality dictates that this is not feasible
    - Processes are not defined well enough
    - Processes require human ingenuity and creativity

Poka-Yoke
- A term coined in Total Quality Management circles
- Means to "mistake-proof" the human intervention
- Typically the last step toward complete automation
- Same pitfalls as automation: verification remains an art; it does not lend itself to well-defined steps.

Redundancy
- Duplicate every transformation
  - Every transformation made by a human is either:
    - Verified by another individual, or
    - Performed twice, completely and separately, with the outcomes compared to verify that both produced the same or an equivalent result
- The simplest approach
- The most costly, but still cheaper than redesign and replacement of a defective product
- The designer should NOT be in charge of verification!
What is being verified?
- Choosing a common origin and reconvergence points determines what is being verified and what type of method to use.
- The following types of verification all have different origin and reconvergence points:
  - Formal verification
  - Model checking
  - Functional verification
  - Testbench generators

Formal Verification
- Once the end points of a formal verification's reconvergence paths are understood, you know exactly what is being verified.
- Two types of formal verification:
  - Equivalence checking
  - Model checking

Equivalence Checking
- Compares two models to see if they are equivalent:
  - Netlists before and after modifications
  - Netlist and RTL code (verifying synthesis)
  - RTL and RTL (HDL modifications)
  - Post-synthesis gates to post-PD gates (adding of scan latches, clock tree buffers)
- Proves mathematically that the origin and output are logically equivalent
  - Compares boolean and sequential logic functions, not the mapping of the functions to a specific technology
- Why verify an automated synthesis tool?
Equivalence Reconvergence Model
RTL -(synthesis)-> gates, with the equivalence check reconverging the gates with the RTL.
Model Checking
- A form of formal verification
- Characteristics of a design are formally proven or disproved:
  - Find unreachable states of a state machine
  - Determine whether deadlock conditions can occur
  - Example: if ALE is asserted, either DTACK or ABORT will eventually be asserted (sketched as a property below)
- Looks for generic problems or violations of user-defined rules about the behavior of the design
  - Knowing which assertions to prove is the major difficulty
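The ALE example above is the kind of temporal property a model checker consumes. A sketch in PSL, written in the comment form that tools accept alongside VHDL; the clock name and the exact signal encoding are assumptions, and the signal names come from the slide's example:

  -- psl default clock is rising_edge(clk);
  -- psl ale_handshake : assert always ((ALE = '1') -> eventually! ((DTACK = '1') or (ABORT = '1')));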
Steps in Model Checking
- Model the system implementation as a finite state machine
- Express the desired behavior as a set of temporal-logic formulas
- The model checking algorithm scans all possible states and execution paths in an attempt to find a counterexample to the formulas
- Check these rules:
  - Prove that all states are reachable
  - Prove the absence of deadlocks
- Unlike simulation-based verification, no test cases are required

Problems with Model Checking
- Automatic verification becomes hard with an increasing number of states
- 10^100 states sounds enormous (larger than the number of protons in the universe), yet it corresponds to no more than about 300 bits of state variables, which is absurdly small next to the millions of transistors in current microprocessors
- Symbolic model checking explores a larger set of states concurrently
- IBM RuleBase (covered Feb 7) is a symbolic model checking tool
Model Checking Reconvergence Model
Specification -(RTL coding)-> RTL; Specification -(interpretation)-> assertions; model checking reconverges the RTL with the assertions.
Functional Verification l Verifies design intent » Without, one must trust that the transformation of a specification to RTL was performed correctly l Prove presence of bugs, but cannot prove their absence CSE 497 A Lecture 2. 50 © Vijay, PSU, 2000
Functional Reconvergence Model Specification RTL Functional Verification CSE 497 A Lecture 2. 51 © Vijay, PSU, 2000
Testbench Generators l l l Tool to generate stimulus to exercise code or expose bugs Designer input is still required RTL code is the origin and there is no reconvergence point Verification engineer is left to determine if the testbench applies valid stimulus If used with parameters, can control the generator in order to focus the testbenches on more specific scenarios CSE 497 A Lecture 2. 52 © Vijay, PSU, 2000
Testbench Generation Reconvergence Model
RTL -(testbench generation)-> testbench, reconverging through code coverage/proof metrics on the RTL.
Functional Verification Approaches
- Black-box approach
- White-box approach
- Grey-box approach

Black-Box
- The black box has inputs, outputs, and performs some function.
- The function may be well documented... or not.
- To verify a black box, you need to understand the function and be able to predict the outputs based on the inputs.
- The black box can be a full system, a chip, a unit of a chip, or a single macro.
- Can start early.

White-Box
- White-box verification means that the internal facilities are visible and utilized by the testbench stimulus.
  - Quickly set up interesting cases
  - Tightly integrated with the implementation
  - Changes with the implementation
- Examples: unit/module level verification

Grey-Box
- Grey-box verification means that a limited number of internal facilities are utilized in a mostly black-box environment.
- Example: most environments! Prediction of correct results on the interface is occasionally impossible without viewing an internal signal.

Perfect Verification
- To fully verify a black box, you must show that the logic works correctly for all combinations of inputs. This entails:
  - Driving all permutations on the input lines
  - Checking for proper results in all cases
- Full verification is not practical on large pieces of designs, but the principles are valid across all verification.

Reality Check
- Macro verification across an entire system is not feasible for the business
  - There may be over 400 macros on a chip, which would require about 200 verification engineers!
  - That number of skilled verification engineers does not exist
  - The business can't support the development expense
- Verification leaders must make reasonable tradeoffs
  - Concentrate on the unit level
  - Designer-level verification on the riskiest macros
Typical Bug Rates per Level (chart)
Cost of Verification
- A necessary evil
  - Always takes too long and costs too much
  - Verification does not generate revenue
- Yet indispensable
  - To create revenue, the design must be functionally correct and provide benefits to the customer
  - Proper functional verification demonstrates the trustworthiness of the design

Verification and Design Reuse
- You won't use what you don't trust. How to trust it? Verify it.
- For reuse, designs must be verified against stricter requirements
  - All claims, possible combinations, and uses must be verified
  - Not just how it is used in a specific environment

When is Verification Done?
- Never truly done on complex designs
- Verification can only show the presence of errors, not their absence
- Given enough time, errors will be uncovered
- The question: is the error likely to be severe enough to warrant the effort spent to find it?

When is Verification Done? (cont.)
- Verification is similar to statistical hypothesis testing.
  - Hypothesis: is the design functionally correct?
Hypothesis Matrix

                  Errors found               No errors found
  Bad design      (correct outcome)          Type II (false positive)
  Good design     Type I (false negative)    (correct outcome)
Tape-Out Criteria
- A checklist of items that must be completed before tape-out
  - Verification items, along with physical/circuit design criteria, etc.
  - Verification criteria are based on:
    - Function tested
    - Bug rates
    - Coverage data
    - Clean regression
    - Time to market

Verification vs. Test
- The two are often confused
- The purpose of test is to verify that the design was manufactured properly
- Verification is to ensure that the design meets its functional intent
Verification and Test Reconvergence Model
Specification -(HW design)-> netlist, reconverged by verification; netlist -(fabrication)-> silicon, reconverged by test.
Verification Tools
- Automation improves the efficiency and reliability of the verification process
- Some tools, such as a simulator, are essential
- Others automate tedious tasks and increase confidence in the outcome
- It is not necessary to use all the tools

Verification Tools
- Improve efficiency (e.g. a spell checker)
- Improve reliability
- Automate portions of the verification process
- Some tools, such as simulators, are essential
- Some tools automate the most tedious tasks and increase confidence in the outcome
  - Code coverage tools
  - Linting tools
  - Help ensure that a Type II mistake does not occur

Verification Tools
- Linting tools
- Simulators
- Third-party models
- Waveform viewers
- Code coverage
- Verification languages (non-RTL)
- Revision control
- Issue tracking
- Metrics

Linting Tools
- The UNIX C utility program
  - Parses a C program
  - Reports questionable uses
  - Identifies common mistakes
  - Makes finding those mistakes quick and easy
- Lint-identified problems:
  - Mismatched types
  - Mismatched arguments in function calls (either number or type)

The UNIX C lint program
- Attempts to detect features in C program files that are likely to be bugs, non-portable, or wasteful
- Checks type usage more strictly than a compiler
- Checks for:
  - Unreachable statements
  - Loops not entered at the top
  - Variables declared but not used
  - Logical expressions whose value is constant
  - Functions that return values in some places but not others
  - Functions called with a varying number or type of arguments
  - Functions whose value is not used

Advantages of Lint Tools
- Know about problems prior to execution (simulation, for VHDL code)
  - The checks are entirely static
- Do not require stimulus
- Do not need to know the expected output
- Can be used to enforce coding guidelines and naming conventions

Pitfalls
- Can only find problems that can be statically deduced
- Cannot determine if the algorithm is correct
- Cannot determine if the dataflow is correct
- Often too paranoid, erring on the side of caution: errors are reported on a good design (a Type I or Type II error?), so the output must be filtered
- Check and fix problems as you go; don't wait until the entire model/code is complete

Linting VHDL source code
- VHDL is strongly typed, so it does not need linting as much as Verilog (which can assign bit vectors of different lengths to each other)
- An area of common problems is the use of STD_LOGIC
VHDL Example

  library ieee;
  use ieee.std_logic_1164.all;

  entity my_entity is
    port (my_input : in std_logic);
  end my_entity;

  architecture sample of my_entity is
    signal s1 : std_logic;
    signal sl : std_logic;        -- easily confused with s1, and never driven
  begin
    stat1: s1 <= my_input;
    stat2: s1 <= not my_input;    -- second driver on s1
  end sample;

Lint output:
  Warning: file x.vhd: Signal "s1" is multiply defined
  Warning: file x.vhd: Signal "sl" has no drivers
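For contrast, a version of the architecture that lints clean, assuming the intent of stat2 was to drive the second signal (that reading of the intent is a guess):

  architecture sample of my_entity is
    signal s1 : std_logic;
    signal sl : std_logic;
  begin
    stat1: s1 <= my_input;        -- exactly one driver for s1
    stat2: sl <= not my_input;    -- sl now has a driver
  end sample;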
Naming Conventions
- Use a naming convention for signals with multiple drivers
- Multiply-driven signals will still produce warning messages, but with a naming convention the expected ones can be safely ignored

Cadence VHDL Lint Tool
HAL Checks
- Some of the classes of errors that the HAL tool checks for include:
  - Interface inconsistency: unconnected ports; incorrect number or type of task/function arguments; incorrect signal assignments to input ports
  - Unused or undriven variables: undriven primary outputs; unused task/function/parameters; event variables that are never triggered
  - 2-state versus 4-state issues: conditional expressions that use x/z incorrectly; case equality (===) that is treated as equality (==); incorrect assignment of x/z values
  - Expression inconsistency: unequal operand lengths; real/time values that are used in expressions; incorrect rounding/truncation
  - Case statement inconsistency: case expressions that contain x or z logic; case expressions that are out of range; correct use of parallel_case and full_case constructs
  - Range and index errors: single-bit memory words; bit/part selects that are out of range; ranged ports that are re-declared
Code Reviews
- Objective: identify functional and coding style errors prior to functional verification and simulation
- The source code is reviewed by one or more reviewers
- Goal: identify problems with the code that an automated tool would not identify

Simulators
- Simulators are the most common and familiar verification tool
- Simulation alone is never the goal of an industrial project
- Simulators attempt to create an artificial universe that mimics the environment the real design will see
- Only an approximation of reality
  - Digital values: std_logic has 9 values (listed below)
  - Reality: a signal is a continuous value between GND and Vdd
Simulators
- Execute a description of the design
- The description is limited to a well-defined language with precise semantics
- Simulators are not a static tool: they require the user to set up an environment in which the design will find itself. This setup is often called a testbench.
- The testbench provides inputs and monitors results

Simulators
- Simulation outputs are validated externally against the design intent (the specification)
- Two types:
  - Event-based
  - Cycle-based

Event-Based Simulators
- Event-based simulators are driven by events
- An attempt to increase the simulated time per unit of wall-clock time
- Outputs are a function of inputs
  - The outputs change only when the inputs do
  - The simulator moves simulation time ahead to the next time at which something occurs
  - The event is an input changing
  - This event causes the simulator to re-evaluate and calculate a new output

Cycle-Based Simulators
- Simulation is based on clock cycles, not events
  - All combinational functions are collapsed into a single operation
- Cycle-based simulators contain no timing and delay information
  - Assumes the entire design meets setup and hold time for all flip-flops
  - Timing is usually verified by a static timing analyzer
- Can handle only synchronous circuits
  - The only 'event' is the active edge of the clock
  - All other inputs are aligned with the clock (cannot handle asynchronous events)
  - A Moore machine's state changes only when the clock changes; in Mealy machines, outputs also depend on inputs, which can change asynchronously
- Much faster than event-based simulation

Types of Simulators (cont.)
- Simulation farm
  - Multiple computers are used in parallel for simulation
- Acceleration engines/emulators
  - Quickturn, IKOS, AXIS...
  - Custom designed for simulation speed (parallelized)
  - Acceleration vs. emulation
    - True emulation connects to some real, in-line hardware
    - Real software eliminates the need for special testcases
Speed Comparison
- Influencing factors:
  - Hardware platform: frequency, memory, ...
  - Model content: size, activity, ...
  - Interaction with the environment
  - Model load time
  - Test pattern
  - Network utilization

Relative speed of different simulators:
  Event simulator                   1
  Cycle simulator                  20
  Event-driven cycle simulator     50
  Acceleration                   1000
  Emulation                    100000
Speed: What is fast?
- Cycle sim for one processor chip: 1 sec of real time = 6 months
- Sim farm with a few hundred computers: 1 sec of real time = ~1 day
- Accelerator/emulator: 1 sec of real time = ~1 hour
Co-Simulation
- Co-simulators are combinations of event, cycle, and other simulators (acceleration, emulation)
  - The simulators progress along time in lockstep fashion
- Performance is decreased due to inter-tool communication
- Ambiguities arise during translation from one simulator to the other
  - Verilog's 128 possible states to VHDL's 9
  - Analog's current and voltage into digital's logic value and strength

Third-Party Models
- Many designs use off-the-shelf parts
- To verify such a design, you must obtain models of these parts
- Often you must get the model from a third party
- Most third-party models are provided as compiled binary models
- Why buy third-party models?
  - Engineering resources
  - Quality (especially in the area of system timing)

Hardware Modelers
- For modeling new hardware: some hardware may be too new for models to be available
  - Example: in 2000 one still could not get a model of the Pentium III
- Sometimes you cannot simulate enough of a model in an acceptable period of time

Hardware Modelers (cont.)
- Hardware modeler features:
  - A small box connected to the network that contains a real copy of the physical chip
  - The rest of the HDL model provides inputs to the chip and obtains the chip's outputs to return to your model
Waveform Viewers
- Let you view transitions on multiple signals over time
- The most common of verification tools
- Waveforms can be saved in a trace file
- In verification:
  - You need to know the expected output and notice whenever the simulated output is not as expected, in both signal value and signal timing
  - Use the testbench to compare the model output with the expected output (see the sketch below)
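A sketch of that last bullet: a self-checking component that compares the model's output against the expected value at the expected time, instead of eyeballing a waveform viewer. All port names here are invented; the testbench is assumed to drive expected_valid when a compare is due:

  library ieee;
  use ieee.std_logic_1164.all;

  entity output_checker is
    port (
      clk            : in std_logic;
      expected_valid : in std_logic;   -- testbench asserts this when a compare is due
      expected_out   : in std_logic;   -- value predicted by the testbench
      actual_out     : in std_logic    -- value produced by the model
    );
  end entity output_checker;

  architecture check of output_checker is
  begin
    checker : process
    begin
      wait until rising_edge(clk);
      if expected_valid = '1' then
        -- compare value and timing: a mismatch means either the wrong
        -- value, or the right value at the wrong cycle
        assert actual_out = expected_out
          report "output mismatch at " & time'image(now)
          severity error;
      end if;
    end process checker;
  end architecture check;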
Coverage
- Coverage techniques give feedback on how much the testcase or driver is exercising the logic
  - Coverage makes no claim about proper checking
- All coverage techniques monitor the design during simulation and collect information about desired facilities or relationships between facilities

Coverage Goals
- Measure the "quality" of a set of tests
- Supplement test specifications by pointing to untested areas
- Help create regression suites
- Provide a stopping criterion for unit testing
- Better understanding of the design

Coverage Techniques
- People use coverage for multiple reasons:
  - A designer wants to know how much of his/her macro is exercised
  - A unit/chip leader wants to know if relationships between state machines/microarchitectural components have been exercised
  - The sim team wants to know if areas of past escapes are being tested
  - The program manager wants feedback on the overall quality of the verification effort
  - The sim team can use coverage to tune regression buckets

Coverage Techniques
- Coverage methods include:
  - Line-by-line coverage
    - Has each line of VHDL been exercised? (if/then/else, cases, states, etc.)
  - Microarchitectural cross products
    - Allows for multiple-cycle relationships
    - Coverage models can be large or small
Code Coverage
- A technique that has been used in software engineering for years
- By covering all statements adequately, the chances of a false positive (a bad design testing good) are reduced
- You are never 100% certain that the design under verification is indeed correct; code coverage increases confidence
- Some tools use the file I/O facilities of the language; others have special features built into the simulator to report coverage statistics

Adding Code Coverage
- If built into the simulator, the code is automatically instrumented
- If not built in, you must add code to the testbench to do the checking

Code Coverage
- The objective is to determine whether you have overlooked exercising some code in the model
  - If the answer is yes, you must also ask why that code is present
- Coverage metrics can be generated after running a testbench
- Metrics measure coverage of:
  - Statements
  - Possible paths through the code
  - Expressions

Report Metrics for Code Coverage
- Statement (block) coverage: measures which lines (statements) have been executed by the verification suite
- Path coverage: measures which of the possible ways to execute a sequence of statements were taken
- Expression coverage: measures the various ways each decision expression was evaluated

Statements and Blocks
- Statement coverage can also be called block coverage
- The ModelSim simulator can show how many times a statement was executed
- You also need to ensure that executed statements are simulated with different values
- And there is code that was not meant to be simulated (code specifically for synthesis, for example)
Path Coverage
- Measures all possible ways you can execute a sequence of statements
- The slide's example has four possible paths (a stand-in sketch follows below)
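The slide's code example did not survive the transcript, so here is a stand-in with the same property, in plain VHDL: two independent if statements give 2 x 2 = 4 possible paths, so two runs can cover every statement while four are needed to cover every path.

  library ieee;
  use ieee.std_logic_1164.all;

  entity paths is
    port (a, b : in std_logic; y : out std_logic);
  end entity paths;

  architecture rtl of paths is
  begin
    process (a, b)
      variable v : std_logic;
    begin
      v := '0';
      if a = '1' then      -- path choice 1: taken or skipped
        v := '1';
      end if;
      if b = '1' then      -- path choice 2: taken or skipped
        v := not v;
      end if;
      y <= v;              -- 2 x 2 = 4 possible paths through the process
    end process;
  end architecture rtl;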
Path Coverage Goal
- The desire is to take all possible paths through the code
- It is possible to have 100% statement coverage but less than 100% path coverage
- The number of possible paths can be very, very large, so keep the number of paths as small as possible
- Obtaining 100% path coverage for a model of even moderate complexity is very difficult
Expression Coverage
- A measure of the various ways an expression causes paths through the code to be taken
- The slide's example has 100% statement coverage but only 50% expression coverage (a stand-in sketch follows below)
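Again the slide's example is missing, so here is a stand-in: one if statement whose condition has two terms. Tests with (a, b) = ('1', '0') and ('0', '0') execute every statement and both branches, but the (b = '1') term never decides the outcome, so expression coverage stops at 50%.

  library ieee;
  use ieee.std_logic_1164.all;

  entity expr is
    port (a, b : in std_logic; y : out std_logic);
  end entity expr;

  architecture rtl of expr is
  begin
    process (a, b)
    begin
      -- 100% statement coverage is reached without the (b = '1') term
      -- ever being the reason the branch was taken
      if (a = '1') or (b = '1') then
        y <= '1';
      else
        y <= '0';
      end if;
    end process;
  end architecture rtl;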
100% Code Coverage
- What do 100% path and 100% expression coverage mean?
  - Not much!! They just indicate how thoroughly the verification suite exercises the code; they do not indicate the quality of the verification suite
  - They do not provide an indication of the correctness of the code
- Results from coverage can help identify corner cases not exercised
- An additional indicator for completeness of the job
  - The code coverage value can indicate when the job is not complete

Functional Coverage
- Functional coverage is based on the functionality of the design
- Coverage models are specific to a given design
- Models cover:
  - The inputs and the outputs
  - Internal states
  - Scenarios
  - Parallel properties
  - Bug models
Interdependency: Architectural Level
- The model: we want to test all dependency types of a resource (register) relating to all instructions
- The attributes:
  - I, instruction: add, add., sub., ...
  - R, register (resource): G1, G2, ...
  - DT, dependency type: WW, WR, RW, RR, and None
- The coverage task semantics: a coverage instance is a quadruplet <I1, I2, R, DT> (two instructions, the shared resource, and the dependency type)
Interdependency: Architectural Level (2)
- Additional semantics:
  - The distance between the instructions is no more than 5
- Restrictions:
  - Not all combinations are valid, e.g. fixed-point instructions cannot share FP registers

Interdependency: Architectural Level (3)
- Size and grouping:
  - Original size: ~400 x 400 x 100 x 5 (two instructions, a register, a dependency type)
  - Let the instructions be divided into disjoint groups I1...In
  - Let the resources be divided into disjoint groups R1...Rk
  - After grouping: ~60 x 60 x 10 x 5 = 180,000
The Coverage Process
- Defining the domains of coverage
  - Where do we want to measure coverage
  - What attributes (variables) to put in the trace
- Defining models
  - Defining tuples and semantics on the tuples
  - Restrictions on legal tasks
- Collecting data
  - Inserting traces into the database
  - Processing the traces to measure coverage
- Coverage analysis and feedback
  - Monitoring progress and detecting holes
  - Refining the coverage models
  - Generating regression suites

Coverage Model Hints
- Look for the most complex, error-prone part of the application
- Create the coverage models at high-level design
  - Improves the understanding of the design
  - Automates some of the test plan
- Create the coverage model hierarchically
  - Start with small, simple models
  - Combine the models to create larger models
- Before you measure coverage, check that your rules are correct on some sample tests
- Use the database to "fish" for hard-to-create conditions. Try to generalize as much as possible from the data: "X was never 3" is much more useful than "the task (3, 5, 1, 2, 2, 2, 4, 5) was never covered".

Future Coverage Usage
- One area of research is automated coverage-directed feedback
  - If testcases/drivers can be automatically tuned to go after more diverse scenarios based on knowledge of what has been covered, then bugs can be encountered much sooner in the design cycle
  - The difficulty lies in the expert system knowing how to alter the inputs to raise the level of coverage
Verification Languages
- Specific to verification principles
- Deficiencies in the RTL languages (Verilog and VHDL):
  - Verilog was designed with a focus on describing low-level hardware structures
    - No support for data structures (records, linked lists, etc.)
    - Not object oriented
  - VHDL was designed for large design teams
    - Encapsulates all information and communicates strictly through well-defined interfaces
- These limitations get in the way of efficient implementation of a verification strategy

Verification Languages (cont.)
- Some examples of verification languages:
  - Verisity's Specman Elite
  - Synopsys' Vera
  - Chronology's Rave
  - SystemC
- The problem is that these are all proprietary, so buying into one locks you into a vendor.

Verification Languages
- Even with a verification language, you still need to:
  - Plan verification
  - Design the verification strategy and verification architecture
  - Create stimulus
  - Determine the expected response
  - Compare the actual response against the expected response
Revision Control
- Need to ensure that the model verified is the model used for implementation
- Managing an HDL-based hardware project is similar to managing a software project
- Requires a source control management system
- Such systems keep the latest version of a file and a history of previous versions, along with what changes are present in each version

Configuration Management
- Wish to tag (identify) certain versions of a file so multiple users can keep working
- Different users have different views of the project

File Tags
- Each file tag has a specific meaning
Issue Tracking
- It is normal and expected to find functional irregularities in complex systems
  - Worry if you don't!!! Bugs will be found!!!
- An issue is anything that can affect the functionality of the design:
  - Bugs during execution of the testbench
  - Ambiguities or incompleteness in the specification
  - A new and relevant testcase
  - Errors found at any stage
- An issue must be tracked if a bad design could be manufactured were the issue not tracked

Issue Tracking Systems
- The grapevine
  - Casual conversation between members of a design team in which issues are discussed
  - No one has clear responsibility for a solution
  - The system does not maintain a history
- The Post-it system
  - Yellow stickies are used to post issues
  - Ownership of issues is tenuous at best
  - No ability to prioritize issues
  - The system does not maintain a history

Issue Tracking Systems (cont.)
- The procedural system
  - Issues are formally reported
  - Outstanding issues are reviewed and resolved during team meetings
  - This system consumes a lot of meeting time
- Computerized systems
  - Issues are seen through to resolution
  - Can send periodic reminders until an issue is resolved
  - The history of actions taken to resolve an issue is archived
  - The problem is that these systems can require significant effort to use

Code-Related Metrics
- Code coverage metrics: how thoroughly the verification suite exercises the code
- Number of lines of code needed for the verification suite: a measure of the level of effort needed
- Ratio of lines of verification code to lines of code in the model: a measure of design complexity
- Number of source code changes over time

Quality-Related Metrics
- Quality is subjective
- Examples of quality metrics:
  - Number of known outstanding issues
  - Number of bugs found during the service life
- You must be very careful to interpret and use any metric correctly!!!