

  • Number of slides: 37

Design Cost Modeling and Data Collection Infrastructure
Andrew B. Kahng and Stefanus Mantik*
UCSD CSE and ECE Departments; (*) Cadence Design Systems, Inc.
http://vlsicad.ucsd.edu/

ITRS Design Cost Model
• Engineer cost/year increases 5% / year ($181,568 in 1990)
• EDA tool cost/year (per engineer) increases 3.9% / year (both growth rates are compounded in the sketch below)
• Productivity gains due to 8 major Design Technology innovations
  - RTL methodology
  - …
  - Large-block reuse
  - IC implementation suite
  - Intelligent testbench
  - Electronic System-level methodology
• Matched up against SOC-LP PDA content:
  - SOC-LP PDA design cost = $20M in 2003
  - Would have been $630M without EDA innovations
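A minimal sketch of the two growth rates quoted above, compounded annually from the 1990 baseline. This is not the ITRS design cost model itself (which also folds in the productivity factors), and the 1990 EDA tool cost baseline is not given on the slide, so it is left as a parameter.

```python
# Hedged illustration of the quoted growth rates, not the ITRS cost model.
def engineer_cost(year: int, base_1990: float = 181_568.0, rate: float = 0.05) -> float:
    """Engineer cost/year: 5% annual growth from $181,568 in 1990."""
    return base_1990 * (1.0 + rate) ** (year - 1990)

def eda_tool_cost(year: int, base_1990: float, rate: float = 0.039) -> float:
    """EDA tool cost/year per engineer: 3.9% annual growth from its (unstated) 1990 value."""
    return base_1990 * (1.0 + rate) ** (year - 1990)

print(f"Engineer cost/year in 2003: ${engineer_cost(2003):,.0f}")
```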

SOC Design Cost

Outline
• Introduction and motivations
• METRICS system architecture
• Design quality metrics and tool quality metrics
• Applications of the METRICS system
• Issues and conclusions

Motivations
• How do we improve design productivity?
• Is our design technology / capability better than last year?
• How do we formally capture best known methods, and how do we identify them in the first place?
• Does our design environment support continuous improvement, exploratory what-if design, early predictions of success / failure, ...?
• Currently, no standards or infrastructure for measuring and recording the semiconductor design process
  - Can benefit project management
    - accurate resource prediction at any point in the design cycle
    - accurate project post-mortems
  - Can benefit tool R&D
    - feedback on tool usage and parameters used
    - improved benchmarking

Fundamental Gaps
• Data to be measured is not available
  - Data is only available through tool log files
  - Metrics naming and semantics are not consistent among different tools
• We do not always know what data should be measured
  - Some metrics are less obviously useful
  - Other metrics are almost impossible to discern

Purpose of METRICS
• Standard infrastructure for the collection and storage of design process information
• Standard list of design metrics and process metrics
• Analyses and reports that are useful for design process optimization
METRICS allows: Collect, Data-Mine, Measure, Diagnose, then Improve

Outline
• Introduction and motivations
• METRICS system architecture
  - Components of the METRICS system
  - Flow tracking
  - METRICS Standard
• Design quality metrics and tool quality metrics
• Applications of the METRICS system
• Issues and conclusions

METRICS System Architecture [DAC 00] (block diagram): tools instrumented with transmitter wrappers send metrics as XML over the inter/intra-net, through an API, to a web server backed by the metrics data warehouse (DB); Java applets provide reporting and data mining.
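As a rough illustration of the transmitter-wrapper idea in the diagram, the sketch below posts tool metrics as XML to a metrics web server. The endpoint URL, XML element names, and metric fields are hypothetical illustrations, not the published METRICS API.

```python
# Hedged sketch of a transmitter wrapper; endpoint and XML layout are assumptions.
import urllib.request
from xml.etree import ElementTree as ET

def send_metrics(server_url: str, tool: str, metrics: dict) -> int:
    root = ET.Element("metrics", attrib={"tool": tool})
    for name, value in metrics.items():
        ET.SubElement(root, "metric", attrib={"name": name, "value": str(value)})
    payload = ET.tostring(root, encoding="utf-8")            # XML record
    req = urllib.request.Request(server_url, data=payload,
                                 headers={"Content-Type": "text/xml"})
    with urllib.request.urlopen(req) as resp:                # receiver on the web server
        return resp.status

# Example call (hypothetical server and metric names):
# send_metrics("http://metrics-server.example.com/receiver", "qplace",
#              {"CPU_TIME": 334.3, "NUM_CELLS": 120000})
```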

METRICS Server (block diagram): an Apache + servlet web server reached over the Internet/Intranet, hosting a receiver servlet and a reporting servlet; supporting components include the input form, data translator, receiver Java beans, decryptor, XML parser, JDBC access to the DB, the dataminer, and a reporting / external interface.

Example Reports (charts): % aborted per machine (nexus4 95%, nexus10 1%, nexus11 2%, nexus12 2%); % aborted per task across ATPG, synthesis, physical, post-synthesis TA, placed TA, BA, functional simulation, and LVS (roughly 5% to 22% each); and a fitted runtime model CPU_TIME = 12 + 0.027 NUM_CELLS with Correlation = 0.93.
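A minimal sketch (not the actual METRICS reporting code) of how a report line such as "CPU_TIME = 12 + 0.027 NUM_CELLS, Correlation = 0.93" can be derived from collected (NUM_CELLS, CPU_TIME) pairs with an ordinary least-squares fit.

```python
# Illustrative least-squares fit over collected metrics pairs.
import numpy as np

def fit_cpu_report(num_cells: np.ndarray, cpu_time: np.ndarray) -> str:
    slope, intercept = np.polyfit(num_cells, cpu_time, deg=1)  # least-squares line
    corr = np.corrcoef(num_cells, cpu_time)[0, 1]              # Pearson correlation
    return (f"CPU_TIME = {intercept:.0f} + {slope:.3f} NUM_CELLS "
            f"(Correlation = {corr:.2f})")

# usage: fit_cpu_report(np.asarray(cell_counts, float), np.asarray(runtimes, float))
```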

Flow Tracking (diagram: start S, tasks T1 through T4, finish F)
Task sequence: T1, T2, T3, T4, T2, T1, T2, T4
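A minimal sketch of the bookkeeping behind flow tracking, assuming it suffices to log the observed task sequence and count task-to-task transitions; the actual METRICS flow-tracking representation is not shown on the slide.

```python
# Count transitions in an observed task sequence (S = start, F = finish).
from collections import Counter

def track_flow(task_sequence):
    nodes = ["S"] + list(task_sequence) + ["F"]
    return Counter(zip(nodes, nodes[1:]))          # edge -> number of traversals

edges = track_flow(["T1", "T2", "T3", "T4", "T2", "T1", "T2", "T4"])
# edges[("T1", "T2")] == 2, edges[("T2", "T3")] == 1, edges[("T4", "F")] == 1
```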

Testbeds: Metricized P&R Flow (flow diagrams, all steps instrumented by METRICS):
• UCLA + Cadence flow: LEF/DEF → Capo placer → placed DEF → QP / QP ECO legalization → legal DEF → incremental WRoute → congestion map / congestion analysis → final DEF
• Cadence PKS and SLC flows: synthesis & tech map (Ambit PKS), pre-placement optimization, QP placement, CTGen clock-tree generation (GCF/TLF constraints), post-placement optimization, GRoute / WRoute routing, Pearl timing analysis, producing clocked / optimized / routed DEF

METRICS Standards
• Standard metrics naming across tools
  - same name ↔ same meaning, independent of tool supplier
  - generic metrics and tool-specific metrics
  - no more ad hoc, incomparable log files
• Standard schema for metrics database
• Standard middleware for database interface

Generic and Specific Tool Metrics (table): partial list of metrics now being collected in Oracle 8i

Open Source Architecture
• METRICS components are industry standards
  - e.g., Oracle 8i, Java servlets, XML, Apache web server, PERL/TCL scripts, etc.
• Custom-generated code for wrappers and APIs is publicly available
  - collaboration in development of wrappers and APIs
  - porting to different operating systems
• Code is available at: http://www.gigascale.org/metrics

Outline
• Introduction and motivations
• METRICS system architecture
• Design quality metrics and tool quality metrics
• Applications of the METRICS system
• Issues and conclusions

Tool Quality Metric: Behavior in the Presence of Input Noise [ISQED 02]
• Goal: tool predictability
  - Ideal scenario: can predict final solution quality even before running the tool
  - Requires understanding of tool behavior
• Heuristic nature of tools: predicting results is difficult
• Lower bound on prediction accuracy: inherent tool noise
• Input noise = "insignificant" variations in input data (sorting, scaling, naming, ...) that can nevertheless affect solution quality
• Goal: understand how tools behave in the presence of noise, and possibly exploit inherent tool noise

Monotone Behavior
• Monotonicity of solutions w.r.t. inputs (plot: solution Quality vs. Parameter for a monotone tool)

Monotonicity Studies
• OptimizationLevel: 1 (fast/worst) … 10 (slow/best)

  Opt Level      1      2      3      4      5      6      7      8      9
  QP WL        2.50   0.97  -0.20  -0.11   1.43   0.58   1.29   0.64   1.70
  QP CPU      -59.7  -51.6  -40.4  -39.3  -31.5  -31.3  -17.3  -11.9  -6.73
  WR WL        2.95   1.52  -0.29   0.07   1.59   0.92   0.89   0.94   1.52
  Total CPU    4.19  -6.77  -16.2  -15.2  -7.23  -10.6  -6.99  -3.75  -0.51

• Note: OptimizationLevel is the tool's own knob for "effort"; it may or may not be well-conceived with respect to the underlying heuristics (bottom line: the tool behavior is "non-monotone" from the user viewpoint)

Noise Studies: Random Seeds
• 200 runs with different random seeds
  - ½-percent spread in solution quality due to random seed alone (histogram of % quality loss)

Noise: Random Ordering & Naming
• Data sorting: no effect from reordering
• Five naming perturbations (two of these are sketched below)
  - random cell names without hierarchy (CR), e.g., AFDX|CTRL|AX239 → CELL00134
  - random net names without hierarchy (NR)
  - random cell names with hierarchy (CH), e.g., AFDX|CTRL|AX129 → ID012|ID79|ID216
  - random net names with hierarchy (NH)
  - random master cell names (MC), e.g., NAND3X4 → MCELL0123
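Illustrative sketch of two of the five perturbations (CR and CH), mimicking the name formats in the slide's examples; the generators actually used in the study are not shown here.

```python
# Hypothetical name-perturbation generators, for illustration only.
import random

def perturb_cr(cell_name: str, index: int) -> str:
    """CR: random cell name without hierarchy, e.g. AFDX|CTRL|AX239 -> CELL00134."""
    return f"CELL{index:05d}"

def perturb_ch(cell_name: str, rng: random.Random) -> str:
    """CH: random cell name with hierarchy kept, e.g. AFDX|CTRL|AX129 -> ID012|ID79|ID216."""
    return "|".join(f"ID{rng.randrange(1000)}" for _ in cell_name.split("|"))

rng = random.Random(0)
print(perturb_cr("AFDX|CTRL|AX239", 134))   # CELL00134
print(perturb_ch("AFDX|CTRL|AX129", rng))   # three randomly renamed hierarchy levels
```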

Noise: Random Naming (contd.)
• Wide range of variations (±3%)
• Hierarchy matters
(histogram: Number of Runs vs. % Quality Loss)

Noise: Hierarchy
• Swap hierarchy
  - AA|BB|C03 → XX|YY|C03, XX|YY|Z12 → AA|BB|Z12
(histogram: Number of Runs vs. % Quality Loss)

Outline
• Introduction and motivations
• METRICS system architecture
• Design quality and tool quality
• Applications of the METRICS system
• Issues and conclusions

Categories of Collected Data (an illustrative record combining all three appears below)
• Design instances and design parameters
  - attributes and metrics of the design instances
  - e.g., number of gates, target clock frequency, number of metal layers, etc.
• CAD tools and invocation options
  - list of tools and user options that are available
  - e.g., tool version, optimism level, timing-driven option, etc.
• Design solutions and result qualities
  - qualities of the solutions obtained from given tools and design instances
  - e.g., number of timing violations, total tool runtime, layout area, etc.
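Purely as an illustration of the grouping above, one collected record might be organized as follows. The field names echo the slide's examples, the values are hypothetical, and this is not the actual METRICS schema.

```python
# Hypothetical metrics record grouped into the three categories above.
record = {
    "design_parameters": {          # design instances and design parameters
        "num_gates": 1_200_000,
        "target_clock_mhz": 400,
        "num_metal_layers": 6,
    },
    "tool_invocation": {            # CAD tools and invocation options
        "tool_version": "5.4",
        "optimism_level": 2,
        "timing_driven": True,
    },
    "result_quality": {             # design solutions and result qualities
        "num_timing_violations": 3,
        "total_runtime_s": 5352.0,
        "layout_area_mm2": 48.7,
    },
}
```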

Three Basic Application Types
(over the three data categories: design instances and design parameters; CAD tools and invocation options; design solutions and result qualities)
• Given instances and tools, estimate the expected quality of the solutions
  - e.g., runtime predictions, wirelength estimations, etc.
• Given instances and desired solutions, find the appropriate setting of the tools
  - e.g., best value for a specific option, etc.
• Given tools and solutions, identify the subspace of instances that is "doable" for the tool
  - e.g., category of designs that are suitable for the given tools, etc.

Estimation of QP CPU and Wirelength
• Goal:
  - Estimate QPlace runtime for CPU budgeting and block partitioning
  - Estimate placement quality (total wirelength)
• Collect QPlace metrics from 2000+ regression logfiles
• Use data mining (Cubist 1.07) to classify and predict (the rules below are re-implemented in the sketch that follows), e.g.:
  - Rule 1: [101 cases, mean 334.3, range 64 to 3881, est err 276.3] if ROW_UTILIZATION <= 76.15 then CPU_TIME = -249 + 6.7 ROW_UTILIZATION + 55 NUM_ROUTING_LAYER - 14 NUM_LAYER
  - Rule 2: [168 cases, mean 365.7, range 20 to 5352, est err 281.6] if NUM_ROUTING_LAYER <= 4 then CPU_TIME = -1153 + 192 NUM_ROUTING_LAYER + 12.9 ROW_UTILIZATION - 49 NUM_LAYER
  - Rule 3: [161 cases, mean 795.8, range 126 to 1509, est err 1069.4] if NUM_ROUTING_LAYER > 4 and ROW_UTILIZATION > 76.15 then CPU_TIME = 33 + 8.2 ROW_UTILIZATION + 55 NUM_ROUTING_LAYER - 14 NUM_LAYER
• Data mining limitation: sparseness of data
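The quoted rules translate directly into executable form. The sketch below re-implements them in plain Python purely to illustrate how such a rule set is applied; Cubist combines the predictions of all rules that cover a case (roughly, by averaging), which this sketch also does. It is not the Cubist tool itself.

```python
# Illustrative evaluation of the three quoted Cubist rules.
def predict_cpu_time(row_utilization: float,
                     num_routing_layer: int,
                     num_layer: int) -> float:
    preds = []
    if row_utilization <= 76.15:                              # Rule 1
        preds.append(-249 + 6.7 * row_utilization
                     + 55 * num_routing_layer - 14 * num_layer)
    if num_routing_layer <= 4:                                # Rule 2
        preds.append(-1153 + 192 * num_routing_layer
                     + 12.9 * row_utilization - 49 * num_layer)
    if num_routing_layer > 4 and row_utilization > 76.15:     # Rule 3
        preds.append(33 + 8.2 * row_utilization
                     + 55 * num_routing_layer - 14 * num_layer)
    return sum(preds) / len(preds)   # every input combination is covered by >= 1 rule

# e.g. predict_cpu_time(row_utilization=80.0, num_routing_layer=5, num_layer=6)
```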

Cubist 1.07 Predictor for Total Wirelength (plot)

Optimization of Incremental Multilevel FM Partitioning
• Motivation: incremental netlist partitioning
• Scenario: design changes (netlist ECOs) are made, but we want the top-down placement result to remain similar to the previous result
(diagram: clustering and refinement)

Optimization of Incremental Multilevel FM Partitioning
• Motivation: incremental netlist partitioning
• Scenario: design changes (netlist ECOs) are made, but we want the top-down placement result to remain similar to the previous result
• Good approach [CaldwellKM00]: "V-cycling" based multilevel Fiduccia-Mattheyses
• Our goal: what is the best tuning of the approach for a given instance?
  - break up the ECO perturbation into multiple smaller perturbations?
  - #starts of the partitioner?
  - within a specified CPU budget?

Optimization of Incremental Multilevel FM Partitioning (contd.)
• Given: initial partitioning solution, CPU budget, and instance perturbation (ΔI)
• Find: number of stages of incremental partitioning (i.e., how to break up ΔI) and number of starts S
(diagram: T1 → T2 → T3 → ... → Tn → F, where each Ti = incremental multilevel FM partitioning, the self-loop on each stage = multistart, and n = number of breakups, with ΔI = Δ1 + Δ2 + Δ3 + ... + Δn; a schematic sketch follows below)
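The control structure this slide implies can be sketched as follows: split the perturbation ΔI into n stages and, at each stage, run the incremental multilevel FM partitioner with S independent starts, keeping the best cut. The `incremental_ml_fm` function, the even split of the perturbation, and the `cutsize` attribute are hypothetical stand-ins, not the authors' implementation.

```python
# Schematic sketch only; the real partitioner is represented by a callback.
def split_perturbation(delta_i, n_stages):
    """Break the perturbation ΔI into n roughly equal chunks Δ1 ... Δn."""
    k = max(1, -(-len(delta_i) // n_stages))   # ceiling division
    return [delta_i[i:i + k] for i in range(0, len(delta_i), k)]

def tuned_incremental_partitioning(partition, delta_i, n_stages, n_starts,
                                   incremental_ml_fm):
    for chunk in split_perturbation(delta_i, n_stages):        # stages T1 ... Tn
        candidates = [incremental_ml_fm(partition, chunk, seed=s)
                      for s in range(n_starts)]                # multistart self-loop
        partition = min(candidates, key=lambda p: p.cutsize)   # keep the best cut
    return partition
```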

Flow Optimization Results
• If (27401 < num edges ≤ 34826) and (143.09 < cpu time ≤ 165.28) and (perturbation delta ≤ 0.1) then num_inc_stages = 4 and num_starts = 3
• If (27401 < num edges ≤ 34826) and (85.27 < cpu time ≤ 143.09) and (perturbation delta ≤ 0.1) then num_inc_stages = 2 and num_starts = 1
• ...
Up to 10% cutsize reduction with the same CPU budget, using tuned #starts, #stages, etc. in multilevel FM

Outline
• Introduction and motivations
• METRICS system architecture
• Design quality and tool quality
• Applications of the METRICS system
• Issues and conclusions

METRICS Deployment and Adoption
• Security: proprietary and confidential information cannot pass across the company firewall ⇒ may be difficult to develop metrics and predictors across multiple companies
• Standardization: flow, terminology, data management
• Social: "big brother", collection of social metrics
• Data cleanup: obsolete designs, old methodologies, old tools
• Data availability with standards: log files, API, or somewhere in between?
• "Design Factories" are using METRICS

Conclusions
• METRICS system: automatic data collection and real-time reporting
• New design and process metrics with standard naming
• Analysis of EDA tool quality in the presence of input noise
• Applications of METRICS: tool solution quality estimation (e.g., placement) and instance-specific tool parameter tuning (e.g., incremental partitioner)
• Ongoing work:
  - Construct active feedback from METRICS to the design process for automated process improvement
  - Expand the current metrics list to include enterprise metrics (e.g., number of engineers, number of spec revisions, etc.)

Thank You