Predictability Issues in Aircraft Analysis, Design, and Certification
Chris L. Pettit, Ph.D., P.E.
Multidisciplinary Technologies Center
Air Vehicles Directorate, Air Force Research Laboratory
JHU Predictability Workshop, November 13-14, 2003
About This Presentation …
• Organizers’ goal: “Synthesize a template for quantitative processes related to predictability and UQ”
• My goals as moderator: Define context and highlight key questions to motivate group discussion
  – Describe prediction problems being confronted in (military) aircraft design and certification, including:
    » Nonlinear, multidisciplinary, and multi-scale problems
    » Prediction difficulties that limit the performance and health of current systems and the development of future systems
  – Promote discussion and feedback on key topics:
    » Definitions of predictability and predictability-aware models
    » How to assess predictability and what to do with it
      • Error vs. uncertainty
      • Roles of testing during various phases of design and life cycle
      • Role of predictability assessment in aircraft systems engineering and decision-making (What is the risk associated with low predictability?)
    » Current impediments to predictability
    » Benefits of higher predictability (e.g., cost, performance, safety)
About This Presentation (cont) …
• I’ll try to avoid injecting unnecessary bias into what often are controversial philosophical issues
  – I hope to learn more from you than you will from me
• But, I will assume …
  – We want to measure our ability to model complex systems
  – We are uncertain about all processes
    » Models are not reality
    » Natural and man-made physical systems are never deterministic
  – Assessing predictability requires uncertainty quantification (UQ)
  – Each model has a limited range of validity
    » Model validation ultimately depends on UQ and model usage
  – Predictability → Model Validity
    » Is the reverse true?
  – Physics-based models promote predictability assessment
    » Error estimators, safer extrapolation, etc.
  – Context does matter: military aircraft prediction is part of the DoD acquisition process
Models should help to assess system-level risks!
Some tough prediction problems designers and analysts are facing …
Prediction-Critical Disciplines for Current and Future Aircraft Systems
These are disciplines that lead to severe performance restrictions, high required margins, and re-designs
• Extreme environments (e.g., thermoacoustic loads)
• Nonlinear aeroelasticity
• Flow control and mixing
• Signature reduction
  – Radar cross-section (RCS)
  – Thermal
• Structural integrity
  – Fatigue, fracture, corrosion, delamination, battle damage, etc.
  – Strongly dependent on other disciplines and usage for loads
  – Sensitivity to manufacturing tolerances
• Structural instability
• Others??? (e.g., dynamics of UAV swarms, human behavior)
Common Complicating Factors
Prediction-critical phenomena commonly involve …
… complex processes that span multiple spatial and temporal scales
    At which scales can we be predictive?
… nonlinear processes
… multidisciplinary interactions
… relatively high epistemic and aleatory uncertainty
… low observability in experiments and tests
… sensitivity to BCs and ICs
Each of these factors …
… complicates our attempts to predict
… impedes our efforts to assess predictability
Current and Expected Practical Prediction Challenges (1/3)
• Low acceptance of predictive ability
  – Especially for safety-critical and multi-scale phenomena
  – Model validation is a low priority
  – Risk assessment is not trusted
• Accelerated testing requires
  – More dependable predictions
  – Less subjective risk estimation
  – Model and test integration
• Nonlinear systems can be very sensitive to variability in a system’s properties, loads, and BCs
  – Bifurcations
  – Hard to model in complex systems
Current and Expected Practical Prediction Challenges (2/3)
• Non-robust optima in aeroelastic tailoring and laminar flow wings
  – Manufacturing variability
  – Off-design conditions
• Non-traditional design concepts, and highly variable or extreme operating environments
  – Little historical basis for assessing loads and sensitivities
  – Difficult to estimate risks of new technology or design concepts (e.g., TRL assessment)
  – Untapped potential because of low predictability???
    » Are dated safety reqs holding back existing and new technologies?
Current and Expected Practical Prediction Challenges (3/3)
• Designer materials and non-traditional structures
  – Prediction of properties across length scales
  – Ensuring adequate performance in non-ideal conditions
  – Avoiding unintended failure modes
• Multi-functional structures and systems integration
  – Structurally integrated (i.e., load-bearing) antennas
  – Distributed control surfaces and shape control
    » Optimization of control laws in multiple flight regimes
    » Load redistribution for non-aerodynamic or non-structural purposes (e.g., antenna pointing or RCS management)?
  – Integrated vehicle health management (IVHM) systems
    » Data fusion and model-based sensor placement optimization
    » On-line modification of control laws for loads management
    » Design of self-healing materials
  – Airframe-propulsion integration in hypersonic vehicles
  – System-level performance metrics
    » Defining trade-offs given multiple energy flow paths
    » Multiple performance modes require multidisciplinary models
Predictability in the context of aircraft design and certification …
How the Tough Problems Affect Processes and Frameworks
• Current aircraft systems already stress design and certification methods to (or beyond?) their practical limits
  – Unique design concepts suggest increased importance of nonlinear multidisciplinary physics clearly beyond the capability of current design tools and certification processes
• Physical and computational complexity of nonlinear multidisciplinary models obscures the propagation of uncertainty through networks of models
  – Difficult to dependably assess sensitivities and risks w/o a clear UQ process that is consistently implemented
• For airframes: This has resulted in a process-centric approach to risk management instead of a knowledge-centric approach
  – This is untenable for future Air Force needs
Multidisciplinary Problems
• Very hard to predict and validate
  – Multi-scale, nonlinear physics
    » The “correct” uncertainty model often depends on physics modeling choices and measurement limitations
      • e.g., stress FE models vs. dynamics FE models
  – Highly variable operating environments, loads, and material properties
  – Complicated and expensive tests
• Crucial to the success and safety of high-performance military aircraft
Computational multidisciplinary analyses are always suspect, as is any resulting risk prediction
Predictability in Systems Engineering (SE)
• Prediction must be performed and assessed in the context of systems engineering
  – Purpose of SE: manage system-level risks from cradle to grave
  – Risk results from uncertainty and error
  – Risk management demands good data and good predictions
Risk management requires predictability assessment
• SE entails a risk allocation or flow-down from program level to system, sub-system, and component levels
  – Usually implicit and qualitative for complex systems
• This flow-down parallels a similarly implicit flow-down of uncertainty in multidisciplinary design problems
  – Modeling and data-gathering decisions automatically allocate uncertainty and error to constituent analyses
  – Uncertainty and error budgets are never described explicitly and are extremely difficult to quantify
Predictability assessment ultimately needs UQ
Uncertainty Flow-Down
1. How much uncertainty can be tolerated in the top-level prediction of a multidisciplinary process?
2. How much uncertainty can be attributed to each sub-discipline in the network of models that comprise the multidisciplinary analysis?
   • Must address epistemic and aleatory sources
3. How much uncertainty can be tolerated in each sub-discipline analysis?
   • How do the modeled physics amplify input uncertainty?
4. What test, computer, and training resources must be invested to assess and control the uncertainty in each sub-discipline?
Do these work for error flow-down also?
Do these really help in assessing predictability?
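The flow-down questions above can be made concrete with a first-order second-moment (FOSM) propagation sketch. This is not from the slides; it is a minimal illustration, with hypothetical stand-in models g1 and g2 and assumed input statistics, of how squared local sensitivities act as the "amplification factors" of question 3 and make the implicit per-discipline uncertainty budget of question 2 explicit.

```python
import numpy as np

# First-order second-moment (FOSM) flow-down through a two-model chain
# y = g2(g1(x)). Squared local sensitivities act as variance amplifiers,
# making the implicit uncertainty budget explicit. g1 and g2 are
# hypothetical stand-ins for sub-discipline analyses (e.g., loads -> stress).

def g1(x):
    return 2.0 * x + 0.1 * x**2          # placeholder "loads" model

def g2(u):
    return 0.5 * u + np.sqrt(abs(u))     # placeholder "stress" model

def slope(f, x, h=1e-6):
    """Central finite-difference sensitivity df/dx."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

mu_x, var_x = 1.0, 0.04                  # assumed input mean and variance

u0 = g1(mu_x)
s1, s2 = slope(g1, mu_x), slope(g2, u0)

var_u = s1**2 * var_x                    # budget attributed to sub-discipline 1
var_y = s2**2 * var_u                    # top-level prediction uncertainty

print(f"Var(u) = {var_u:.4f}, Var(y) = {var_y:.4f}, "
      f"amplification Var(y)/Var(x) = {var_y / var_x:.2f}")
```

Running the budget backwards (fixing a tolerable Var(y) and dividing by the sensitivities) answers question 3's "how much can be tolerated" in each sub-discipline, at least to first order.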
Prediction and Information-Management Tools (1/2)
• Design and test cycles of military aircraft now exceed 20 years!
  – Many airframe designers now work on only a few new aircraft programs during their entire career
  – Opportunities to gain practical experience are extremely limited
  – Can no longer depend on “old-timers” as the primary storehouses of corporate knowledge
    » Retirements and overwhelming demands on their time
    » Even they may not have insight into non-traditional problems
    » Worse yet: they can be “nay-sayers”
  – Analyses used to support design decisions may be obsolete by the time the aircraft is certified
  – DoD Acquisition Reform: mandated evolutionary acquisition and spiral development processes institute definite needs for more complete knowledge to support future upgrades
• How can prediction frameworks be structured to overcome these difficulties???
Closing Remarks about Aircraft Predictability
• Predictability must be assessed in terms of which questions are being answered by the model
• Prediction-critical aircraft phenomena share many complicating characteristics
• Ability to be predictive and to assess predictability is fundamental to future military aircraft systems and acquisition processes
• UQ and error estimation are fundamental to predictability
• Predictability depends as much on the practical details of the modeling and testing process (e.g., best practices, ability to measure key data) as it does on theory
What’s next?
Breakout Group Plan of Action
• I will present several suggested topics of discussion
  – Summarize first, then cover each separately in detail
• Each topic addresses some of the concerns I’ve discussed already
• Try to step through the topics one-by-one for group discussion
• We have little time
  – Please try to confine your remarks to the question at hand
  – I encourage open discussion, but I will press ahead if we do not move through the topics quickly enough. Please don’t be insulted if I abruptly terminate a portion of the discussion.
• Remember: Our ultimate goal is to begin developing a template for aircraft predictability assessment in the context of uncertainty and error
Suggested Topics of Discussion
Suggested Topics of Discussion
• What is the working definition of predictability in the context of aircraft analysis, design, and certification?
• What is the current state of UQ and predictability awareness for aircraft?
• How can aircraft predictability be assessed objectively?
• What are the dominant modeling, testing, and validation challenges that impede aircraft predictability?
• What content must a “predictability-aware” model of a complex aircraft system offer?
• What “new things” could be accomplished in aircraft analysis if predictability were substantially improved?
Topic #1
What is the working definition of predictability in the context of aircraft analysis, design, and certification?
1. Does it differ substantially between the Critical Disciplines cited earlier?
2. Do we need to clarify the relationship between predictability, model validity, error estimation, and UQ?
3. “I can’t define what it means to be predictive, but I know it when I see it.”
Topic #2
What is the current state of UQ and predictability awareness for aircraft?
1. Does it differ substantially between the Critical Disciplines?
2. Research vs. practice?
3. Do decision-makers place sufficient priority on predictability assessment?
Topic #3 (1/2)
How can predictability be assessed objectively?
1. What are the appropriate metrics? Is model validity truly a prerequisite?
2. What is the role of experimental evidence in understanding, measuring, and controlling predictability?
3. How is uncertainty related to error estimation?
   a. Numerical error vs. statistical error?
   b. Is a “converged” deterministic grid automatically good for UQ?
4. How should the error and uncertainty budgets be decomposed to clarify predictability assessment?
5. Global vs. local measures of predictability?
   a. Throughout the design parameter space?
   b. Throughout the spatio-temporal extent of a given design and its model?
6. Scaling issues in comparing tests and models?
   a. Will the common scale factors (Re, Fr, etc.) remain the most important as non-traditional designs are developed? Note: This is already an issue for aeroelastic wind-tunnel models.
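On item 6, the classic scaling conflict behind the wind-tunnel note can be shown in a few lines: in the same fluid, a sub-scale model cannot match Reynolds and Froude numbers simultaneously, because the two similarity conditions demand different tunnel speeds. A minimal sketch with illustrative numbers (not from the slides):

```python
import math

# In the same fluid, a sub-scale model cannot match Re and Fr at once:
# matching Fr requires V_m = V_f * sqrt(s); matching Re requires V_m = V_f / s,
# where s = L_m / L_f is the geometric scale. Illustrative numbers only.

g, nu = 9.81, 1.46e-5          # gravity [m/s^2]; kinematic viscosity of air [m^2/s]
L_f, V_f = 10.0, 100.0         # full-scale reference length [m] and speed [m/s]
s = 0.1                        # 1/10-scale model
L_m = s * L_f

V_fr = V_f * math.sqrt(s)      # tunnel speed that matches Froude number
V_re = V_f / s                 # tunnel speed that matches Reynolds number

print(f"Full scale: Re = {V_f * L_f / nu:.2e}, Fr = {V_f / math.sqrt(g * L_f):.2f}")
print(f"Fr-matched model: V = {V_fr:.1f} m/s -> Re = {V_fr * L_m / nu:.2e}")
print(f"Re-matched model: V = {V_re:.1f} m/s -> Fr = {V_re / math.sqrt(g * L_m):.2f}")
```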
Topic #3 (2/2)
7. Are acceptable confidence measures available for error estimates?
   a. What is their nature (e.g., fuzzy vs. subjective probability)?
   b. Is there agreement on how to combine component- and discipline-level error estimates to obtain system-level error estimates?
   c. How should these be communicated to decision-makers?
Topic #4 (1/2)
What are the dominant modeling, testing, and validation challenges that impede aircraft predictability?
1. Where in the prediction chain do the limitations enter?
   a. Availability of accurate input data and its variability? (e.g., constitutive properties, geometry, etc.)
   b. A priori knowledge of input errors/uncertainty and their consequences?
   c. Math models? Could include unresolved physics …
   d. Algorithmic implementation of math models? This could include discipline coupling.
   e. Numerical sensitivity (grid and time step, convergence criteria, etc.)
   f. Short-term vs. long-term accuracy? Dependable error estimators?
   g. Post-processing and interpretation? Model validation and integration with testing? Availability of dependable test data?
   h. Can we trade some full-scale tests for more coupon and component tests to improve UQ and error estimation?
   i. How are the challenges shaped by the push to reduce test resources and streamline certification decision-making?
Topic #4 (2/2)
2. Which important measurements cannot be made with current capabilities?
   a. Are these limitations controlled by physics, technology, cost, resource prioritization, or something else?
   b. How can validation plans be adjusted to mitigate these limitations?
3. Given that aircraft normally admit some testing throughout the design process, how should these test resources be allocated to estimate model errors and uncertainty?
4. Other impediments to predictability assessment not mentioned yet???
Topic #5
What content must a “predictability-aware” model of a complex system offer?
1. How does this depend on the purpose of the model?
   a. Who will use it? When? Why?
2. How can information and high-fidelity analysis frameworks be structured to promote predictability?
   a. Which types of prediction difficulties are best addressed through process structuring and control?
   b. How can frameworks be used to promote communication between analysts and test personnel in estimating predictability?
3. Should the model carry supporting data in parallel to support predictability assessment?
4. What about enforced recording of modeling assumptions and decisions?
5. Multiple spatial and temporal scales?
Topic #6
What “new things” could be accomplished in aircraft analysis if predictability were substantially improved?
1. How is aircraft performance predictability-limited?
2. Reduce required margins and safety factors?
   a. How much of a safety factor is allocated to cover modeling errors and missing info vs. inherent variability?
3. Is a system-level risk or uncertainty budget a practical concept?
   a. Can it be allocated rationally to components or modeling disciplines?
   b. Should predictability goals be tied to different stages in the design and certification process?
   c. Can predictability become a trade-off variable in the systems engineering process? Is this a function of the size of the production run?
Anything else?
Backup Slides
Impediments to Reliable Risk Analysis of MD Aircraft Problems
• System-level risks generally involve incommensurate types of ignorance whose relative importance is problem-dependent and discipline-dependent
• No universally accepted way to measure and combine these types of ignorance consistently
• Industry mindset often prefers wrong answers that come quickly to better answers that take longer
  – Design process is perceived as a time and resource sink that must be tolerated in order to generate revenue downstream
• Certification processes automatically biased toward the technological status quo
  – Potentially delays transition of beneficial new structures and materials technologies
The Role of Processes and Frameworks
• We need a comprehensive approach to storing models, traditional analysis results, UQ results, and any other info used to support design and certification decisions (e.g., expert opinions)
  – Must facilitate guided access for “future generations” to support:
    » Future expansions of operational capabilities
    » Life-extension programs
    » Insight into the sources and solutions of unexpected problems
  – Should also promote informative modeling and analysis practice (including UQ) by requesting
    » Key inputs and outputs, including their uncertain aspects
    » Documentation of modeling decisions
Our Perspective
• UQ-based analyses are needed to help reveal unexpected failure modes and to assess their risk
  – We already do a reasonable job of preventing most well-known structural failure modes in traditional designs
  – Could be critical for non-traditional designs
• UQ-centric processes promote maximum payoff from models and tests at all scales (e.g., coupon-level to full-scale)
  – Motivation behind test-planning should be transformed to emphasize model validation in addition to (or instead of?) certification criteria
• This will require substantial modification of traditional R&D and program funding profiles
  – Allocate additional funds during conceptual and preliminary design stages to support additional data gathering and analysis activities
Need to fill the pool of knowledge as early as possible!
What Should Our Goals Be?
• USAF needs to increase reliance on multidisciplinary analysis earlier in the design process
  – Detect genuinely avoidable problems before full-scale ground and flight tests
  – Achieve operational capabilities and efficiencies by enabling access to portions of the design space that are precluded by current certification requirements and precautionary biases
• Ideal outcome: Dependable quantification of technical and performance risk early on leads to
  – Informed assessment of competing technologies
  – Accelerated insertion of new material, manufacturing, and assembly processes
  – Proactive prevention of problems instead of compromise fixes after problems are uncovered during testing
Systems Engineering Concepts
• System*: An integrated composite of people, products, and processes that provide a capability or satisfy a stated need or objective
• Systems Engineering (SE)*: An interdisciplinary engineering management process that evolves and verifies an integrated, life-cycle balanced set of system solutions that satisfy customer needs
• Our premise: The goal of SE is to make informed decisions that efficiently mitigate risks while meeting goals
  – Every goal induces risk! Risk results from uncertainty!
UQ should be part of SE
* Systems Engineering Handbook, DAU Press, 2000.
Uncertainty and Systems Engineering for Aeroelasticity
Airframe Certification (1/2)
• Certification: The end result of a structured process for identifying and managing risk from conception to regular operation
• Current processes:
  – Little reliance on analysis for risk assessment
  – Fail to promote interaction between tests and analyses
  – Inadequate for future materials, technologies, and design concepts
• Result? Structures certified through safety-factor design and expensive “building block” tests
  – Additional $$$$$ spent to certify repairs (e.g., fatigue hot spots) and operational modifications (e.g., aeroelastic stability with new external stores)
  – Even certified airframes still have many unexpected problems
  – How can we learn tomorrow what we’re not learning today???
Airframe Certification (2/2)
• ASIP: USAF certification process for structural integrity
  – Reasonably successful, but with several shortcomings
    » Time-consuming and manpower-intensive
    » Dependent on historical database
    » Risk assessment is too qualitative and subjective
• USAF striving to increase reliance on analysis in airframe certification through
  – Higher-fidelity modeling earlier in the design process
  – Uncertainty quantification (UQ) for risk analysis
  – Verification and validation of models
  – Streamlining and expanding knowledge generation and management processes
• Why?
  – Increase safety and likelihood of achieving performance goals
  – Save time and money by reducing or eliminating some tests and accelerating iterative design processes
  – Avoid costly changes late in the design cycle
• Will these benefits actually be realized? TBD …
Prediction and Information-Management Tools (2/2)
• Need tools that actively promote the gathering and retrieval of relevant information
  – Knowledge-bases for capturing and accessing …
    » Conceptual design support info (e.g., historical requirements and capabilities)
    » Concept maps and influence diagrams for system-level interactions
  – Product-centric, object-oriented design environments that capture the methods operating on each product
    » Could include enforced documentation of modeling decisions
• Also need tools that support consistent and rational data fusion, inference, risk assessment, and decision-making
  – Automated best practices and guided model-checking
  – Measures of confidence associated with expert opinions
  – Consistent model validation processes
2-DOF Airfoil LCO: Problem Description
• Subcritical Hopf bifurcation
  – 5th-order pitch spring
  – k3 < 0 destabilizing
• MCS on α(t = 0), k3, k5
  – 4,000 realizations at each reduced velocity
• Incompressible, unsteady aero (Jones’ approximation)
[Figure: two-DOF airfoil schematic with plunge stiffness Kh and pitch stiffness Kα in freestream V; MCS response plots]
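The slide's actual model (two DOF with Jones' unsteady aerodynamics) is not reproduced here. The sketch below is a deliberately simplified single-DOF stand-in that still exhibits the subcritical-Hopf bistability being sampled: the destabilizing and restabilizing nonlinearities sit in the damping rather than the pitch spring so that one DOF suffices, and the Monte Carlo loop mirrors the slide's sampling of the nonlinear coefficients (playing the roles of k3 and k5) and the initial pitch amplitude. All parameter values are assumptions for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)

def rhs(t, y, c0, c2, c4):
    """Subcritical-Hopf stand-in for the pitch DOF. Linear damping c0 > 0 is
    stabilizing; -c2*a**2 is destabilizing (the role k3 < 0 plays in the pitch
    spring); +c4*a**4 restabilizes at large amplitude (the role of k5)."""
    a, adot = y
    return [adot, -(c0 - c2 * a**2 + c4 * a**4) * adot - a]

def steady_amplitude(c0, c2, c4, a0):
    sol = solve_ivp(rhs, (0.0, 400.0), [a0, 0.0], args=(c0, c2, c4),
                    t_eval=np.linspace(350.0, 400.0, 1000), rtol=1e-8)
    return float(np.max(np.abs(sol.y[0])))   # post-transient pitch amplitude

n = 4000                               # realizations, as on the slide (reduce for a quick run)
c0 = 0.1                               # fixed sub-flutter linear damping
c2 = rng.normal(1.0, 0.1, n)           # uncertain destabilizing coefficient (~k3)
c4 = rng.normal(1.0, 0.1, n)           # uncertain restabilizing coefficient (~k5)
a0 = np.abs(rng.normal(0.7, 0.3, n))   # uncertain initial pitch amplitude

amps = np.array([steady_amplitude(c0, c2[i], c4[i], a0[i]) for i in range(n)])
in_lco = amps > 0.5                    # separates the decayed and limit-cycle branches
print(f"P(LCO) = {in_lco.mean():.3f}; "
      f"mean LCO amplitude = {amps[in_lco].mean():.3f}  (bimodal response)")
```

Because the system is bistable below the Hopf point, the sampled amplitudes split into a decayed cluster near zero and an LCO cluster, which is the bimodal response pdf discussed on the next backup slide.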
Current Issues in Uncertainty Quantification for Airframes
Context for Identifying Research Challenges
Analysis is a tool to support decision making in design and certification, which is a process of managing risk while trying to achieve performance goals
There are many kinds of risk: safety, performance, cost, and schedule
Research justified on scientific grounds must also recognize non-technical priorities
Overview of Challenges
• Technical Challenges
  – Probably familiar to most researchers
• Non-Technical Challenges
  – Often a result of “institutional issues”
  – Non-technical because they can’t be resolved by technical advancement alone
    » Not always exclusive of technology, because established design and certification practice often reflects assumed technical capabilities
• Our Focus: Areas in which targeted research can lead to success given available computing and testing methods
  – No “Unobtainium” allowed!
Aerodynamic Uncertainties (1/2)
• Typical modeling issues won’t go away, but should they be re-prioritized for stochastic considerations?
  – Domain discretization and approximation of BCs: How much “precision” is justified given aleatory uncertainties?
  – Simulation vs. design
    » What is the “appropriate” level of fidelity or complexity?
    » Where and how to insert uncertainty models?
  – Sensitivity to ICs
    » Structure? Flow?
• Importance of non-stationary or extreme gust loads?
  – Many assumptions commonly made to work around uncertainty in atmospheric turbulence
    » von Karman spectrum and gust length scale are imposed compromises
  – Nonlinear instabilities sensitive to level of disturbance
  – Extreme gust events are not captured
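To make the "imposed compromises" point concrete: the von Karman vertical-gust PSD (in its common MIL-HDBK-1797 form, quoted here from memory, so verify against the handbook) is fully determined once the intensity sigma_w and length scale L_w are chosen, and two equally defensible length-scale choices yield different load spectra. A brief sketch with illustrative values:

```python
import numpy as np

def vk_psd_w(omega, sigma_w, L_w):
    """Von Karman vertical-gust PSD Phi_w(Omega), Omega in rad/m.
    Common MIL-HDBK-1797 form (verify against the handbook before use).
    sigma_w: gust intensity [m/s]; L_w: gust length scale [m] -- the two
    'imposed compromise' parameters noted on the slide."""
    x = 1.339 * L_w * omega
    return (sigma_w**2 * L_w / np.pi) * (1.0 + (8.0 / 3.0) * x**2) \
           / (1.0 + x**2) ** (11.0 / 6.0)

omega = 1.0e-3                 # spatial frequency [rad/m]
for L_w in (762.0, 300.0):     # the 2500-ft handbook value vs. a shorter scale
    print(f"L_w = {L_w:5.0f} m -> Phi_w({omega:.0e}) = "
          f"{vk_psd_w(omega, 1.0, L_w):.1f} (m/s)^2 per (rad/m)")
```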
Aerodynamic Uncertainties (2/2)
• Stochastic CFD for computational aeroelasticity
  – Model problem currently under study: 2-DOF airfoil with polynomial chaos for response
    » Modeled aero only (Jones approximation)
    » Uncertain: kα, kh, ICs (α0)
  – Which problems would require or benefit from this?
    » Subcritical bifurcations → bimodal response pdf
      • Bifurcation sensitive to parametric uncertainty
      • Second-moment reliability methods not very “reliable” here
      • Need to know the nature of the bifurcation just to define what constitutes failure
  – Integration with reduced-order solvers?
  – Institutional issues or roadblocks?
    » Training of analysts?
    » Integration with existing design tools and processes?
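A minimal non-intrusive polynomial chaos sketch, not the slide's actual airfoil study: it projects a response of a single standard-normal input onto probabilists' Hermite polynomials and contrasts a smooth response with a schematic branch-jump response that mimics the bimodal pdf of a subcritical bifurcation. The slow convergence of the second case is exactly why low-order PCE and second-moment summaries are suspect near bifurcations. Both response functions are hypothetical.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as H

def pce_coeffs(f, order, nquad=64):
    """Non-intrusive probabilists'-Hermite PCE of y = f(xi), xi ~ N(0,1):
    c_k = E[f(xi) He_k(xi)] / k!, computed with Gauss-HermiteE quadrature."""
    x, w = H.hermegauss(nquad)
    w = w / np.sqrt(2.0 * np.pi)     # hermegauss weights sum to sqrt(2*pi)
    fx = f(x)
    return np.array([np.sum(w * fx * H.hermeval(x, np.eye(k + 1)[k])) /
                     math.factorial(k) for k in range(order + 1)])

def pce_eval(coeffs, xi):
    return H.hermeval(xi, coeffs)    # a HermiteE series is exactly the PCE

xi = np.random.default_rng(1).standard_normal(100_000)

smooth = lambda z: np.exp(0.3 * z)                 # benign response surface
jump   = lambda z: np.where(z > 0.5, 1.2, 0.0)     # LCO branch jump (schematic)

for name, f in [("smooth", smooth), ("bifurcating", jump)]:
    c = pce_coeffs(f, order=8)
    err = np.sqrt(np.mean((f(xi) - pce_eval(c, xi)) ** 2))
    print(f"{name:12s} order-8 PCE rms error = {err:.4f}")
```

The smooth response is captured almost exactly at order 8, while the branch-jump response retains a large rms error (Gibbs-type oscillations), so its reconstructed pdf misrepresents the two branches.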
Structural and Other Issues
• Structural damping models
  – Perhaps a key factor in observed limit-cycles, but poorly understood
• Which issues don’t we consider now but would need to if certification required quantitative risk estimates of aeroelastic stability and performance?
  – More off-design conditions?
  – Representation of variable fuel and stores loads?
  – Uncertainty in composite lay-ups for aeroelastic tailoring?
    » Maybe not for low drag, but what about for embedded sensors (e.g., Sensorcraft)?
• Certain people don’t want to know about the uncertainties
• Opportunities?
  – Active aeroelastic wing = built-in risk mitigation???
Certification Philosophy (1/3)
• Cert needs to be recognized by all as a structured dialogue that includes:
  – Designers and analysts
  – Test and manufacturing personnel
  – Cert officials and users
• This dialogue establishes perceived levels of acceptable risk for a given aircraft program
  – Safety and performance
  – Cost and schedule
  – Political
• Cert officials haven’t declared how to use UQ to support cert decisions
Certification Philosophy (2/3)
• Trade-off studies suggest much potential for UQ here
• Issues that impede use of risk analysis for airframes:
  1. Current analysis and manufacturing capabilities
  2. Availability of statistically significant input data
  3. Background and mindset of decision makers
  4. Legal and societal perception of quantified risk
  5. Cost and time of design and cert process is high but “known”
  6. Safety factors implicitly cover many sources of uncertainty
     – Parametric uncertainty, model errors, non-safety concerns (e.g., serviceability and performance), and “unknowns”
     – How to allocate these in quantitative risk design criteria?
Certification Philosophy (3/3)
• Proposed innovative designs offer many unknowns w.r.t. current certification procedures
  – Identification of critical failure modes
  – Required testing to ensure safety in these modes
  – Required safety factors for UAVs … no pilot to protect
• Not yet clear if a “risk-informed” approach would be adequate for airframes
  – ~ Probabilistic safety factors (similar to LRFD in civil engineering)
  – Airframe failure modes can be harder to identify a priori than those of civil structures
  – Hard to integrate new analysis methods and account for the reduced risk associated with validated models
    » Hard to test airframes, but much easier than testing buildings!
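For reference, the LRFD-style "risk-informed" format alluded to above rests on a reliability index. A minimal first-order second-moment sketch with an independent-normal capacity/demand pair and purely illustrative numbers (and the caveat from the aerodynamics backup slides applies: this summary can mislead near bifurcations):

```python
import math

def beta_fosm(mu_r, sig_r, mu_s, sig_s):
    """First-order second-moment reliability index for limit state g = R - S
    (capacity minus demand), assuming independent normals:
    beta = (mu_R - mu_S) / sqrt(sig_R**2 + sig_S**2).
    LRFD-style partial factors are calibrated so designs hit a target beta."""
    return (mu_r - mu_s) / math.sqrt(sig_r**2 + sig_s**2)

# Illustrative numbers only: normalized flutter-speed "capacity" vs.
# dive-speed "demand" (hypothetical values, not certification data).
beta = beta_fosm(mu_r=1.0, sig_r=0.05, mu_s=0.85, sig_s=0.03)
pf = 0.5 * math.erfc(beta / math.sqrt(2.0))   # Pf = Phi(-beta) for normals
print(f"beta = {beta:.2f}, Pf = {pf:.2e}")
```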
Other Considerations
• Education and Training
  – Aerospace engineers get little training in probability and none in formal risk analysis
    » Undergrad curriculum does a poor job of discussing practical failure modes and processes
  – Management often uninitiated → also a hard sell!
  – Widespread high-fidelity analysis will require more sophistication of designers/analysts
• Cost
  – Potential cost savings of risk-based cert hard to estimate
  – Inadequacies of current cert process often not evident until after long-term operation
    » Perhaps the true cost of current design and cert processes should be recalculated to include downstream consequences