
Systems Testing in the SDLC
Dr. Robin Poston, Associate Director of the System Testing Excellence Program
FedEx Institute of Technology, University of Memphis

Agenda
• Testing in Structured Design
• Tester in the SDLC
• Testing in Rapid Application and Agile Development

Objectives
• Understand Testing in Structured Design
• Understand the role of the Tester in the SDLC
• Understand Testing in Rapid Application and Agile Development
• Understand Testing in Enterprise Systems Development
• Learn the System Testing Phases

Testing in Structured Design
• The Waterfall Model (below)
• The V Model
Waterfall phases (from the figure):
  - Communication: project initiation & requirements gathering
  - Planning: estimating, scheduling, tracking
  - Modeling: analysis & design
  - Construction: code & test
  - Deployment: delivery, support, feedback
Source: Modern Systems Analysis and Design by J. A. Hoffer, J. F. George, J. S. Valacich, Prentice Hall, 2005

SDLC Phases
• Planning
• Analysis
• Design
• Implementation & Maintenance

Source: http://www.coleyconsulting.co.uk/testtype.htm

Testing in Structured Design (cont'd.)
• Common life-cycle model of software development
• Development work proceeds in sequential phases down the left side of the V
• Test execution phases occur up the right side of the V
• Tests are planned and developed as soon as the corresponding development phase begins
  - Ex: requirements are the basis for acceptance testing, so preparation for acceptance testing can start immediately after requirements are captured
Source: Pressman, R. S. Software Engineering: A Practitioner's Approach, Sixth Edition, McGraw-Hill, New York, 2005.

The SDLC
• Generally, the SDLC consists of 3 major stages:
  1. Analysis: Definition
  2. Design: Development & Installation
  3. Implementation: Operation
• These stages of the SDLC can be further decomposed into 10 phases:
  1. Definition consists of 3 phases: service request/project viability analysis (SR/PVA), system requirements definition (SRD), and system design alternatives (SDA).
  2. Development consists of 4 phases: system external specifications (SES), system internal specifications (SIS), program development (PD), and testing (TST).
  3. Testing is interspersed throughout the entire SDLC.
  4. Installation and Operation begins with a conversion (CNV) phase, then implementation (IMPL), and ends with a post-implementation review/maintenance (PIR/MN) phase.
Source: Li, 1990

Source: Li, E., in Journal of Systems Management (1990)

SDLC Considerations
• Sequential orientation: only when an SDLC phase is completed and signed off can the next phase proceed. How does this impact testing?
• The SDLC, a.k.a. the Waterfall model, is generally a document-driven model: how can this documentation orientation be used to facilitate testing?
• Tremendous time and cost is incurred by a "major" change to the document of an earlier SDLC phase: is this a testing issue? Testing forces changes to earlier phases and documents.
• Advanced tools such as CASE tools, automated code generators, etc. have facilitated the SDLC. Has testing stayed abreast of these technologies?
• Variations of the SDLC that include agile concepts, rapid application methods, and joint development techniques have failed to reconsider the role of testing.
Source: Li, 1990

Source: Li, 1990

Roles Across the SDLC (role: activities performed)
• Test Practitioner: Execute defined tests within an established test environment, and document the results.
• Test Analyst: Run the front end of the testing process, which includes determining testing needs; check that the requirements are adequate for starting testing.
• Test Designer: Using requirements, design test cases following published and agreed-to techniques.
• Test Builder: Assemble the equipment needed to run the test cases.
• Test Inspector: Confirm that testing tasks have been performed correctly according to standards and procedures.
• Test Environmentalist: Manage and support the test environment.
• Test Specialist: Execute testing tasks across various phases of the testing life cycle.

Tester Tasks in the SDLC (life cycle: activities performed; deliverables)
• Analysis: Identify scope and strategy for testing, including refining requirements. Deliverables: test plan, updated requirements.
• Design: Create test cases and organize the test effort. Deliverables: test cases, test data sets.
• Implementation: Assemble the equipment needed to conduct tests. Deliverables: test procedures/scripts, drivers.
• Execution: Execute tests and capture test results. Deliverables: test results.
• Evaluation: Verify results against expectations and follow up as needed, including reporting, debugging, etc. Deliverables: test log updates, modified code.

Tester Involvement in the SDLC
[Timeline figure: developers design, code Build 1, Build 2, ... Build N, then clean up; in parallel, testers prepare test plans, test Builds 1 & 2 onward, test the full system, and run acceptance testing]
• Combine system testing and acceptance testing
• The test group does functional testing (people with domain expertise)
• Begin testing early, as soon as the first build is complete
• Testers test the most recently completed build while developers move on to the next
• Developers are responsible for low-level integration testing of each build before it is delivered to testers
• At the end, testers perform an end-to-end test
Source: NASA's Software Engineering Laboratory

Thought Exercise
• As a tester, what potential problems do you see with the Li model? Also think about potential solutions to these problems.

Risks of Structured Design
• Difficult to plan accurately that far in advance
  - Requirements or resources for the project may change by the time testing starts
  - 20% of medium-sized and 70-90% of large projects are delayed or cancelled
• The model is driven by schedule and budget risks, with quality taking a back seat
  - If the project gets behind schedule, test time shortens
Source: Estimating Software Costs by Capers Jones, McGraw-Hill, July 22, 1998, ISBN: 0079130941.

Risks of Structured Design
• Testing should start early, with groups cooperating and giving feedback
• Test cycles must be planned realistically, including the number of cycles needed to find and fix errors/defects
• Entry/exit criteria are critical for project status
• Revisit features/requirements at entry points or use change controls
(Source: www.testing.com, Brian Marick)

Testing in Rapid Application Development
• The Incremental Model (below)
• The Spiral Model
Source: Modern Systems Analysis and Design by J. A. Hoffer, J. F. George, J. S. Valacich, Prentice Hall, 2005

(Source: www.testing.com, Brian Marick)

Testing in RAD
• The process progresses in a spiral with an iterative path
• More complete software is built as iteration occurs through all phases
• The first iteration is the most important, as possible risk factors, constraints, and requirements are identified; the next iterations get to more complete software
• Evolution of the software towards a complete system
(Source: www.testing.com, Brian Marick)

Advantages of Testing in RAD
• RAD solves the Structured Design problem of predicting the future
• Features are not committed to until we know what is possible
• User interface prototypes get users involved in early testing, which mitigates usability risks
• Users can get confused, because RAD prototypes lead to redesigns, not a final product, at the end of each test period
(Source: www.testing.com, Brian Marick)

Testing in Agile Development
• Extreme Programming (XP) (below)
• a.k.a. Agile Development
Source: Systems Analysis and Design: An Applied Approach, A. Dennis, B. H. Wixom, and R. M. Roth, 2006.

Testing in Agile Development
• Agile testing:
  - Treats developers as the customer of testing
  - Emphasizes a test-first way to create software
  - Is incremental, because the software expands from core features that are delivered on a predetermined date
  - Features are added in chunks to a stable core
  - As new features are added, the new system is stabilized through testing and debugging

Testing in Agile Development
• Agile testing abandons the notion that testers get requirements and design documents and give back test plans and bug reports
  - The documents used for testing are usually flawed: incomplete, incorrect, and ambiguous
  - Testers in the past have insisted that the documents be produced better
  - But "better" will never be good enough

Testing in Agile Development
• Design documents can't be an adequate representation of working code
• Agile methods encourage ongoing project conversations, where typically:
  - Testers and developers sit in the same bullpen, share offices, or work in neighboring cubicles
  - Many testers are assigned to help particular developers, rather than being assigned to test pieces of the product
• The test plan grows through a series of short, low-preparation, informal discussions
• Results in short memos about specific issues

Risks of Agile Development
• Need more testers
• Testing starts earlier, because the core must be thoroughly tested before new increments are added
• Vs. the Structured Design model:
  - Entry criteria for system testing in the Structured model are that all features are complete and unit tested and the requirements are set
  - The Agile model can only ask that core features are complete and unit tested; requirements won't be set until the end
• Automated regression tools can reduce the need for staff (see the sketch below)
• It is important to develop organized but flexible test processes and best practices
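To make the automated-regression point concrete, here is a minimal sketch using Python's standard unittest module. The shipping_cost function, its flat rate, and the free-shipping threshold are hypothetical, invented only for illustration; the slides do not prescribe any particular tool or example.

# Hypothetical regression-suite sketch: these checks are re-run automatically
# after every new increment so that core behavior does not regress.
import unittest

FREE_SHIPPING_THRESHOLD = 50.00   # assumed business rule for the example
FLAT_RATE = 5.00

def shipping_cost(order_total: float) -> float:
    """Flat-rate shipping, waived for orders at or above the threshold."""
    if order_total < 0:
        raise ValueError("order total cannot be negative")
    return 0.0 if order_total >= FREE_SHIPPING_THRESHOLD else FLAT_RATE

class ShippingRegressionTests(unittest.TestCase):
    def test_small_order_pays_flat_rate(self):
        self.assertEqual(shipping_cost(10.00), FLAT_RATE)

    def test_threshold_order_ships_free(self):
        self.assertEqual(shipping_cost(FREE_SHIPPING_THRESHOLD), 0.0)

    def test_negative_total_rejected(self):
        with self.assertRaises(ValueError):
            shipping_cost(-1.00)

if __name__ == "__main__":
    unittest.main()

Re-running a suite like this on every build keeps the stable core verified as new chunks of features are added, without requiring additional manual test staff.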

Thought Exercise
• Test-driven programmers create tests in technology-facing language. Their tests talk about programmatic objects, not business concepts. They learn about business needs through conversations with business experts. But it is hard to learn everything through these conversations. Very often business experts are surprised that something obvious to them was left out. There is no way to eliminate such surprises, and agile projects make them easier to correct, but it is still disappointing. How can we improve these conversations?
(Source: www.testing.com, Brian Marick)

References
• Black, R. (2002). Managing the Testing Process. Wiley Publishing, New York, NY.
• Kaner, C., Bach, J., and Pettichord, B. (2002). Lessons Learned in Software Testing: A Context-Driven Approach. John Wiley & Sons, New York, NY.
• Camarinha-Matos, L. M., and Afsarmanesh, H. (2008). "On Reference Models for Collaborative Networked Organizations," International Journal of Production Research (46:9), pp. 2453-2469.
• Chapurlat, V., and Braesch, C. (2008). "Verification, Validation, Qualification, and Certification of Enterprise Models: Statements and Opportunities," Computers in Industry (59), pp. 711-721.
• Kim, H. M., Fox, M. S., and Sengupta, A. (2007). "How to Build Enterprise Data Models to Achieve Compliance to Standards or Regulatory Requirements," Journal of the AIS (8:2), pp. 105-128.
• Kim, T. Y., Lee, S., Kim, K., and Kim, C. H. (2006). "A Modeling Framework for Agile and Interoperable Virtual Enterprises," Computers in Industry (57), pp. 204-217.

Systems Testing in the Review of Requirements
Dr. Robin Poston, Associate Director of the System Testing Excellence Program
FedEx Institute of Technology, University of Memphis

Agenda
• Business Requirements Specification
• Role of Requirements in Testing
• Tester Involvement
• Obtaining Requirements in Testing
• Heuristics and Other Test Tools
• Other Sources of Requirements
• Test-Driven Development
• Software Requirements Specification (SRS)
• Validation and Verification

Objectives
• Understand what Business Requirements Specifications are
• Understand the Role of Requirements in Testing
• Understand Tester Involvement in Requirements
• Learn How to Obtain Requirements in Testing
• Learn about Test-Driven Development
• Understand what Software Requirements Specifications are

Business Requirements Specification
What is a Requirement?
• A statement of what the system must do
• A statement of characteristics the system must have
• Focus is on business user needs during the analysis phase
• Requirements will change as the project moves from analysis -> design -> implementation
Source: Dennis, Wixom and Roth, Systems Analysis and Design, 2006, Wiley

Role of Requirements in Testing
• Requirements: "a quality or condition that matters to someone who matters" (Kaner et al., 2002)
• Interesting fictions: useful but never sufficient
• You won't automatically receive all the requirements
• Don't ask for items unless you will use them
• Requirements help guide the testing process

Tester Involvement with Requirements Analysis
• Requirements "analysis" occurs early in the SDLC
  - It is a representation of users' needs, but usually is not completely accurate
• After requirements are set, designers build a picture of a solution through a "design" step

Tester Involvement (cont'd.)
• At the requirements analysis stage, testers:
  - Design a few tests, but add more after the design phase
  - Add requirements, because the requirements are incomplete
  - Make clerical checks for consistent, unambiguous, and verifiable requirements (though designers can do this too)
    - E.g., "the transaction shall be processed within three days" is ambiguous: workdays or calendar days?
(Source: www.testing.com, Brian Marick)

Tester Involvement (cont'd.)
• At the requirements analysis stage, testers should:
  - Understand the business domain and create user stories
  - If untrained users create requirements, a tester is needed
  - If designers are weak or scarce, testers can compensate
  - This works as long as testers are trusted and respected by developers, are convincing when they find issues, and are able to envision the architectural implications; if not, wait to get involved
(Source: www.testing.com, Brian Marick)

Obtaining Requirements in Testing
• 3 primary ways testers obtain requirements information:
  - Conference
  - Inference
  - Reference
• It is your job to seek out the information you need for testing

Obtaining Requirements (cont'd.)
• Explicit Requirements Gathering
  - Acknowledged as authoritative by users
• Implicit Requirements Gathering
  - Useful sources of information not acknowledged by users

Obtaining Requirements (cont'd.)
Implicit Requirements Gathering
• Authority comes from the persuasiveness and credibility of the content, not from the users:
  - Competing & related products
  - Older versions of the same product
  - Email discussions about the project
  - Comments by customers
  - Magazine articles (old reviews of the product)
  - Textbooks on related subjects
  - GUI style guides
  - O/S compatibility requirements
  - Your own experience

Heuristics
• Use heuristics to quickly generate ideas and tests
  - Testing examples (see the boundary-test sketch below):
    - Test at the boundaries
    - Test every error message
    - Test configurations that are different from the programmer's
    - Run tests that are annoying to set up
    - Avoid redundant tests
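As an illustration of the "test at the boundaries" heuristic, here is a minimal Python sketch. The validate_quantity function and its 1-100 valid range are hypothetical, invented only to show where boundary-value cases sit; they are not part of any system in the slides.

# Minimal sketch of boundary-value testing (hypothetical example).
def validate_quantity(qty: int) -> bool:
    """Accept order quantities from 1 to 100 inclusive."""
    return 1 <= qty <= 100

def test_boundaries():
    # Values just inside and just outside each boundary.
    assert validate_quantity(1)           # lower boundary
    assert validate_quantity(100)         # upper boundary
    assert not validate_quantity(0)       # just below the lower boundary
    assert not validate_quantity(101)     # just above the upper boundary

if __name__ == "__main__":
    test_boundaries()
    print("boundary tests passed")

The last two cases are the ones the heuristic exists to catch, since off-by-one mistakes cluster at the edges of valid ranges.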

Other Test Tools
• Manage bias
• Confusion is a test tool
  - Is the requirement confusing?
  - Is the product confusing?
  - Is the user documentation confusing?
  - Is the underlying problem difficult to understand?
• Fresh eyes find failure
• Watch for other people's bias
• One outcome of testing is a better, smarter tester
• You can't master testing unless you reinvent it
• Use the user as a tester

Other Sources of Requirements
• If you don't have good requirements, take advantage of other sources of information:
  - User manual draft
  - Product marketing literature
  - Marketing presentations
  - Software change memos (new internal versions)
  - Internal memos
  - Published style guides and user interface standards
  - Published standards & regulations
  - Third-party product compatibility test suites
  - Bug reports (and responses to them)
  - Interviews (development lead, tech writer, etc.)
  - Header files, source code, database table definitions
  - Prototypes and lab notes

Thought Exercise
• A new jet fighter was being tested. The test pilot strapped in, started the engines, and flipped the switch to raise the landing gear. The plane wasn't moving, but the avionics software dutifully raised the landing gear. The plane fell down and broke.
• It is reasonable to assume the bug was that some code was missing: "if the plane is on the ground, issue an error message". This is an error/fault/bug of omission of a requirement, and thus a lack of adequate testing.
• Discuss with a partner how you would go about catching these types of "bugs" before the code is released to users.
(Source: www.testing.com, Michael Hunter, Microsoft Corporation)

Test-Driven Development
• Unit tests are performed before coding
• Before a developer builds a new function, they first create a test case that verifies it (see the sketch below)
• Test all inputs, outputs, boundary cases, and error conditions
  - Including unexpected inputs or error conditions
• Planning first for the ways code might fail results in catching more of them while coding the software
(Source: www.testing.com, Brian Marick)
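To make the test-first idea concrete, here is a minimal sketch in Python using the standard unittest module. The parse_price function and its behavior are hypothetical, chosen only to illustrate writing the tests, including unexpected inputs and error conditions, before the implementation exists.

# Minimal test-first sketch (hypothetical example).
# In TDD the tests below would be written first, run and seen to fail,
# and only then would parse_price be implemented to make them pass.
import unittest

def parse_price(text: str) -> float:
    """Convert a price string like '$12.50' to a float; raise ValueError otherwise."""
    cleaned = text.strip().lstrip("$")
    value = float(cleaned)          # raises ValueError on non-numeric input
    if value < 0:
        raise ValueError("price cannot be negative")
    return value

class ParsePriceTests(unittest.TestCase):
    def test_normal_input(self):
        self.assertEqual(parse_price("$12.50"), 12.50)

    def test_boundary_zero(self):
        self.assertEqual(parse_price("$0.00"), 0.0)

    def test_unexpected_input(self):
        with self.assertRaises(ValueError):
            parse_price("twelve dollars")

    def test_error_condition_negative(self):
        with self.assertRaises(ValueError):
            parse_price("-5")

if __name__ == "__main__":
    unittest.main()

Because the suite is automated, it can be re-run after every refactoring, which is the safety net the next slide mentions.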

Test-Driven Development (cont'd.)
• Developers understand the requirements better
  - But it is more fun to jump in & start coding!
• Creating unit tests firms up the requirements in the developer's mind and shows what's missing
• Ensures testing gets done
• Poor interface designs become apparent early
• A complete suite of unit tests makes it easier to refactor software
• Unit tests are re-run after refactoring to see if new defects were added while updating the code

Software Requirements Specification (SRS)
• A description of the behavior of the software to be developed; includes:
  - Use cases, which describe the interactions users have with the software
  - Functional requirements, which define the internal workings of the software (calculations, etc.)
  - Non-functional requirements, which define constraints on design and implementation (performance, etc.)
• Coding and testing work is based on the SRS

SRS (cont'd.)
• Performed by the system analyst
• Process followed to create the SRS document
Source: Pfleeger and Atlee, Software Engineering: Theory and Practice, 2006, Pearson/Prentice Hall

SRS Checklist: Making Requirements Testable
• Fit criteria form objective standards for deciding whether a solution satisfies the requirements
  - Easy to set fit criteria for objective requirements
  - Hard for subjective quality requirements
• Three ways to help make requirements testable (see the sketch below):
  - Identify a quantitative description for each adverb and adjective
  - Replace pronouns with specific names of entities
  - Ensure every noun is defined in just one place in the requirements documents
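As a sketch of turning a fit criterion into an executable check, the example below quantifies a vague requirement such as "the system shall respond quickly" as "search shall respond within 2 seconds". The measure_search_response_time helper, the placeholder work inside it, and the 2-second threshold are all hypothetical, invented only for illustration.

# Hypothetical sketch: a vague requirement ("responds quickly") rewritten as a
# fit criterion ("search responds within 2.0 seconds") and checked by a test.
import time

RESPONSE_TIME_LIMIT_SECONDS = 2.0   # the quantitative description of "quickly"

def measure_search_response_time(query: str) -> float:
    """Stand-in for calling the real system; returns elapsed seconds."""
    start = time.perf_counter()
    _ = [term for term in ["alpha", "beta", "gamma"] if query in term]  # placeholder work
    return time.perf_counter() - start

def test_search_meets_fit_criterion():
    elapsed = measure_search_response_time("alp")
    assert elapsed <= RESPONSE_TIME_LIMIT_SECONDS, (
        f"search took {elapsed:.3f}s, exceeding the {RESPONSE_TIME_LIMIT_SECONDS}s fit criterion"
    )

if __name__ == "__main__":
    test_search_meets_fit_criterion()
    print("fit-criterion test passed")

The same approach pins down ambiguities like the earlier "three days: workdays or calendar days?" example, because the test must commit to one measurable interpretation.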

Validation and Verification: Requirements Review
• Examine the original goals/objectives of the system
• Compare the requirements with the goals/objectives
• Examine the environment of the system
• Examine the information flow and proposed functions
• Assess and document risks, discuss alternatives
• Assess how to test the system: how will the requirements be re-validated as they change?

Thought Exercise
• What do you think it takes to be a great tester?

Summary
You should:
• Understand what Business Requirements Specifications are
• Understand the Role of Requirements in Testing
• Understand Tester Involvement in Requirements
• Know How to Obtain Requirements in Testing
• Know Test-Driven Development
• Understand what Software Requirements Specifications are

Testers
Dr. Robin Poston, Associate Director of the System Testing Excellence Program
FedEx Institute of Technology, University of Memphis

Agenda
• Critical Tester Skills
• Types of Testers
• Test Staffing Variations
• Attitude Matters

Critical Tester Skills
• General qualifications
  - Education and good verbal and written communication skills in formal and informal settings
  - Ability to read closely with significant attention to detail
• Testing skills
• Application domain knowledge
• Computer technical expertise
(Source: Michael Hunter, Microsoft Corporation)

Critical Tester Skills (cont'd.)
[Figure: roles positioned among three overlapping skill areas. Skill areas: Application Domain Knowledge, Testing Skills, Computer Technical Expertise. Roles: Business Analysts, Test Technicians, Test Administrators, Test Tool Experts, Manual Test Specialists, Automated Test Specialists]

Types of Testers
• Test Engineers (Leads)
  - Skilled, educated, and experienced in all areas of testing (planning, design, implementation, and execution)
  - Possess skills, expertise, and knowledge
• Test Technicians and Specialists
  - Less skilled, educated, and experienced in the critical skill areas (testing, application domain, and technology)
  - Qualified for scripted manual test execution and other straightforward technician-level tasks

Test Participants & Roles
The participants in testing vary with the size of the project. There are five possible roles:
1. Test Data Administrator: Controls file structure and database contents; manages availability and recoverability of data; supports developing and implementing the test data requirements strategy.
2. Test Execution Lead: Prepares test-case conditions; manages test data and files; executes desk checking and test runs; reports test results and discrepancies (errors).
3. Test Team Leader: Supports test planning; manages test preparation, execution, and evaluation; creates test plan and test-case summaries; manages testing support; oversees activities of the test team; provides technical assistance to the test team; participates in quality assurance reviews and inspections.
4. Test Team Member: Supports defining test conditions for test-case designs and reviewing test-case specifications and test results.
5. Test Supervisor/Coordinator: Assigns tests to test teams; reviews and approves all relevant test materials.
Source: National Occupational Classification; Human Resources & Skills Development Canada, 2006

Systems Testing Technicians
Systems testing technicians execute test plans to evaluate the performance of software applications and information and telecommunications systems. They are employed in information technology units throughout the private and public sectors.
Related titles:
• Application tester
• Application testing technician
• Software test coordinator
• Software tester
• Systems testing technician
• User acceptance tester
Main duties (systems testing technicians perform some or all of the following):
• Develop and document software testing plans
• Install software and hardware and configure operating system software in preparation for testing
• Execute, analyze, and document results of software application tests and information and telecommunication systems tests
• Develop and implement software and information system testing policies, procedures, and scripts
Employment requirements:
• Completion of a college program in information systems, computer science, computer programming, or network administration is usually required
• College or other courses in computer programming or network administration are usually required
• Certification or training provided by software vendors may be required by some employers
Source: National Occupational Classification; Human Resources & Skills Development Canada, 2006

Test Staffing Variations
• Temporary Assignment: people employed elsewhere join the testing team
  - Users or technical support staff
  - Add application domain expertise
  - Add personal workflows & scenarios
• Rotation: developers, testers, and technical support all rotate among their roles
  - Management and HR must agree
  - Must be positioned as positive & supported
  - Minimizes test skills and has high turnover

Test Staffing Variations (cont'd.)
Archetypes and definitions:
• Sequence: Development is an effort based on a linear set of discrete tasks. People work in specialized functions with formalized interactions. People are valued for their particular specialized skills.
• Group: Development is development and production combined, where a set of discrete tasks is repeated until the software is complete. Developers and testers work in interdependent groups and are rewarded for their skills and ability to work together.
• Network: Development is a process of constant development with a focus on the outcome. Tasks are not sequential and are tied to individuals (or small groups) whose participation is based on interaction. Group members are rewarded for what they produce. This involves a network of people and hub-and-spoke management.
Source: Software Development Teams: Three Archetypes and Their Difference, Steve Sawyer.

Test Staffing Variations (cont'd.)
Comparison of the archetypes (Sequence / Group / Network) by aspect:
• Perspective: Process first; Product first
• Belief mode: Control; Conflict; Interaction
• Orientation: Prescriptive; Normative; Descriptive
• View of task: Production & development; Development
• Implied method: Linear & sequential; Iterative & sequential; Emergent & non-linear
• Tie to context: Prescribed boundary; Permeable boundary; Embedded
• People's actions: Prescribed; Role & goal driven; Individual & linked
• Examples: SDLC, SEI/CMM; Spiral, RAD, JAD; Open source, Chief programmer
Source: Software Development Teams: Three Archetypes and Their Difference, Steve Sawyer.

Attitude Matters
• Professional pessimism: approach testing with the assumption that bugs exist and we will find them
  - Advocate for quality in a non-adversarial, impersonal, non-disruptive way
• Curiosity: find issues when writing & running tests
  - Avoid perfunctory action and explore more
• Focus: understand quickly changing priorities
• General and supportive role: a steady, consistent test process which may not have glory and glamour
• Work hard: schedules and budgets compress testing
• Advocate quality
(Source: www.testing.com, Brian Marick)

Thought Exercise
• A good tester understands how users spend their days. Why is this useful?

Summary
• Know Critical Tester Skills
• Know the Types of Testers
• Know Test Staffing Variations
• Know that Attitude Matters

Users and Testers
Dr. Robin Poston, Associate Director of the System Testing Excellence Program
FedEx Institute of Technology, University of Memphis

Agenda
• Who are Users
• Communicating with Users
• User Testing
• Test Coverage
• Testing With 5 Users
• Why Not With 1 User
• When To Test With More
• User Testing Process

Objectives
• Understand Who Users Are
• Understand how to Communicate with Users
• Know how Users are involved in Testing
• Understand Test Coverage
• Learn about Testing With 5 Users and not With 1 User
• Know When To Test With More Users
• Understand the User Testing Process

Who are Users?
• Business analysts
• Help desk
• Customer support
• Technical support
• End customers and business users

Communicating with Users
• Traditional focus groups
• Electronic focus groups
• Iterative surveys
• Exploratory surveys
• Scenario-building

User Testing
• Acceptance: performed by the customer prior to the customer accepting delivery of the system, using black-box testing.
• Pilot: establishes readiness for release, in terms of starting to use the system to do real work.

User Testing (cont'd.)
• Alpha: actual operational testing by users/customers or an independent test team at the company site.
  - Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing, before it goes to beta testing.
• Beta: the software is released to a limited audience.
  - Released so that further testing ensures it has few faults.
  - Made available to the public to increase the feedback received from many future users.

Test Coverage
• Workflows
• Data sets
• Configurations
• Field conditions
• Surfaces usability issues that might not occur to testers
• Quality can be inconsistent

Testing With 5 Users
• Usability testing is costly and complex, and user tests may be reserved for web design projects with large budgets and long schedules.
• Highly structured usability tests are a waste of resources.
• The best results come from testing with no more than 5 users, with as many small tests as possible.
• See the next slide for the graph.
(Source: Deming 1986 and Ishikawa 1985; www.testing.com, Brian Marick)

Testing With 5 Users (cont'd.)
[Graph: share of usability problems found vs. number of test users]
(Source: Deming 1986 and Ishikawa 1985; www.testing.com, Brian Marick)

Testing With 5 Users (cont'd.)
• Test with 15 users to find the usability problems; 5 at a time is good, spreading limited resources across many small tests instead of one big one
• Budget for 15 representative customers and have them test the system, spending the time on 3 tests with 5 users each
• Run multiple tests, because the goal of testing is to improve the design and document weaknesses
• After the first test with 5 users finds 85% of the problems, redesign
• Test again
• The second test with 5 users will discover the remaining 15% of the problems not found in the first test (a sketch of the underlying discovery model appears below)
(Source: Deming 1986 and Ishikawa 1985; www.testing.com, Brian Marick)
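The 85% figure can be reproduced with the commonly cited problem-discovery model, which is not given explicitly in the slides: the share of problems found by n users is 1 - (1 - p)^n, with p of roughly 0.31 per user. The short Python sketch below computes the curve under that assumption.

# Hypothetical sketch of the usual problem-discovery model (assumption: each
# test user independently finds about 31% of the usability problems).
P_PER_USER = 0.31

def fraction_found(num_users: int, p: float = P_PER_USER) -> float:
    """Expected fraction of usability problems found by num_users test users."""
    return 1.0 - (1.0 - p) ** num_users

if __name__ == "__main__":
    for n in (1, 3, 5, 15):
        print(f"{n:2d} users -> about {fraction_found(n):.0%} of problems found")
    # With p = 0.31 this prints roughly 31%, 67%, 84%, and 100% (99.6%),
    # matching the slide's claim that 5 users find about 85% of the problems.

Under this model, a second round of 5 users on the redesigned system again finds most of what remains, which is the slide's rationale for many small tests rather than one large one.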

Why Not With 1 User
• Is 15 tests with 1 user better than 3 tests with 5 users?
• The curve shows we learn much more from the first user than from any subsequent user, so why use more than 1 user per test? Two reasons:
  - Risk of being misled by 1 person, who may perform certain actions by accident or in an unrepresentative manner; 3 users are enough to get diversity in behavior
  - The cost-benefit of user testing puts the optimal ratio at around 5 users; it is better to use the investment in setting up to get multiple findings
(Source: Deming 1986 and Ishikawa 1985; www.testing.com, Brian Marick)

When To Test With More
• Use more users when there are several highly distinct groups of users
• For example, children and parents: the two groups of users have different behavior
• Even when the groups differ, there will be similarities in behavior across them
• Many problems are related to the way people use the system
• There is no need to include as many members of each group as you would in a single test of a single group
• The overlap will ensure a better outcome from testing with a smaller number of people in each group, for example:
  - 4 users from each category if testing with two groups of users
  - 3 users from each category if testing three or more groups of users
(Source: Deming 1986 and Ishikawa 1985; www.testing.com, Brian Marick)

User Testing Process
• Evaluate software using a variety of methods:
  - Usage statistics
  - Analysis of feedback provided
  - Structured interviews with users
• Testing with users is effective; it provides useful information used to make effective modifications
• User testing is typically done with minimal resources to produce useful results
• Other methods supplement the results, providing more accurate assessments:
  - Expert evaluation, heuristic review, usability walk-through, surveys, and monitoring software
• User testing involves users working through a set of tasks using the application
(Source: Deming 1986 and Ishikawa 1985; www.testing.com, Brian Marick)

User Testing Process (cont'd.)
• Usability testing sessions are conducted in a lab or at the person's desk
• One person per test session, lasting 30-60 minutes
• The outcome is an evaluation of the ease of use and intuitiveness of the system
• Users provide feedback about what they like and dislike and about difficulties they may have
• Use the feedback to revise the application
(Source: Deming 1986 and Ishikawa 1985; www.testing.com, Brian Marick)

Thought Exercise
• Who does the testing team serve, and how does it do that?

Summary
You should:
• Know Who Users Are
• Know how to Communicate with Users
• Know how Users are involved in Testing
• Know Test Coverage
• Know about Testing With 5 Users and not With 1 User
• Know When To Test With More Users
• Know the User Testing Process