System Development Chapter

* Approach – this section describes the testing methods which will be used.
* Pass/fail criteria – used in the test plan to define a global pass/fail criterion. For example, if any error is found in any of the series of tests, the whole series will be regarded as having failed.
* Suspension criteria – the project manager/analyst must plan on the basis of successful tests (tests that find errors); this section therefore defines the circumstances under which a series of tests should be suspended.
* Resumption conditions – for each type of suspension criterion, the conditions under which the tests can be resumed.
* Test deliverables – a list of all the documents associated with the test plan.
* Environmental needs – a summary of the hardware required to carry out the test series, the data which is assumed to be already available (i.e. static data) and any special test tools.
* Responsibilities – the people and organizations who will be charged with undertaking all aspects of the test plan:
  * Staffing and training needs – identify the people carrying out the tests, and any training needs;
  * Schedules – a project plan for the tests, identifying milestones, test dates and estimates;
  * Risks and contingencies – identify where particularly high-risk assumptions have been made.
* Approvals – the Quality Approval sign-off.
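
As a minimal illustration only (not part of the chapter's material), the outline above could be captured as a simple checklist in Python; the constant name, section wording and the helper function below are assumptions made for the sketch.

```python
# Hypothetical checklist of test plan sections, keyed by section name.
TEST_PLAN_SECTIONS = {
    "Approach": "testing methods to be used",
    "Pass/fail criteria": "global criterion, e.g. any error fails the whole series",
    "Suspension criteria": "when a series of tests must be halted",
    "Resumption conditions": "when suspended tests may be resumed",
    "Test deliverables": "documents associated with the test plan",
    "Environmental needs": "hardware, static data and special test tools",
    "Responsibilities": "people and organizations undertaking the plan",
    "Staffing and training needs": "who carries out the tests and their training",
    "Schedules": "milestones, test dates and estimates",
    "Risks and contingencies": "high-risk assumptions",
    "Approvals": "quality approval sign-off",
}

def missing_sections(plan: dict) -> list:
    """Return the sections of the outline that a draft plan has not yet filled in."""
    return [name for name in TEST_PLAN_SECTIONS if not plan.get(name)]
```

Used as a review aid, an empty dictionary would report every section as missing, which mirrors the idea that the plan is not complete until each heading has content.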

Test Design
The test design describes how a particular system feature will be tested and defines a series of test cases which will be used. There can be many test designs for a test plan. The typical contents of a test design are:
* Test design identifier – a unique identifier for this set of tests.
* Features to be tested – the specific feature that this test design is intended to test. For each feature to be tested, a reference should be provided to the specification item which supports the feature.
* Test identification – lists the test cases which are needed, together with a brief description.
* Feature pass/fail criteria – a lower-level pass/fail criterion, overriding the global criterion.
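
A hedged sketch of how these contents might be recorded follows; the class and field names are illustrative assumptions, not definitions from the chapter.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TestDesign:
    """Illustrative record of the test design contents listed above."""
    test_design_identifier: str            # unique identifier for this set of tests
    features_to_be_tested: dict[str, str]  # feature -> supporting specification item reference
    test_identification: list[str] = field(default_factory=list)  # test case identifiers, with brief descriptions
    feature_pass_fail_criteria: Optional[str] = None  # lower-level criterion, if any

    def pass_fail_criterion(self, global_criterion: str) -> str:
        # The feature-level criterion, where given, overrides the plan-wide one.
        return self.feature_pass_fail_criteria or global_criterion
```

The small method simply makes the overriding rule explicit: the global criterion applies unless a feature-level criterion has been stated.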

Test Case
The purpose of the test case is to define specific tests. A test case may be used by many test designs. A test case consists of the following items:
* Test case identifier – a unique identifier for the test case.
* Test description – a brief description of the test purpose.
* Input specification – a definition of the data or input conditions which are required to carry out the test. The specification should name all the tables and fields which are to be used.
* Output specification – describes what is expected to happen as a result of running the test.
* Environmental needs – describes any special requirements for carrying out this test, in addition to those given in the test plan.
* Inter-case dependencies – the tests which can only be performed once this test has been performed (possibly without error), and the tests which must be performed before this one.
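
These items lend themselves to a simple record. The following Python sketch is illustrative only; the field names and the dependency check are assumptions about how inter-case dependencies might be enforced, not part of the chapter.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """Illustrative test case record; fields follow the items listed above."""
    test_case_identifier: str
    test_description: str
    input_specification: str                 # data/input conditions, tables and fields used
    output_specification: str                # what is expected to happen when the test runs
    environmental_needs: str = ""            # requirements beyond those in the test plan
    prerequisite_cases: list[str] = field(default_factory=list)  # tests that must run before this one

def runnable(case: TestCase, completed: set[str]) -> bool:
    """A case may run only when all of its prerequisite cases have already been performed."""
    return all(dep in completed for dep in case.prerequisite_cases)
```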

Test Procedure
The test procedure describes how the test should be carried out. This is important because the people carrying out the test may have little familiarity with the system, and may have only limited IT experience. A test procedure consists of the following:
* Test procedure identifier.
* Purpose – lists all the test cases which are associated with the procedure.
* Procedure steps – describes what the tester should do. This is very important when the testers are distinct from the development team; acceptance testing is a good example of this condition. The steps should include:
  * Log – describes any special requirements for logging the test results.
  * Start up – describes how to set up the system before testing.
  * Procedure – special procedures to be followed during the tests.
  * Measurement – describes how any measurements should be made (for example, timings).
  * Shut down – describes how to terminate the series of tests.
  * Restart – describes the conditions under which the series of tests can be restarted, perhaps after an error.
  * Wrap up – describes how to bring the entire environment back to a controlled condition after the series of tests. This is a critically important stage, since every test should start from known, controlled conditions.
* Contingencies – this section describes what to do if problems occur during the test. For example, if a test caused the system to crash, the tester might be unsure of what to do next. A development team contact point is a possible general contingency action.
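
A procedure of this shape could be sequenced as an ordered set of steps. The class below is a hypothetical Python skeleton; none of the method names come from the chapter, and real procedures would fill each stage with the instructions the tester must follow.

```python
class TestProcedure:
    """Hypothetical skeleton mirroring the procedure steps described above."""

    def __init__(self, identifier: str, test_cases: list[str]):
        self.identifier = identifier        # test procedure identifier
        self.test_cases = test_cases        # purpose: the test cases this procedure covers

    def start_up(self) -> None:
        """Set up the system before testing, from known, controlled conditions."""

    def procedure(self) -> None:
        """Special procedures to be followed during the tests."""

    def measurement(self) -> None:
        """Take any required measurements, for example timings."""

    def shut_down(self) -> None:
        """Terminate the series of tests."""

    def wrap_up(self) -> None:
        """Return the environment to a controlled condition after the tests."""

    def run(self) -> None:
        # Execute the stages in the order laid out in the procedure document;
        # shut down and wrap up happen even if an earlier stage fails.
        self.start_up()
        try:
            self.procedure()
            self.measurement()
        finally:
            self.shut_down()
            self.wrap_up()
```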

Test Log
The test log gives a chronological record of the test cases and is where test results are recorded. The execution of each test procedure will result in the production of a test log record. The test log is typically one line per test case and consists of the following items:
* Test log identifier – each test log will have a unique identifier.
* Description – a brief description of any observation which applies to all the tests, but does not warrant an exception report.
Individual test log entries contain:
* Date.
* Time.
* Name of the person in the team carrying out each test.
* Result – the actual observed output from the test.
* Error flag – if the result is an error, this should be recorded.
* Incident report identifier.

Test Incident Report
An incident is any test result that was not expected in the test case or procedure. When such an incident happens, an incident report is recorded. An error does not in itself require the production of an incident report. This is consistent with the initial definition of testing objectives; by that definition, an error is not an unexpected event.
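
As a final hedged sketch, a one-line-per-test-case log entry and the surrounding log could look something like the Python below; the class and field names are assumptions for illustration, and the incident report identifier is only filled in when an unexpected event has actually occurred.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TestLogRecord:
    """Illustrative one-line-per-test-case entry, mirroring the items above."""
    test_log_identifier: str
    date: str
    time: str
    tester: str                              # name of the person carrying out the test
    result: str                              # actual observed output
    error_flag: bool = False                 # recorded when the result is an error
    incident_report_identifier: Optional[str] = None  # set only for unexpected events

@dataclass
class TestLog:
    """Chronological record of test case executions produced by the test procedures."""
    description: str                         # observations applying to all the tests
    records: list[TestLogRecord] = field(default_factory=list)

    def add(self, record: TestLogRecord) -> None:
        # Each executed test procedure contributes records, one line per test case.
        self.records.append(record)
```

Keeping the error flag and the incident report identifier as separate fields reflects the distinction drawn above: a found error is an expected outcome of testing, whereas an incident report is raised only for results the test case or procedure did not anticipate.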