
Tuesday 2 April 2013

Regression Testing


When errors are found in a program, they are rectified by making changes to the program. These changes may, in turn, introduce new errors elsewhere in the program. Therefore, all the previous test cases are executed again after a change; this type of testing is called regression testing.
In a broader context, successful tests (of any kind) result in the discovery of errors, and errors must be corrected. Whenever software is corrected, some aspect of the software configuration (the program, its documentation, or the data that supports it) is changed. Regression testing is the activity that helps to ensure that changes (due to testing or for other reasons) do not introduce unintended behavior or additional errors.
Regression testing may be conducted manually, by re-executing a subset of all test cases, or by using automated capture/playback tools. Capture/playback tools enable the software engineer to capture test cases and results for subsequent playback and comparison.
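As a minimal sketch of re-executing only a subset, a test runner such as pytest lets test cases be tagged and selected by a user-defined marker (the "regression" marker and the function under test below are assumptions for illustration; in a real project the marker would be registered in pytest.ini):

    # test_invoice.py -- tagging a regression subset with pytest.
    import pytest

    def total(prices):
        # Hypothetical function under test.
        return sum(prices)

    @pytest.mark.regression
    def test_total_of_known_inputs():
        # A previously passing case, kept so future changes can be checked.
        assert total([10, 20, 30]) == 60

    def test_total_empty():
        assert total([]) == 0

Running "pytest -m regression" then re-executes only the tagged subset, while a plain "pytest" run executes everything.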

The regression test suite (the subset of tests to be executed) contains three different classes of test cases:

  • A representative sample of tests that will exercise all software functions.
  • Additional tests that focus on software functions that are likely to be affected by the change.
  • Tests that focus on the software components that have been changed.

As integration testing proceeds, the number of regression tests can grow quite large. Therefore, the regression test suite should be designed to include only those tests that address one or more classes of errors in each of the major program functions; it is impractical and inefficient to re-execute every test for every program function once a change has occurred. One way to keep the suite small is to select only the tests that exercise the changed components, as sketched below.
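As a rough illustration of that selection step, the Python sketch below assumes a hand-maintained map from each test to the modules it exercises (a coverage tool could generate such a map in practice); given the set of changed modules, only the intersecting tests are re-run. All names here are hypothetical:

    # Hypothetical coverage map: which modules each test exercises.
    TEST_COVERAGE = {
        "test_login": {"auth", "session"},
        "test_checkout": {"cart", "payment"},
        "test_report": {"reporting"},
    }

    def select_regression_tests(changed_modules):
        """Return only the tests whose covered modules include a changed one."""
        changed = set(changed_modules)
        return [name for name, covered in TEST_COVERAGE.items()
                if covered & changed]

    # A change to the payment module selects only test_checkout.
    print(select_regression_tests({"payment"}))  # ['test_checkout']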

For regression testing, some test cases that were executed on the old system are maintained, along with the outputs produced by the old system. These test cases are executed again on the modified system, and their outputs are compared with the earlier outputs to make sure that the system behaves as before on these test cases. This is frequently a major task when modifications are made to existing systems.
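A minimal sketch of this compare-with-previous-output step in Python, assuming the old system's output for each maintained test case has already been saved to a baseline file (run_system and the file paths are placeholders, not part of any real tool):

    def check_against_baseline(run_system, input_path, baseline_path):
        """Re-run a saved test case and compare with the old system's output."""
        with open(input_path) as f:
            current = run_system(f.read())
        with open(baseline_path) as f:
            expected = f.read()
        if current != expected:
            raise AssertionError(
                f"regression in {input_path}: output differs from baseline")

Each maintained test case becomes one (input file, baseline file) pair, and a loop over all pairs performs the comparison described above.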


A consequence of this is that the test cases for systems should be properly documented for future use. Often, when we test our programs, the test cases are treated as "throwaway" cases: after testing is complete, the test cases and their outcomes are discarded. With this practice, every time regression testing has to be done, the set of test cases must be re-created, resulting in increased cost.

In fact, for many systems that are frequently changed, regression testing "scripts" are used to perform the regression testing automatically after changes. A regression testing script contains all the inputs given by the test cases and the outputs produced by the system for these test cases. These scripts are typically produced during system testing, as regression testing is generally done only for complete systems or subsystems. When the system is modified, the scripts are executed again on the modified system and the outputs are compared with the outputs recorded in the scripts. Given such scripts, regression testing can be largely automated through the use of tools.
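As a sketch of such a script, the recorded cases could be kept one per line as JSON objects pairing an input with its expected output; the small driver below replays them and reports mismatches (the file format and the system callable are assumptions here, not a standard tool):

    import json

    def replay(system, script_path):
        """Replay every recorded case; return the number of mismatches."""
        failures = 0
        with open(script_path) as f:
            for line in f:
                case = json.loads(line)  # e.g. {"input": "abc", "expected": "ABC"}
                actual = system(case["input"])
                if actual != case["expected"]:
                    failures += 1
                    print(f"FAIL: {case['input']!r} -> {actual!r}, "
                          f"expected {case['expected']!r}")
        return failures

    # Example: replaying a script against str.upper as a stand-in system.
    # replay(str.upper, "regression_script.jsonl")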