Tuesday, 16 April 2013

What are the advantages and disadvantages of automated testing?


The following are the advantages of using automated testing tools: 
  • Fast: Automated testing tools run tests significantly faster than human testers.
  • Reliable: Tests perform precisely the same operations each time they are run, thereby eliminating human error.
  • Repeatable: You can test how the software reacts under repeated execution of the same operations.
  • Programmable: You can program sophisticated tests that bring out hidden information from the application.
  • Reusable: You can reuse automated test scripts, user-defined functions, etc. (see the sketch after this list).
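As an illustration, here is a minimal sketch of an automated test written with Python's built-in unittest module. The login_user function and its credentials are hypothetical stand-ins for real application code; the point is that, once scripted, every run performs exactly the same steps and the script can be reused in any later test cycle.

```python
import unittest


def login_user(username, password):
    """Hypothetical function under test, standing in for real application code."""
    valid_credentials = {"alice": "s3cret"}
    return valid_credentials.get(username) == password


class LoginTests(unittest.TestCase):
    """Each run performs exactly the same operations, eliminating human error."""

    def test_valid_credentials_are_accepted(self):
        self.assertTrue(login_user("alice", "s3cret"))

    def test_invalid_credentials_are_rejected(self):
        self.assertFalse(login_user("alice", "wrong-password"))


if __name__ == "__main__":
    unittest.main()
```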

Automated tools have the following disadvantages: 
  • Usability testing: Usability cannot be automated, so automated tools cannot rate the usability of an application.
  • Cost: Automated tools are usually licensed and therefore quite expensive.
  • Programming knowledge required: Test scripts have to be customized according to the test requirements, which requires programming skills.
  • Costly test maintenance: With record-and-playback methods, even a minor change in the GUI means the test script has to be re-recorded or replaced with a new one.

What is a test case review?

Software testing plays a vital role in ensuring the quality of a software product, and test cases are the key tools for testing. A test case review has to be thorough in order to ensure that effective and adequate testing is done.

What is the importance of a test case review?

A review of the test cases ensures that:
  • Test cases are written with the intent to detect defects
  • The understanding of the requirements is correct
  • Impact areas are identified and brought under test
  • Test data is correct and represents every possible class of the domain
  • Positive and negative scenarios are covered (see the sketch after this list)
  • The expected behavior is documented correctly
  • Test coverage is adequate
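As an illustration of the test data and positive/negative points above, here is a minimal sketch using Python's unittest. The validate_age function and its 18-120 rule are hypothetical, chosen only to show one representative test value per class of the domain, covering both positive and negative scenarios.

```python
import unittest


def validate_age(age):
    """Hypothetical validation rule: accept integer ages from 18 to 120 inclusive."""
    return isinstance(age, int) and 18 <= age <= 120


class AgeValidationTests(unittest.TestCase):
    def test_every_class_of_the_domain(self):
        # One representative value per equivalence class, positive and negative.
        cases = [
            (18, True),     # lower boundary (positive)
            (120, True),    # upper boundary (positive)
            (17, False),    # just below the valid range (negative)
            (121, False),   # just above the valid range (negative)
            (-5, False),    # clearly invalid input (negative)
            ("18", False),  # wrong type (negative)
        ]
        for value, expected in cases:
            with self.subTest(value=value):
                self.assertEqual(validate_age(value), expected)


if __name__ == "__main__":
    unittest.main()
```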
What is the methodology of a test case review?

Typically, reviews should be done during each phase of the testing life cycle.
The phases involved in the software testing life cycle are:

Requirements understanding -- During this phase, the review of requirements should be undertaken with utmost care and carried out systematically to ensure the clarity, correctness and testability of the requirements.
Test preparation -- After the test scenarios are identified and test conditions and cases are built for each scenario, it is advisable to do a thorough and detailed review, using a checklist for test case review during this phase.
Test execution -- Doing a review after the test cases are executed is very important.
Test reporting -- A review during this phase helps ensure that all the required documents are prepared, metrics are collated and all project-specific formalities are completed.

What are the most common test case review defects?
  • Incomplete Test Cases
  • Missing negative test cases
  • No Test Data
  • Inappropriate/Incorrect Test data
  • Incorrect Expected behavior
  • Grammatical errors
  • Typos
  • Inconsistent tense/voice
  • Incomplete results/number of test runs
  • Defect details not updated
  • Changes to requirements not updated in Test case

Monday, 15 April 2013

Difference between Severity and Priority?

| Severity | Priority |
|---|---|
| In simple words, severity depends on the harshness of the bug. | In simple words, priority depends on the urgency with which the bug needs to be fixed. |
| It is an internal characteristic of the particular bug. Examples of high severity bugs: the application fails to start, crashes or causes data loss to the user. | It is an external characteristic of the bug (that is, based on someone's judgment). Examples of high priority bugs: the application does not allow any user to log in, a particular functionality is not working, or the client logo is incorrect. As these examples show, a high priority bug can have a high, medium or low severity. |
| Its value is based more on the needs of the end users. | Its value is based more on the needs of the business. |
| Its value takes only the particular bug into account. For example, the bug may be in an obscure area of the application but still have a high severity. | Its value depends on a number of factors (e.g. the likelihood of the bug occurring, the severity of the bug and the priorities of other open bugs). |
| Its value is (usually) set by the bug reporter. | Its value is initially set by the bug reporter, but can be changed by someone else (e.g. management or the developer) at their discretion. |
| Its value is objective and therefore less likely to change. | Its value is subjective (based on judgment) and can change over time as the project situation changes. |
| A high severity bug may be marked for a fix immediately or later. | A high priority bug is marked for a fix immediately. |
| The team usually needs only a handful of values (e.g. Showstopper, High, Medium and Low) to specify severity. | In practice, new priority values may be defined (typically by management) on a fairly constant basis, e.g. when there are too many high priority defects: instead of a single High value, values such as "Fix by the end of the day", "Fix in next build" and "Fix in the next release" may be introduced. |
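To make the distinction concrete, here is a minimal sketch in Python of a hypothetical bug record. The Severity and Priority enums, the BugReport dataclass and the example values are purely illustrative and are not tied to any particular bug tracker.

```python
from dataclasses import dataclass
from enum import Enum


class Severity(Enum):
    """How harsh the bug is: an internal characteristic, usually set by the reporter."""
    SHOWSTOPPER = 1
    HIGH = 2
    MEDIUM = 3
    LOW = 4


class Priority(Enum):
    """How urgently the bug should be fixed: a business judgment that may change over time."""
    IMMEDIATE = 1
    HIGH = 2
    MEDIUM = 3
    LOW = 4


@dataclass
class BugReport:
    title: str
    severity: Severity  # e.g. a crash with data loss -> HIGH
    priority: Priority  # e.g. an incorrect client logo -> HIGH, even if severity is LOW


# A low-severity but high-priority bug, as in the logo example above.
logo_bug = BugReport("Client logo is incorrect", Severity.LOW, Priority.HIGH)
```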

How to do exhaustive testing?


Exhaustively testing a software application (except maybe a very simple program a few lines long) may well be an impossible task due to the large number of:

1. All possible inputs
2. All possible input validations
3. All possible logic paths within the application
4. All possible outputs
5. All possible sequences of operations
6. All possible sequences of workflows
7. All possible speeds of execution -- and all of the above is for just a single user
8. All combinations of types of users
9. All possible numbers of users
10. All possible lengths of time each user may operate the application
And so on (we have not even touched on the types of test environments on which the tests could be run). A rough calculation below shows the scale of the problem.
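Here is a quick back-of-the-envelope sketch in Python, using entirely made-up value counts for a hypothetical registration screen, to illustrate how fast the number of input combinations alone grows:

```python
from math import prod

# Hypothetical form fields; the value counts are made up purely to
# illustrate how quickly the number of input combinations explodes.
field_value_counts = {
    "username_length": 64,    # usernames of 1 to 64 characters
    "password_length": 64,
    "country": 195,
    "date_of_birth": 36_500,  # roughly 100 years of possible dates
    "account_type": 4,
}

total_combinations = prod(field_value_counts.values())
print(f"Input combinations for one screen: {total_combinations:,}")
# Over a hundred billion combinations for a single screen, before even
# considering sequences of operations, concurrent users, timing, or
# the test environments on which the tests could run.
```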

However, it is possible to exhaustively execute your test suite using the following tips:

1. Your test suite should have test cases covering each documented requirement. Here my assumption is that each requirement is documented clearly.
2. The test cases should be specific, concise and efficient. Each test case should have clear and unambiguous steps and expected results.
3. The configuration data, input test data and output test data should be clearly specified.
4. You should have a clean and stable test environment in which to execute your test suite.
5. In a perfectly working application, it should be possible to execute each test case in the suite.
6. Each confirmed bug (found during testing or found by the client) should result in another test case being written or an existing test case being updated (see the sketch after this list).
7. Important: You should not assume the correctness and completeness of your test suite by yourself. A review of the test suite by peers, business people, managers, clients and users may provide you valuable inputs to correct it.
8. Discipline in maintaining your test suite and executing it will go a long way in preventing bugs from leaking to the clients/users of your application.
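As an illustration of tip 6, here is a minimal sketch using Python's unittest; the parse_amount function and the trailing-space defect it describes are hypothetical. The idea is simply that every confirmed bug leaves a regression test behind.

```python
import unittest


def parse_amount(text):
    """Hypothetical function that, before the fix, crashed on inputs with a trailing space."""
    return float(text.strip())


class RegressionTests(unittest.TestCase):
    """One test per confirmed bug keeps old defects from reappearing."""

    def test_trailing_space_no_longer_crashes(self):
        # Hypothetical client report: "19.99 " raised ValueError before the fix.
        self.assertEqual(parse_amount("19.99 "), 19.99)


if __name__ == "__main__":
    unittest.main()
```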