Tuesday 9 April 2013

Quality Assurance Process



I.        PRAD

The Product Requirement Analysis Document (PRAD) is prepared and reviewed by marketing, sales, and technical product managers. This document defines the requirements for the product: the "What". It is used by the developer to build the functional specification and by QA as a reference for the first draft of the Test Strategy.

II.       Functional Specification

The functional specification is the "How" of the product. It identifies how new features will be implemented, including items such as which database tables a particular search will query. This document is critical to QA because it is used to build the Test Plan.

QA is often involved in reviewing the functional specification for clarity and helping to define the business rules.

III.      Test Strategy

The Test Strategy is the first document QA should prepare for any project. This is a living document that should be maintained/updated throughout the project. The first draft should be completed upon approval of the PRAD and sent to the developer and technical product manager for review.

The Test Strategy is a high-level document that details the approach QA will follow in testing the given product. This document can vary based on the project, but all strategies should include the following criteria:
-        Project Overview - What the project is.

-        Project Scope - The core components of the product to be tested.

-        Testing - This section defines the test methodology to be used, the types of testing to be executed (GUI, Functional, etc.), how testing will be prioritized, testing that will and will not be done, and the associated risks. This section should also outline the system configurations that will be tested and the tester assignments for the project.

-        Completion Criteria - The objective criteria upon which the team will decide the product is ready for release.

-        Schedule - This should define the schedule for the project and include completion dates for the PRAD, Functional Specification, Test Strategy, etc. The schedule section should include build delivery dates, release dates, and the dates for the Readiness Review, QA Process Review, and Release Board meetings.

-        Materials Consulted - The documents used to prepare the test strategy.

-        Test Setup - This section should identify all hardware, software, and personnel prerequisites for testing. It should also identify any areas that will not be tested (such as third-party application compatibility).

IV.      Test Matrix (Test Plan)

The Test Matrix is the Excel template that identifies the test types (GUI, Functional, etc.), the test suites within each type, and the test categories to be tested. This matrix also prioritizes test categories and provides reporting on test coverage:
-        Test Summary report
-        Test Suite Risk Coverage report

Upon completion of the functional specification and test strategy, QA begins building the master test matrix. This is a living document and can change over the course of the project as testers create new test categories or remove non-relevant areas. Ideally, a master matrix need only be adjusted to include new feature areas or enhancements from release to release on a given product line.
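
The coverage reports themselves are simple tallies over the matrix rows. As an illustration only (the real matrix lives in Excel; the field names and values here are hypothetical), a sketch in Python of how the Test Summary and Test Suite Risk Coverage reports can be derived:

    from collections import Counter

    # Hypothetical matrix rows: (test type, suite, category, priority, status).
    matrix = [
        ("GUI", "Login", "Field validation", "High", "Pass"),
        ("GUI", "Login", "Hotkeys", "Low", "Skipped"),
        ("Functional", "Search", "Database query", "High", "Failed"),
        ("Functional", "Search", "Result paging", "Medium", "Pass"),
    ]

    # Test Summary report: result counts per test type.
    summary = Counter((row[0], row[4]) for row in matrix)
    for (test_type, status), count in sorted(summary.items()):
        print(f"{test_type:12} {status:8} {count}")

    # Test Suite Risk Coverage report: executed categories per priority
    # (anything except Skipped counts as executed).
    coverage = Counter(row[3] for row in matrix if row[4] != "Skipped")
    for priority, count in sorted(coverage.items()):
        print(f"Priority {priority}: {count} categories executed")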

V.       Test Cases

As testers build the Master Matrix, they also build their individual test cases. These are the specific functions testers must verify within each test category to qualify the feature. A test case is identified by an ID number and prioritized. Each test case has the following criteria (a minimal sketch follows the list):
-        Purpose - Reason for the test case
-        Steps - A logical sequence of steps the tester must follow to execute the test case
-        Expected Result - The expected result of the test case
-        Actual Result - What actually happened when the test case was executed
-        Status - Identifies whether the test case passed, failed, was blocked, or was skipped
         -        Pass - Actual result matched the expected result
         -        Failed - A bug was discovered that represents a failure of the feature
         -        Blocked - The tester could not execute the test case because of a bug
         -        Skipped - The test case was not executed this round
-        Bug ID - If the test case failed, the ID number of the resulting bug
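
As a minimal sketch (not part of the official template; field names and values are illustrative), a test case record in Python:

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class TestCase:
        case_id: str                  # test case ID number
        priority: str                 # e.g., High, Medium, Low
        purpose: str                  # reason for the test case
        steps: List[str]              # logical sequence of steps
        expected_result: str
        actual_result: Optional[str] = None
        status: str = "Skipped"       # Pass, Failed, Blocked, or Skipped
        bug_id: Optional[str] = None  # filled in only when status is Failed

    tc = TestCase(
        case_id="TC-0042",
        priority="High",
        purpose="Verify login rejects an empty password",
        steps=["Open login page", "Enter a valid username",
               "Leave password blank", "Submit"],
        expected_result="Error message displayed; user not logged in",
    )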

VI.      Test Results by Build

Once QA begins testing, it is incumbent upon them to provide results on a consistent basis to developers and the technical product manager. This is done in two ways: a completed Test Matrix for each build and a Results Summary document.

For each test cycle, testers should fill in a copy of the project's Master Matrix. This will create the associated Test Coverage reports automatically (Test Coverage by Type and Test Coverage by Risk/Priority). It should be posted where the necessary individuals can access the information.

Since the full Matrix is large and not easily read, it is also recommended that you create a short Results Summary that highlights key information. A Results Summary should include the following (a sample follows the list):
-        Build Number
-        Database Version Number
-        Install Paths (If applicable)
-        Testers
-        Scheduled Build Delivery Date
-        Actual Build Delivery Date
-        Test Start Date
-        Scope - What type of testing was planned for this build? For example, was it a partial build? A full-regression build? Scope should identify areas tested and areas not tested.
-        Issues - This section should identify any problems that hampered testing, that point to a trend toward a specific problem area, or that are causing the project to slip. For example, in this section you would note whether the build was delivered late, why, and what its impact was on testing.
-        Statistics - In this section, you can note things such as the number of bugs found during the cycle, the number of bugs closed during the cycle, etc.
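
For illustration, a filled-in Results Summary might look like this (all values are hypothetical):

    Build Number:              2.3.0.118
    Database Version:          4.7
    Install Paths:             N/A
    Testers:                   A. Smith, B. Jones
    Scheduled Build Delivery:  10 April
    Actual Build Delivery:     11 April
    Test Start Date:           11 April
    Scope:      Partial build; Login and Search suites tested, Reporting not tested.
    Issues:     Build delivered one day late due to a broken installer; the test
                start slipped one day accordingly.
    Statistics: 14 new bugs found, 9 bugs closed during the cycle.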

VII.     Release Package

The Release Package is the final document QA prepares. This is the compilation of all previous documents and a release recommendation. Each release package will vary by team and project, but they should all include the following information:

-        Project Overview - This is a synopsis of the project, its scope, any problems encountered during the testing cycle, and QA's recommendation to release or not release. The overview should be a "response" to the test strategy, noting areas where the strategy was successful, areas where it had to be revised, etc.

The project overview is also the place for QA to call out any suggestions for process improvements in the next project cycle.

Think of the Test Strategy and the Project Overview as "Project bookends".

-        Project PRAD - This is the Product Requirements Analysis Document, which defines what functionality was approved for inclusion in the project. If there was no PRAD for the project, it should be clearly noted in the Project Overview. The consequences of an absent PRAD should also be noted.

-        Functional Specification - The document that defines how functionality will be implemented. If there was no functional specification, it should be clearly noted in the Project Overview. The consequences of an absent Functional Specification should also be noted.

-        Test Strategy - The document outlining QA's process for testing the application.

-        Results Summaries - The results summaries identify the results of each round of testing. Each should be accompanied in the Release Package by the Test Coverage by Test Type and Test Coverage by Risk Type/Priority reports from the completed Test Matrix for that build. In addition, it is recommended that you include the full Test Matrix results from the test cycle designated as Full Regression.

-        Known Issues Document - This document is primarily for Technical Support. This document identifies workarounds, issues development is aware of but has chosen not to correct, and potential problem areas for clients.

-        Installation Instructions - If your product must be installed at the client site, it is recommended to include the Installation Guide and any related documentation as part of the release package.

-        Open Defects - The list of defects remaining in the defect tracking system with a status of Open. Technical Support has access to the system, so a report noting the defect ID, the problem area, and title should be sufficient.

-        Deferred Defects - The list of defects remaining in the defect tracking system with a status of deferred. Deferred means the technical product manager has decided not to address the issue with the current release.

-        Pending Defects - The list of defects remaining in the defect tracking system with a status of pending. Pending refers to any defect waiting on a decision from a technical product manager before a developer addresses the problem.

-        Fixed Defects - The list of defects waiting for verification by QA.

-        Closed Defects - The list of defects verified as fixed by QA during the project cycle.

The Release Package is compiled in anticipation of the Readiness Review meeting. It is reviewed by the QA Process Manager during the QA Process Review Meeting and is provided to the Release Board and Technical Support.

-        Readiness Review Meeting:

The Readiness Review meeting is a team meeting between the technical product manager, project developers and QA. This is the meeting in which the team assesses the readiness of the product for release.

This meeting should occur prior to the delivery of the Gold Candidate build. The exact timing will vary by team and project, but the discussion must be held far enough in advance of the scheduled release date so that there is sufficient time to warn executive management of a potential delay in the release.

The technical product manager or lead QA may schedule this meeting.

-        QA Process Review Meeting:

The QA Process Review Meeting is a meeting between the QA Process Manager and the QA staff on the given project. The intent of this meeting is to review how well the process was followed during the project cycle.

This is the opportunity for QA to discuss any problems encountered during the cycle that impacted their ability to test effectively. It is also the opportunity to review the process as a whole and discuss areas for improvement.

After this meeting, the QA Process Manager will give a recommendation as to whether enough of the process was followed to ensure a quality product and thus allow a release.

This meeting should take place after the Readiness Review meeting. It should be scheduled by the lead QA on the project.

-        Release Board Meeting:

This meeting is for the technical product manager and senior executives to discuss the status of the product and the team's release recommendation. If the results of the Readiness Review meeting and QA Process Review meeting are positive, this meeting may be waived.

The technical product manager is responsible for scheduling this meeting.

This meeting is the final check before a product is released.

Due to rapid product development cycles, it is rare that QA receives completed PRADs and Functional Specifications before they begin working on the Test Strategy, Test Matrix, and Test Cases. This work is usually done in parallel.

Testers may begin working on the Test Strategy based on partial PRADs or confirmation from the technical product manager as to what is expected to be in the next release. This is usually enough to draft a high-level strategy outlining immediate resource needs, potential problem areas, and a tentative schedule.

The Test Strategy is then updated once the PRAD is approved, and again when the functional specifications are complete enough to provide management with a committed schedule. All drafts of the test strategy should be provided to the technical product manager, and it is QA's responsibility to ensure that information provided in the document (such as potential resource problems) is clearly understood.

If the anticipated release does not represent a new product line, testers can begin the Master Test Matrix and test cases at the same time the project's PRAD is being finalized. Testers can build and/or refine test cases for the new functionality as the functional specification is defined. Testers often contribute to and are expected to be involved in reviewing the functional specification.

The results summary document should be prepared at the end of each test cycle and distributed to developers and the technical product manager. It is designed to inform interested parties of the status of testing and its possible impact on the overall project cycle.

Wednesday 3 April 2013

Email Format Test Cases

Valid email format cases

email@domain.com (username at most 64 characters; domain suffix at least 4 characters)
email@domain.com (username at most 64 characters; domain suffix at most 255 characters)
firstname.lastname@domain.com (dot in username)
email@subdomain.domain.com (subdomain in domain)
firstname+lastname@domain.com (plus sign in username)
email@123.123.123.123 (domain is an IP address)
email@[123.123.123.123] (domain is an IP address in square brackets)
Numeric@domain.com (numeric characters in username)
Alphanumeric@domain.com (alphanumeric username)
email@domain-one.com (dash in domain name)
a_b@domain.com (underscore in username)
email@123.123.123.123.com (IP-style name as domain, followed by a suffix such as .com, .co.uk, .co.in, or .co.au)




Invalid email format cases

email@domain,com (comma instead of dot in domain)
@domain.com (missing username)
<spaces>@domain.com (spaces as username)
Email@domain.com (fewer than the minimum 6 characters)
Address with both @ and . missing
email@domain.com (ASCII characters other than 33 to 47)
email<@domain.com> (encoded HTML)
email.domain.com (missing @)
email@domain@domain.com (two @ signs)
.email@domain.com (leading dot in username)
email.@domain.com (trailing dot in username)
email@domain.com(text) (text following the address)
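
These cases translate directly into an automated check. A minimal sketch in Python with a deliberately simplified pattern; real-world address syntax (RFC 5322) is far more permissive, so treat this as illustrative only:

    import re

    # Simplified: username starts with an alphanumeric character and may
    # contain dots, plus signs, dashes, and underscores; the domain is either
    # an IP address (optionally bracketed) or dot-separated labels ending in
    # a suffix of two or more letters. Not full RFC 5322.
    EMAIL_RE = re.compile(
        r"^[A-Za-z0-9][A-Za-z0-9._+\-]*@"
        r"(\[?\d{1,3}(\.\d{1,3}){3}\]?"
        r"|([A-Za-z0-9-]+\.)+[A-Za-z]{2,})$"
    )

    def is_valid_email(address: str) -> bool:
        username = address.split("@")[0]
        return (
            EMAIL_RE.fullmatch(address) is not None
            and len(username) <= 64         # username at most 64 characters
            and ".." not in address         # no consecutive dots
            and not username.endswith(".")  # no trailing dot in username
        )

    for case in ["email@domain.com", "firstname+lastname@domain.com",
                 "email@domain,com", "email.domain.com", ".email@domain.com"]:
        print(case, "->", is_valid_email(case))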

Waterfall Model - SDLC


The waterfall model is a sequential software development model in which development is seen as flowing steadily downwards (like a waterfall) through several phases.
The following phases are completed in strict sequential order:

    Requirements specification
    Design
    Implementation
    Integration
    Testing
    Installation
    Maintenance

When to use the waterfall model:

    Requirements are very well known, clear and fixed.
    Product definition is stable.
    There are no ambiguous requirements
    Ample resources with required expertise are available freely

Advantages of waterfall model:

    Due to the rigidity of the model, it is easy to manage.
    Phases are processed and completed one at a time.
    It is simple and easy to understand and use.
    Useful for smaller projects where requirements are very well understood.
    Each phase has specific deliverables and a review process.

 Disadvantages of waterfall model:

    In the testing stage, it is very difficult to go back and change something that was not well thought out in the design stage.
    High amounts of risk and uncertainty.
    Not a good model for complex and object-oriented projects.
    Poor model for ongoing projects.


Manual Testing Tips II


Why test?

Testing is absolutely essential to make sure the software works properly and does the work that it is meant to perform.

What to test?
Any working product which forms part of the software application has to be tested. Both data and programs must be tested.

How often to test?
When a program (source code) is modified or newly developed, it has to be tested.

Who tests?
The programmer, the tester, and the customer.

Requirements

User Requirements Specification (URS)
This document describes in detail what is expected of the software product from the user's perspective. The wording of this document is in the same tone as that of a user.

Software Requirements Specification (SRS)
A team of business analysts with strong domain or functional expertise visits the client's site, learns the activities that are to be automated, and prepares a document based on the URS; this document is called the SRS.

Design

High Level Design (HLD)
List of modules and a brief description of each module.
Brief functionality of each module.
Interface relationships among modules.
Dependencies between modules (if A exists, B exists, etc.).
Database tables identified along with key elements.
Overall architecture diagrams along with technology details.

Low Level Design (LLD)
Detailed functional logic of the module, in pseudo code.
Database tables, with all elements, including their type and size.
All interface details with complete API references (both requests and responses).
All dependency issues.
Error message listings.
Complete inputs and outputs for a module.

Testing Levels

Unit Testing
Programs are tested at the unit level.
The same developer will do the test.
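
For example, a unit test of a single function using Python's standard unittest module (the function under test is invented for the illustration):

    import unittest

    def apply_discount(price: float, percent: float) -> float:
        # Hypothetical unit under test: apply a percentage discount.
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    class ApplyDiscountTest(unittest.TestCase):
        def test_typical_discount(self):
            self.assertEqual(apply_discount(200.0, 25), 150.0)

        def test_zero_discount_returns_price(self):
            self.assertEqual(apply_discount(99.99, 0), 99.99)

        def test_invalid_percent_rejected(self):
            with self.assertRaises(ValueError):
                apply_discount(100.0, 150)

    if __name__ == "__main__":
        unittest.main()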

Integration Testing
When all the individual program units have been tested in the unit testing phase and all units are clear of any known bugs, the interfaces between those modules are tested.
This ensures that data flows from one piece to another.

System Testing
After all the interfaces between multiple modules are tested, the whole set of software is tested to establish that all modules work together correctly as an application.
Put all the pieces together and test.

Acceptance Testing
The client tests it at their site, in a near-real-time or simulated environment.

Manual Testing Tips



Testing vs. Debugging

  • Testing is focused on identifying the problems in the product
  • Done by the tester
  • Need not know the source code
  • Debugging is to make sure that the bugs are removed or fixed
  • Done by the developer
  • Needs to know the source code

Detailed Test Plan

  • What is to be tested?
  • Configuration – check all parts for existence
  • Security – how the safety measures work
  • Functionality – the requirements
  • Performance – with more users and more data
  • Environment – keep the product the same but vary other settings

Detailed Test Cases

The test cases will have a generic format as below.
  • Test Case ID
  • Test Case Description
  • Test Prerequisite
  • Test Inputs
  • Test Steps
  • Expected Results

Detailed Test Case (DTC)

  • Simple Functionality – field level
  • Communicative Functionality – data on one screen goes to another
  • End-to-End Test Cases – the full sequence as the end users would carry it out

Test Execution and Fault Reports

  • Test Case Assignment – done by the test lead
  • Test Environment Set-up – install the OS, database, and applications
  • Test Data Preparation – what kind of data is to be used



Tuesday 2 April 2013

Regression Testing


When errors occur in a program, they are rectified. To rectify these errors, changes are made to the program, and these changes may introduce other errors into the program. Therefore, all the previous test cases are tested again. This type of testing is called regression testing.
In a broader context, successful tests (of any kind) result in the discovery of errors, and errors must be corrected. Whenever software is corrected, some aspect of the software configuration (the program, its documentation, or the data that supports it) is changed. Regression testing is the activity that helps to ensure that changes (due to testing or for other reasons) do not introduce unintended behavior or additional errors.
Regression testing may be conducted manually, by re-executing a subset of all test cases or using automated capture/playback tools. Capture/playback tools enable the software engineer to capture test cases and results for subsequent playback and comparison.

The regression test suite (the subset of tests to be executed) contains three different classes of test cases:
  • A representative sample of tests that will exercise all software functions.
  • Additional tests that focus on software functions that are likely to be affected by the change.
  • Tests that focus on the software components that have been changed.

As integration testing proceeds, the number of regression tests can grow quite large. Therefore, the regression test suite should be designed to include only those tests that address one or more classes of errors in each of the major program functions. It is impractical and inefficient to re-execute every test for every program function once a change has occurred.

For regression testing, some test cases that have been executed on the old system are maintained, along with the output produced by the old system. These test cases are executed again on the modified system, and its output is compared with the earlier output to make sure that the system works as before on these test cases. This is frequently a major task when modifications are made to existing systems.


A consequence of this is that test cases for systems should be properly documented for future use. Often, when we test our programs, the test cases are treated as "throwaway" cases; after testing is complete, the test cases and their outcomes are discarded. With this practice, every time regression testing has to be done, the set of test cases must be re-created, resulting in increased cost. In fact, for many systems that are frequently changed, regression testing "scripts" are used to perform the regression testing automatically after changes. A regression testing script contains all the inputs given by the test cases and the output produced by the system for those test cases. These scripts are typically produced during system testing, as regression testing is generally done only for complete systems or subsystems. When the system is modified, the scripts are rerun and their outputs are compared with the outputs recorded in the scripts. Given the scripts, regression testing can be largely automated through the use of tools.
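
As a minimal sketch of such a script in Python (the system under test is represented by a stand-in function; the file name and inputs are hypothetical):

    import json

    def system_under_test(value):
        # Stand-in for the real program being regression-tested.
        return value.strip().lower()

    def record_baseline(inputs, path="baseline.json"):
        # Run once against the old, trusted version and store the outputs.
        baseline = {i: system_under_test(i) for i in inputs}
        with open(path, "w") as f:
            json.dump(baseline, f)

    def run_regression(path="baseline.json"):
        # Replay the stored inputs and compare against the stored outputs.
        with open(path) as f:
            baseline = json.load(f)
        return [i for i, expected in baseline.items()
                if system_under_test(i) != expected]

    record_baseline(["  Hello ", "WORLD"])
    print(run_regression())  # an empty list means no regressions detected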

Strategic Issues in Testing

Testing is a very important phase in the software development life cycle, but it may not be very effective if a proper strategy is not used. For the implementation of a successful software testing strategy, the following issues must be taken care of:

  • Before the start of the testing process, all the requirements must be specified in a quantifiable manner.
  • Testing objectives must be clarified and stated explicitly.
  • A proper testing plan must be developed.
  • Build "robust" software that is designed to test itself.
  • Use effective formal technical reviews as a filter prior to testing. Formal technical reviews can be as effective as testing in uncovering errors. For this reason, reviews can reduce the amount of testing effort that is required to produce high-quality software.
  • Conduct formal technical reviews to assess the test strategy and the cases themselves. Formal technical reviews can uncover inconsistencies, omissions, and outright errors in the testing approach. This saves time and also improves product quality.
  •  Develop a continuous improvement approach for the testing process. The test strategy should be measured. The metrics collected during testing should be used as part of a statistical process control approach for software testing.

System Testing


The final stage of the testing process should be System Testing. This type of test involves examination of the whole computer system: all the software components, all the hardware components, and any interfaces.
The whole computer-based system is checked not only for validity but also for whether it meets its objectives.

It should include recovery testing, security testing, stress testing and performance testing.

Recovery Testing

Recovery testing uses test cases designed to examine how easily and completely the system can recover from a disaster (power shut down, blown circuit, disk crash, interface failure, insufficient memory, etc.). It is desirable to have a system capable of recovering quickly and with minimal human intervention. It should also have a log of activities happening before the crash (these should be part of daily operations) and a log of messages during the failure (if possible) and upon re-start.

Security testing

Security testing involves testing the system in order to make sure that unauthorized personnel or other systems cannot gain access to the system and information or resources within it. Programs that check for access to the system via passwords are tested along with any organizational security procedures established.

Stress testing 

Stress testing encompasses creating unusual loads on the system in an attempt to break it. The system is monitored for performance loss and susceptibility to crashing during the load times. If it does crash as a result of the high load, that provides just one more recovery test.
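
A crude stress test can be scripted by firing many concurrent calls at the system and counting failures. A minimal sketch in Python; the operation is a stand-in, and a real load test would use a dedicated tool with realistic data:

    import time
    from concurrent.futures import ThreadPoolExecutor

    def operation(n):
        # Stand-in for a real request to the system under test.
        time.sleep(0.01)
        return n * n

    def stress(workers=50, calls=1000):
        start = time.perf_counter()
        errors = 0
        with ThreadPoolExecutor(max_workers=workers) as pool:
            futures = [pool.submit(operation, i) for i in range(calls)]
            for future in futures:
                try:
                    future.result(timeout=5)
                except Exception:
                    errors += 1
        elapsed = time.perf_counter() - start
        print(f"{calls} calls, {errors} errors, {elapsed:.2f}s total")

    stress()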


Performance testing

Performance testing involves monitoring and recording the performance levels during regular, low-, and high-stress loads. It tests the amount of resource usage under the conditions just described and serves as a basis for forecasting additional resources needed (if any) in the future. It is important to note that performance objectives should have been developed during the planning stage; performance testing assures that these objectives are being met. However, these tests may be run in the initial stages of production to compare actual usage to the forecasted figures.

Localization Testing


What Localization Testing means?

Localization testing is performed to ensure that the localized product is fully functional, linguistically accurate, and that no issues have been introduced during the localization process. It involves testing of the localized product in accordance with national language standards, searching for un-translated text in the user interface, verifying consistency of formats (date formats, number formats, etc.), verifying accordance with capitalization rules and proper use of alphabets, verification of correct use of currencies, etc.

GUI Testing

Graphical User Interface (GUI) testing ensures that the user interface of a product contains no defects such as truncated strings, overlapping controls, misaligned controls, duplicated hotkeys, etc. introduced during the localization phase.
GUI testing can be automated, reducing significantly the cost and duration of this type of testing.

User Assistance Testing

User Assistance Testing consists mainly of technical QA of a product's user assistance and on-line help. Our testers ensure that user assistance documentation is completely localized, the original layout is kept intact and that all external and internal hyperlinks work as intended.
This testing step includes all the stages of compiling, engineering, bug fixing and script generation of the user assistance content, and has become a largely automated yet complex activity. It answers client questions such as: How can I be sure my local-language versions and locales are correct and functional? How can I meet daily change-of-content QA demands? How can I verify links in large-content volumes? We use the latest methodologies and tools for content verification.

Internationalization (I18N) Testing

Internationalization testing aims to uncover international functionality issues prior to a product's global release. This technique tests whether the product was correctly adapted to work under different languages and regional settings (the ability to display accented characters, to run on non-English operating systems, to display the correct thousands and decimal separators, etc.).
This type of testing also includes pseudo-localization testing on the internationalized build to identify potential localization user interface concerns. It also helps uncover issues that could increase the costs of localization and future product support later on.
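
Some of these checks can be spot-checked programmatically. A minimal sketch in Python using the standard locale module; the locale names are platform-dependent assumptions (common on Linux) and may need adjusting on your system:

    import locale

    # 1234567.89 should render with locale-correct separators:
    # en_US -> 1,234,567.89    de_DE -> 1.234.567,89
    for name in ("en_US.UTF-8", "de_DE.UTF-8"):
        try:
            locale.setlocale(locale.LC_ALL, name)
        except locale.Error:
            print(f"{name}: locale not installed, skipping")
            continue
        print(name, locale.format_string("%.2f", 1234567.89, grouping=True))

    # Accented characters should round-trip through the product's encoding.
    sample = "café, straße, São Paulo, Çalışma"
    assert sample == sample.encode("utf-8").decode("utf-8")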