Wednesday, 17 April 2013

How to measure and analyze testing efficiency?


Measurements, metrics, stats - these are common terms you would hear in every management meeting. Some basic numbers that reflect the speed, coverage, and efficiency of testing are described here. If all these indicators move in the right direction, we can be confident that testing efficiency is getting better.

Test planning rate (TPR). TPR = Total number of test cases planned / total person-hours spent on planning. This number indicates how fast the testing team thinks through, articulates, and documents the tests.

Bug Dispute Rate (BDR). BDR = Number of bugs rejected by the development team / total number of bugs posted by the testing team. A high number here leads to unwanted arguments between the two teams.

Test execution rate (TER). TER = Total number of test cases executed / total person-hours spent on execution. This indicates the speed of the testers in executing the planned tests.

Planning Miss (PM). PM = Number of ad-hoc test cases framed at the time of execution / number of test cases planned before execution. This indicates whether the testers are able to plan the tests based on the available documentation and their level of understanding. This number must be as low as possible, but it is very difficult to bring it down to zero.

Requirements coverage (RC). The ideal goal is 100% coverage, but it is very tough to say how many test cases will cover 100% of the requirements. There is, however, a simple range you can assume. If we test each requirement in just 2 different ways - 1 positive and 1 negative - we need 2N test cases, where N is the number of distinct requirements. On average, most commercial application requirements can be covered with 8N test cases. So the chances of achieving 100% coverage are high if you try to test every requirement in 8 different ways. Not all requirements may need an eight-way approach, though.
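To make the formulas concrete, here is a minimal Python sketch (my own illustration, not from the original post) that computes the five numbers above; all input figures are made up:

# Minimal sketch of the tester-side metrics described above.
# Every input number below is hypothetical, purely for illustration.

def test_planning_rate(cases_planned, planning_hours):
    # TPR = test cases planned / person-hours spent on planning
    return cases_planned / planning_hours

def bug_dispute_rate(bugs_rejected, bugs_posted):
    # BDR = bugs rejected by the dev team / total bugs posted by testers
    return bugs_rejected / bugs_posted

def test_execution_rate(cases_executed, execution_hours):
    # TER = test cases executed / person-hours spent on execution
    return cases_executed / execution_hours

def planning_miss(adhoc_cases, planned_cases):
    # PM = ad-hoc cases framed during execution / cases planned earlier
    return adhoc_cases / planned_cases

def coverage_budget(num_requirements, ways=8):
    # Rule-of-thumb budget: 8N test cases for N distinct requirements
    return num_requirements * ways

print("TPR:", test_planning_rate(240, 60))   # 4.0 cases per hour
print("BDR:", bug_dispute_rate(12, 150))     # 0.08
print("TER:", test_execution_rate(240, 48))  # 5.0 cases per hour
print("PM :", planning_miss(20, 240))        # ~0.083
print("8N budget for 50 requirements:", coverage_budget(50))  # 400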

There is a set of metrics that reflect the efficiency of the development team, based on the bugs found by the testing team. These metrics do not really reflect the efficiency of the testing team, but without the testing team they cannot be calculated. Here are a few of them.

Bug Fix Rate (BFR). BFR = Total number of hours spent on fixing bugs / total number of bugs fixed by the dev team. This indicates the average time developers take to fix a bug - the lower the number, the faster the fixes.

Bug Bounce Chart (BBC). BBC is not just a number, but a line chart. On the X axis, we plot the build numbers in sequence. The Y axis shows how many New+ReOpen bugs are found in each build. Ideally this graph must keep dropping towards zero, as quickly as possible. But if we see a swinging pattern, like a sinusoidal wave, it indicates that new bugs are getting injected build over build due to regression effects. After code-freeze, product companies must keep a keen watch on this chart.
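Since BBC is a chart rather than a number, here is a short matplotlib sketch of what a bouncing pattern could look like; the build data is invented for illustration:

# Bug Bounce Chart: build numbers on X, New+ReOpen bug counts on Y.
# The counts below are hypothetical; pull real ones from your bug tracker.
import matplotlib.pyplot as plt

builds = list(range(1, 11))
new_plus_reopened = [42, 35, 30, 38, 24, 31, 18, 22, 9, 4]

plt.plot(builds, new_plus_reopened, marker="o")
plt.xlabel("Build number")
plt.ylabel("New + ReOpen bugs")
plt.title("Bug Bounce Chart (BBC)")
plt.axhline(0, color="gray", linewidth=0.5)  # the ideal target: zero
plt.show()

A healthy chart drops steadily toward zero; the upswings at builds 4, 6, and 8 above are exactly the bounce pattern that signals regression effects.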

Number of re-opened bugs. This absolute number indicates how many potential bad fixes or regression effects were injected into the application by the development team. The ideal goal here is zero.
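Both developer-side indicators are easy to compute if your bug tracker can export per-bug data. A small sketch, using an assumed record layout (the field names are mine, not a real tracker schema):

# BFR and re-opened count from hypothetical bug-tracker records.
bugs = [
    {"id": 1, "fix_hours": 3.0, "fixed": True,  "reopen_count": 0},
    {"id": 2, "fix_hours": 8.0, "fixed": True,  "reopen_count": 1},
    {"id": 3, "fix_hours": 1.5, "fixed": True,  "reopen_count": 0},
    {"id": 4, "fix_hours": 0.0, "fixed": False, "reopen_count": 0},
]

fixed = [b for b in bugs if b["fixed"]]

# BFR = total hours spent fixing / total bugs fixed (average hours per fix)
bfr = sum(b["fix_hours"] for b in fixed) / len(fixed)

# Re-opened bugs: fixes that bounced back at least once (ideal: zero)
reopened = sum(1 for b in bugs if b["reopen_count"] > 0)

print("BFR: %.2f hours per fixed bug" % bfr)  # 4.17
print("Re-opened bugs:", reopened)            # 1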

Ten Software Testing Myths


10. The tester’s task is easy: he should merely translate requirements into test cases, write and execute them, and additionally log some bugs.

9. Every test case is documented. Otherwise, how on earth can we expect to do regression testing and in general repeat testing?

8. Test case reviews are a one-time effort. All you have to do is take an artifact after it is completed and verify that it is correct. Test case reviews, for example, should merely verify that *all* requirements are covered by test cases and EVERY REQUIREMENT is COVERED by AT LEAST ONE TEST CASE.

7. Software Testing should be like manufacturing. Each of us is a robot in an assembly line. Given a certain input, we should be able to come up automatically with the right output. Execute a set of test cases (say, 100 test cases a day) and report pass/fail status.

6. Software Testing has nothing to do with creativity. Creativity – what? The only part which requires creativity is designing your assembly line of test case design. From that point on, everyone should just be obedient.

5. Creativity and discipline cannot live together. Creativity equals chaos. [This one remains unchanged from original list of software development myths]

4. The answer to every challenge we face in the software industry lies in defining a process. That process defines the assembly line without which we are doomed to work in a constant state of chaos. [BIG ONE …This one remains unchanged from original list of software development myths]

3. Processes have nothing to do with people. You are merely defining inputs and outputs for different parts of your machine.

2. If a process is not 100% repeatable, it is not a process. Letting people adapt the process and do “whatever they want” is just going back to chaos again.

1. Quality is all about serving the customer. Whatever the customer wants, he should get. Things that don’t concern your customer should not be of interest to you.

Tuesday, 16 April 2013

Software Testing Terminology


Error - The difference between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition.

Fault - An incorrect step, process, or data definition in a computer program.

Debug - To detect, locate, and correct faults in a computer program.

Failure - The inability of a system or component to perform its required functions within specified performance requirements. A failure is the manifestation of a fault.

Testing - The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs) and to evaluate the features of the software items.

Static analysis - The process of evaluating a system or component based on its form, structure, content, or documentation.

Dynamic analysis - The process of evaluating a system or component based on its behavior during execution.

Correctness - (1) The degree to which a system or component is free from faults in its specification, design, and implementation. (2) The degree to which software, documentation, or other items meet specified requirements. (3) The degree to which software, documentation, or other items meet user needs and expectations, whether specified or not.

Verification - (1) The process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase. (2) Formal proof of program correctness.

Validation - The process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements.

Tip to create a workflow in JIRA quickly


For my colleagues who want to quickly create a new workflow and don't want to touch XML or property files.

On Administrator \ Workflows

If you don't have a default workflow, create one based on JIRA's default workflow. It will show its status as "Active", so we cannot update its scheme or workflow steps.
Copy this workflow - the copy has the status "Inactive", so we can update it.
Go to the "Steps" link and add the new statuses and transitions your workflow requires.

On Administrator \ Workflow Schemes

Create a new workflow scheme, then click the "Workflows" link of the scheme you just created.
Assign a workflow to the scheme: pick the issue type "Bug" and select our workflow from the list.
Assign workflows to the other issue types (Task, New Feature, Improvement) using either the default workflow (named JIRA) or your own workflow.

Now our workflow scheme is ready to be associated with our project.
Go to your project's admin panel and set the workflow scheme there.
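As a side note, newer JIRA versions also expose workflows over a REST API, so you can check the result of the UI steps above with a few lines of Python. Treat the endpoint and response fields here as assumptions and verify them against the documentation for your JIRA version; the server URL and credentials are placeholders:

# List workflows via the JIRA REST API (endpoint as documented for
# JIRA 5.x/6.x; verify against your server before relying on it).
import requests

JIRA_BASE = "https://jira.example.com"  # hypothetical server URL
AUTH = ("admin", "secret")              # placeholder credentials

resp = requests.get(JIRA_BASE + "/rest/api/2/workflow", auth=AUTH)
resp.raise_for_status()

for wf in resp.json():
    print(wf["name"], "-", wf.get("description", ""))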