Friday 19 April 2013

MOBILE APP TESTING CHECKLIST II


Device specific checks

Can the app be installed on the device?

Does the app behave as designed/desired if there is an incoming call?

Does the app behave as designed/desired if there is an incoming SMS?

Does the app behave as designed/desired if the device is tilted?

Does the app behave as designed/desired if the device is shaken?

Does the app behave as designed/desired if a local message comes in from another app (think of: calendar reminders, to-do tasks, etc.)?

Does the app behave as designed/desired if a push message comes in from another app (think of: Twitter mentions, WhatsApp messages, Wordfeud invitations, etc.)?

Does the app behave as designed/desired if the charger is connected?

Does the app behave as designed/desired if the charger is disconnected?

Does the app behave as designed/desired if the device goes into sleep mode?

Does the app behave as designed/desired if the device resumes from sleep mode?

Does the app behave as designed/desired if the device resumes from lock screen?

Does the app interact with the GPS sensor correctly (switch on/off, retrieve GPS data)?

Is the functionality of all the buttons or keys on the device defined for this app?

Verify that buttons or keys that have no defined function cause no unexpected behaviour in the app when activated.

Does the app behave as designed/desired if the “Battery low” message is pushed?

Does the app behave as designed/desired if the sound on the device is turned off?

Does the app behave as designed/desired if the device is in airplane mode?

Can the app be uninstalled from the device?

Does the application function as expected after re-installation?

Can the app be found in the app store?

Can the app switch to different apps on the device through multitasking as designed/desired?

Are all touch screen positions (buttons) working when a screen protector is used?

In case there’s a physical “back” button available on the device, does the “back” button take the user to the previous screen?

In case there’s a physical “menu” button available on the device, does the menu button show the app’s menu?

In case there’s a physical “home” button available on the device, does the home button take the user back to the home screen of the device?

In case there’s a physical “search” button available on the device, does it take the user to some form of search within the app?

MOBILE APP TESTING CHECKLIST

The checklist comprises the following categories:


  • Network specific checks
  • Device specific checks
  • App UI checks
  • App specific checks (these are related to functionality that is frequently used in an app)
  • Store specific checks

Network Specific Checks

Does the app behave as per specification if connected to the internet via Wi-Fi?

Does the app behave as per specification if connected to the internet via 2G?

Does the app behave as per specification if connected to the internet via 3G?

How does the app behave if it is out of network coverage?

How does the app behave if the network coverage is weak?

Does the app behave as per specification if Airplane mode is activated while navigating through application screens?

Does the app behave as per specification if Airplane mode is activated while playing media content?

Does the app behave as per specification if Airplane mode is activated while initiating a call from the device?

Does the app behave as per specification if Airplane mode is activated while sending an SMS from the device?

Does the app resume working when it comes back into network coverage after being out of reach?

Are update transactions processed correctly after the connection is re-established?

Does the app still work correctly when tethering or otherwise connected to another device?

What is the behaviour of the app if the network switches between Wi-Fi, 3G and 2G?

Does the app use standard network ports (mail: 25, 143, 465, 993 or 995; HTTP: 80 or 443; SFTP: 22) to connect to remote services, as some providers block certain ports?
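
A quick way to verify from the test network whether one of these ports is reachable is a simple connection check, for example with netcat (a sketch; the host name below is only a placeholder, substitute the app's real backend host and port):
  nc -zv mail.example.com 465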

Thursday 18 April 2013

Steps for Connecting to the Remote Servers



 Pre-requisite:

1. Download the PuTTYgen and PuTTY from the URL: http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html
2. Generate the SSH Key Pair (Public Key & Private Key) using PuTTYgen by following the steps mentioned at URL:
3. Save the Public Key and Private Key. The public key needs to be on the server that you want to connect to, and the private key is used on the system from which you connect to the remote server.
Settings to be done in PuTTY: 

1. Sessions Settings
Host Name (or IP Address): builduser@build4.appcentral.com
Port: 22
Connection Type: SSH
2. Then go to Connection >> SSH >> Tunnels
Add new forwarded port:
Source port: 3306
Destination: localhost:3306
Click on the "Local" radio button.
Click "Add"
3. Then go to Connection >> SSH >> Auth
Browse to the private key in the field: Private key file for authentication
4. Go back to Session settings, type a name for Saved session and click "Save". Then click "Open"

From the next time you just need to select the saved session, click on Load and then Open, OR simply double-click on the session name to get connected.
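
For reference, the same connection and tunnel can be opened with a plain OpenSSH client instead of PuTTY (a sketch; it assumes the private key has been exported to OpenSSH format, which PuTTYgen can do via Conversions >> Export OpenSSH key, and the key path below is a placeholder):
  ssh -i /path/to/openssh_private_key -L 3306:localhost:3306 builduser@build4.appcentral.com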

Getting the Logs on the server: 
  1. Navigate to the log directory: cd /opt/ac/tomcat/logs (the log file is catalina.out)
  2. Run the command: ‘tail -n 1000 catalina.out >> Logs’ – (this command takes the last 1000 lines of catalina.out and appends them to a file named Logs in the current directory)
  3. Now use the command cat <filename> to read the contents of the file. In this case it is ‘cat Logs’
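Putting the three steps together, a typical session on the server looks like this (a sketch; the file name Logs is just the example used above):
  cd /opt/ac/tomcat/logs
  tail -n 1000 catalina.out >> Logs
  cat Logs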
 Other commands that can help : 
  1. To move up one directory: cd ..
  2. To see the files and folders in a particular directory: ls OR ls -l
  3. To get inside a folder: cd <foldername>
  4. To remove a file: rm <filename> (use rm -r <foldername> to remove a folder)
Using WinSCP to fetch files from the remote (Linux) server to your local system. 
  1. Download and Install WinSCP using the URL: http://winscp.en.softonic.com/universaldownloader-launch
  2. Provide the host name as used in the PuTTY configuration and supply the private key (the one used in the PuTTY configuration)
  3. Then connect. 
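
If WinSCP is not available, the same file can also be pulled from the command line with scp (a sketch; it assumes an OpenSSH-format private key, whose path below is a placeholder, and the Logs file created in the steps above):
  scp -i /path/to/openssh_private_key builduser@build4.appcentral.com:/opt/ac/tomcat/logs/Logs .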

Wednesday 17 April 2013

How to measure and analyze testing efficiency?


Measurements, metrics or stats are common terms you will hear in every management meeting. Some basic numbers that reflect the speed of testing, coverage of testing and efficiency of testing are described here. If all these indicators move up, we can be confident that testing efficiency is getting better.

Test planning rate (TPR). TPR = Total number of test cases planned / total person-hours spent on planning. This number indicates how fast the testing team thinks through, articulates and documents the tests.
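
For illustration (the numbers are made up): if the team plans 240 test cases in 30 person-hours, TPR = 240 / 30 = 8 test cases per person-hour.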

Bug Dispute Rate (BDR). BDR = Number of bugs rejected by development team / Number of total bugs posted by testing team. A high number here leads to unwanted arguments between the two teams.
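
For illustration (made-up numbers): if the testing team posts 200 bugs and the development team rejects 30 of them, BDR = 30 / 200 = 0.15, i.e. 15% of the reported bugs are disputed.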

Test execution rate (TER). TER = Total number of test cases executed / total person-hours spent on execution. This indicates the speed of the testers in executing the planned tests.
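
For illustration (made-up numbers): executing 450 test cases in 90 person-hours gives TER = 450 / 90 = 5 test cases per person-hour.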

Planning Miss (PM). PM = Number of ad-hoc test cases framed at the time of execution / Number of test cases planned before execution. This indicates whether the testers are able to plan the tests based on the available documentation and their level of understanding. This number should be as low as possible, but it is very difficult to bring it down to zero.
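
For illustration (made-up numbers): if 20 ad-hoc test cases are framed during execution against 400 planned test cases, PM = 20 / 400 = 0.05, i.e. a 5% planning miss.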

Requirements coverage (RC). The ideal goal is 100% coverage, but it is very hard to say how many test cases will cover 100% of the requirements. There is, however, a simple range you can assume. If we test each requirement in just 2 different ways - 1 positive and 1 negative - we need 2N test cases, where N is the number of distinct requirements. On average, most commercial app requirements can be covered with 8N test cases. So the chances of achieving 100% coverage are high if you try to test every requirement in 8 different ways. Not all requirements may need an eight-way approach.
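
For illustration: an application with 50 distinct requirements would need about 8 × 50 = 400 test cases under this rule of thumb, against a bare minimum of 2 × 50 = 100 (one positive and one negative test per requirement).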

There is a set of metrics that reflects the efficiency of the development team, based on the bugs found by the testing team. These metrics do not really reflect the efficiency of the testing team, but without the testing team they cannot be calculated. Here are a few of them.

Bug Fix Rate (BFR). BFR = Total number of hours spent on fixing bugs / total number of bugs fixed by the dev team. This indicates the speed of developers in fixing the bugs.
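
For illustration (made-up numbers): if the development team spends 120 hours fixing 60 bugs, BFR = 120 / 60 = 2 hours per bug.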

Bug Bounce Chart (BBC). BBC is not just a number, but a line chart. On the X axis, plot the build numbers in sequence; the Y axis shows how many New+ReOpen bugs are found in each build. Ideally this graph should keep dropping towards zero as quickly as possible. If instead we see a swinging pattern, like a sinusoidal wave, it indicates that new bugs are being injected build after build due to regression effects. After code freeze, product companies must keep a keen watch on this chart.

Number of re-opened bugs. This absolute number is an indicator of how many potential bad fixes or regression effects have been injected into the application by the development team. The ideal goal for this is zero.