Session Based Testing: A feasible solution for performing exploratory testing

Exploratory testing is a manual testing approach in which software is tested without a set of test cases defined in advance. During an exploratory test session, you are not restricted to a script or a set of predetermined steps. However, when a QA engineer dives into exploratory testing ad hoc, without following any structure, it is very difficult to quantify what work was done, which parts of the software were tested, and how much time was actually spent testing.


Session Based Testing (SBT) is a framework that supports exploratory testing. From the semi-structured interviews I performed, I learned that QA engineers were doing a lot of exploratory testing during the sprint but were not using SBT. This was also evident from the survey, where about 80% of all respondents, including QA engineers and the team members working with them, answered “No” to the question: Do you use Session Based Testing?
The SBT framework helps bridle the randomness around exploratory testing without compromising the exploratory methodology itself. It also overcomes major issues associated with exploratory testing, such as the lack of structure that otherwise gives no visibility into the progress made and the test coverage achieved (Saxena, 2012).

Basic Elements of Session Based Testing

Charter

is the goal or agenda associated with the test session.
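
A charter might, for example, read: “Explore the checkout flow with invalid payment data to uncover error-handling gaps” (an invented example for illustration).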

Session

is an uninterrupted period of testing time, usually time-boxed to less than two hours.

Session Report

is a record of the time-boxed test session. It contains the charter, the area covered, how testing was conducted, a list of bugs, risks and issues found, documentation of data files the tester used or created during the session, the percentage of time spent on the charter versus investigating new issues, the session start time, the duration, and so on. The session report can contain more information if required and can be customized to the needs of the project.
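
To make this concrete, a bare-bones session report template could look like the sketch below; the field names are only illustrative and should be adapted per project:

CHARTER: <goal of the session>
AREAS: <parts of the product covered>
TESTER: <name>
START: <date and time>
DURATION: <minutes>
TASK BREAKDOWN: <% of time on the charter> / <% on investigating new issues>
DATA FILES: <files used or created during the session>
BUGS: <bugs found, with IDs>
ISSUES: <risks, questions and obstacles>
NOTES: <how testing was conducted>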

Debrief

is a short discussion between the Product Owner and the QA engineer about the session report.

Parsing Results

in exploratory testing using SBT is performed with a standardized session report. Because the report follows a fixed format, it can be parsed with a software tool such as Microsoft Excel and stored as aggregated data for reporting and metrics.
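
As a rough sketch of how such parsing could be automated outside of a spreadsheet (the folder layout and field names below are assumptions matching the template above, not part of the original setup), a small Node.js script could sum up the time spent per charter:

var fs = require('fs'),
    path = require('path'),
    reportDir = './session-reports',  // assumed folder with one *.txt session report per session
    totals = {};                      // minutes spent per charter

fs.readdirSync(reportDir).forEach(function (file) {
    var text = fs.readFileSync(path.join(reportDir, file), 'utf8'),
        charter = (/^CHARTER:\s*(.+)$/m.exec(text) || [])[1],
        minutes = parseInt((/^DURATION:\s*(\d+)/m.exec(text) || [])[1], 10);
    if (charter && !isNaN(minutes)) {
        totals[charter] = (totals[charter] || 0) + minutes;
    }
});

// Print the aggregated time per charter.
Object.keys(totals).forEach(function (charter) {
    console.log(charter + ': ' + totals[charter] + ' minutes');
});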

Planning

with the help of SBT, QA engineers can plan test sessions to fit the needs of the project. Charters can be added or dropped based on tests executed and changing requirements. (Saxena, 2012)

Representation of Session Report

Proposed Usage of Session Based Testing (SBT)

SBT is a feasible solution for performing exploratory testing. Managers in the company like to see metrics because metrics show progress in the project, and with SBT, QA engineers would be able to provide such metrics on the testing effort. There are several tools on the market that can assist with SBT. I also performed a proof of concept with an open-source tool called “sessionweb”. However, the most feasible tools for a C#/.NET company could be “Microsoft Test Manager” or “Bonfire”, also known as “JIRA Capture”, which is an add-on for “Jira”. Both tools were already available in the company I work for: “Jira” is already used as the project tracker, and all QA engineers have Microsoft Visual Studio installed on their machines, with which “Microsoft Test Manager” comes pre-installed. Given the benefits of structured exploratory testing, the usage of Session Based Testing is proposed as a feasible solution.


Capture Screenshots with PhantomJS for RWD Tests

Here is a little script that can be used to capture screenshots of a responsive web page at different viewport sizes and save them as *.png files.


// Viewport sizes (width, height) to capture.
var async = require('async'),
    sizes = [
        [320, 480],
        [320, 568],
        [600, 1024],
        [1024, 768],
        [1280, 800],
        [1440, 900]
    ];

// Opens the page at the given viewport size, renders a PNG and tells async to continue.
function capture(size, callback) {
    var page = require('webpage').create();
    page.viewportSize = {
        width: size[0],
        height: size[1]
    };
    page.zoomFactor = 1;
    page.open('http://werkstatt.autoscout24.de', function (status) {
        var filename = size[0] + 'x' + size[1] + '.png';
        page.render('./screenshots/' + filename);
        page.close();
        callback();
    });
}

// Capture the screenshots one after another, then exit PhantomJS.
async.eachSeries(sizes, capture, function (e) {
    if (e) console.log(e);
    console.log('Captured');
    phantom.exit();
});
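
Assuming the script is saved as capture.js (the name is arbitrary) and the async library can be resolved by PhantomJS’s require (for example by installing it with npm into a node_modules folder next to the script), it can be started with:

phantomjs capture.js

The screenshots are written to the ./screenshots folder and named after the viewport size, e.g. 320x480.png.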

Have fun

Behavior Driven Development with the 3 Amigos Task

In Scrum, a story is presented in planning with a story template that would look like:

As a [X]

I want [Y]

So that [Z]

This can be read as follows: the person or role “X” wants the feature “Y” so that he or she gets the benefit “Z”. But this format can often lead to misunderstandings, as depicted in the figure below:

Representation of Communication Gap in a Story Grooming Meeting

Everybody in the team could have a different understanding of the story being presented in the sprint planning and could de-scope some of the more esoteric requirements (North, 2006). This misunderstanding could be avoided with the help of sketches, which could be presented in the sprint planning as depicted in the figure below.

But a more feasible solution to this situation is the use of a domain-specific language (DSL). Every QA engineer knows that a story’s behavior is simply its acceptance criteria: if the system fulfills all acceptance criteria it behaves correctly, and if it does not, it behaves incorrectly. Therefore a QA engineer needs a template to capture a story’s acceptance criteria, which can be done with Behavior Driven Development (BDD) (North, 2006).

Representation of Good Communication with Sketches

BDD is usually done in a DSL, a very English-like language that helps domain experts understand the implementation rather than exposing code-level tests. It is defined in the GWT format: GIVEN, WHEN & THEN (What is TDD, BDD & ATDD?, 2012). The BDD template is loose yet structured enough that a story can be broken into different scenarios based on its acceptance criteria. A scenario template looks like:

Given some initial context (the givens),

When an event occurs,

Then ensure some outcomes.
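
For illustration, a filled-in scenario (the feature here is invented, not taken from the project) could read:

Given a registered user whose credit card has expired,

When the user tries to complete the checkout,

Then an error message is shown and the order is not placed.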

The QA engineer could develop these scenarios from the acceptance criteria in collaboration with the Product Manager/Owner. The scenarios with “given”, “when an event”, “then an outcome” are fine-grained enough to be represented directly in code. The QA engineer could collaborate with the developer to represent these scenarios in code, which would then become part of the acceptance test for the story. A simple representation of the scenarios in coded form would look like:

// "Given": sets up the initial context for the scenario.
public class SomeInitialContext implements Given {
    public void setup(SomeInitialContext someInitialContext) {
        ...
    }
}

// "When": triggers the event.
public class AnEventOccurs implements When {
    public void setup(AnEvent anEvent) {
        ...
    }
}

// "Then": verifies the expected outcomes.
public class EnsureSomeOutcomes implements Then {
    public void setup(Outcome outcome) {
        ...
    }
}

Test Driven Development (TDD)

BDD is test-driven development (TDD) in many ways, except that the word “test” is replaced by the word “behavior” and it is aimed at a broader audience that includes Product Owners and stakeholders. In both methods of software development, the tests are written up front and made executable, then the unit is implemented, and finally the implementation is verified by making the tests pass.

Implementation of the 3 Amigos Task

The decision to implement the 3 Amigos task depends solely on whether the team wants it. In my case the Scrum team wanted to try something new in the field of testing. The following process evolved, loosely based on “Introducing the Three Amigos” by Ryan Thomas Hewitt. It was accepted as a sprint task and was defined and adopted by the Scrum team. The “3 Amigos” were the Product Manager/BA, the QA engineer, and a developer, who collaborated to define the scenarios from the acceptance criteria of a story.

Steps involved in the 3 Amigos task:

  • For every new story, a “3 Amigos” task is defined in the sprint planning meeting.
  • The “3 Amigos” task is considered the first task for every story in the sprint.
  • Immediately after planning, the Product Owner, the QA engineer, and a developer meet and collaborate to define the scenarios.
  • The scenarios are represented as code by a pair of developers or by a developer-QA engineer pair.
  • Development of the production code starts in parallel with, or after, the implementation of the scenarios as executable acceptance tests.

Advantages of Performing the 3 Amigos Task:

  • Performing the “3 Amigos” task within the sprint showed a high level of collaboration between the three parties involved in the sprint: the Product Owner/Manager/BA, the developers, and the QA engineer.
  • Performing this task also means that the acceptance tests were written and automated even before coding started.
  • The scenarios were written in a domain-specific language (DSL), which means everybody in the team could understand them and no coding skills were required.
  • The QA engineer had more time for exploratory testing because the story acceptance tests were already automated.
  • The story was accepted automatically when all acceptance tests, represented as scenarios, were successful.

Future Experimental Work

  • The “3 Amigos” task could become a pre-planning task, where the QA engineer collaborates with the Product Manager and defines scenarios for each acceptance criterion.
  • The scenarios could then be presented directly in the sprint planning meeting to the whole team.