Testing activities can be broken down into three phases that describe when the activities occur and who is responsible for executing the tests:

  • Implementation Testing (Stable Team Testers)
  • Release Testing (Enterprise Test Team)
  • User Acceptance Testing (Plans)

Implementation Testing

Implementation Testing is the first round of testing, and it happens in parallel with development. These activities are the responsibility of the Stable Team Testers. This phase focuses on testing the new functionality being developed (i.e. not regression testing).

There are a few types of testing that are performed during this phase:

  • Structured Testing (aka Test Cases)
  • Exploratory Testing (aka Ad Hoc)
  • Unit Testing - Performed by Stable Team Developers


Note: Exploratory Testing refers to manual testing that is performed without any pre-defined script or test case.


As development begins, the Test Case authoring process should occur in parallel. Even before any code is written, the Stable Team Testers can work from the same User Stories as the developers to identify edge cases and author Test Cases. As new functionality is deployed to dev/test environments, the Stable Team Testers can begin executing those Test Cases in combination with any Exploratory Testing activities.

Note: Edge Cases refers to testing scenarios that are just above/below significant thresholds/values. E.g. if the software has different expected behaviour when the claim amount is above $50,000, then you may define Test Cases for $49,999, $50,000, and $50,001. These are edge cases for that particular value.
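
For illustration, those boundary values lend themselves to a parameterized unit test of the kind the Stable Team Developers would write. The sketch below is hypothetical: the function requires_senior_approval and the behaviour at the $50,000 threshold stand in for whatever rule the User Story actually defines.

    import pytest

    CLAIM_THRESHOLD = 50_000  # hypothetical business rule taken from the User Story


    def requires_senior_approval(claim_amount: int) -> bool:
        """Hypothetical rule: claims above $50,000 need senior approval."""
        return claim_amount > CLAIM_THRESHOLD


    # The edge cases sit just below, exactly at, and just above the threshold.
    @pytest.mark.parametrize(
        "claim_amount, expected",
        [
            (49_999, False),  # just below the threshold
            (50_000, False),  # exactly at the threshold (not "above")
            (50_001, True),   # just above the threshold
        ],
    )
    def test_senior_approval_threshold(claim_amount, expected):
        assert requires_senior_approval(claim_amount) == expected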


Value is realized in 3 ways during this phase:

  • Bugs Found – This is the obvious value that testing provides, preventing defects from reaching production.
  • Test Cases Authored – These have value that will be realized down the road when they are re-used for regression testing.
  • Passing Tests – Each passing test has value as it builds confidence in the quality of the software.


All three of these items should be tracked so they can be used for reporting and analysis.

Any bugs discovered should be logged in TFS.  If they are discovered as a result of running a Test Case they should be linked to the Test Case (MTM will do this automatically).  If they are discovered as a result of Exploratory Testing, a Test Case should be created to capture the repro steps and the defect and Test Case should be linked.
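
When a bug is filed from a failed Test Case run, MTM creates this link automatically, as noted above. Purely for illustration, the sketch below shows what creating an equivalent Bug and its "Tested By" link might look like through the TFS work item REST API from a script. The server URL, personal access token, work item IDs, api-version and link-type name are assumptions and may differ between TFS versions.

    import json

    import requests

    TFS_PROJECT_URL = "https://tfs.example.com/DefaultCollection/MyProject"  # placeholder
    PAT = "personal-access-token"                                            # placeholder
    TEST_CASE_ID = 1234                                                      # placeholder Test Case work item id

    # JSON-patch document describing the new Bug work item.
    patch = [
        {"op": "add", "path": "/fields/System.Title",
         "value": "Claim over $50,000 not routed for senior approval"},
        {"op": "add", "path": "/fields/Microsoft.VSTS.TCM.ReproSteps",
         "value": "See the linked Test Case for repro steps."},
        # Link the Bug to the Test Case that discovered it ("Tested By" link).
        {"op": "add", "path": "/relations/-",
         "value": {"rel": "Microsoft.VSTS.Common.TestedBy-Forward",
                   "url": f"{TFS_PROJECT_URL}/_apis/wit/workItems/{TEST_CASE_ID}"}},
    ]

    response = requests.post(
        f"{TFS_PROJECT_URL}/_apis/wit/workitems/$Bug?api-version=4.1",
        data=json.dumps(patch),
        headers={"Content-Type": "application/json-patch+json"},
        auth=("", PAT),
    )
    response.raise_for_status()
    print("Created bug", response.json()["id"])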

All Test Cases authored should be linked to one or more User Stories which they test.  This will be used for reporting on test coverage by User Story, and also for progress and quality reporting against User Stories based on passing tests.

Passing Tests will automatically be captured in the form of Test Results in the MTM tool.  Any exploratory testing should also be performed in the context of an MTM Exploratory Test session to enable data capture about the time spent doing different testing activities.

To help build discipline in the team and its processes, the concept of “exit criteria” should be used. Before any User Story (or defect) can be marked complete, it must have a checklist of activities that have been performed. At a minimum we should implement the following Exit Criteria (a sketch of how they could be checked automatically follows the list):

  • Every User Story must have some Test Cases
  • All Test Cases linked to that User Story must be passing
  • All Defects must have at least one Test Case
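
As a rough illustration of how these Exit Criteria could be checked automatically, the fragment below assumes the linked Test Cases and their latest outcomes have already been pulled out of TFS/MTM; the data shapes and outcome strings are hypothetical.

    from dataclasses import dataclass, field
    from typing import List


    @dataclass
    class TestCase:
        id: int
        latest_outcome: str  # "Passed", "Failed", "NotRun", ... (hypothetical values)


    @dataclass
    class WorkItem:
        id: int
        kind: str  # "User Story" or "Defect"
        linked_test_cases: List[TestCase] = field(default_factory=list)


    def unmet_exit_criteria(item: WorkItem) -> List[str]:
        """Return the list of unmet Exit Criteria; an empty list means the item may be closed."""
        problems = []
        # Every User Story must have Test Cases; every Defect needs at least one.
        if not item.linked_test_cases:
            problems.append(f"{item.kind} {item.id} has no linked Test Cases")
        # All Test Cases linked to a User Story must be passing.
        if item.kind == "User Story":
            failing = [tc.id for tc in item.linked_test_cases if tc.latest_outcome != "Passed"]
            if failing:
                problems.append(f"User Story {item.id} has non-passing Test Cases: {failing}")
        return problems


    # Usage: block completion while any criterion is unmet.
    story = WorkItem(101, "User Story", [TestCase(1234, "Passed"), TestCase(1235, "Failed")])
    for problem in unmet_exit_criteria(story):
        print("Exit criteria not met:", problem)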




See Also: How to Write High Quality Test Cases

See Also: Creating Implementation Test Plans in MTM

See Also: Executing Tests with MTM



Release Testing

A Release typically groups a number of User Stories – often from multiple teams – into a single deliverable. Each Release will have its own testing cycle, which will be owned by the Enterprise Test Team. For each Release a Test Plan will be created that determines the set of Test Cases planned for execution. In an ideal world every Test Case would be run against every Release; due to constraints of time and manpower, this is often not feasible. The Test Planning activity determines which subset of the available Test Cases will be run for each Release, driven by which tests are judged most valuable given the nature of the changes included in the Release and the amount of time available for testing. For example, if it is an urgent Hot Fix because Production is down, the Test Planning step should not be skipped, but the result may be that the release is so urgent we plan to run zero tests – or we decide to deploy immediately and still plan to run some tests post-deployment.

The first activity of this phase is to determine a Test Plan.  This activity should occur for every release, and controls should be in place to enforce this process and report on it.  The Test Plan determines which Test Cases will be executed as part of the Release Testing phase.  Several pieces of information should be considered, and tradeoffs made when determining the Test Plan:

  • Urgency – How urgently does this release need to be deployed.
  • Risk – Based on the nature of the changes, what is the risk that they may have negative or wide-ranging effects on the system.
  • Test Relevance – Tests that are most relevant to the changes included in this Release should get preference.  The Test Cases directly linked to the User Stories implemented in this release should usually be included.  Other regression test cases that test related areas of the system make good candidates for inclusion also.
  • Test Fragility – Some areas of the system tend to be more fragile, and susceptible to regression than others.  This should be visible by viewing the history of Test Cases, and determining which tests have a history of failing often, or have many bugs linked to them.  These Tests make good candidates for inclusion in the Test Plan.
  • Automated Tests – Due to the low cost of execution, all automated tests will typically be included in every release Test Plan.


A typical Release Test Plan will include all Test Cases for new functionality delivered in this release, all Automated Tests, and some subset of regression tests.
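
Selecting that subset is ultimately a judgment call, but it can be supported by a simple scoring pass over the candidate Test Cases. The sketch below is one hypothetical way to weigh the factors listed above; the weights, field names, and the idea of a fixed "manual test budget" are illustrative assumptions, not a prescribed formula.

    from dataclasses import dataclass


    @dataclass
    class CandidateTest:
        id: int
        is_automated: bool
        tests_story_in_release: bool   # linked to a User Story shipped in this Release
        recent_failure_rate: float     # 0.0 - 1.0, from the Test Case result history
        linked_bug_count: int          # bugs previously linked to this Test Case


    def plan_score(tc: CandidateTest) -> float:
        """Hypothetical scoring: higher scores are stronger candidates for the Release Test Plan."""
        score = 0.0
        if tc.tests_story_in_release:
            score += 10                        # Test Relevance: directly tests a change in this Release
        score += 5 * tc.recent_failure_rate   # Test Fragility: fails often
        score += min(tc.linked_bug_count, 5)  # Test Fragility: history of linked bugs
        return score


    def build_release_plan(candidates, manual_budget):
        """Automated tests are always included; manual tests compete for the available time budget."""
        automated = [tc for tc in candidates if tc.is_automated]
        manual = sorted((tc for tc in candidates if not tc.is_automated),
                        key=plan_score, reverse=True)
        return automated + manual[:manual_budget]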

Some may wonder what the value is in running the same Test Cases that have already passed in the Implementation Testing phase. During Implementation Testing, the environment where the tests were run may include additional features or other teams' code that is not part of this release. If a test passed only because some code was present in the dev environment that will not be part of this release, it is important to discover that prior to deployment. In addition, Functional Tests are usually run until they pass and are not run again during that phase, meaning that subsequent changes that break the test after the passing run may go undiscovered (this includes the merge mistakes that have occurred at CoreLink).

During Release Testing it's important to ensure isolation from changes not included in this release.  The environment used should be as close to current production + the candidate release as possible.  This implies that changes not included in this release should not be present in the release testing environment, and the environment configuration and infrastructure should be as close to production as possible.

In the event that Release Testing discovers quality issues, they should be logged as defects and linked to the Test Case that discovered them. They will be triaged, and a decision will be made whether the Release should proceed with the defects unfixed (presumably scheduled to be fixed in a future release), or whether the Release should be rejected, code changes made, and a new Release Candidate created. Should a new Release Candidate be created, the Release Testing process will start over and a new Release Testing Plan will be created. Typically, if a release is rejected, only a very specific/narrow change is made, and the Release Testing Plan for the second Release Candidate requires significantly fewer tests to build confidence (of course this will depend on the nature of the changes between Release Candidate #1 and #2, and all the other tradeoffs discussed above).



See Also: Creating Release Test Plans in MTM

See Also: Test Automation
