The Testing Mindset
There is a particular philosophy that accompanies “good testing”.
A professional tester approaches a product with the attitude that it is already broken: it has defects, and it is their job to discover them. They assume the product or system is inherently flawed and see their role as ‘illuminating’ those flaws.
This approach is necessary in testing.
Designers and developers approach software with an optimism based on the assumption that the changes they make are the correct solution to a particular problem. But they are just that – assumptions.
Without being proved they are no more correct than guesses. Developers often overlook fundamental ambiguities in requirements in order to complete the project; or they fail to recognise them when they see them. Those ambiguities are then built into the code and represent a defect when compared to the end-user's needs.
By taking a sceptical approach, the tester offers a balance.
The tester takes nothing at face value. The tester always asks the question “why?” They seek to drive out certainty where there is none. They seek to illuminate the darker parts of the project with the light of inquiry.
Sometimes this attitude can bring conflict with developers and designers. But developers and designers can be testers too! If they can adopt this mindset for a certain portion of the project, they can offer that critical appraisal that is so vital to a balanced project. Recognising the need for this mindset is the first step towards a successful test approach.
Test Early, Test Often
There is an oft-quoted truism of software engineering that states: “a bug found at design time costs ten times less to fix than one found in coding and a hundred times less than one found after launch”. Barry Boehm, the originator of this idea, actually quotes ratios of 1:6:10:1000 for the costs of fixing bugs in the requirements, design, coding and implementation phases respectively.
If you want to find bugs, start as early as possible.
That means unit testing (q.v.) for developers, integration testing during assembly and then system testing - in that order of priority! This is a well-understood tenet of software development that is simply ignored by the majority of software development efforts.
Nor is a single pass of testing enough.
Your first pass at testing simply identifies where the defects are. At the very least, a second pass of (post-fix) testing is required to verify that defects have been resolved. The more passes of testing you conduct, the more confident you become and the more you should see your project converge on its delivery date. As a rule of thumb, anything less than three passes of testing in any phase is inadequate.
Fixes must be retested to ensure that issues have been resolved before development can progress. Retesting, then, is the act of repeating a test to verify that a found defect has been correctly fixed.
Regression testing on the other hand is the act of repeating other tests in 'parallel' areas to ensure that the applied fix or a change of code has not introduced other errors or unexpected behaviour.
For example, if an error is detected in a particular file handling routine then it might be corrected by a simple change of code. If that code, however, is utilised in a number of different places throughout the software, the effects of such a change could be difficult to anticipate. What appears to be a minor detail could affect a separate module of code elsewhere in the program. A bug fix could in fact be introducing bugs elsewhere.
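The distinction can be sketched in a few lines of test code. The routine and test values below are hypothetical, invented purely to illustrate the idea of a shared file handling routine: a retest repeats the test that exposed the original defect, while regression tests repeat tests in 'parallel' areas that also depend on the changed code.

```python
# Hypothetical shared routine: a fix here can ripple into any caller,
# so we retest the fix AND re-run tests over dependent behaviour.

def normalise_path(path):
    # Fixed routine: collapse backslashes to forward slashes and
    # strip any trailing separator.
    path = path.replace("\\", "/")
    return path.rstrip("/") or "/"

# Retest: repeat the test that originally exposed the defect.
assert normalise_path("data\\logs\\") == "data/logs"

# Regression tests: repeat tests in 'parallel' areas to check that
# the fix has not introduced unexpected behaviour elsewhere.
assert normalise_path("/") == "/"          # root path must survive
assert normalise_path("a/b/c") == "a/b/c"  # clean input unchanged
```

The regression assertions deliberately cover inputs the fix was never aimed at; that is the whole point of the exercise.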
You would be surprised to learn how common this actually is. In empirical studies it has been estimated that up to 50% of bug fixes actually introduce additional errors in the code. Given this, it's a wonder that any software project makes its delivery on time.
Better QA processes will reduce this ratio but will never eliminate it. Programmers risk introducing casual errors every time they place their hands on the keyboard. An inadvertent slip of a key that replaces a full stop with a comma might not be detected for weeks but could have serious repercussions.
Regression testing attempts to mitigate this problem by assessing the ‘area of impact’ affected by a change or a bug fix to see if it has unintended consequences. It verifies known good behaviour after a change.
White-Box vs Black-Box Testing
Testing of completed units of functional code is known as black-box testing because testers treat the object as a black box. They concern themselves with verifying specified inputs against expected outputs, without worrying about the logic of what goes on in between.
User Acceptance Testing (UAT) and System Testing are classic examples of black-box testing.
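A minimal black-box sketch, using a hypothetical pricing function and tax rate invented for illustration: the test knows only the specified input and the expected output, and makes no reference to the code inside.

```python
# Hypothetical unit under test; the tester need not see this body.
def price_with_tax(net):
    return round(net * 1.20, 2)  # assumed 20% rate, for illustration

# Black-box tests: specified input -> expected output, nothing more.
assert price_with_tax(10.00) == 12.00
assert price_with_tax(0.00) == 0.00
```

If the body of price_with_tax were rewritten entirely, these tests would remain valid - which is exactly the property that makes black-box tests suitable for acceptance and system testing.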
White-box or glass-box testing relies on analysing the code itself and the internal logic of the software. White-box testing is often, but not always, the purview of programmers. It uses techniques which range from highly technical, technology-specific testing through to practices like code inspections.
Although white-box techniques can be used at any stage in a software product's life cycle, they tend to be found in unit testing activities.
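By contrast with the black-box approach, a white-box tester reads the code and derives tests from its internal structure. In the sketch below (the function and its thresholds are hypothetical), each test is chosen to exercise a specific branch of the routine:

```python
# Hypothetical routine; the tests below are derived by reading its
# branches, aiming to execute every path at least once.
def classify_age(age):
    if age < 0:
        raise ValueError("age cannot be negative")  # error path
    elif age < 18:
        return "minor"                              # branch 1
    else:
        return "adult"                              # branch 2

# White-box tests: one per branch, including the boundary at 18.
raised = False
try:
    classify_age(-1)                 # exercises the error path
except ValueError:
    raised = True
assert raised
assert classify_age(17) == "minor"   # exercises branch 1
assert classify_age(18) == "adult"   # exercises branch 2 (boundary)
```

The boundary value 18 is a deliberate choice: off-by-one errors cluster at such branch conditions, and only inspection of the code reveals where those conditions lie.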