Software Testing - White Box testing

White Box testing

White-box or glass-box testing relies on analysing the code itself and the internal logic of the software, and is usually, but not exclusively, a development task.

Static Analysis and Code Inspection

Static analysis techniques revolve around examining the source code, or uncompiled form, of the software. They look at the written instructions in their raw form, rather than as they run, and are intended to trap semantic and logical errors.

Code inspection is a specific type of static analysis. It uses formal or informal reviews to examine the logic and structure of software source code and compare it with accepted best practices.

In large organisations or on mission-critical applications, a formal inspection board can be established to make sure that written software meets the minimum required standards. In less formal inspections, a development manager or even a peer can perform this task.

Code inspection can also be automated. Many syntax and style checkers exist today which verify that a module of code meets certain pre-defined standards. By running an automated checker across code it is easy to check basic conformance to standards and highlight areas that need human attention.
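
As a minimal sketch of such an automated checker, in Python, the following enforces a hypothetical house style (a maximum line length, no tab indentation, snake_case function names); real tools such as lint, pylint or checkstyle apply far richer rule sets:

```python
import re

# Hypothetical house rules for this sketch: a maximum line length,
# no tab indentation, and snake_case function names.
MAX_LINE = 79
FUNC_DEF = re.compile(r"^\s*def\s+([A-Za-z_]\w*)")

def check_style(source):
    """Return a list of human-readable style violations."""
    problems = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if len(line) > MAX_LINE:
            problems.append(f"line {lineno}: longer than {MAX_LINE} characters")
        if line.startswith("\t"):
            problems.append(f"line {lineno}: tab used for indentation")
        match = FUNC_DEF.match(line)
        if match and not re.fullmatch(r"[a-z_][a-z0-9_]*", match.group(1)):
            problems.append(f"line {lineno}: function '{match.group(1)}' is not snake_case")
    return problems

# Run the checker over a small offending fragment and report each violation.
for problem in check_style("def BadName():\n\treturn 1\n"):
    print(problem)
```

Run across a whole code base, a checker like this highlights the areas that need human attention while leaving conformant code alone.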

A variant on code inspection is pair programming, as espoused in methodologies like Extreme Programming (XP). In XP's pair programming, modules of code are shared between two individuals. While one person writes a section of code, the other reviews and evaluates its quality, looking for flaws in logic, lapses of coding standards and bad practice. The roles are then swapped. Advocates assert this is a speedy way to achieve good quality code; critics retort that it's a good way to waste a lot of people's time.

As far as I'm concerned, the jury is still out.

Dynamic Analysis

While static analysis looks at source code in its raw form, dynamic analysis looks at the compiled or interpreted code while it is running in the appropriate environment. Normally this is an analysis of run-time quantities such as memory usage, processor usage or overall performance.
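
As an illustration of dynamic analysis of processor usage, Python's standard-library cProfile module can observe code while it runs; busy_sum here is just a stand-in for the program under test:

```python
import cProfile
import io
import pstats

# Toy workload standing in for the program under test.
def busy_sum(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

# Profile the code while it runs: cProfile records how much processor
# time each function consumed, one form of dynamic analysis.
profiler = cProfile.Profile()
profiler.enable()
busy_sum(100_000)
profiler.disable()

# Summarise the run-time data, sorted by cumulative time.
buffer = io.StringIO()
pstats.Stats(profiler, stream=buffer).sort_stats("cumulative").print_stats(5)
report = buffer.getvalue()
print(report)
```

The report names each function alongside its call count and time, so hotspots can be identified without touching the source at all.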

One common form of dynamic analysis is memory analysis. Given that memory and pointer errors form the bulk of defects encountered in software programs, memory analysis is extremely useful. A typical memory analyser reports on the current memory usage level of a program under test and on the disposition of that memory. The programmer can then ‘tweak’ or optimise the memory usage of the software to ensure the best performance and the most robust memory handling.
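
A minimal sketch of this idea using Python's standard-library tracemalloc module; build_table is a hypothetical program under test, and a commercial memory analyser would report far more detail:

```python
import tracemalloc

# Hypothetical program under test: builds a large structure in memory.
def build_table(rows):
    return [{"id": i, "payload": "x" * 100} for i in range(rows)]

# Measure the program's memory usage while it runs: tracemalloc tracks
# every allocation made by Python code between start() and stop().
tracemalloc.start()
table = build_table(10_000)
current, peak = tracemalloc.get_traced_memory()
snapshot = tracemalloc.take_snapshot()
tracemalloc.stop()

# Report the current and peak usage, plus the top allocation sites.
print(f"current: {current / 1024:.0f} KiB, peak: {peak / 1024:.0f} KiB")
for stat in snapshot.statistics("lineno")[:3]:
    print(stat)
```

The per-line statistics show exactly where the memory was allocated, which is the information a programmer needs in order to tweak the software's memory handling.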

Often this is done by ‘instrumenting’ the code. A copy of the source code is passed to the dynamic analysis tool, which inserts function calls to its external code libraries. These calls then export run-time data on the source program to an analysis tool, which can profile the program while it is running. Often these tools are used in conjunction with other automated tools to simulate realistic conditions for the program under test. By ramping up the load on the program, or by running typical input data, the program’s use of memory and other resources can be accurately profiled under real-world conditions.
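
The effect of instrumentation can be mimicked in Python with a decorator that inserts the extra calls by hand; profile_data, instrument and parse_record are all hypothetical names for this sketch, whereas a real instrumenting tool rewrites the source or binary automatically:

```python
import functools
import time

# Run-time data exported by the instrumentation: (call count, total
# elapsed seconds) per function, keyed by function name.
profile_data = {}

def instrument(func):
    """Wrap a function with calls that export run-time data, mimicking
    the function calls an instrumenting analysis tool would insert."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed = time.perf_counter() - start
            calls, total = profile_data.get(func.__name__, (0, 0.0))
            profile_data[func.__name__] = (calls + 1, total + elapsed)
    return wrapper

@instrument
def parse_record(line):
    return line.strip().split(",")

# Simulate typical input data for the program under test.
for raw in ["a,b,c\n", "1,2,3\n", "x,y\n"]:
    parse_record(raw)

calls, total = profile_data["parse_record"]
print(f"parse_record: {calls} calls, {total * 1000:.3f} ms total")
```

Feeding the instrumented program larger or more realistic inputs then profiles its behaviour under the conditions it will actually face.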