Black or white?
Automated software tests are crucial for IT projects. They enable continuous modifications to an existing code base without the fear of damaging existing functionality. They are executed at will and don’t carry the costs and inconsistencies associated with manual tests.
There are two fundamental approaches for automated software tests:
White Box, where an application is tested from the inside using its internal application programming interface (API).
Black Box, where an application is tested using its outward-facing interface (usually a graphical user interface).
White Box tests are normally nice-to-haves, while Black Box tests are an absolute necessity. White Box tests generally suffer from the following shortcomings:
Creating a false sense of stability and quality. An application can often pass all of its unit tests and still be unavailable to users because of a non-functioning component (a database, connection pool, etc.), as the sketch after this list illustrates. This is a critical problem, since it makes the IT application team look incompetent.
Inability to validate use-case completeness because they use a different interface to the application. White Box tests focus on pleasing the wrong crowd — developers. The right crowd to please in the real world is application users, because they fund IT projects. It is more important to understand how a user views an application than how a developer does. Developers and users use different interfaces to interact with an application; developers use a programmatic API, while users use a GUI. Application users care mostly about the inputs and outputs on their GUI screen. The only technique for testing these is interacting with the application via the same interface used by the users.
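To illustrate the first shortcoming, here is a minimal sketch (with hypothetical class names) of a White Box unit test that keeps passing even when the real database is unreachable, because the data-access layer is stubbed out:

```java
import junit.framework.TestCase;

// Hypothetical service under test and its data-access interface,
// defined inline to keep the sketch self-contained.
interface AccountDao {
    double getBalance(String accountId);
}

class AccountService {
    private final AccountDao dao;
    AccountService(AccountDao dao) { this.dao = dao; }
    double getBalance(String accountId) { return dao.getBalance(accountId); }
}

public class AccountServiceTest extends TestCase {

    // The stub replaces the real JDBC-backed implementation, so the test
    // never touches a database or connection pool.
    private static class StubAccountDao implements AccountDao {
        public double getBalance(String accountId) {
            return 100.0; // canned answer
        }
    }

    public void testBalanceLookup() {
        AccountService service = new AccountService(new StubAccountDao());
        // Passes even if the production database is down.
        assertEquals(100.0, service.getBalance("ACC-1"), 0.001);
    }
}
```

A green bar here says nothing about whether a user can actually reach the application and see a balance on screen.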
That said, sometimes White Box tests are sufficient. While I am no great supporter of White Box tests in general, they are adequate for testing programmatic interfaces (APIs) in isolation. For example, a unit test is sufficient to test a square root function.
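A minimal sketch of such a test, using JUnit 3 conventions (the class and method names are illustrative):

```java
import junit.framework.TestCase;

public class SquareRootTest extends TestCase {

    // The function under test, defined inline to keep the sketch self-contained.
    static double squareRoot(double x) {
        if (x < 0) {
            throw new IllegalArgumentException("negative input: " + x);
        }
        return Math.sqrt(x);
    }

    public void testExactSquares() {
        assertEquals(3.0, squareRoot(9.0), 1e-9);
        assertEquals(0.0, squareRoot(0.0), 1e-9);
    }

    public void testNegativeInputRejected() {
        try {
            squareRoot(-1.0);
            fail("expected IllegalArgumentException");
        } catch (IllegalArgumentException expected) {
            // the API contract is enforced
        }
    }
}
```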
Black Box tests are useful in evaluating the overall health of the application and its ability to provide value to users. Most Black Box tests emulate the sequence of interactions between a user and an application, in its exact order. Failure of a Black Box test indicates that a user received insufficient value from the application.
Black Box tests emulate user interaction with an application. Keeping the tests closely aligned with the manner in which users interact with the application is the key to quality tests.
Historically, writing Black Box tests was difficult. Applications used different GUI technologies and communication protocols; thus, the potential for tool reuse across applications was low. Commercial testing-tool packages relied on capturing mouse clicks and keystrokes, which proved brittle to change.
The combination of wide usage of HTTP/HTML standards and an emerging number of open source projects makes Black Box testing an easier task.
The choice of testing tools normally depends on the IT project's budget and its long-term testing strategy. Unfortunately, few IT projects set aside a sufficient budget to introduce commercial testing tools, and testing is limited to "simplified" tests written with open source packages such as HttpUnit or jWebUnit.
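As a rough sketch of what such an open source Black Box test looks like, the following HttpUnit test drives a hypothetical login page through the same HTTP/HTML interface a user's browser would use (the URL, form and field names are assumptions):

```java
import com.meterware.httpunit.GetMethodWebRequest;
import com.meterware.httpunit.WebConversation;
import com.meterware.httpunit.WebForm;
import com.meterware.httpunit.WebRequest;
import com.meterware.httpunit.WebResponse;
import junit.framework.TestCase;

public class LoginPageTest extends TestCase {

    public void testUserCanLogIn() throws Exception {
        WebConversation conversation = new WebConversation();

        // Step 1: request the login page, exactly as a browser would.
        WebRequest request = new GetMethodWebRequest("http://localhost:8080/myapp/login");
        WebResponse loginPage = conversation.getResponse(request);
        assertEquals("Login", loginPage.getTitle());

        // Step 2: fill in the login form and submit it.
        WebForm form = loginPage.getForms()[0];
        form.setParameter("username", "testuser");
        form.setParameter("password", "secret");
        WebResponse welcomePage = form.submit();

        // Step 3: verify the output the user actually sees on screen.
        assertTrue(welcomePage.getText().indexOf("Welcome, testuser") != -1);
    }
}
```

Because the test goes through the deployed application's front door, it fails if any component behind that door (database, connection pool, web container) is broken.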
The expensive route
This budget limitation is a big problem in most IT projects: deadlines are tight, and companies would rather spend their budget on getting the job done than on implementing a long-term strategy to ensure continued quality standards.
However, sooner or later companies realise that the high up-front cost of implementing QA standards and a managed testing strategy is more than offset by the productivity losses and downtime that poor quality assurance would otherwise cause.
In past projects I have been lucky, in the sense that all of them were properly budgeted and the companies had QA methodologies in place in one way or another. In most of the projects I worked on, I used a mix of open source tools (such as HttpUnit or jWebUnit) and commercial tools (LoadRunner, TestDirector, WinRunner).
Looking at the long-term testing strategy, where a project traditionally runs through the phases of unit, system, integration and eventually pre-production load testing, it is definitely a good approach to look at the bigger picture. (I am not trying to punt Mercury's tools here, since I especially dislike the high licensing costs, but I do favour the tight integration of the different tools.)
During the first few phases of unit, system and integration testing I favour the use of WinRunner, since, especially when testing web applications, it is very easy to record and replay test cases.
Another approach, more expensive but more efficient in the long run, is the use of LoadRunner. For a single-user test the tool may be overkill, but over the long term you can reuse the same test scenarios for load testing, emulating thousands of virtual users and thus the real load your application will be exposed to in a production environment.
One issue that is often neglected during QA, and later once your application has reached production, is defect and release management. Another tool from Mercury Interactive is TestDirector, which not only allows you to manage test cases but also to track defects. Once set up properly, it provides features such as escalation, release management (to the extent of highlighting which defects belong to a certain release) and a knowledge base for your call centre (making it possible to search for symptoms and perhaps identify workarounds).