20 November 2007

The testing time bomb

By Andrew Clifford

Testing is an increasingly important part of IT. We face serious problems with the long-term management and support of systems because testing tools are not based on standards.

I once investigated how time was spent in a systems development department: 12% went on programming and 40% on testing (the rest was analysis and support).

In more recent work, I have been using test-driven development and automated regression tests. Looking at our source code repository, 9% is documentation, 30% is functional code, 6% is test code, and 55% is test data.

These examples are typical. We produce more tests than programs, and spend longer testing than programming.

Our emphasis on testing is growing. Testing has moved on: from debugging, through demonstrating requirements, to test-driven development. Automated regression testing is becoming more important as our portfolios of systems grow and age.

There are many tools to help us test. JUnit and similar tools help unit testing. There are tools for planning tests, tracing requirements, running high-level system tests, and checking test coverage. There are session capture and replay tools. There are tools to simulate multiple users for performance and stress testing.
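
As an illustration, here is roughly what a JUnit 4 unit test looks like. The PriceCalculator class and its discount rule are hypothetical, invented only to keep the sketch self-contained.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class PriceCalculatorTest {

        // Hypothetical class under test, included so the sketch is self-contained.
        static class PriceCalculator {
            double priceWithDiscount(double price) {
                // Assumed rule: 10% discount on orders of 100 or more.
                return price >= 100.0 ? price * 0.9 : price;
            }
        }

        @Test
        public void appliesDiscountAtThreshold() {
            assertEquals(90.0, new PriceCalculator().priceWithDiscount(100.0), 0.001);
        }

        @Test
        public void noDiscountBelowThreshold() {
            assertEquals(99.0, new PriceCalculator().priceWithDiscount(99.0), 0.001);
        }
    }

Note that the test is itself Java code written in JUnit's idiom, which is exactly the dependency discussed below.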

These tools make testing more effective and more efficient, but they create a dependency between the system under test and the testing tool. Testing is critical to ongoing support and to the long-term viability of systems. Using testing tools means that systems become dependent on the upgrade path and success of the testing tool vendor (or open source project). If the testing tool fails to stay current, we have to redevelop the tests, which could easily require twice the effort we put into programming.

Despite its importance, we place relatively little emphasis on the choice of testing tools. We spend much longer arguing about design approaches, application frameworks and programming languages, even though we will spend longer using testing tools, and arguably depend on them more.

There are three ways out of this problem.

  • As an industry, standardise on a smaller number of testing tools. This has happened in some areas (such as JUnit for testing Java classes), but overall a huge variety remains.
  • Create hand-built testing tools for each system and maintain the tools alongside the systems. When we do this, we miss the benefits of using products that other people have created.
  • Define standards for the specification of tests and test data, and use tools that conform to these standards.

The third option interests me most. We need a standard, implementation-neutral syntax for tests to remove our dependency on specific tools. This would be a complete specification of the input data, operations, and expected outputs, not just documentation of test requirements.

This could then be used by testing tools:

  • As the output from test design (or session capture).
  • As the input to test execution. Test execution tools would map the standard to the data structures, functions and comparison methods of the system under test (see the sketch after this list).
  • As an input to tools for controlling tests, such as those that map requirements or check coverage.
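
To make this concrete, here is a minimal sketch in Java of how a standard-based test execution tool might work. The TestCase, SystemAdapter and run names are hypothetical, invented for illustration; no such standard exists yet. The test itself is pure data; only the small adapter is specific to the system under test.

    import java.util.Map;

    public class StandardTestSketch {

        // Hypothetical implementation-neutral test case: named inputs,
        // an operation to perform, and the expected output.
        static class TestCase {
            final String operation;
            final Map<String, String> inputs;
            final String expectedOutput;

            TestCase(String operation, Map<String, String> inputs, String expectedOutput) {
                this.operation = operation;
                this.inputs = inputs;
                this.expectedOutput = expectedOutput;
            }
        }

        // The only system-specific part: maps the neutral operation and inputs
        // onto the functions and data structures of the system under test.
        interface SystemAdapter {
            String invoke(String operation, Map<String, String> inputs);
        }

        // A generic runner that depends only on the standard form of the test,
        // not on any particular testing product.
        static boolean run(TestCase test, SystemAdapter adapter) {
            String actual = adapter.invoke(test.operation, test.inputs);
            boolean passed = test.expectedOutput.equals(actual);
            System.out.println((passed ? "PASS: " : "FAIL: ") + test.operation);
            return passed;
        }

        public static void main(String[] args) {
            // A test expressed as pure data, independent of any tool.
            TestCase test = new TestCase(
                    "priceWithDiscount",
                    Map.of("price", "100.00"),
                    "90.00");

            // A trivial adapter standing in for the system under test.
            SystemAdapter adapter = (operation, inputs) -> {
                double price = Double.parseDouble(inputs.get("price"));
                double result = price >= 100.0 ? price * 0.9 : price;
                return String.format("%.2f", result);
            };

            run(test, adapter);
        }
    }

With this separation, replacing a failed testing product would mean rewriting only the adapter and runner; the tests themselves, being standard data, would carry over unchanged.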

Testing tools are a time bomb waiting to wreck the long-term management and support of our systems. If we can find a way to standardise, we can reduce our exposure to this risk significantly.
