Wednesday, September 13, 2006

Unit tests - Testing performance

We currently use a single performance unit test assembly which contains all performance-related tests, e.g. testing that X invoices can be created in under Y seconds. If the tests fail it's usually because either:
  1. Database changes have been applied, such as new indexes, triggers or modifications to stored procedures, which slow down the database access code, or
  2. Business logic has been changed, e.g. additional validation has been added when saving the invoice, which increases the time taken to create it.

There are two important things to note about our implementation:

  1. The expected time periods for the test methods are obtained from the configuration file. This is necessary because the expected time differs from machine to machine; if it were hard coded, the code would need to change every time the tests ran on a different machine. The key value is read from the configuration file using reflection to obtain the name of the method the test is executing in, e.g. a configuration key of SP_INVOICE_PERFM_1-MaxTime may be used for a test method called SP_INVOICE_PERFM_1 (a sketch of this lookup follows this list). The configuration file with all the expected timings can be managed in your source control repository and checked in alongside the unit test assembly. We usually check in the configuration file with timings that work on our cruise control build machine, so if anyone checks in some bad database code, or some ridiculous index which slows the whole process down, we soon know about it when the next build breaks because of the failing tests.
  2. The actual time period tested is not the duration of the whole test method; it starts and stops at specific points within it. An invoice creation performance test can safely time the whole method, because all the method does is create a number of invoices which should complete in X seconds. However, a second test which verifies that X invoices can be updated in Y seconds first needs to create the X invoices, start the timer just before the first update, and stop it when the last invoice has been updated. All the invoices then have to be deleted when the test finishes, which also shouldn't be included in the expected time (see the second sketch after this list). The SetUp and TearDown methods can't be used here because a unit test class such as InvoicePerformanceTest has many different test methods with different set up and tear down requirements, and SetUp and TearDown are common methods called for every test method in a test fixture. On a side note, perhaps it would be useful to extend NUnit one day to support multiple set up and tear down attributed methods which apply to specific test methods.
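To make the first point concrete, here is a minimal sketch of the configuration lookup, assuming appSettings keys named <TestMethodName>-MaxTime holding values in seconds; PerformanceTestBase and GetMaxSeconds are illustrative names rather than our actual code:

    using System;
    using System.Configuration;
    using System.Diagnostics;

    public abstract class PerformanceTestBase
    {
        // Looks up the expected time for the calling test method, e.g.
        // <add key="SP_INVOICE_PERFM_1-MaxTime" value="5" /> in the config file.
        protected static double GetMaxSeconds()
        {
            // The immediate caller of this helper is the test method itself.
            string testMethod = new StackFrame(1).GetMethod().Name;
            string value = ConfigurationManager.AppSettings[testMethod + "-MaxTime"];
            if (value == null)
                throw new InvalidOperationException("No expected time configured for " + testMethod);
            return double.Parse(value);
        }
    }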
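And here is a hedged sketch of the update test from the second point, where invoice creation and clean-up sit outside the timed window and only the updates are measured; InvoiceService and its methods are placeholders for our real data access code:

    using System.Diagnostics;
    using NUnit.Framework;

    [TestFixture]
    public class InvoicePerformanceTest : PerformanceTestBase
    {
        [Test]
        public void SP_INVOICE_PERFM_2()
        {
            // InvoiceService is a placeholder for our real data access layer.
            int[] invoiceIds = new int[100];

            // Set-up work that must not count towards the timing.
            for (int i = 0; i < invoiceIds.Length; i++)
                invoiceIds[i] = InvoiceService.CreateInvoice();

            try
            {
                // Timing starts just before the first update and stops after the last one.
                Stopwatch timer = Stopwatch.StartNew();
                for (int i = 0; i < invoiceIds.Length; i++)
                    InvoiceService.UpdateInvoice(invoiceIds[i]);
                timer.Stop();

                double maxSeconds = GetMaxSeconds();
                Assert.IsTrue(timer.Elapsed.TotalSeconds <= maxSeconds,
                    "Updating the invoices took " + timer.Elapsed.TotalSeconds +
                    "s, expected under " + maxSeconds + "s");
            }
            finally
            {
                // Tear-down work, also outside the timed window.
                for (int i = 0; i < invoiceIds.Length; i++)
                    InvoiceService.DeleteInvoice(invoiceIds[i]);
            }
        }
    }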

In terms of implementing this generically in NUnit, an attribute such as TimedTest could be introduced, with an optional parameter to specify whether the time period is obtained from a configuration file using a scheme similar to the one we've devised. To cope with point 2, the framework could also expose Start and Stop timing events which the test method could raise to indicate when the timing should start and when it should stop. This behaviour could be controlled by another parameter on the TimedTest constructor, e.g. if set to false the start and stop would be implicit at the point the test method starts and finishes.
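A purely speculative sketch of what that usage might look like; TimedTest, TimedTestContext and the invoice helper methods are all hypothetical, none of this exists in NUnit today:

    // TimedTest and TimedTestContext are imaginary extensions, not part of NUnit.
    [TimedTest(ReadMaxTimeFromConfig = true, ExplicitTiming = true)]
    public void SP_INVOICE_PERFM_2()
    {
        int[] ids = CreateInvoices(100);     // set-up, not timed

        TimedTestContext.StartTiming();      // raise the Start timing event
        UpdateInvoices(ids);
        TimedTestContext.StopTiming();       // raise the Stop timing event

        DeleteInvoices(ids);                 // tear-down, not timed
    }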
