Wednesday, September 20, 2006

Unit tests - Pattern for testing localisable code

Currently we have a few unit tests (although this number is sure to increase) which test code that should work in any locale (culture in .NET terms). To test this we currently have an implementation such as

[Test]
public void AppVersionTest()
{
    // Set the current culture to French
    Thread.CurrentThread.CurrentCulture = new CultureInfo("fr-FR");

    Assert.AreEqual(3.6, Tranmit.Sprinter.Configuration.Database.TBConfig.ApplicationVersion, "Failed to read the configuration ApplicationVersion");
}

We did this because we actually had a bug when the code was running on a French machine. I thought a better pattern would be to actually test the code in all defined cultures, e.g. changing the implementation to

// Obtain the current culture
CultureInfo oldCurrentCulture = Thread.CurrentThread.CurrentCulture;
try
{
    // SpecificCultures rather than AllCultures: neutral cultures such as "en"
    // cannot be assigned to CurrentCulture and would throw NotSupportedException
    foreach (CultureInfo cInfo in CultureInfo.GetCultures(CultureTypes.SpecificCultures))
    {
        Thread.CurrentThread.CurrentCulture = cInfo;
        // Perform test code
    }
}
finally
{
    // Restore the previous culture
    Thread.CurrentThread.CurrentCulture = oldCurrentCulture;
}


Any code which is locale independent should certainly work in any culture, e.g. if monetary code was being tested the culture would provide the thousand / decimal separator values, so this seems a sensible implementation. However this certainly strikes me as a pattern which NUnit could implement for test methods decorated with an attribute such as TestAllCultures, which would reduce this to

[Test, TestAllCultures]
public void AppVersionTest()
{
    // Test code
}


This would certainly reduce a lot of the boilerplate code required for each test method, and I'm sure it would be useful to many other developers who have to write code which works across all cultures.
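
Until NUnit offers something like this, a small helper can at least centralise the loop. Below is a minimal sketch; the CultureTestHelper class and TestBody delegate are hypothetical names of mine, not part of NUnit:

using System.Globalization;
using System.Threading;

public delegate void TestBody();

public static class CultureTestHelper
{
    // Runs the supplied test body once per specific culture, restoring the
    // original culture afterwards even if an assertion fails
    public static void RunInAllCultures(TestBody testBody)
    {
        CultureInfo oldCurrentCulture = Thread.CurrentThread.CurrentCulture;
        try
        {
            foreach (CultureInfo cInfo in CultureInfo.GetCultures(CultureTypes.SpecificCultures))
            {
                Thread.CurrentThread.CurrentCulture = cInfo;
                testBody();
            }
        }
        finally
        {
            Thread.CurrentThread.CurrentCulture = oldCurrentCulture;
        }
    }
}

Each culture-sensitive test then shrinks to a call such as

[Test]
public void AppVersionTest()
{
    CultureTestHelper.RunInAllCultures(delegate
    {
        Assert.AreEqual(3.6, Tranmit.Sprinter.Configuration.Database.TBConfig.ApplicationVersion, "Failed to read the configuration ApplicationVersion");
    });
}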

Monday, September 18, 2006

Unit tests - Pattern for time changing test methods

Some of our recent unit tests change the system time within the execution of the test method, e.g. testing components which escalate purchase orders to new users if the approvers don't respond within a configured time period. We test this by setting up configuration, such as the period of time which must elapse before escalation occurs, and then changing the system time to add this time period and calling the component again. The assertion is usually a simple one, e.g. a new user such as the approver's manager should now be in the approval route for the order.

We currently use a simple method of restoring the system time when the test method has executed, by resetting it to the value captured when the test method started. This is simple, but loses real time: if each test method takes 2 seconds to execute and we have perhaps 10 test methods, we can easily see that we lose 20 seconds of time every time the test fixture is called.

I've thought of a better way to do this using a class such as RestoreTimeChangingCode. An instance should be created as the first object in the test method, and it holds the following values

a) StartTickCount
b) StartTime

The StartTickCount value is obtained from Environment.TickCount and StartTime from DateTime.Now. The Restore method should be called when the test method has finished executing, preferably in a finally block so it's called even if an exception is raised. Restore simply takes the number of ticks which have elapsed since the object was created, which represents the time the test method took to execute, adds this to the start time, and sets the result as the current system time.
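
A minimal sketch of such a class follows. The fields and Restore behaviour come straight from the description above, but the SetLocalTime P/Invoke wrapper is an assumption about how the clock would actually be set, since .NET provides no managed way to change the system time:

using System;
using System.Runtime.InteropServices;

public class RestoreTimeChangingCode
{
    private readonly int startTickCount;  // a) Environment.TickCount at creation
    private readonly DateTime startTime;  // b) DateTime.Now at creation

    public RestoreTimeChangingCode()
    {
        startTickCount = Environment.TickCount;
        startTime = DateTime.Now;
    }

    // Sets the clock to the captured start time plus the real elapsed time,
    // so no wall-clock time is lost even though the test changed the clock
    public void Restore()
    {
        int elapsedMilliseconds = Environment.TickCount - startTickCount;
        DateTime restored = startTime.AddMilliseconds(elapsedMilliseconds);

        SYSTEMTIME st = new SYSTEMTIME();
        st.Year = (ushort)restored.Year;
        st.Month = (ushort)restored.Month;
        st.Day = (ushort)restored.Day;
        st.Hour = (ushort)restored.Hour;
        st.Minute = (ushort)restored.Minute;
        st.Second = (ushort)restored.Second;
        st.Milliseconds = (ushort)restored.Millisecond;
        SetLocalTime(ref st);
    }

    [StructLayout(LayoutKind.Sequential)]
    private struct SYSTEMTIME
    {
        public ushort Year, Month, DayOfWeek, Day, Hour, Minute, Second, Milliseconds;
    }

    [DllImport("kernel32.dll", SetLastError = true)]
    private static extern bool SetLocalTime(ref SYSTEMTIME time);
}

Each time-changing test then wraps its body as

RestoreTimeChangingCode restorer = new RestoreTimeChangingCode();
try
{
    // Test code which changes the system time
}
finally
{
    restorer.Restore();
}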

Whilst this class is useful, it would be great if the NUnit framework provided perhaps an attribute called TestChangesSystemTime which could be added in addition to the standard Test attribute. The NUnit framework could then time the method and restore the time appropriately. It could also use a more accurate timer, such as the high-resolution performance counter.

The TestChangesSystemTime attribute, in addition to reducing the amount of repeated code you have to write in each test method, would also provide the ability to reflect over a test assembly to see whether any test methods change the system time. This would be useful wherever changing the system time is a concern on a server, e.g. we would not want to execute this test assembly on our source control server, to avoid clients inadvertently checking in / checking out files with the incorrect date / time.
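
A rough sketch of that reflection check; the TestChangesSystemTime attribute is of course the proposed, hypothetical one:

using System;
using System.Reflection;

[AttributeUsage(AttributeTargets.Method)]
public class TestChangesSystemTimeAttribute : Attribute
{
}

public static class TestAssemblyScanner
{
    // Returns true if any public method in the assembly is decorated with
    // the TestChangesSystemTime attribute
    public static bool ChangesSystemTime(Assembly testAssembly)
    {
        foreach (Type type in testAssembly.GetTypes())
        {
            foreach (MethodInfo method in type.GetMethods())
            {
                if (method.IsDefined(typeof(TestChangesSystemTimeAttribute), false))
                {
                    return true;
                }
            }
        }
        return false;
    }
}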

Wednesday, September 13, 2006

Unit tests - Testing performance

We currently use a single performance unit test assembly which holds all performance-related tests, e.g. testing that X invoices can be created in under Y seconds. If the tests fail it's usually because either
  1. Database changes have been applied, such as indexes / triggers or modifications to stored procedures, which slow down the database access code
  2. Business logic has been changed, e.g. when saving the invoice additional validation has been added which slows the time to create an invoice.

There are two important things to note about our implementation

  1. The expected time periods for the test methods are obtained from the configuration file. This is required because of the differences in expected time from machine to machine; if the expected time was hard coded, the code would need to change whenever the test is executed on a different machine. The key value is read from the configuration file using reflection to obtain the name of the method the test is executing in, e.g. a configuration key of SP_INVOICE_PERFM_1-MaxTime may be used for a test method called SP_INVOICE_PERFM_1 (a sketch of this lookup appears after the list). The configuration file with all the expected timings can be managed in your source control repository and checked in alongside the unit test assembly. We usually check in the configuration file with timings that work on our cruise control build machine, so if anyone checks in some bad database code, or some ridiculous index which slows the whole process down, we soon know about it when the next build breaks because of the failing tests.
  2. The actual time period tested is not necessarily the duration of the test method; it really starts and stops at specific points in the method. An invoice creation performance test is fine to time as a whole method, because all the method does is create a number of invoices which should complete in X seconds. However a second test which verifies X invoices can be updated in Y seconds firstly needs to create the X invoices, only start the timer just before the first update occurs, and stop it when the last invoice has been updated. And of course all the invoices have to be deleted when the test finishes, which shouldn't be included in the expected time either. The SetUp and TearDown methods can't be used here because a unit test class such as InvoicePerformanceTest has many different methods with different set up and tear down requirements, and SetUp and TearDown are common methods called for every test method in a test fixture. On a side note, perhaps it would be useful to extend NUnit one day to support multiple set up and tear down attributed methods which apply to specific test methods.
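
Here is a minimal sketch of the configuration lookup described in 1). The PerformanceTestHelper class and GetMaxTimeSeconds method are hypothetical names, and the "-MaxTime" key suffix follows the example above (ConfigurationManager needs a reference to System.Configuration.dll):

using System;
using System.Configuration;
using System.Diagnostics;
using System.Globalization;
using System.Reflection;
using System.Runtime.CompilerServices;

public static class PerformanceTestHelper
{
    // Builds the configuration key from the calling test method's name,
    // e.g. SP_INVOICE_PERFM_1 -> SP_INVOICE_PERFM_1-MaxTime, and returns
    // the expected maximum time in seconds
    [MethodImpl(MethodImplOptions.NoInlining)] // keep the caller's stack frame intact
    public static double GetMaxTimeSeconds()
    {
        MethodBase caller = new StackTrace().GetFrame(1).GetMethod();
        string key = caller.Name + "-MaxTime";
        string value = ConfigurationManager.AppSettings[key];
        if (value == null)
        {
            throw new InvalidOperationException("No expected time configured for key " + key);
        }
        return double.Parse(value, CultureInfo.InvariantCulture);
    }
}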

In terms of implementing this generically in NUnit, an attribute such as TimedTest could be provided, with an optional parameter to specify whether the time period is obtained from a configuration file using a scheme similar to the one we've devised. Also, to cope with 2), the framework could provide Start and Stop timing events which the test method could raise to indicate when the timing should start and when it should stop. This behaviour could also be specified using another parameter in the TimedTest constructor, e.g. if set to false then the start and stop would be implicit as soon as the test method started and finished.
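
Until then, the explicit start / stop timing described in 2) can be written by hand with .NET 2.0's Stopwatch, combined with the configuration lookup sketched earlier. In this sketch, SP_INVOICE_PERFM_2, Invoice, CreateTestInvoices, UpdateInvoices and DeleteInvoices are hypothetical placeholders for the real domain code:

using System.Diagnostics;
using NUnit.Framework;

[Test]
public void SP_INVOICE_PERFM_2()
{
    // Set up outside the timed section: create the invoices to update
    Invoice[] invoices = CreateTestInvoices(100);
    try
    {
        Stopwatch timer = Stopwatch.StartNew(); // timing starts here, not at method entry
        UpdateInvoices(invoices);
        timer.Stop();                           // timing stops before tear down

        double maxSeconds = PerformanceTestHelper.GetMaxTimeSeconds();
        Assert.IsTrue(timer.Elapsed.TotalSeconds <= maxSeconds,
            "Updating took " + timer.Elapsed.TotalSeconds + "s, expected <= " + maxSeconds + "s");
    }
    finally
    {
        // Tear down outside the timed section: remove the test invoices
        DeleteInvoices(invoices);
    }
}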