Friday, December 01, 2006

Unit tests - Web applications


We have had our unit test .NET assemblies running through Cruise Control for nearly a year now. This has worked very well in catching bugs introduced early by developers hastily checking in code without ensuring that ALL unit tests execute successfully. We use a main nant target which executes about a dozen test assemblies, and any failure sends an email to the development group so the whole team can see which users were responsible for the check-ins that broke the build. We name and shame them, and this usually gets them (me quite a few times :() to fix the build promptly.

However, over the last year I noticed that around 90% of the defects we introduced related to the web application, e.g. common script errors or simple page errors which should have been caught, but weren't because:

  1. The quality of testing performed by each developer was minimal
  2. No developer ever regression tested the other potentially affected areas

We usually had a large number of issues found by QA come each major release. It was then a battle to fix them AND fix any other defects arising from the new changes in the release. We barely had enough bandwidth to fix defects from the new changes, let alone issues introduced in other areas. Since then I have had a strong motivation to set up a Cruise Control project which checks for changes in the web application and runs a unit test assembly containing a selection of tests against the web app.

The unit testing of the web application requires a new class I’ve created called InternetExplorerWrapper, which provides a number of interesting features:

  1. Managed access to the unmanaged Internet Explorer object to manipulate the object model of page contents as pages are navigated to, e.g. forms and their input fields, frames, anchors and images, among many other elements.
  2. Support for unit test assertions with methods such as AssertFormFieldClassName, which asserts the class name of a specific form field, e.g. used in tests which expect a different class for fields that fail validation.
  3. Internal auto reset event objects to ensure methods such as ClickAnchor only return to the unit test code once the anchor has been clicked and the new page navigated to (see the sketch after this list). The DocumentComplete event is used in this case, however this is made tricky because in some cases 3 or 4 DocumentComplete events may be fired together as IE loads multiple documents for a given request.
  4. Methods to assist with viewing and manipulating the IE object model, e.g. outputting the contents of the HTMLDocument object to the trace window. In this way you can see the elements you can access through your unit tests at specific points in time. Other methods such as TraceDocsLoaded show you how many events are fired for a given request, to help determine how many DocumentComplete events to wait for in calls such as ClickAnchor.
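
To give a flavour of the approach, here is a minimal sketch of how ClickAnchor can block on the DocumentComplete event using an auto reset event. It assumes references to the SHDocVw and mshtml COM interop assemblies; the class name, the 30 second timeout and the documentsToWaitFor parameter are illustrative rather than the actual implementation.

using System;
using System.Threading;
using SHDocVw;
using mshtml;

public class InternetExplorerWrapperSketch
{
    private InternetExplorer ie = new InternetExplorer();
    private AutoResetEvent documentComplete = new AutoResetEvent(false);

    public InternetExplorerWrapperSketch()
    {
        // Signal the waiting test thread each time IE reports a document as complete
        ie.DocumentComplete += new DWebBrowserEvents2_DocumentCompleteEventHandler(OnDocumentComplete);
    }

    private void OnDocumentComplete(object pDisp, ref object url)
    {
        documentComplete.Set();
    }

    // Clicks the named anchor and blocks until the expected number of
    // DocumentComplete events have fired (frame pages may fire several)
    public void ClickAnchor(string anchorName, int documentsToWaitFor)
    {
        IHTMLDocument2 document = (IHTMLDocument2)ie.Document;
        foreach (IHTMLElement anchor in document.links)
        {
            if ((anchor.getAttribute("name", 0) as string) == anchorName)
            {
                anchor.click();
                break;
            }
        }

        for (int i = 0; i < documentsToWaitFor; i++)
        {
            if (!documentComplete.WaitOne(TimeSpan.FromSeconds(30), false))
            {
                throw new ApplicationException("Timed out waiting for DocumentComplete");
            }
        }
    }
}

In a test, the wrapper would typically be created in the set up, navigated to the page under test, and then calls such as ClickAnchor drive the application before the assertion methods are used.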

There were quite a few details which were problematic, e.g. the creation of pop-up windows. These need to be wrapped by catching the NewWindow event, creating a new IE wrapper object around the new window and storing it in a hash table within the originally requested page. Client code can then access the new windows using methods on the main IE wrapper object such as GetNewIEWindow.

Has anyone else had experience creating unit tests for web applications? I’d be interested to hear your thoughts. The framework should work regardless of the platform used to build the application, e.g. ASP, ASP.NET or JSP.

Wednesday, September 20, 2006

Unit tests - Pattern for testing localisable code

Currently we have a few unit tests, although this number is sure to increase, which test code that should work in any locale (culture in .NET terms). To test this we currently have an implementation such as

[Test]
public void AppVersionTest()
{
    // Set the current locale to French
    Thread.CurrentThread.CurrentCulture = new CultureInfo("fr-FR");

    Assert.AreEqual(3.6, Tranmit.Sprinter.Configuration.Database.TBConfig.ApplicationVersion, "Failed to read the configuration ApplicationVersion");
}

We did this as we actually had a bug when the code was running on a French machine. I thought a better pattern would be to actually test the code in all defined cultures, e.g. changing the implementation to

// Obtain the current culture
CultureInfo oldCurrentCulture = Thread.CurrentThread.CurrentCulture;
try
{
    foreach(CultureInfo cInfo in CultureInfo.GetCultures(CultureTypes.AllCultures))
    {
        Thread.CurrentThread.CurrentCulture = cInfo;
        // Perform test code
    }
}
finally
{
    // Restore the previous culture
    Thread.CurrentThread.CurrentCulture = oldCurrentCulture;
}


Any code which is locale independent should certainly work in any culture, e.g. if monetary code were being tested, the culture would determine the thousand and decimal separator values used, so this seems a sensible implementation. However it strikes me as a pattern which NUnit could implement for test methods decorated with an attribute such as TestAllCultures, which would reduce this to

[Test, TestAllCultures]
public void AppVersionTest()
{
    // Test code
}


This would remove a lot of the boilerplate code required for each test method, and I'm sure it would be useful to many other developers who have to write code which works across all cultures.
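
Until NUnit supports something like TestAllCultures, a small helper can at least factor the loop out of each test. The following is a minimal sketch under that assumption; the class and method names (CultureTestHelper, RunInAllCultures) are illustrative, and neutral cultures are skipped because they cannot be assigned to Thread.CurrentThread.CurrentCulture.

using System;
using System.Globalization;
using System.Threading;

public class CultureTestHelper
{
    public delegate void TestBody();

    // Runs the supplied test body once per defined culture, restoring the
    // original culture afterwards even if an assertion fails
    public static void RunInAllCultures(TestBody test)
    {
        CultureInfo oldCurrentCulture = Thread.CurrentThread.CurrentCulture;
        try
        {
            foreach (CultureInfo culture in CultureInfo.GetCultures(CultureTypes.AllCultures))
            {
                if (culture.IsNeutralCulture)
                {
                    continue; // e.g. "fr" cannot be a thread culture, only "fr-FR" etc.
                }
                Thread.CurrentThread.CurrentCulture = culture;
                test();
            }
        }
        finally
        {
            Thread.CurrentThread.CurrentCulture = oldCurrentCulture;
        }
    }
}

A test method would then wrap its body in a delegate, e.g. CultureTestHelper.RunInAllCultures(delegate { /* test code */ });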

Monday, September 18, 2006

Unit tests - Pattern for time changing test methods

Some of our recent unit tests change the system time within the execution of the test method, e.g. testing components which escalate purchase orders to new users if the approvers don't respond within a configured time period. We test this by setting up configuration, such as the period of time which must elapse before escalation occurs, and then changing the system time to add this time period before calling the component again. The assertion is usually a simple one, e.g. a new user such as the approver's manager should now be in the approval route for the order.

We currently use a simple method of restoring the system time when the test method has finished, by resetting it to the value captured when the test method started. If each test method takes 2 seconds to execute and we have perhaps 10 test methods, we can easily see that we lose 20 seconds of real time every time the test fixture is run.

I've thought of a better way to do this using a class such as RestoreTimeChangingCode. An instance should be created as the first object in the test method, and it holds the following values

a) StartTickCount
b) StartTime

The StartTickCount value is obtained using Environment.TickCount and StartTime from DateTime.Now. The Restore method should be called when the test method has finished executing, preferably in a finally block so it's called even if an exception is raised. The Restore method simply takes the number of ticks which have elapsed since the object was created, which represents the elapsed time the test method took to execute, adds this to the start time, and sets the result as the current time.
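
A minimal sketch of RestoreTimeChangingCode is shown below. Setting the clock requires a Win32 call, so the sketch assumes a P/Invoke wrapper around kernel32's SetLocalTime; the SetSystemTimeFromDateTime helper name is illustrative.

using System;
using System.Runtime.InteropServices;

public class RestoreTimeChangingCode
{
    private int startTickCount = Environment.TickCount;
    private DateTime startTime = DateTime.Now;

    // Restores the clock to the captured start time plus however long the test ran,
    // so no real time is lost; call this from a finally block in the test method
    public void Restore()
    {
        int elapsedMilliseconds = Environment.TickCount - startTickCount;
        SetSystemTimeFromDateTime(startTime.AddMilliseconds(elapsedMilliseconds));
    }

    [StructLayout(LayoutKind.Sequential)]
    private struct SYSTEMTIME
    {
        public short Year, Month, DayOfWeek, Day, Hour, Minute, Second, Milliseconds;
    }

    [DllImport("kernel32.dll", SetLastError = true)]
    private static extern bool SetLocalTime(ref SYSTEMTIME time);

    private static void SetSystemTimeFromDateTime(DateTime value)
    {
        SYSTEMTIME st = new SYSTEMTIME();
        st.Year = (short)value.Year;
        st.Month = (short)value.Month;
        st.Day = (short)value.Day;          // DayOfWeek is ignored by SetLocalTime
        st.Hour = (short)value.Hour;
        st.Minute = (short)value.Minute;
        st.Second = (short)value.Second;
        st.Milliseconds = (short)value.Millisecond;
        SetLocalTime(ref st);
    }
}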

Whilst this class is useful it would be great if the NUnit framework provided perhaps an attribute called TestChangesSystemTime which could be added in addition to the standard Test attribute. The NUnit framework could then time the method and restore the time appropriately. It could also use a more accurate timer such as the performance counter timer.

The TestChangesSystemTime attribute, in addition to reducing the amount of repetitive code you have to write in each test method, would also provide the ability to reflect over a test assembly to see whether any test methods change the system time. This would be useful if changing the system time was a concern on a particular server, e.g. we would not want to execute this test assembly on our source control server, to avoid clients inadvertently checking in / checking out files with an incorrect date / time.

Wednesday, September 13, 2006

Unit tests - Testing performance

We currently use a single performance unit test assembly which contains all the performance related tests, e.g. testing that X invoices can be created in under Y seconds. If the tests fail it's usually because either:
  1. Database changes have been applied, such as indexes / triggers or modifications to stored procedures, which slow down the database access code
  2. Business logic has been changed, e.g. when saving the invoice additional validation has been added which slows the time to create an invoice.

There are two important things to note about our implementation

  1. The expected time periods for the test methods are obtained from the configuration file. This is required because of the differences in expected time from machine to machine; if the expected time were hard coded, the code would need to change whenever the test is executed on a different machine. The key is read from the configuration file using reflection to obtain the name of the method the test is executing in, e.g. a configuration key of SP_INVOICE_PERFM_1-MaxTime may be used for a test method called SP_INVOICE_PERFM_1 (see the sketch after this list). The configuration file with all the expected timings can be managed in your source control repository and checked in alongside the unit test assembly. We usually check in the configuration file with timings that work on our cruise control build machine. If anyone checks in some bad database code, or some ridiculous index which slows the whole process down, we soon know about it when the next build breaks because of the failing tests.
  2. The actual time period tested is not necessarily the duration of the test method; it really starts and stops at specific points in the method. For example, an invoice creation performance test is fine to time as a whole method, because all the method does is create a number of invoices which should complete in X seconds. However a second test which verifies X invoices can be updated in Y seconds first needs to create the X invoices, start the timer just before the first update occurs, and stop it when the last invoice has been updated. And of course all the invoices have to be deleted when the test finishes, which shouldn't be included in the expected time. The SetUp and TearDown methods can't be used here because a unit test class such as InvoicePerformanceTest has many different methods with different set up and tear down requirements, and SetUp and TearDown are common methods called for all test methods in a test fixture. On a side note, perhaps it would be useful to extend NUnit one day to support multiple set up and tear down attributed methods which apply to specific test methods.
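
As mentioned in point 1, the expected time is looked up using the executing test method's name. A minimal sketch of that lookup is shown below; PerformanceTestHelper and GetMaxTimeForCallingTest are illustrative names, and the "-MaxTime" suffix follows the convention described above.

using System;
using System.Configuration;
using System.Diagnostics;
using System.Reflection;

public class PerformanceTestHelper
{
    // Returns the maximum allowed duration in seconds for the calling test method,
    // e.g. a test called SP_INVOICE_PERFM_1 reads the SP_INVOICE_PERFM_1-MaxTime key
    public static double GetMaxTimeForCallingTest(double defaultSeconds)
    {
        MethodBase testMethod = new StackTrace().GetFrame(1).GetMethod();
        string key = testMethod.Name + "-MaxTime";

        string configured = ConfigurationSettings.AppSettings[key];
        if (configured == null)
        {
            return defaultSeconds;
        }
        return Double.Parse(configured);
    }
}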

In terms of implementing this generically in NUnit, an attribute such as TimedTest could be provided, with an optional parameter to specify whether the time period is obtained from a configuration file using a scheme similar to the one we've devised. Also, to cope with point 2, the framework could expose Start and Stop timing events which the test method could raise to indicate when the timing should start and when it should stop. This behaviour could also be specified using another parameter in the TimedTest constructor, e.g. if set to false then the start and stop would be implicit as soon as the test method started and finished.
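
To make point 2 concrete, a test which times only the update section might look like the sketch below, reusing the PerformanceTestHelper from the previous sketch. The invoice helper methods are illustrative stubs, and the explicit Stopwatch stands in for the Start / Stop events a TimedTest attribute could provide.

using System;
using System.Diagnostics;
using NUnit.Framework;

[TestFixture]
public class InvoicePerformanceTestSketch
{
    [Test]
    public void SP_INVOICE_PERFM_2()
    {
        object[] invoices = CreateInvoices(100);      // set up - not timed
        try
        {
            Stopwatch timer = Stopwatch.StartNew();   // start just before the first update
            UpdateInvoices(invoices);
            timer.Stop();                             // stop once the last invoice is updated

            double maxSeconds = PerformanceTestHelper.GetMaxTimeForCallingTest(10.0);
            Assert.IsTrue(timer.Elapsed.TotalSeconds <= maxSeconds,
                "Updating invoices took " + timer.Elapsed.TotalSeconds + "s, expected no more than " + maxSeconds + "s");
        }
        finally
        {
            DeleteInvoices(invoices);                 // tear down - not timed
        }
    }

    // Illustrative stubs standing in for the real invoice component calls
    private object[] CreateInvoices(int count) { return new object[count]; }
    private void UpdateInvoices(object[] invoices) { }
    private void DeleteInvoices(object[] invoices) { }
}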

Thursday, August 03, 2006

Unit tests - Executing over multiple application versions

The main application I design and develop usually gains additional database schema with each new version, e.g. a recent 3.6.1 release will no doubt have a few new tables / new stored procedures etc. Currently all our automated unit tests execute against a single database using the latest schema. When we complete and release 3.6.1 the cruise control build will only execute tests against this schema. However I would like to be able to execute our unit tests against older versions, at least one back if not more, e.g. so we can execute tests against both 3.6.0 and 3.6.1. We do put effort into the assemblies to ensure backwards compatibility, so it would be very handy to test this within our continuous integration process. Even in cases where we haven’t added any specific backwards compatibility code, it would be nice to know existing tests still work on old databases, as we will still apply database patches to them.

The obvious issue with executing the most up to date unit test assemblies is that some new features in 3.6.1 won't work on a 3.6.0 database, e.g. assuming a new calendar table was included in a feature in 3.6.1, any tests which required this table would fail if run on databases earlier than 3.6.1. A calendar class may typically live in a common assembly which contains many other types that work in previous versions. There are essentially three types of modifications we could be making to unit test assemblies in each new release:
  1. Addition of schema which wouldn't work in previous versions, e.g. tests using new tables / stored procedures / UDFs
  2. Updates to existing unit tests. As we adapt existing code we may have to change tests slightly in some cases, although this should be quite rare
  3. Addition of new tests for old schemas e.g. we may find some additional scenarios which haven't previously been tested.

Points 2 and 3 obviously don't pose any issues; however, when executing unit tests against old databases we clearly need to conditionally execute the tests in 1 only if the schema is valid. A solution to this could be to apply an optional attribute to any test method, called something like ApplicationVersion, which would take the version number of the schema the unit test can be executed on. The attribute could be applied to a class or a method. If applied to a class, all tests within the class would require this version number unless any test method overrode it by specifying a different number, e.g. a CalendarTest class could be defined like this

[TestFixture]
[ApplicationVersion(3,6,1,0)]
public class CalendarTest
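
For reference, the attribute itself would be straightforward to define. The sketch below is only what the proposal above implies, not an existing NUnit feature.

using System;

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = false)]
public sealed class ApplicationVersionAttribute : Attribute
{
    private Version requiredVersion;

    public ApplicationVersionAttribute(int major, int minor, int build, int revision)
    {
        requiredVersion = new Version(major, minor, build, revision);
    }

    // The minimum schema version the decorated test fixture or method requires
    public Version RequiredVersion
    {
        get { return requiredVersion; }
    }
}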

However, whilst the attribute and its use seem quite generic, the actual implementation to retrieve the version number would not be. There are two ways NUnit could be modified to incorporate this:

  1. Use a plug in framework within NUnit. The plug in could have one method which determines whether a given test method is executed or simply ignored. In essence, the plug in could have knowledge of the custom attribute, read it using reflection at runtime, verify the version number against that stored in the database and decide whether or not to ignore the method.
  2. Have specific knowledge of the ApplicationVersion attribute and perhaps allow a simple piece of configured embedded C# script which could evaluate the application version at runtime, e.g. for us we could read it from the data store, from a table which contains the schema version.

If tests were ignored, the ignore reason could be added automatically e.g. "Test method ignored as version executing against is 3.6.0 and this method requires 3.6.1".
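
The check itself, whichever of the two NUnit approaches hosted it, could be as simple as the sketch below. It assumes the ApplicationVersionAttribute sketched earlier and that the database schema version has already been read into a Version object.

using System;
using System.Reflection;

public class ApplicationVersionChecker
{
    // Returns null if the test can run against the given schema version,
    // otherwise the ignore reason to report
    public static string GetIgnoreReason(MethodInfo testMethod, Version databaseVersion)
    {
        // A method level attribute wins, otherwise fall back to the fixture level one
        object[] attributes = testMethod.GetCustomAttributes(typeof(ApplicationVersionAttribute), false);
        if (attributes.Length == 0)
        {
            attributes = testMethod.DeclaringType.GetCustomAttributes(typeof(ApplicationVersionAttribute), false);
        }
        if (attributes.Length == 0)
        {
            return null; // no version requirement declared
        }

        Version required = ((ApplicationVersionAttribute)attributes[0]).RequiredVersion;
        if (databaseVersion.CompareTo(required) >= 0)
        {
            return null;
        }
        return "Test method ignored as version executing against is " + databaseVersion +
            " and this method requires " + required;
    }
}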

This feature would certainly be useful for us as we have a large number of clients using old versions and only a few at a time initially using the most up to date version. I'm sure the solution would be useful to others too. Thoughts?

Wednesday, August 02, 2006

Continuous Integration of database scripts

I've recently got Cruise Control working on our development server to get the latest source from SourceSafe, build our .NET assemblies and execute all unit tests. I'm extremely pleased with this and it has really proved its worth recently by giving the team complete visibility of failed builds. Any broken builds which do occur get turned around very quickly; I would strongly encourage anyone who hasn't already integrated cruise control into their build process to do so now.

I have also recently taken this further by integrating database script changes into the process. In SourceSafe we keep all the creation scripts for tables / stored procedures / UDFs etc. When any changes are detected in the VSS database, the scripts are applied using a few nant tasks and the unit tests are then re-executed. This has quickly caught a few issues recently where stored procedures weren't in sync with the .NET code, e.g. wrong data types or too many parameters in the DB compared to the client code.

Like most development teams we have to maintain a few versions of our application and upgrade databases from old versions to new versions. The latest database scripts are rerunnable against old versions of the schema, i.e. they will add columns / stored procedures if they're not already there. Ideally I would like to execute the database scripts against a number of different versions of the database and then execute the unit tests against each newly updated database to ensure the tests still pass. This would find issues for upgrading clients very quickly. I'd be interested in hearing from anyone else who has tried to automatically integrate database changes into their continuous integration process.

Friday, July 21, 2006

Unit tests - Changing app.config values at runtime

It would be useful for a number of unit tests to be able to change the configuration values used in the app.config file of a unit test DLL. Actually changing the value in the file is straightforward enough, i.e.

  1. Get access to the app.config file full path from the process using AppDomain.CurrentDomain.SetupInformation.ConfigurationFile
  2. Load the file into an XmlDocument
  3. Extract the relevant node using code such as XmlNode node = xmlDocument.SelectSingleNode(@"/configuration/appSettings/add[@key='" + configurationName + "']");
  4. Set the value of the "value" attribute using node.Attributes.GetNamedItem("value").Value = configurationValue; and finally save the XmlDocument back to the file (a sketch of these steps follows)
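
Putting those four steps together, a helper along these lines could be used from a test's set up; SetAppConfigValue is an illustrative name and the sketch assumes the standard appSettings section layout.

using System;
using System.Xml;

public class AppConfigEditor
{
    public static void SetAppConfigValue(string configurationName, string configurationValue)
    {
        // 1. Locate the app.config file used by the current process
        string configPath = AppDomain.CurrentDomain.SetupInformation.ConfigurationFile;

        // 2. Load the file into an XmlDocument
        XmlDocument xmlDocument = new XmlDocument();
        xmlDocument.Load(configPath);

        // 3. Extract the relevant appSettings node
        XmlNode node = xmlDocument.SelectSingleNode(@"/configuration/appSettings/add[@key='" + configurationName + "']");

        // 4. Set the "value" attribute and save the document back to disk
        node.Attributes.GetNamedItem("value").Value = configurationValue;
        xmlDocument.Save(configPath);
    }
}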

However by default changes to the configuration file won’t be reflected at runtime i.e. code such as

String fromEmailAddress = System.Configuration.ConfigurationSettings.AppSettings["EmailFromAddress"];


will not reflect the new value if EmailFromAddress is changed in the app.config file. The AppSettings class caches the values retrieved within the AppDomain and never refreshes them. ASP.NET as a CLR host actually does refresh the settings when the web.config file is changed however by default the standard CLR host won't.

Since there is no way to force the settings to be reloaded, the simplest approach would be to ensure that all clients use the existing façade defined in Tranmit.Common.ConfigurationSupport. The GetValue method looks like this

public static Object GetValue( String key,
                               Type typeOfObject,
                               Object defaultValue)
{
    // Assign default value to object if no key found
    Object value = defaultValue;

    System.Configuration.AppSettingsReader appSettingsReader = new System.Configuration.AppSettingsReader();

    // Check value is present first to avoid exception
    // being thrown from AppSettingsReader.GetValue
    if(ConfigurationSettings.AppSettings.Get(key) != null)
    {
        value = appSettingsReader.GetValue(key, typeOfObject);
    }

    return value;
} // GetValue

This is fine for production code, however when executing unit tests we need to make sure the value is loaded directly from the app.config file rather than through the AppSettings or AppSettingsReader class. We could do this by:

  • Setting a value within the AppDomain to indicate a unit test is being executed, e.g. AppDomain.CurrentDomain.SetData("UnitTestExecuting", true);
  • Modifying the existing GetValue method to check the app domain value UnitTestExecuting. If this is set, extract the configuration value directly from the file using steps 1 to 3 above (a sketch follows this list); otherwise use the existing code which uses the AppSettings class
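
A sketch of the modified GetValue is shown below. The "UnitTestExecuting" flag name matches the suggestion above, and ReadValueDirectlyFromConfigFile is an assumed helper performing steps 1 to 3 from the earlier list plus a simple type conversion.

using System;
using System.Configuration;
using System.Xml;

public class ConfigurationSupportSketch
{
    public static Object GetValue(String key, Type typeOfObject, Object defaultValue)
    {
        // When a unit test has flagged the app domain, bypass the cached AppSettings
        object unitTestFlag = AppDomain.CurrentDomain.GetData("UnitTestExecuting");
        if (unitTestFlag != null && (bool)unitTestFlag)
        {
            return ReadValueDirectlyFromConfigFile(key, typeOfObject, defaultValue);
        }

        // Production path: unchanged from the existing facade
        Object value = defaultValue;
        AppSettingsReader appSettingsReader = new AppSettingsReader();
        if (ConfigurationSettings.AppSettings.Get(key) != null)
        {
            value = appSettingsReader.GetValue(key, typeOfObject);
        }
        return value;
    }

    // Assumed helper: read the raw string straight from app.config and convert it
    private static Object ReadValueDirectlyFromConfigFile(String key, Type typeOfObject, Object defaultValue)
    {
        string configPath = AppDomain.CurrentDomain.SetupInformation.ConfigurationFile;
        XmlDocument xmlDocument = new XmlDocument();
        xmlDocument.Load(configPath);
        XmlNode node = xmlDocument.SelectSingleNode(@"/configuration/appSettings/add[@key='" + key + "']");
        if (node == null)
        {
            return defaultValue;
        }
        string rawValue = node.Attributes.GetNamedItem("value").Value;
        return Convert.ChangeType(rawValue, typeOfObject);
    }
}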

There should only be a slight performance impact for unit test code, which is fine, and production code will execute exactly as before. This obviously requires that client code uses the GetValue method rather than the ConfigurationSettings class directly.


Friday, February 24, 2006

VBScript plug ins for ASP applications

Recently one of the developers in my team was developing some custom code in a COM table rendering object we've built for an ASP web application. The object essentially takes source SQL with added meta data and creates an HTML table which is returned to the calling ASP application and then written out using Response.Write.

The design required some very custom presentation within one of the table cells. The table rendering object is quite generic (knowing little about specific tables in the database schema), while the custom presentation task required work on very specific tables. Custom work is essentially associated with each table cell using meta data tags such as "SubordinateList(%UserID%)". Assume the table object is rendering a list of users and, if a user is a manager, displaying a comma separated list of subordinate user names.

Although this example is contrived (our real example would require a lot more explanation), the design issue is the same. The obvious easy approach would be to simply add handling within the table rendering object in a switch statement to match on the start of the tag "SubordinateList" and then perform some custom processing in SQL to find the list of subordinates. The list would then be converted into a comma separated string which would be added to the table cell. However, this business logic ideally should not be contained within a generic table rendering engine.

You may have seen plug in design patterns which support the extensibility we are trying to achieve here. Typically this would require the creation of a well defined interface, e.g. IValueResolver, which is defined and called by the table rendering object. Implementations of different IValueResolver objects could be created in one or many plug in components. The IValueResolver interface need only require one method, such as GetValue, which takes the row, column and table data.

Although ASP, using VBScript, doesn't support strongly typed interfaces, we can still use the late bound IDispatch support in VBScript to implement this design. We could create a VBScript class on the ASP page such as

Class SubordinateListValueResolver
Dim UserColumn

Public Function GetValue(row, col, tableData)
' Work to retrieve the list of subordinates from the user specified
' in the column UserColumn in the row from the tableData data source.
End Function
End Class


The plug in class can then be instantiated on the ASP page and passed to the table rendering object using code such as

Dim oValueResolver
Set oValueResolver = New SubordinateListValueResolver
oValueResolver.UserColumn = 4 ' Means the 4th column in the result

oTableRenderer.AddValueResolverPlugIn(oValueResolver, "SubordinateList")

The table rendering object can then determine whether any plug ins exist for the meta data tag passed when the data source SQL is specified, e.g. if a matching plug in object is found for the tag SubordinateList, the VBScript object of type SubordinateListValueResolver will be returned. The GetValue method can then be called directly from within the table rendering object, passing the row, column and table data, i.e.

oCellResolvedValue = plugin.GetValue(row, col, tableData)

Although designs which use well defined interfaces and contain the plug in code in compiled languages are preferable to this approach, it does demonstrate how easily a simple plug in mechanism can be developed for a script based application.

Tuesday, February 14, 2006

Remote debugging - Release build optimisations

I recently had to diagnose an issue which required remote debugging the application as no exception was raised and therefore no dump could be obtained and analysed. The issue was caused by an error code deep down in the call stack being returned and ignored a few levels up the call stack :(

Whilst this was trivial to debug using the windbg client and dbgsrv on the server (both part of the Debugging Tools for Windows package), I found in some cases, particularly where the error code was being returned, that local variables couldn't be seen. The message <Memory access error> was displayed in the locals window for each variable.

Our standard process is to generate debugging symbols with each release build by simply selecting the generate debugging symbols option (http://msdn.microsoft.com/library/en-us/dnvc60/html/gendepdebug.asp). To accurately see variables in both the watch and locals windows you need to make sure you disable optimisations i.e. through Project Settings / C++ / Optimisations - Set this to Disable ( Debug ).

Obviously this should only be disabled for ad hoc builds, as you will most likely want to enable all optimisations in the standard builds you deploy. It's hard enough trying to get our applications to be as performant as possible without disabling the optimisations which the compiler gives us automatically :)

Friday, February 10, 2006

Database filtering of documents - performance issues

Our application has a requirement to filter the documents a user can see based on various criteria e.g. security / approval rights. A user may only wish to see documents which they can approve or have approved. This would seem to be a common pattern for a lot of business applications.

To implement this our business objects create a list of document identifiers which are then appended to a SQL statement which selects the data and filters the documents returned. We used to use the IN clause to filter the documents returned using a pattern such as


SELECT * FROM PurchaseOrders WHERE PurchaseOrderID IN ( CommaSeperatedListOfIDs )

where the list of order IDs was generated from the business object. After usage of the application quickly increased, we encountered a stack overflow exception issue (http://support.microsoft.com/kb/q288095). To resolve this we changed our approach to use temporary tables, i.e. insert all the IDs from the business object into a temporary table and then use this to filter the main query, e.g.

SELECT * FROM PurchaseOrders WHERE PurchaseOrderID IN ( SELECT ID FROM UniqueTempTableName )

The IDs are inserted into the temporary table using batched INSERTs, e.g. in batches of 20 statements at a time. However when upwards of 10000 document identifiers are inserted into the table, it can take over 5 seconds to do this, e.g. an ASP screen which presents a list of invoices takes 15 seconds, of which 11 seconds is attributed to the time to insert into the temporary table. Over the last day I've been exploring ways to minimise this.

Extended stored procedure
My first thought was to create an extended stored procedure which would create an in memory result set from a comma separated list of identifiers. The extended stored procedure would be called from a UDF which would return a table variable that could be used in a select statement, e.g.

CREATE FUNCTION dbo.Tfn_IDs(@listIdentifiers text)
RETURNS @idTable TABLE (id int )
AS
BEGIN
INSERT INTO @idTable
EXEC master..xp_IDContainer @listIdentifiers
RETURN
END


However unfortunately, within a UDF, EXEC cannot be used as the source for an INSERT INTO statement.

OPENXML
I then realised I may be able to generate an XML representation of the identifiers using the extended stored procedure, e.g. in the format

<values>
  <value id="84"/>
  <value id="85"/>
</values>

and then use the XML in a SQL OPENXML statement (where @hDoc is the document handle obtained from sp_xml_preparedocument) using

INSERT INTO @IDTable
SELECT id
FROM OPENXML (@hDoc, '/values/value')
WITH (id int)


UDF Only
I then went back to an approach without an extended stored procedure, using only a UDF. The UDF could parse the comma separated list of values and insert them into a table variable which would be returned. I won't outline the solution in detail, but it basically uses CHARINDEX to find the start and end positions of each identifier, using the comma as the delimiter. It would be interesting to compare the timings of this and the OPENXML approach. The XML approach could be slower because of the more verbose structure of the data and the cost of parsing the XML internally in SQL Server, however this may be offset by the improvement in speed of using a single INSERT INTO statement rather than multiple INSERT INTO statements for each identifier found and parsed in the UDF only approach.

However after increasing the size of the input to this solution, it quickly became apparent that CHARINDEX truncates text data passed to it to varchar(8000), and so this won't work with long text values containing 10000 identifiers averaging 3 characters each.

Back to a UDF + extended stored procedure
To overcome the issues faced in the UDF only solution, an extended stored procedure would probably be the logical place to put all the logic for parsing the comma separated string into values used for an insert statement. However instead of using the result set of an EXEC call as the source of an INSERT INTO statement, which we've shown isn't possible, we could use an output parameter to retrieve each identifier. Pseudo TSQL could look like

@sessionToken = xp_IDContainer_ParseValue('1,2,3')
WHILE xp_IDContainer_EndOfResults(@sessionToken) <> 0
BEGIN
@ID = xp_IDContainer_GetNextID(@sessionToken)
Insert id into table variable
END


Has anyone solved this problem using the approaches I've highlighted, or other approaches which are successful with large volumes of data?

Thursday, January 12, 2006

Debugging - Data grid exceptions

I recently noticed an issue in one of our .NET controls containing a DataGrid: edited values were being reverted back to their original values without any events being raised or any exceptions reaching the client.

However if you enable all CLR exceptions to break into the debugger you will see an exception, in this case "1,444.66 is not a valid value for Decimal". You can then get the call stack at this point and determine where the code is failing.

The DataGrid seems to handle quite a few exceptions silently, which can sometimes cause confusion. If you are experiencing any odd behaviour with the DataGrid I strongly recommend you enable all exceptions to break into the debugger. To do this from Visual Studio .NET perform the following:

  1. Select the Debug menu
  2. Select Exceptions
  3. Select Common Language Runtime Exceptions.
  4. Change "When the exception is thrown" to "Break into the debugger". This will apply the setting to all CLR exception types unless you've overridden it for specific exceptions.