Tuesday, December 04, 2007

User interface automated testing

If you’ve read any of my previous posts you’ll know I’m very keen on unit testing and continuous integration. Up until now we’ve managed to automate the testing of pretty much everything: XSL, components covering many different technologies, web applications in both ASP and ASP.NET, and we also test and integrate database scripts. One thing I’ve always struggled with, though, is effective test automation of thick client GUI applications, and I've always managed to push this task aside.

Recently, whilst fixing a defect in a server component, I introduced another defect into a thick client user interface application which was using that component. At the time I added a few new unit tests for the component changes and checked that all 3544 of our existing unit tests passed. When I got the green light on CruiseControl I was very happy with my changes and completely oblivious to the fact I’d just broken the GUI application :( When I discovered this a few weeks later I was determined to create a user interface unit test for this application; this was the kick start I needed.

John Robbins is usually better known for his excellent debugging articles, however here he writes about the new features in the .NET Framework 3.0 which support UI automation. This sounded like an ideal technology to drive the UI and test features. I’d strongly encourage you to read the article and download the sample source code.

In the article John has written a wrapper over the UIAutomation object model which makes programmatic control of UI elements much simpler than using the UIAutomation API directly. However, the wrapper assembly is still incomplete and only covers a few of the common controls. If you use it you will quickly find yourself needing to add new control types, but this is relatively simple to do. To select an item in a combo box I created a class called UIComboBox which supports simple methods such as SelectItem. As you can see, calling SelectItem sure beats writing the following every time you need to select a combo box item in your client code.

public void SelectItem(Int32 comboBoxItemIndex)
{
    // Find every descendant of the combo box that supports the SelectionItem pattern,
    // i.e. the individual items in the drop-down list.
    AutomationElementCollection comboBoxItems = this.RootElement.FindAll(TreeScope.Descendants,
        new PropertyCondition(AutomationElement.IsSelectionItemPatternAvailableProperty, true));
    AutomationElement itemToSelect = comboBoxItems[comboBoxItemIndex];
    object pattern = null;

    // Ask the item for its SelectionItem pattern and use it to perform the selection.
    if (itemToSelect.TryGetCurrentPattern(SelectionItemPattern.Pattern, out pattern))
    {
        ((SelectionItemPattern)pattern).Select();
    }
}
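For example, once the combo box wrapper is plugged into the framework, client code can look something like this. The AutomationId is invented and exactly how FindDescendant returns the custom wrapper depends on how you extend John's assembly, so treat it as a sketch rather than the real API.

UIComboBox currencyComboBox = appWindow.FindDescendant("currencyComboBox", ControlType.ComboBox) as UIComboBox;
currencyComboBox.SelectItem(2); // select the third item in the drop-down list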

One wrapper control I certainly missed was one for the .NET DataGrid. I was actually automating a .NET 1.1 application and although I found this solution, it only works on a .NET 2.0 DataGridView. Still, if you are using that control you can build a very nice API with methods such as SetAt and GetAt to set and retrieve cell content within the grid. The only solution I could find to fill the .NET 1.1 grid was to use the SendKeys method to send all the required keystrokes to navigate through the grid.
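As a rough illustration of the SendKeys fallback (the AutomationId and cell values are invented, and I'm assuming the window wrapper exposes its underlying AutomationElement as RootElement, as in the SelectItem code above):

// Give the grid the keyboard focus, then drive it with keystrokes.
AutomationElement grid = appWindow.RootElement.FindFirst(TreeScope.Descendants,
    new PropertyCondition(AutomationElement.AutomationIdProperty, "lineItemsGrid"));
grid.SetFocus();

System.Windows.Forms.SendKeys.SendWait("100.00");     // type into the current cell
System.Windows.Forms.SendKeys.SendWait("{TAB}");      // move to the next cell
System.Windows.Forms.SendKeys.SendWait("Stationery");
System.Windows.Forms.SendKeys.SendWait("{ENTER}");    // commit the row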

All of the UIAutomation methods, and the corresponding wrapper methods in the article download, take a UIAutomation identifier. To create tests quickly it’s vital to use a tool such as UISpy to obtain these identifiers. All you need to do is put UISpy in “hovering mode” and then hold down the Ctrl key whilst hovering over a window in the application being automated. The window is highlighted in red and UISpy then reports all the details in the right-hand pane, most importantly the AutomationId.

Once you have the automation identifiers it’s very straightforward to write code to access any GUI elements e.g.

UIEditControl invoiceNoTextBox = appWindow.FindDescendant("invoiceNumberTextBox", ControlType.Edit) as UIEditControl;
invoiceNoTextBox.Value = "SLOG_Invoice_1";

It is usually far more effective to use FindDescendant rather than FindChild to locate controls, given that it searches all children recursively and will eventually find the control if it exists :)

When building unit tests you’ll usually find you automate the GUI to a point where you need to verify the expected output. This could be either of the following (a short sketch follows the list):

  1. Verifying GUI control values have been set correctly by asserting the contents of the controls, e.g. you could take the user through a task such as creating a new invoice and then assert the contents of a screen which shows the invoice detail (perhaps a wizard summary screen).
  2. Verifying a specific action has occurred by invoking business objects and then asserting values from the business object, e.g. you could automate the GUI task to create a new invoice and then, when this is complete, verify that the invoice exists in the data store and that its values, e.g. invoice no, match the values you entered in the GUI.
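Here's a sketch of the first approach, assuming NUnit and the wrapper types used earlier in this post; the AutomationId and expected value are invented.

UIEditControl summaryInvoiceNo = appWindow.FindDescendant("summaryInvoiceNumber", ControlType.Edit) as UIEditControl;
Assert.AreEqual("SLOG_Invoice_1", summaryInvoiceNo.Value, "Invoice number was not carried through to the summary screen");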

User interface testing is just as important as any other component testing, so I do hope more people embrace this and start to automate the testing of a good portion of their user interfaces. As developers we cannot and should not rely on QA to find the defects, at least certainly not the obvious ones. With a few simple unit tests we can make great strides to catch any serious defects before QA and then the customer see the product.

Thursday, October 18, 2007

Design - Modelling user interfaces in UML

In my previous post I focused on how to model a web application and its component parts, e.g. server and client side pages, as UML elements. In this post I'll describe an EA add-in I built to help reverse engineer UML models from user interface elements such as dialogs and their component parts, e.g. check boxes, text boxes and buttons. Although the add-in works only on thick client applications, the approach could also be used for web applications.

Most of the screens in your application, their contents and their flow to other screens are typically driven by the use cases you write. It helps to model the screens using UML notation so you can describe each element and model how each element links to or depends on other elements. Once the elements are in a UML model you can also show interaction, e.g. an actor clicking a button which invokes a business object function which then builds a list box, populates a text box and enables another button.

Enterprise Architect is an excellent low priced modelling package. Whilst it provides many useful features and powerful forward and reverse engineering options (it even includes a debugger which can auto generate sequence diagrams from a run through the code), it also provides great extensibility through its add-in interface. In the previous post I discussed an add-in I wrote to reverse engineer a UML model from various web page files; this add-in works differently by using a Spy++ style capture pointer. The user only needs to hover the pointer over a window they want to capture, and the add-in then creates a UML model of the dialog using all the component parts of the window, with the exact same size and position as they appear on screen.

Thanks to Mark Belles for writing the window capture code which provides the hover utility to obtain a window handle. From this it was a simple matter of extracting all the window information using the Win32 API. The add-in should also provide a good template for anyone else writing one, and contains supporting classes that help wrap the unmanaged EA interfaces.
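To give a feel for the Win32 side, here's a minimal sketch: given the window handle returned by the capture pointer, it enumerates the child controls and reads the class name and bounds of each one. It's illustrative only; the real add-in also maps the results onto EA elements.

using System;
using System.Runtime.InteropServices;
using System.Text;

static class WindowInspector
{
    [StructLayout(LayoutKind.Sequential)]
    public struct RECT { public int Left, Top, Right, Bottom; }

    delegate bool EnumChildProc(IntPtr hWnd, IntPtr lParam);

    [DllImport("user32.dll")] static extern bool EnumChildWindows(IntPtr hWndParent, EnumChildProc callback, IntPtr lParam);
    [DllImport("user32.dll")] static extern bool GetWindowRect(IntPtr hWnd, out RECT rect);
    [DllImport("user32.dll", CharSet = CharSet.Auto)] static extern int GetClassName(IntPtr hWnd, StringBuilder name, int maxCount);

    public static void DumpControls(IntPtr capturedWindow)
    {
        EnumChildWindows(capturedWindow, delegate(IntPtr child, IntPtr lParam)
        {
            StringBuilder className = new StringBuilder(256);
            GetClassName(child, className, className.Capacity);

            RECT bounds;
            GetWindowRect(child, out bounds);

            // In the add-in each control becomes a UML element sized and positioned like the real thing.
            Console.WriteLine("{0}: ({1},{2}) {3}x{4}", className,
                bounds.Left, bounds.Top, bounds.Right - bounds.Left, bounds.Bottom - bounds.Top);
            return true; // continue enumeration
        }, IntPtr.Zero);
    }
}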

The screen below shows the EA options dialog





When you start the add in through EA you are presented with the following dialog



The UML diagram below then gets generated.




Although the output isn't perfect it's a lot more productive than creating all your screens manually. Not a job I'd want to do often :)

The full source code to the add-in, written in C#, is available on the EA wiki. I hope you find this useful.

Thursday, September 06, 2007

My first open source project - Web modelling UML add-in for Enterprise Architect

I recently created my first open source project called EA WebModeller Add In, which is an add-in for the Enterprise Architect (EA) UML modelling tool. The add-in fills a much needed gap in the product by enabling users to reverse engineer web projects into UML artifacts, specifically using the web extensions for UML which Jim Conallen produced. The web extensions provide a standardised way of modelling the components which make up a web application.

A seemingly simple concept, which most would just think of as one web page associated with another, quickly becomes very complex, and is proof of why Jim could write a substantial book on this subject and still leave room for more. Classes are used with a variety of different UML stereotypes to define the various components which combine to produce a web page, e.g. a server side page which builds a client side page which aggregates a number of forms. Various stereotypes of association are also defined to support different associations between the components, e.g. one server side page redirects to another server side page, or a client side page links to another client side page (through hyperlinks perhaps). Components contained within pages, both on the server and the client, are also modelled appropriately.

The example web project included with the download available on SourceForge contains a number of scenarios, e.g. file 1.asp includes files include1.asp and include2.asp. Both included files contain server side functions. 1.asp also contains a redirect call to 2.asp, an embedded class, some client side functions (in JavaScript) and an embedded form. The 1.asp server side page and its associated elements are shown in the diagram below






This is produced by the following settings in the add in









The add-in is built in an extensible way to allow new script parsing languages to be easily integrated within the framework, e.g. a base ScriptParserBase class has JavaScriptParser and VBScriptParser derived classes. Similarly the ServerPage class which models the UML server side page has derived classes ActiveServerPage (ASP), JavaServerPage (JSP) and DotNetActiveServerPage (ASP.NET) for each of the web application technologies. Note that only the ActiveServerPage class for ASP pages is fully supported; I’m hoping the open source community will develop the JSP and ASP.NET support :)
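To give a feel for that extensibility point, here's a simplified sketch of the shape of the parser hierarchy; the member names and the deliberately trivial expressions are illustrative assumptions rather than the add-in's actual code.

using System.Collections.Generic;
using System.Text.RegularExpressions;

public abstract class ScriptParserBase
{
    // Each language-specific parser knows how to pull function names out of a script block.
    public abstract IList<string> ParseFunctionNames(string scriptSource);
}

public class JavaScriptParser : ScriptParserBase
{
    public override IList<string> ParseFunctionNames(string scriptSource)
    {
        List<string> names = new List<string>();
        foreach (Match match in Regex.Matches(scriptSource, @"function\s+(?<FunctionName>\w+)\s*\("))
        {
            names.Add(match.Groups["FunctionName"].Value);
        }
        return names;
    }
}

public class VBScriptParser : ScriptParserBase
{
    public override IList<string> ParseFunctionNames(string scriptSource)
    {
        List<string> names = new List<string>();
        foreach (Match match in Regex.Matches(scriptSource, @"(?:Function|Sub)\s+(?<FunctionName>\w+)", RegexOptions.IgnoreCase))
        {
            names.Add(match.Groups["FunctionName"].Value);
        }
        return names;
    }
}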

The add-in mostly uses regular expressions to parse the script source into its appropriate components. They range from very simple expressions such as #include\s+(?<PathType>virtual|file)="(?<IncludedFile>.*)", which parses include directives within ASP pages (PathType and IncludedFile are named capture groups which capture the type of file reference, e.g. a virtual or absolute file location, and the included file itself), to relatively complex expressions such as ^\s*function\s*(?<FunctionName>\w+)\((?<Arguments>.*)\)\s*(?<Body>(?:(?<LeftBrace>\{)(?<-LeftBrace>\})(?(LeftBrace)[^\{\}]*.*))*), which use balancing groups to support recursion within the expression. This power is needed to match whole function definitions because we can’t simply match an opening and a closing brace; the first closing brace we find may not be closing the function, it may in fact be closing the first for loop defined within the function.
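For example, running the include directive expression over a line of ASP gives you the two pieces of information the add-in needs. The sample line is invented, and this needs System.Text.RegularExpressions.

string line = "<!--#include virtual=\"/common/include1.asp\"-->";
Match match = Regex.Match(line, @"#include\s+(?<PathType>virtual|file)=""(?<IncludedFile>.*)""");

if (match.Success)
{
    string pathType = match.Groups["PathType"].Value;         // "virtual"
    string includedFile = match.Groups["IncludedFile"].Value; // "/common/include1.asp"
}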

To parse the client side page, the Internet Explorer ActiveX control is used to load the HTML produced when navigating to the web page. By using the IE control, the parts of the web page such as embedded forms and the controls within those forms can be found and added to the UML model. I actually wrote a web application testing framework around this control, however recently I’ve started to use the WatiN framework which is more feature rich.
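For a flavour of WatiN driving a page, here's a tiny sketch based on the WatiN 1.x API; the URL and element names are invented and may need adjusting for your version.

using WatiN.Core;

IE browser = new IE("http://localhost/invoices/create.asp");
browser.TextField(Find.ByName("invoiceNumber")).TypeText("SLOG_Invoice_1");
browser.Button(Find.ByValue("Save")).Click();
browser.Close();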

Given that the main web application our team develops is written in ASP with many hundreds of pages, the productivity boost from using the add-in is huge. Although we couldn’t possibly try to reverse engineer the whole web application in one go, being able to selectively single out and reverse engineer a few web pages when adding new features or doing some refactoring is a great help. It clearly shows the affected pages and where the new logic is best placed, and perhaps most importantly helps to show the impact of any changes on existing functionality.

Monday, August 27, 2007

Windows Workflow Foundation - A new way to develop workflow

I’ve recently reviewed the new workflow framework called Windows Workflow Foundation (WWF) which is part of the .NET Framework 3.0 runtime. Using the framework successfully within our applications will require a completely different thought process from traditional methods, e.g. where we typically put most of the workflow behaviour directly within the code.

I started to develop a generic workflow framework implementation of core business object services a few years back, however even with considerable effort the implementation lacks many of the features provided by WWF. Given that most business applications go some way towards flexibility by developing a generic workflow framework, using WWF should allow most teams to concentrate more on the back end business logic rather than trying to do what WWF already does very well.

I would highly recommend reading Bruce Bukovics' excellent WWF book http://www.apress.com/book/bookDisplay.html?bID=10213 which covers all the important features of WWF very clearly.

I have summarised the main points of WWF below

General Points

  1. Multiple types of workflow – It handles both sequential and state machine workflows, although typically P2P / Banking applications which Bottomline develops use state machine workflows because the processes don't define a fixed flow of control within the workflow.
  2. Scalability - The ability of a workflow to persist itself means multiple servers could act as workflow runtime hosts, so a workflow could be loaded and executed on a different server from the one that started it (a minimal sketch of hosting the runtime follows this list).
  3. Workflow tracking - This is one of the most powerful features of WWF. As well as providing rich support for tracking the workflow through the different stages of its lifecycle, it also tracks rules: rules evaluation records each rule or rule set as it is evaluated. This could help to easily explain to users why the workflow has progressed in a particular way. You can even go further and use custom tracking profiles to track the values of the entities at particular points in time, for example when a rule was evaluated. This provides a complete tracking picture for rules evaluation.
  4. Hosting of workflow designers - The designer components are exposed as part of the WWF framework and can be hosted within your own applications. This allows you to create even simple designer interfaces very easily, e.g. allowing a user to create a number of different states within a state machine and chain them together to describe how each state transitions to the next and the events which cause the transitions.
  5. Parallel execution of activities within a workflow – Whilst activities cannot execute truly concurrently within a workflow, the framework does support execution of two parallel branches of activities: the first activity of the first branch executes, when this finishes the first activity of the second branch executes, and so on.
  6. Exception handling – Provides support for handling exceptions in the workflow e.g. if activities which are part of the workflow do not complete as expected, different activities can be called and activities cleaned up correctly. Related to this is compensation which allows an activity to be undone rather than rolled back, when typical long running transactional flows could not possibly maintain a database transaction for a long period of time.
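Before looking at the more specific areas below, here's a minimal sketch of what hosting the workflow runtime looks like. InvoiceApprovalWorkflow is an assumed workflow class of my own, not something WWF provides.

using System;
using System.Threading;
using System.Workflow.Runtime;

class WorkflowHost
{
    static void Main()
    {
        using (WorkflowRuntime runtime = new WorkflowRuntime())
        {
            AutoResetEvent finished = new AutoResetEvent(false);
            runtime.WorkflowCompleted += delegate { finished.Set(); };
            runtime.WorkflowTerminated += delegate { finished.Set(); };

            runtime.StartRuntime();

            // InvoiceApprovalWorkflow is a hypothetical workflow type defined elsewhere.
            WorkflowInstance instance = runtime.CreateWorkflow(typeof(InvoiceApprovalWorkflow));
            instance.Start();

            finished.WaitOne();
        }
    }
}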

Rules

  1. The rules and workflow data can be specified in an XML based format which WWF compiles into code, making it very quick to execute at runtime. The rules files can either be created manually or modified through a rules designer (a rough sketch of loading and evaluating such a rule set follows this list).
  2. Very feature rich rules can be created e.g. if you have access to the invoice entity you could say if Invoice.LineItems.Count > 1 then HasMultipleLineItems, and then write control of flow in your workflow using this rule.
  3. Support for forward chaining of rules also means that you don't need to be so concerned with the order or priority in which you specify rules. If a rule further down a list of rules causes fields to change which previous rules relied upon, those previous rules will be re-evaluated. You can override this for specific rules by specifying a priority and indicating that you don’t want any forward chaining to occur.
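As a rough sketch only, this is the sort of code involved in loading a rule set authored in the XML (.rules) format and evaluating it against a business entity such as the Invoice used in the example above. The Invoice class, file name and rule set name are my own assumptions for illustration, not part of WWF.

using System.Workflow.Activities.Rules;
using System.Workflow.ComponentModel.Serialization;
using System.Xml;

public static void EvaluateInvoiceRules(Invoice invoice)
{
    RuleDefinitions definitions;
    using (XmlTextReader reader = new XmlTextReader("InvoiceRules.rules"))
    {
        definitions = (RuleDefinitions)new WorkflowMarkupSerializer().Deserialize(reader);
    }

    RuleSet ruleSet = definitions.RuleSets["InvoiceApproval"];
    RuleValidation validation = new RuleValidation(typeof(Invoice), null);
    RuleExecution execution = new RuleExecution(validation, invoice);

    ruleSet.Execute(execution); // forward chaining re-evaluates dependent rules automatically
}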

Persistence of workflow

  1. Objects associated with the workflow are serialized and de-serialised seamlessly. E.g. an invoice workflow object may require references to the invoice itself, and perhaps related purchase orders.
  2. Handles the persistence of workflows in the database or any other data store as required. Given the complex workflows you can create, this support coming out of the box is a big plus. Not only is the workflow itself persisted, i.e. the current state machine workflow, but so are all the associated rules and rule sets that the workflow contains.
  3. Workflow versioning / dynamic workflow updates - Versioning is handled well, i.e. as you evolve your workflows, add new activities, and create new or modify existing rules, multiple entities can exist which are associated with older and newer workflows. Conversely, with very long running workflows, e.g. when a process takes a considerable time to complete, you may want to change an existing workflow that is already in flight; this is also supported through dynamic updates.

Integration

  1. Publishing of workflows – Workflows can be published as web services, which provide very powerful integration features to third party systems. This enables applications to call into the workflow and take appropriate actions e.g. approve an invoice.
  2. Workflow chaining - Allows you to chain workflows together i.e. call one entirely new workflow from within an activity of another workflow.
  3. Activities within workflows can call into web services, which provide easy integration points into many different ERP systems.

State machine workflows

  1. Support for complex state machines - Support for recursive composition of states effectively means states can contain other states, i.e. any events defined by the outer state are available while the workflow is in any of the inner states. This allows more elegant state machines to be designed which don’t contain redundant activities, e.g. for an invoice a Cancel activity could be defined in a single outer state rather than in every state from which the invoice could be cancelled.
  2. Visibility of permissible events - The framework provides support within state machine workflows to easily see which events can be called when the workflow is in a specific state. It's easy to see how this would be useful, e.g. for invoices, whether to display an action button such as "Approve" could be driven by whether the "Approve" event is currently available. This would also let external components know which events they could call through a web service interface.


Using WWF should allow you more time to concentrate on the development of the business logic and also reduce the risks that are inherent in building a generic workflow framework.

Thursday, August 16, 2007

Continuous Integration – Using integration queues

One of the long standing issues we had with our continuous integration project was the lack of any concurrency control when multiple projects were executing. Although there was some support for this through custom plug-ins to CCNet, it wasn’t well integrated within the core code, i.e. you couldn’t easily see through CCTray which projects were executing and which projects were pending, waiting on other projects to complete.

I managed to successfully integrate this new release with our project today, which means, for example, that when our unit test project is building, the core .NET build project can no longer execute; previously this could cause all components to be removed whilst tests were running :( We have quite a few CCNet projects with a few dependencies so this is very useful. I’ve included the project dependencies in the diagram below




The dependencies shown in the diagram are not true UML dependencies, they simply show the project dependencies which trigger builds to occur i.e. if the Sprinter3.6.4 database project performs a new successful build, it will trigger the web app project to build.

If the diagram showed true UML dependencies then there would be a dependency between the .NET assemblies project and the Sprinter3.6.4 database, because the .NET components depend on the schema of the database. However the .NET code doesn’t need to be rebuilt if the database changes; only the component or web application unit tests need to be re-executed.

Friday, May 18, 2007

Debugging - Exception stack traces are not always detailed enough

We often use the Microsoft Exception Management Application Block (MEMAB) to publish unhandled exceptions in our applications. This provides a detailed log including all inner exceptions contained within the outermost exception. For each exception the message and stack trace are output. For the innermost exception the stack trace usually contains the actual method which caused the exception.

Although well written applications should ideally contain small reusable methods of perhaps no more than 20 lines, this is often not the case. When methods run to many lines, even with a stack trace it is sometimes very difficult or impossible to pinpoint the exact line which is causing the exception. To find the exact line, the best course of action is to take a dump. You can find out how to do this for a .NET application in my article http://msdn.microsoft.com/msdnmag/issues/05/07/Debugging under “Managed First-Chance Exception”. Once you have a dump you can usually find the line of code on the production server, without even using debugging symbols, using the techniques highlighted below.

Assuming you have a System.NullReferenceException thrown from a .NET component, you should trap access violation exceptions (the av exception code) and take a full dump when this type of exception occurs. E.g. assume the MEMAB logs the following exception info


Exception Type: System.NullReferenceException
Message: Object reference not set to an instance of an object.
TargetSite: Void Tranmit.Common.Interfaces.IXmlSerialisable.LoadFromNode(System.Xml.XmlNode, System.Collections.Hashtable)


We can find out the exact location in LoadFromNode by loading the full dump file in windbg then executing

0:000> !u @eip
Will print '>>> ' at address: 04a34312
Normal JIT generated code
[DEFAULT] [hasThis] Void Tranmit.Sprinter.AccountCodeCollection.Tranmit.Common.Interfaces.IXmlSerialisable.LoadFromNode(Class System.Xml.XmlNode,Class System.Collections.Hashtable)
Begin 04a34168, size 25b
04a34168 55 push ebp
04a34169 8bec mov ebp,esp

04a342de ff15381ea204 call dword ptr [04a21e38] (Tranmit.Sprinter.AccountCode.ExtractIdentityFromNodeAsString)
04a342e4 8945d8 mov [ebp-0x28],eax
04a342e7 b9241fa204 mov ecx,0x4a21f24 (MT: Tranmit.Sprinter.HistoryItem)
04a342ec e827ddf8fb call 009c2018
mscorwks.pdb not exist
Use alternate method which may not work.
04a342f1 8bf0 mov esi,eax
04a342f3 8b45e8 mov eax,[ebp-0x18]
04a342f6 8945c8 mov [ebp-0x38],eax
04a342f9 8b157cc31202 mov edx,[0212c37c] ("The level {0} account code {1} could not be found")
04a342ff 8955cc mov [ebp-0x34],edx
04a34302 b978afba79 mov ecx,0x79baaf78 (MT: System.Int32)
04a34307 e80cddf8fb call 009c2018
mscorwks.pdb not exist
Use alternate method which may not work.


04a3430c 8945d0 mov [ebp-0x30],eax
04a3430f 8b4ddc mov ecx,[ebp-0x24]
>>> 04a34312 8b01 mov eax,[ecx]
04a34314 8b400c mov eax,[eax+0xc]
04a34317 8b803c040000 mov eax,[eax+0x43c]


Reviewing the source code we can see

String accountCodeIdentity = Tranmit.Sprinter.AccountCode.ExtractIdentityFromNodeAsString(accountCodeNode);
historyItems.Add(new HistoryItem(this,
    String.Format("The level {0} account code {1} could not be found", accountCode.Level, accountCodeIdentity),
    DocumentType.SupplierInvoiceLineSplit));


Although we have no symbols loaded on the production server we can be pretty sure that the line

04a34302 b978afba79 mov ecx,0x79baaf78 (MT: System.Int32)

refers to the reference to the Int32 property Level on the accountCode object within the String.Format call. Reviewing the code surrounding this line we can see how the accountCode object could be a null reference, and proceed to diagnose the issue further.

Friday, January 26, 2007

Code Reviewing - Improving our process

We have implemented a code review process over the past 3 years which seems to work reasonably well. This helps considerably to reduce the number of defects QA, and ultimately the customer, will see. However it’s only recently that I’ve started to analyse our current process with a view to improving it.

One of my biggest concerns is that we find a lot of non-functional defects and not many functional defects in code review. Whilst the non-functional defects are important in terms of helping with code maintainability, functional defects should be the top priority of any code review, as these defects usually manifest themselves as issues which would be visible to a customer.

I have recently read a copy of “Best Kept Secrets of Peer Code Review”, which is freely available from Smart Bear Software at http://smartbearsoftware.com. It provides useful techniques which can be used to improve your code review process, based on real life case studies. Although the book does of course lean towards using Smart Bear's Code Collaborator product to help with the code review process, it has more than enough useful content to warrant reading even if you’re not likely to purchase those tools.

Some points I picked up from the book which I’m going to look at in improving our process are

  1. Initial quick scan – The initial scan can be very important in helping find defects throughout the code. There is a negative correlation between the time the reviewer spends on the first scan and the time it subsequently takes to detect defects: the more time the reviewer takes on their first scan, the faster the reviewer will find defects afterwards. If you don’t perform a preliminary scan, the first issues you identify may focus on the wrong areas. With a quick scan you can identify the areas most likely to contain issues and then address each one, with a good chance you’ve selected the right code to study.
  2. Review for at most one hour at a time – The importance of reviewing in short time periods shouldn’t be underestimated. We should try to avoid large reviews and split them up into smaller reviews where possible, e.g. you could certainly split up reviews of unit test code versus core code. Of course this isn’t an excuse to submit code with known defects for review.
  3. Omissions are the hardest defects to find – It’s easy to find issues in the code you’re reviewing; it’s much harder to find defects in code which isn’t there. Having a review checklist alleviates this issue somewhat by reminding the reviewer to look for code which may not be present.
  4. Defect density and author preparation – Providing some preparatory comments before code is submitted for review does seem to have an impact on defect density. The key point here is that by annotating certain areas of the code, developers are forced to do some kind of self review, which obviously increases quality. There is a danger that by preparing comments the developer could disarm the reviewer’s ability to criticise, i.e. the reviewer is no longer thinking afresh. However, usually the act of preparing these kinds of comments makes developers think through their logic and hence write better code.
  5. Leverage the feedback you’re getting - Improve your ability and make yourself a better developer by reviewing the points you commonly receive. Code review points should be learned so you reduce the number of defects over time. A useful exercise is to put the comments you receive into your own personal checklist which you visit whenever performing a code review. Any points which aren’t covered on the global checklist should be added to it.
  6. Developer and reviewer joint responsibility – Defects found in code review are good. A defect found in review is a defect which never gets to QA or the customer. Developer and reviewer are a team and together plan to produce excellent code in all respects. The back and forth of coding and finding defects is not one developer punishing another, it’s simply a process in which two people can develop software of far greater quality than either could do single-handedly. The more defects the reviewer finds in the developer's code, the better.
  7. Checklist – Use checklists as an aid to identify common defects. The most effective way to build and maintain the checklist is to match defects found during review to the associated checklist items. Items that turn up many defects should be kept. Defects that aren’t associated with any checklist item should be reviewed periodically; usually there are categorical trends in your defects, so turn each type of defect into a checklist item that would cause the developer to find it next time.

Tuesday, January 09, 2007

Design - Transitioning from conceptual domain to physical data model

When creating the design for new requirements we often design a domain model of key entities and then, when we're happy with it, we migrate this domain model into a logical and then a physical data model. Although we should move to the logical model first and then transition to the physical data model, we often go straight to a physical data model.

We currently perform this process manually, however Enterprise Architect (the UML modelling tool) has great support for transforming a domain model into a data model using model driven architecture (MDA) templates. It simply reads the entities and the multiplicity relationships between them and generates an appropriate data model. Many-to-many relationships aren't handled by the built-in templates, however you can easily customise the transformation template using an editor, e.g. you can specify

%if attTag:"columnLength" != ""%
length=%qt%%attTag:"columnLength"%%qt%
%elseIf classTag:"columnLength" != ""%
length=%qt%%classTag:"columnLength"%%qt%
%endIf%

to use tagged values to denote the length of a character string in the domain model. Although you shouldn't be too concerned with lengths of strings in the domain model, if you want to specify this information in a tagged value you can do so at this point, and then ensure it's carried over to the data model, e.g. to create a varchar(100) field for a UserName attribute of a User entity. Sparx support are currently developing these templates further, and the latest version of the template handles many-to-many relationships by creating a link table named in the format <Table1>JoinTo<Table2>, however you could customise this to fit your own database standards.

It would be great if through automation we could eventually create a data access layer from the conceptual model using the following steps


  1. Create the conceptual domain model. A business analyst would ideally create the first draft
  2. Add attributes as required to turn this model into a logical data model
  3. Transform the logical data model into a physical data model

We actually use LLBLGen (an object relational mapping tool) to generate a data access / business object layer from the database, so we could continue this process and generate the LLBLGen classes by following some additional steps

  1. Create the database from the physical data model.
  2. Create the LLBLGen classes from the database directly.

Both of these steps can be automated: Enterprise Architect exposes an automation API, and LLBLGen can also be automated to forward engineer the generated code. In this way we could go from a logical model to a generated database and then to an auto generated business object / data access layer automatically.
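As an illustrative starting point only, this is the sort of code involved in opening an EA repository through its COM automation API (Interop.EA.dll) before scripting the rest of the chain; the file path is invented and the transformation and DDL generation steps would still need to be driven from here.

EA.Repository repository = new EA.Repository();

if (repository.OpenFile(@"C:\Models\DomainModel.eap"))
{
    // Walk the root models as a sanity check before kicking off any transformations.
    for (short i = 0; i < repository.Models.Count; i++)
    {
        EA.Package model = (EA.Package)repository.Models.GetAt(i);
        Console.WriteLine("Model: " + model.Name);
    }

    repository.Exit();
}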