Sunday 25 March 2007

Exception handling: Catch, throw or let it go?

When you think about it, structured exception handling is a powerful yet reasonably straightforward mechanism for...well, for handling exceptions in a structured way. You call a method and it either throws an exception or it doesn't. If it does, you have three choices:

  • Rethrow the exception
  • Throw a different exception
  • Catch the exception and continue processing
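
A quick sketch of the three options (DoWork and the exception types here are illustrative, not from any real library):

```csharp
public void RethrowExample()
{
    try { DoWork(); }
    catch (InvalidOperationException) {
        //Tidy up here, then pass the same exception up the stack.
        throw;
    }
}

public void WrapExample()
{
    try { DoWork(); }
    catch (InvalidOperationException ex) {
        //Throw a different, more meaningful exception, keeping the
        //original as the InnerException so no detail is lost.
        throw new ApplicationException("Processing failed", ex);
    }
}

public void ContinueExample()
{
    try { DoWork(); }
    catch (InvalidOperationException) {
        //Catch the exception and continue processing - only safe if
        //you know how to recover from it.
    }
}
```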
There are no chiselled, marble tablets defining the universal laws of exception handling. There is no perfect strategy for dealing with every possible exceptional scenario. However, I have seen enough poorly implemented error handling strategies to at least recommend a few guidelines here.

Be careful what you catch
Any catch blocks following Catch(System.Exception) will never execute because System.Exception is the base class for every other exception type. Luckily, the compiler does not allow this anyway and raises a compile-time error if you decide to ignore this advice.

Only catch an exception if you expected it and you know what to do about it. Never use the construct shown in Snippet 2 in a class library. The way an exception is handled should always be the responsibility of the calling application.

A System.Exception should never be caught unless it is rethrown. This rule should always be followed for class libraries. Within an application, the application policy will determine whether a System.Exception should be rethrown.

Be careful what you throw
Never throw System.Exception; take care when throwing any other exception base class types. If you are defining your own exceptions, then think carefully about their base classes. For example, if you derive an exception type from ArgumentException, then any code that already catches ArgumentException will also catch your new exception type. This may or may not be a problem, depending on how you (or someone else) are handling the exception.

All custom exception type names should end with 'Exception', e.g. NumberTooLargeException, NameNotFoundException etc.
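
To illustrate both points (NumberTooLargeException is a hypothetical custom type, not a framework class):

```csharp
//A custom exception, named with the 'Exception' suffix and derived
//from ArgumentException.
public class NumberTooLargeException : ArgumentException
{
    public NumberTooLargeException(string message, string paramName)
        : base(message, paramName) { }
}

public void CallerExample()
{
    try
    {
        throw new NumberTooLargeException("Value exceeds the limit", "value");
    }
    catch (ArgumentException ex)
    {
        //This block also catches NumberTooLargeException because it
        //derives from ArgumentException - which may or may not be what
        //the original author of this handler intended.
        Console.WriteLine(ex.Message);
    }
}
```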

Use the recovery position
So you have a class library and an exception occurs. You want to leave the responsibility of handling the exception to the calling application but you also want to make sure that your system is not left in an unrecoverable state. So what do you do (please see Snippet 1)?

 Snippet 1

 public void LibraryMethod()
 {
     try {
         //Method logic here
     }
     catch {
         //Handle system consistency here
         throw;
     }
 }

Basically, all possible exceptions are caught, the system's state is verified and the same exception that originally occurred is rethrown. It may be appropriate to restore consistency in a Finally block, depending on the context of your work.

Please note that using throw ex; instead of throw; causes the stack trace to be reset here.
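
A sketch of the difference, annotated in place:

```csharp
try
{
    //Method logic here
}
catch (Exception ex)
{
    //Handle system consistency here

    throw;       //Preserves the original stack trace.
    //throw ex;  //Would reset the stack trace to this point,
                 //hiding where the exception actually occurred.
}
```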

Avoid boilerplate code
It's easy to get into the habit of adding Try/Catch/(Finally) blocks to each method that you write but this really is not the way to approach structured exception handling. I have worked on projects where the following pattern is used with alarming regularity:

 Snippet 2

 public void ProcessData(int firstNum, int secondNum)
 {
     try {
         //Method logic here
     }
     catch (Exception ex) {
         //Log error here
     }
 }

So any exception is caught, logged and discarded. The application continues (most probably in an inconsistent state) and the user/developer remains blissfully unaware until they make a conscious decision to check the logging device and find entries representing errors.
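
If logging really is required at this level, a less harmful variation (sketched below; the logging call is illustrative) records the exception and then rethrows it, so the caller is still informed:

```csharp
public void ProcessData(int firstNum, int secondNum)
{
    try
    {
        //Method logic here
    }
    catch (Exception ex)
    {
        //Log the error, then let the caller decide what to do with it.
        Console.WriteLine(ex);
        throw;
    }
}
```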

Validate parameters
Where you are creating a library that will be used by other developers, it is prudent to validate the parameters of all public and protected methods. This has a number of benefits:
  • The caller is informed if invalid parameters are supplied
  • The code is more likely to run consistently with valid data
  • Problems are identified quickly, less code is executed with invalid data
  • Effort to rollback to a consistent state may be avoided
If a problem is detected, throw an exception that derives from ArgumentException; the most notable being ArgumentNullException and ArgumentOutOfRangeException. If these are not appropriate then consider throwing ArgumentException itself or a custom exception derived from it. Snippet 3 illustrates the process.

 Snippet 3

 public int DivideNumbers(int numerator, int denominator)
 {
     //Validate denominator to ensure non-zero value.
     if (denominator == 0)
     {
         throw new ArgumentOutOfRangeException(
             "denominator", "Denominator cannot be zero");
     }

     return numerator/denominator;
 }

Be aware that arguments passed to methods by reference may be changed after validation has occurred (e.g. in a multi-threaded application). In these cases it may be wise to create a copy of the argument(s) and operate exclusively on this copy within the method.
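
As a sketch of the defensive copy (the method and its array parameter are illustrative):

```csharp
public int SumValues(int[] values)
{
    if (values == null)
    {
        throw new ArgumentNullException("values");
    }

    //Copy the argument so that a concurrent change to the caller's
    //array cannot invalidate the data after validation has passed.
    int[] localValues = (int[])values.Clone();

    int total = 0;
    foreach (int value in localValues)
    {
        total += value;
    }
    return total;
}
```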

In the example shown, the denominator is validated against a value of zero. An alternative here would be to use a Catch block to trap this occurrence. The actual method used depends on how often the situation is likely to occur. The validation as shown is fine but this check will be performed on each pass through the method. If the possibility of denominator being zero is rare then it may be more prudent to use a Catch block and rethrow the exception for improved performance.
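
One way the Catch-based version might look; the runtime raises DivideByZeroException for integer division by zero, and this sketch wraps it so callers still see the same exception type as in Snippet 3:

```csharp
public int DivideNumbers(int numerator, int denominator)
{
    try
    {
        return numerator / denominator;
    }
    catch (DivideByZeroException)
    {
        //Only pay the cost of handling in the rare failure case.
        throw new ArgumentOutOfRangeException(
            "denominator", "Denominator cannot be zero");
    }
}
```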

And Finally...
A Finally block is guaranteed to run whether an exception occurs or not. It is therefore the logical place to add clean-up code. Beware that any exceptions raised within a Finally block will hide details of any exception that was raised in the corresponding Try block so try to avoid them.
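
A typical use, sketched here with a file reader (the file name is illustrative; System.IO is required):

```csharp
public void ReadData()
{
    StreamReader reader = null;
    try
    {
        reader = new StreamReader("data.txt");
        //Process the file here
    }
    finally
    {
        //Runs whether or not an exception occurred above.
        if (reader != null)
        {
            reader.Close();
        }
    }
}
```

For types that implement IDisposable, the C# using statement is shorthand for exactly this pattern.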

Structured exception handling provides a consistent framework for dealing with exceptional situations. Taking an umbrella with you on a winter's day stroll adds a little overhead to your journey but it also prevents you from getting soaked if the heavens open. Handle exceptions in a similar way; prepare for what may happen before you start your development journey and you will never find yourself without an umbrella if something unexpected happens.

Saturday 24 March 2007

Vital statistics

As with many software houses, we have tried various ways of measuring and monitoring the 'quality' of the code we produce. We are not blacksmiths forging ornate, wrought iron gates that welcome visitors to the stately homes that they protect. We are not carpenters creating polished dining tables that provide the social focus for a very special dinner party.

We write code, code that is often compiled into some indecipherable intermediate language and then laid at the mercy of a virtual machine. It cannot be directly admired by its users; even when displaying information, it renders results that are presented on a screen that has been designed by a graphics designer or business analyst. Often it displays nothing at all, quite happy to sit far from the limelight in a cold, dark server room that provides more comfort to mushrooms than human beings.

Despite its intangible nature, it is what we produce and it defines and differentiates us as solution providers. It is every bit as important to us as the gates and tables are to the blacksmith and the carpenter. Few people would argue that quality of code is important but in order to measure it properly we first need a suitable definition.

Of course there is no absolute, irrefutable definition of 'software quality', and there are as many opinions as there are IT professionals (most probably more). Many runners jostle at the starting line in the race for software excellence: coding standards, performance, bugs per developer...even cyclomatic complexity! My suggestion is to judge code on its simplicity.

Simple code is easier to maintain. The problem with code maintenance is that it is usually done a long time after the original code was written, and by somebody who didn't write it. Code maintenance often occurs after software has been released, when the cost of change is highest.



Simple code is easier to understand. Being frustrated for hours by a design pattern implementation or sample code from an anonymous expert on the internet that doesn't quite work as expected is usually an indication that you should try something else. If you don't understand it now then how will it appear in 6 months time to someone else that has to fix a critical bug, immediately, with no previous knowledge of your code or the system it belongs to?

Simple code is self describing. It does not require copious comments or armies of related documentation fighting to remain up to date with each change that is made to the associated software. Of course, comments are a good thing but only if they accurately describe the code they represent; this is often not the case. It could be argued that no documentation is better than inaccurate or outdated documents where false assumptions are easily made. Either way, the simpler the code, the less need there is to refer to supporting documentation.

Simple code requires less effort to produce. The COCOMO II project provides comprehensive analysis of the factors that contribute to increased effort in software implementation. Platform experience, appropriate software tools, multisite development and time constraints, along with many others, all play second fiddle to product complexity in terms of increasing effort.

A good story can be enjoyed even if scribbled in crayon on crumpled paper. However, print in a legible font, on good quality paper, add page numbers and chapters, a preface and an index, sprinkle with artistic illustrations and suddenly it springs to life. In a similar way, simple code is enhanced by good coding standards, proper indentation, sufficient comments and a pragmatic design. In fact, simple code is defined by these attributes rather than merely enhanced by them.

As developers, we often strive for the perfect design and implementation. Sometimes it is the ability to compromise effectively that produces the best results; it is left to the reader to decide whether worse can be better.

In any event, keep it simple and the rest will follow - and you won't find yourself cursing in 6 months time when you have to refactor your optimised code :-)

Thursday 22 March 2007

The new SharePoint...smaller steps, greater strides

Microsoft's new SharePoint offering, MOSS 2007, offers many new and improved features that complement and extend previous versions of the product. Rather than clumsily repeating the Microsoft marketing spiel here I would like to discuss a less tangible, but equally important aspect of this new framework - incremental development.

Collaboration is the key here, it is one of the principal benefits that a SharePoint portal provides and promotes. Ironically, the SharePoint framework also supports a collaborative and incremental approach to development that offers an attractive, lower risk alternative to up-front, big budget project implementations.

Slip into the cautious shoes of an IT manager looking for the ideal provider for his new corporate extranet. You are excited by the prospect of sharing, managing and interacting with information that literally sits at the end of your fingertips, yet troubled by the memory of past projects that never quite delivered what they promised. You have to justify large, up-front commitments to your project sponsors whilst convincing yourself that the arbitrary 30% cost increase for a fixed price quotation is a small price to pay for supposedly heaping all of the risk on your provider. Or do you?

Agile advocates will tell you that lightweight, throwaway prototyping is the way to go; RUP followers suggest a comprehensive set of document artefacts - your sponsors just want to see results.

SharePoint enables customers and providers to work closely together; building prototypes that can be continually reviewed and refined in small steps. The system being built is under constant scrutiny and changes can be made in real time. This removes the need for large, verbose functional specifications that are often misinterpreted and constantly changed. Rather, time is spent implementing the actual system; each functional area may be refined until an agreed point is reached before moving onto the next requirement.



Your sponsors are happier; with a small, initial investment they can see immediate results. As confidence is gained, further investments may be made. Your providers are happier; they are keen to produce good work to ensure future investments. A lower risk approach is common to both sponsors and providers, and the burden of missing fixed cost schedules and costings is considerably reduced.

At the end of each discrete iteration, a decision is made to continue with the project or call an end to the proceedings.

Of course, the whole concept of iterative development has existed for a good while and may be attributed to both successful and unsuccessful projects. For once, I have to hold up my hands and say that SharePoint (in its new 2007 livery) really has given me the opportunity to offer our customers a credible and advantageous alternative to fixed cost deliverables.

Please take the first, low risk step and give it a try - you may find that the shoe really does fit :-)