Sunday 18 November 2007

iTunes software piracy incentives

People steal software for many reasons. Of course it's generally easier to copy and share your mp3 collection than it is to walk out of HMV without paying for a bagful of CDs. Mailing application serial numbers around is certainly convenient for the morally corruptible amongst us and sharing account details for online systems is a well-established practice.

So where does iTunes fit into this battle for moral righteousness? It only takes a casual browse around the iTunes store to see how many people think their pricing structure may help to promote piracy. Let's look at an example:

Lost series 1:

  • Pay £32.99 on iTunes for video quality that is inferior to standard DVD (PAL resolution)
  • Pay £34.98 on Amazon for the full DVD boxed set

So for £1.99 extra you get a nicely boxed DVD set for your bookshelf that you can watch on a PC, Mac, TV or even projector with no scaling problems whatsoever. Additionally, there is lots of free software that will convert your Lost episodes to a format that can be viewed on any video capable iPod.

There is, of course, convenience to consider. With a few button clicks and a decent broadband connection you can download episodes of Lost directly to your computer and quickly switch them across to your iPod via FireWire or USB 2.0. It still seems a ridiculously high price to pay in my opinion when I can get next day delivery from Amazon for a 'real' product. This seems to be the consensus among the general public too, according to the comments that accompany many of the downloads that iTunes offer.

And there's more! Video downloads from iTunes cost considerably more in the UK than in the US. Alarmingly, this same download costs just $34.99 in the US - meaning we poor souls in the UK pay almost double for an identical product. Come on Apple, be fair!

It's also bizarre that you are not able to make legitimate backups of your downloaded video to DVD. It seems that the very steps that iTunes are making to try and prevent piracy are alienating many of their potential customers.

Whilst these issues will not convert me to a life of piracy, I will certainly not be using the iTunes service unless some serious changes are made.

Friday 9 November 2007

Temperature and double standards

A friend sent me a link to a funny but alarming article in The Register today regarding a young woman's invalid scratchcard claim. Basically each scratchcard has a temperature printed on it; the idea is to scratch away a small window to reveal a number that is hidden beneath it. If the uncovered number is less than the printed temperature then the card owner wins a prize. The temperature on the woman's card was -8 degrees and the number she uncovered was -6; she was not happy when Camelot refused to acknowledge her claim. She was reported to say "I phoned Camelot and they fobbed me off with some story that -6 is higher, not lower, than -8, but I'm not having it."

Being an Englishman, I find hot summer days a rare experience, and we tend to report the temperature in Fahrenheit, for example "It must be 90 degrees today, quick get me another beer!" Yet on a cold, frosty morning, as we're scraping the ice off the car windows, we're equally likely to exclaim that it must be at least -5, which of course is measured in centigrade.

Perhaps 32 degrees centigrade doesn't seem quite as hot as 90 degrees Fahrenheit, and maybe 23 Fahrenheit feels a lot warmer than -5 centigrade? I guess turning up the central heating always feels more justified when it's zero outside rather than 32 degrees.
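
For anyone who wants to check the arithmetic, the conversions quoted above follow the usual formulae; here is a quick, purely illustrative C# sketch:

//Illustrative conversions between the two scales
static double ToFahrenheit(double centigrade)
{
    return (centigrade * 9.0 / 5.0) + 32.0;
}

static double ToCentigrade(double fahrenheit)
{
    return (fahrenheit - 32.0) * 5.0 / 9.0;
}

//ToFahrenheit(32) returns 89.6 and ToCentigrade(23) returns -5,
//so the comparisons above do stack up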

:-)

Thursday 18 October 2007

Trial software on trial

I'm sure that just like me you often need a bit of software to help you out with a particular task. Last week I wanted something that would help me find album cover art for my MP3 collection; it just makes flipping through music with iTunes a better experience.

A few minutes with Google and I found a likely candidate; after navigating to the parent site I was asked to register my details before downloading the utility. This annoys me! After all, I took the time to find the utility in the first place and registered an interest by wanting to download a trial version. If the software did what I wanted then I would have been more than happy to pay the license fee. If it didn't then no amount of email marketing would have changed my mind. The decision is made in the first few minutes and having to spend a couple of those minutes registering my details (or Mickey Mouse's if I'm feeling particularly mischievous) does not get the vendor off to a good start.

It's a little like walking into a Virgin Megastore and not being able to look around without first filling in an application form - great marketing...for HMV!

So site registration is not my favourite exercise but it registers a poor second to functionally restricted software. Why on Earth would anyone want to provide trial software that is not 100% complete in every way? First impressions count and if my first view of an application leaves me thinking that it could do more then it is highly unlikely that I'll pay good money for it. Knowing that additional features exist in the full version does not sway the argument.

It's a little like taking a new car out for the day on a test drive only to find that the performance has been limited or the air conditioning disabled. It doesn't create a good impression.

So please software vendors, wherever you are. Don't make us register with your site, don't restrict your trial downloads (except by time) and trust your development and design skills to win us over. I, for one, would be a happier shopper and a happy shopper spends more than a disgruntled one.

:-)

Wednesday 17 October 2007

Part 4: Implicit types

With C#2, a variable's type must be explicitly declared; some simple examples are:

int clientAge;
string clientName = "Matthew";
DateTime currentDate = DateTime.Now;

Notice that we do not have to specify a value for the variable at the point at which it is declared; in the example, clientAge is left unassigned (a field declared this way would default to zero, but a local variable must be assigned before it is read). In C#3, variables may be declared with the var keyword, in which case the compiler infers the variable type from the value it has been assigned...but it does this at compile time!

So var is not a variant (it's just a keyword that's not named particularly well) and it's not an object type either.
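
A minimal sketch of this point: a var variable is every bit as strongly typed as an explicitly declared one.

//Both variables are strongly typed at compile time
var total = 10;            //total is inferred as int
var greeting = "Hello";    //greeting is inferred as string

//This will not compile, the following error will be reported:
//Cannot implicitly convert type 'string' to 'int'
//total = "ten";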

The following constraints apply to the use of the var keyword:

  • var can only be used within a method and not at the class level
  • Any variable created with var must have a value assigned as part of the declaration

Please see the following code for examples:

class VarSamples
{
    //This will not compile, the following error will be reported:
    //The contextual keyword 'var' may only
    //appear within a local variable declaration
    var invalidDeclaration;

    public void ImplicitTypes()
    {
        //The compiler will create postCode as a string variable
        var postCode = "B12 5HP";

        //The compiler will create houseNumber as an integer variable
        var houseNumber = 8;

        //This will not compile, the following error will be reported
        //Implicitly-typed local variables must be initialized
        var streetName;

        //Perform operations with var variables.
        int nextHouseNumber = houseNumber++;
        string formattedCode = "Postcode: " + postCode;
    }
}

The invalidDeclaration variable will not compile because it has been declared at the class level. This is not allowed for variables declared as var.

The streetName variable will not compile because any variable declared with var must be initialised when it is declared. Otherwise, the compiler is not able to determine an actual type for the variable when it is compiled.

So why do we need var at all?

The var keyword was introduced to support anonymous types that are also a feature of C#3. Anonymous types are described in the next article.
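
Although anonymous types are the main motivation, var can also trim a little verbosity from longer declarations today. A small, illustrative example:

//Explicit declaration (the C#2 way)
Dictionary<string, List<int>> scoresByName =
    new Dictionary<string, List<int>>();

//Implicit declaration (C#3) - the inferred type is identical
var scores = new Dictionary<string, List<int>>();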

Part 3: Initialising objects

Consider the following simple class definition:

//Simple client class used to demonstrate
//C#3 object initialisation
class Client
{
    //Read-write property for the client's name
    public string Name { get; set; }

    //Read-write property for the client's age
    public int Age { get; set; }

    //Read-write property for the client's height
    public double Height { get; set; }

    //Read only property that is True for existing
    //clients and False for new clients
    public bool IsCurrent { get; private set; }
}

A constructor has not been explicitly created for this class so the compiler will generate a default, parameterless one for us. This means that an object can be created and properties assigned in the way that we are currently used to with C#2:

//Create a client object using the default constructor
Client firstClient = new Client();

//Assign values to the class properties that have public set methods
firstClient.Name = "David";
firstClient.Age = 32;
firstClient.Height = 124;

However, with C#3, we are now able to create an object using any of the following methods:

//Construct the object by supplying the client Name
Client secondClient =
    new Client { Name = "Matthew" };

//Construct the object by supplying
//the client Name & Age
Client thirdClient = 
    new Client { Name = "Sarah", Age = 21 };

//Construct the object by supplying
//the client Name & Height
Client fourthClient =
    new Client { Name = "Daniel", Height = 131 };

//Construct the object by supplying
//the client Height only
Client fifthClient = new Client { Height = 108 };

This is essentially identical to creating a client object with the default constructor and then calling each property set method as appropriate. Looking at the disassembled code shows us what is actually happening for these client objects.

Client <>g__initLocal0 = new Client();
<>g__initLocal0.Name = "Matthew";
Client secondClient = <>g__initLocal0;

Client <>g__initLocal1 = new Client();
<>g__initLocal1.Name = "Sarah";
<>g__initLocal1.Age = 0x15;
Client thirdClient = <>g__initLocal1;

Client <>g__initLocal2 = new Client();
<>g__initLocal2.Name = "Daniel";
<>g__initLocal2.Height = 131.0;
Client fourthClient = <>g__initLocal2;

Client <>g__initLocal3 = new Client();
<>g__initLocal3.Height = 108.0;
Client fifthClient = <>g__initLocal3;

As the code above shows, each client is constructed using the default constructor and then the property set methods are called as necessary. Of course, we cannot specify a value for IsCurrent when creating a client object because the set accessor is marked private (as shown in the first code snippet). Trying to do so will generate a compile time error of something like: The property or indexer ‘Blog.Client.IsCurrent’ cannot be used in this context because the set accessor is inaccessible.
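
For completeness, the offending line would look something like this (an illustrative snippet that will not compile):

//This will not compile, the following error will be reported:
//The property or indexer 'Blog.Client.IsCurrent' cannot be used
//in this context because the set accessor is inaccessible
Client invalidClient = new Client { IsCurrent = true };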

Part 4: Implicit types

Part 2: Automatically implemented properties

C#2 provides the ability to create properties that encapsulate class member variables; the following code provides a simple example:

//Private member variable to store the client name
private string name;

//Read-write property for the client's name
public string Name
{
    get
    {
        return name;
    }
    set
    {
        name = value;
    }
}

The idea here is that the property prevents the class variable from being accessed directly. A property can be made read only as follows; this is simply a property that does not have a set accessor.

// Read only property that returns the client's name
public string Name
{
    get
    {
        return name;
    }
}

C#3 provides a shortcut syntax for creating class variables and associated properties; the first code example can be shortened to:

// Shortcut syntax for creating properties
//and underlying member variables
public string Name { get; set; } 

And of course, the principle of read only (or write only for that matter) properties is also supported using this new syntax, by marking the appropriate accessor as private:

// Shortcut syntax for creating properties and underlying
//member variables (this time Name can only be set within the class)
public string Name { get; private set; }

For this shortened syntax, the compiler actually generates all of the required code behind the scenes. So in effect, you end up with exactly the same code that you would have got if you had written it using the available syntax of C#2. The code below shows the compiler-generated code (as revealed by a disassembler) for the Name property; note the use of the [CompilerGenerated] attribute that decorates the code.

[CompilerGenerated]
private string <Name>k__BackingField;

public string Name
{
     [CompilerGenerated]
     get
     {
         return this.<Name>k__BackingField;
     }
    [CompilerGenerated]
    set
    {
        this.<Name>k__BackingField = value;
    }
}

Automatically implemented properties provide a way to quickly expose class member variables; we will examine this feature more in the next article.

Part 3: Initialising objects

The newbie guide to C#3, Part 1: Overview

This short series of articles gives you an overview of the new features provided by C#3. The articles are peppered with simple code examples that you can copy directly into a C#3 editor to try them out for yourself. At the end of each article, a link is provided to a standalone class that contains fully commented examples of the features that have been discussed.

The information presented here won't make you a C# guru but it will give you an appetite for the new features that have been added to the language.

Part 2: Automatically implemented properties
Part 3: Initialising objects
Part 4: Implicit types

Saturday 16 June 2007

Smaller steps, greater strides ... revisited

I often see colleagues of mine struggling to add new features to our software; just as often I see workmates struggling to fix the existing features of our solutions. Frustrated managers wonder why code that only took 2 or 3 months to write has stagnated in an endless UAT cycle with little hope of breaking free. Small change requests end up taking 3 times longer than the estimates that helped justify the change in the first place. Customers, initially used to timely releases, grow increasingly impatient as release periods are extended or bug-littered deployments are made on schedule.

Why does this happen? Optimism plays a small role in this predicament; as software developers we almost always underestimate the tasks that we are given. Pick up any Steve McConnell book and look at the graphs - it's an industry wide phenomenon. Although this plays a contributory role, it is by no means the root cause of the problem. I rarely see developers having difficulty putting new code together; I guess there's no getting away from the fact that software is easy to write but difficult to maintain. There is also the question of motivation: nobody likes to work with scrappy code; as Pragmatic Programmers we shouldn't have to live with broken windows.

So why do we write poor, complex, undocumented code when we know it will eventually come back to haunt us (or more likely someone else on the team)? Sometimes it's laziness but I think there is an inherent trait that developers possess - we are always keen to see the results of our ideas. We start with the best of intentions but as we gain momentum we see the outcome beginning to materialise. We write code more quickly, absolutely determined to come back and add the comments and write the unit tests. It's not true TDD anymore but the result is almost tangible now, just a few more lines and we'll be home and dry. Sure we have some tidy-up code to complete, exception handling needs to be incorporated and then there's the diagnostics namespace to worry about...but just a few more lines and the refactoring can begin...just a few more lines...

And so the problem begins. Before we get the chance to paint over the cracks a system bug is reported, the requirements change and a different solution is screaming for attention. We no longer have time to commit to our solution, there is no longer an end point to reach for - just boring changes and bug fixes. The momentum disappears and we move on to a newer, brighter challenge vowing never to cut corners again. Somebody else is assigned to the project, sees all the broken windows in the neighbourhood and continues to break a few of their own. Which leads us back to our frustrated managers and impatient customers.

There are many methodologies that address these issues and offer a strategy for preventing them. Personally I think the methodology we need is a simple injection of discipline. After all, if you can't be bothered to add a single function header now, will you really come back and add 100 once your class is complete? If the code seems to work OK, will you really find the time to go back and write the unit tests? Even if you do, the opportunity to confirm your understanding of the problem before the coding started disappeared when the TDD process stopped.

As with any habit, good or bad, it can take a lot of discipline and effort to stop. The same is true, in my opinion, of software development practices. Jumping ahead may seem to shorten the journey but smaller steps really do lead to greater strides, especially in a well disciplined environment.

Wednesday 6 June 2007

It takes time to make a decision

Some people describe software development as a manufacturing process. Basically you take a recipe, try it out a few times and once you are happy with the ingredients and the cooking process you write it down and pass it on to your eager coders. Over time you collect a whole book of recipes that can be easily followed, producing code that is both repeatable and reusable. Software restaurants announce ticker tape openings around the globe to serve up cheap and reliable cookie cutter software solutions. Recipe for success? Actually, past experience tells us that more often than not it is a recipe for disaster.

The manufacturing process needs to guarantee repeatable results and this is very difficult given the amount of human intervention that programming typically involves. Of course, there are now many utilities that generate, parse, check, analyse and generally prettify source code but it's still essentially a solo practice between the developer and the IDE (apologies to all you XP pair programmers out there). There is also the issue of code maintenance to consider: generated code is not the prettiest of creations and, whilst it's fine when it works, any subtle imperfections are often very difficult to isolate and rectify.

Some people describe software development as an artform. Each contributor adds their own creative interpretation of requirements, applies keyboard brushstrokes to their electronic canvas and steps back to admire yet another unique creation. No two pixel paintings are the same; programmers the world over write different code to achieve the same objective. Even individual coders write the same routines differently, with or without varying degrees of standards and procedures, many times during their illustrious careers. An artistic masterclass? Very often this approach generates code that may be inventive but is seldom practical and rarely maintainable.

The problem here is consistency; code is produced that can be difficult to maintain where the layout of each class looks very different to its neighbours. There are a number of architectural decisions and considerations that a developer has to make that exist outside of what is generally perceived as the 'system design'. Does a particular piece of logic sit in the database layer or the business layer? How are exceptions handled? Should a method be instanced or static? Do you choose an interface or an abstract base class? In reality your code is simply a string of statements, loops and conditions but the potential number of combinations is astronomical even within a relatively small code base. Coding standards and recommended practices can help to reduce the confusion, code generation can produce generic classes and components and design patterns provide some level of guidance but dividing a limitless number of possibilities by any given factor still leaves a limitless number of possibilities.

So should we strive for a repeatable and consistent, painting by numbers development process where creativity and initiative are completely removed from the programming equation? Or do we promote flexibility and encourage our development teams to express their creativity and inventiveness in new and inspiring ways? Is there some magical, middle ground that satisfies the developer's creative aspirations whilst still adhering to repeatable templates, design patterns and agreed best practises?

This blog is littered with questions but unfortunately devoid of answers; each question corresponds to at least one, and often very many, decisions. It is ironic that the decision making process is a key factor in making programming an interesting practice. It is also a substantial contributor to the problems and complexities that lurk beneath the surface of even the most trivial of solutions.

Decisions take time ... so less choice, less freedom and fewer decisions must surely facilitate a better programming environment...or do they?

Monday 30 April 2007

The wow is not yet!

My house, for all its sins, has good quality window frames that really were built to last. The underlying woodwork is preserved against the elements by a tough primer, several layers of resilient undercoat and a highly polished gloss exterior. The smooth surface hides a multitude of hidden strata; each coat of paint provides a robust foundation for the neighbouring coat above it.

Many times, the effects of driving rain, wintery gales and sub-zero temperatures have been wiped away with the stroke of a cloth and the occasional squirt of polish. However, over time, the relentless onslaught of the weather gradually erodes this painted armour. The once pristine surface begins to look aged and imperfect and the challenging maintenance cycle begins.

With care and attention, the frame can be restored but each repair introduces subtle imperfections that accumulate over time. Of course, other parts of a house are prone to this gradual degradation and even the most diligent handymen struggle to keep everything in showroom condition. Corners are cut and compromises are made to patch up a problem because of other more pressing maintenance and decorating issues. Ultimately, the patching, redecorating and restoration can no longer resolve the effects of an ever ticking clock and the window frames have to be replaced.

It is at this point that the inevitable analogy with PC based windows can be made. The question is, does Vista apply just one coat too many in the continuing search for a new glossy finish? Sure it looks pretty with its 3-dimensional application browsing and fancy widgets but look beneath the surface and perhaps the undercoat is beginning to crumble a little. On my laptop, applications run more slowly when they can actually be bothered to run at all. The entire system locks up with more regularity than ever before and refuses to reanimate itself unless rebooted. The continual need to reaffirm my identity, even as a system administrator, is annoying at best and common tools and features have been juggled around to the point of obscurity. Many of the once promised features have been omitted, leaving a pretty shell with an aging interior.

Everyone around me is busily reinstalling Windows XP as new hardware manufacturers grudgingly offer alternative operating systems.

I have always supported Microsoft in their endeavours and continue to do so but the problem of beauty only being skin deep remains. A pretty face will always turn heads but this fascination soon fades especially when there seems to be little inner depth below the surface. My expectations of Vista have far outweighed my experience and I can only hope the disappointment of this flagship product is not repeated. The wow is most definitely not now!

Tuesday 3 April 2007

Preparation is everything

During a particularly engaging World of Warcraft quest last month I decided that it was time to find a 'proper' hobby. This dawned on me as my fellow guild members started calling me old timer and logging off at 7pm on school nights. I'd always had a distant fascination with astronomy and decided that gazing at the stars in the heavens was a more intellectual alternative to gazing at the stars in Hello magazine.

Armed with my battered credit card I went online shopping and 2 weeks later my new telescope arrived with built-in GPS device, object database and goto facility. My enthusiasm was not diluted even by the prospect of a multi-lingual, 200 page instruction manual.

After a hasty assembly I cursed and scowled my way through the remaining 5 hours of daylight, hoping sunset would deliver me a cloud free northern hemisphere. My prayers were answered and I found myself wrestling with a 100 pound glass and metal hybrid which possibly outweighed the instruction booklet.

The next problem was to point the telescope at true (not magnetic) North, ensure it was aligned horizontally and then validate its position with one or two predetermined reference stars. This seemed like a daunting prospect but the next 30 minutes reaffirmed my belief in technology. The attached handset was intuitive to use, it found my local time and location (using the built-in GPS device), and happily pointed the scope at true North. Whirring back into motion, the motorised mount pointed me at a reference star from its database of truly stellar proportions; a slight manual adjustment and I was ready to go.

This typifies, for me, one of the major benefits of software systems. They help you prepare quickly and efficiently for the real task at hand. Don't get me wrong, watching a telescope engaged in an automated, robotic dance while it aligns itself with an object 200 light years away is certainly an impressive experience...the first time around. On a cold, damp night it soon becomes a process that you would happily perform instantly given the opportunity. The real experience is looking at the comet battered surface of the moon or the perfectly formed rings around Saturn.

So do software systems prepare us for the tasks we need to complete and the activities that we enjoy rather than performing them for us? I would like to think so. After all, once your forecast reports have been scheduled for printing and your management reports have been automatically generated the real work can begin. Efficiency improvement discussions, marketing campaign brainstorming sessions, sponsor driven board meetings, the day to day human collaboration that constitutes the organic nature of an organisation...these are the processes that really make a difference. It's comforting to think the polished oak table that decorates the board room will not be replaced by server racks and an air conditioning system. So perhaps we should think about how our systems can assist rather than replace next time we are struggling to put realistic requirements together. It could save a lot of effort thinking about something that really isn't necessary.

And if the monthly reports find their way to the MD's desk a little earlier because of the new accounting system then so much the better. You can contemplate how software helps rather than replaces your daily functions as you walk down the first fairway on your sunny afternoon off. In fact, thinking about it; preparation isn't everything - it's just something that software systems are particularly good at. :-)

Sunday 25 March 2007

Exception handling: Catch, throw or let it go?

When you think about it, structured exception handling is a powerful yet reasonably straightforward mechanism for...well for handling exceptions in a structured way. You call a method and it either throws an exception or it doesn't. If it does, you have 3 choices:

  • Rethrow the exception
  • Throw a different exception
  • Catch the exception and continue processing

There are no chiselled, marble tablets defining the universal laws of exception handling. There is no perfect strategy for dealing with every possible exceptional scenario. However, I have seen enough poorly implemented error handling strategies to at least recommend a few guidelines here.

Be careful what you catch
Any catch blocks following catch (System.Exception) would be ignored because System.Exception is the base class for every other exception type. Luckily, the compiler does not allow this anyway and raises an error of its own if you decide to ignore this advice.
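
As a quick sketch of the point above, the compiler rejects the following ordering:

 try {
     //Method logic here
 }
 catch (System.Exception) {
     //Catching the base class first...
 }
 catch (ArgumentException) {
     //...means this block could never run, so the compiler reports:
     //A previous catch clause already catches all exceptions
     //of this or of a super type ('Exception')
 }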

Only catch an exception if you expected it and you know what to do about it. Never use the construct shown in Snippet 2 in a class library. The way an exception is handled should always be the responsibility of the calling application.

A System.Exception should never be caught unless it is rethrown. This rule should always be followed for class libraries. Within an application, the application policy will determine whether a System.Exception should be rethrown.

Be careful what you throw
Never throw System.Exception; take care when throwing any other exception base class types. If you are defining your own exceptions, think carefully about their base classes. For example, if you derive an exception type from ArgumentException, then any code that already catches ArgumentException will catch your new exception type. This may or may not be a problem depending on how you (or someone else) handle the exception.

All custom exception types should end with 'Exception'. e.g. NumberTooLargeException, NameNotFoundException etc.
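
As a rough, illustrative sketch, a custom exception following these guidelines might look like this (the class name is borrowed from the examples above):

 //Derives from Exception and follows the 'Exception' suffix convention
 [Serializable]
 public class NameNotFoundException : Exception
 {
     public NameNotFoundException() { }
     public NameNotFoundException(string message) : base(message) { }
     public NameNotFoundException(string message, Exception inner)
         : base(message, inner) { }
 }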

Use the recovery position
So you have a class library and an exception occurs. You want to leave the responsibility of handling the exception to the calling application but you also want to make sure that your system is not left in an unrecoverable state. So what do you do (please see Snippet 1)?

 Snippet 1

 public void LibraryMethod()
 {
     try {
         //Method logic here
     }
     catch {
         //Handle system consistency here
         throw;
     }
 }

Basically, all possible exceptions are caught, the system's state is verified and the same exception that originally occurred is rethrown. It may be appropriate to restore consistency in a Finally block depending on the context of your work.

Please note that using throw ex; instead of throw; causes the stack trace to be reset here.
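
In other words (a minimal sketch, adapting Snippet 1):

 public void LibraryMethod()
 {
     try {
         //Method logic here
     }
     catch (Exception ex) {
         //Handle system consistency here
         //throw ex;  //would reset the stack trace to this point
         throw;       //rethrows and preserves the original stack trace
     }
 }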

Avoid boilerplate code
It's easy to get into the habit of adding Try/Catch/(Finally) blocks to each method that you write but this really is not the way to approach structured exception handling. I have worked on projects where the following pattern is used with alarming regularity:

 Snippet 2

 public void ProcessData(int firstNum, int secondNum)
 {
     try {
         //Method logic here
     }
     catch (Exception ex) {
         //Log error here
     }
 }

So any exception is caught, logged and discarded. The application continues (most probably in an inconsistent state) and the user/developer remains blissfully unaware until they make a conscious decision to check the logging device and find entries representing errors.

Validate parameters
Where you are creating a library that will be used by other developers, it is prudent to validate the parameters of all public and protected methods. This has a number of benefits:
  • The caller is informed if invalid parameters are supplied
  • The code is more likely to run consistently with valid data
  • Problems are identified quickly, less code is executed with invalid data
  • Effort to rollback to a consistent state may be avoided

If a problem is detected, throw an exception that derives from ArgumentException; the most notable being ArgumentNullException and ArgumentOutOfRangeException. If these are not appropriate then consider throwing ArgumentException itself or a custom exception derived from it. Snippet 3 illustrates the process.

 Snippet 3

 public int DivideNumbers(int numerator, int denominator)
 {
     //Validate denominator to ensure non-zero value.
     if (denominator == 0)
     {
         throw new ArgumentOutOfRangeException(
             "denominator", "Denominator cannot be zero");
     }

     return numerator/denominator;
 }

Be aware that arguments passed to methods by reference may be changed after validation has occurred (e.g. in a multi-threaded application). In these cases it may be wise to create a copy of the argument(s) and operate exclusively on this copy within the method.
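
A sketch of that defensive copy idea (the method and parameter names are purely illustrative):

 public void ProcessScores(List<int> scores)
 {
     if (scores == null)
     {
         throw new ArgumentNullException("scores");
     }

     //Copy the argument so that changes made by the caller after
     //validation (e.g. from another thread) cannot affect us
     List<int> localScores = new List<int>(scores);

     //Work exclusively with localScores from this point on
 }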

In the example shown, the denominator is validated against a value of zero. An alternative here would be to use a Catch block to trap this occurrence. The actual method used depends on how often the situation is likely to occur. The validation as shown is fine but this check will be performed on each pass through the method. If the possibility of denominator being zero is rare then it may be more prudent to use a Catch block and rethrow the exception for improved performance.
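
One way to read that suggestion, sketched against Snippet 3 (illustrative only):

 public int DivideNumbers(int numerator, int denominator)
 {
     try
     {
         //No up-front check; the common case pays no validation cost
         return numerator / denominator;
     }
     catch (DivideByZeroException)
     {
         //Translate the rare failure into the documented argument error
         throw new ArgumentOutOfRangeException(
             "denominator", "Denominator cannot be zero");
     }
 }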

And Finally...
A Finally block is guaranteed to run whether an exception occurs or not. It is therefore the logical place to add clean-up code. Beware that any exceptions raised within a Finally block will hide details of any exception that was raised in the corresponding Try block, so try to avoid them.
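
For example (a minimal sketch; assumes System.IO is referenced):

 public string ReadSettings(string fileName)
 {
     StreamReader reader = null;
     try
     {
         reader = new StreamReader(fileName);
         return reader.ReadToEnd();
     }
     finally
     {
         //Runs whether ReadToEnd succeeded or threw an exception
         if (reader != null)
         {
             reader.Close();
         }
     }
 }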

Structured exception handling provides a consistent framework for dealing with exceptional situations. Taking an umbrella with you on a winter's day stroll adds a little overhead to your journey but it also prevents you from getting soaked if the heavens open. Handle exceptions in a similar way; prepare for what may happen before you start your development journey and you will never find yourself without an umbrella if something unexpected happens.

Saturday 24 March 2007

Vital statistics

As with many software houses, we have tried various ways of measuring and monitoring the 'quality' of the code we produce. We are not blacksmiths forging ornate, wrought iron gates that welcome visitors to the stately homes that they protect. We are not carpenters creating polished dining tables that provide the social focus for a very special dinner party.

We write code, code that is often compiled into some indecipherable intermediate language and then laid at the mercy of a virtual machine. It cannot be directly admired by its users; even when displaying information, it renders results that are presented on a screen that has been designed by a graphics designer or business analyst. Often it displays nothing at all, quite happy to sit far from the limelight in a cold, dark server room that provides more comfort to mushrooms than human beings.

Despite its intangible nature, it is what we produce and it defines and differentiates us as solution providers. It is every bit as important to us as the gates and tables are to the blacksmith and the carpenter. Few people would argue that quality of code is important but in order to measure it properly we first need a suitable definition.

Of course there is no absolute, irrefutable description for 'software quality' and there are as many opinions as there are IT professionals (most probably more). Many runners jostle at the starting line in the race for software excellence: coding standards, performance, bugs per developer...even cyclomatic complexity! My suggestion is to judge code on its simplicity.

Simple code is easier to maintain. The problem with code maintenance is that it is usually done a long time after the original code was written and by somebody that didn't write the original code. Code maintenance often occurs after software has been released where the cost of change is highest.

Simple code is easier to understand. Being frustrated for hours by a design pattern implementation or sample code from an anonymous expert on the internet that doesn't quite work as expected is usually an indication that you should try something else. If you don't understand it now then how will it appear in 6 months time to someone else that has to fix a critical bug, immediately, with no previous knowledge of your code or the system it belongs to?

Simple code is self describing. It does not require copious comments or armies of related documentation fighting to remain up to date with each change that is made to the associated software. Of course, comments are a good thing but only if they accurately describe the code they represent; this is often not the case. It could be argued that no documentation is better than inaccurate or outdated documents where false assumptions are easily made. Either way, the simpler the code, the less need there is to refer to supporting documentation.

Simple code requires less effort to produce. The COCOMO II project provides comprehensive analysis of the factors that contribute to increased effort in software implementation. Platform experience, appropriate software tools, multisite development and time constraints, along with many others, all play second fiddle to product complexity in terms of increasing effort.

A good story can be enjoyed even if scribbled in crayon on crumpled paper. However, print in a legible font, on good quality paper, add page numbers and chapters, a preface and an index, sprinkle with artistic illustrations and suddenly it springs to life. In a similar way, simple code is enhanced by good coding standards, proper indentation, sufficient comments and a pragmatic design. In fact, simple code is defined by these attributes rather than merely enhanced by them.

As developers, we often strive for the perfect design and implementation. Sometimes it is the ability to compromise effectively that produces the best results; it is left to the reader to decide whether worse can be better.

In any event, keep it simple and the rest will follow - and you won't find yourself cursing in 6 months time when you have to refactor your optimised code :-)

Thursday 22 March 2007

The new SharePoint...smaller steps, greater strides

Microsoft's new SharePoint offering, MOSS 2007, offers many new and improved features that complement and extend previous versions of the product. Rather than clumsily repeating the Microsoft marketing spiel here I would like to discuss a less tangible, but equally important aspect of this new framework - incremental development.

Collaboration is the key here, it is one of the principal benefits that a SharePoint portal provides and promotes. Ironically, the SharePoint framework also supports a collaborative and incremental approach to development that offers an attractive, lower risk alternative to up-front, big budget project implementations.

Slip into the cautious shoes of an IT manager looking for the ideal provider for your new corporate extranet. You are excited by the prospect of sharing, managing and interacting with information that literally sits at your fingertips, yet troubled by the memory of past projects that never quite delivered what they promised. You have to justify large, up-front commitments to your project sponsors whilst convincing yourself that the arbitrary 30% cost increase for a fixed price quotation is a small price to pay for supposedly heaping all of the risk on your provider. Or do you?

Agile advocates will tell you that lightweight, throwaway prototyping is the way to go; RUP followers suggest a comprehensive set of document artefacts - your sponsors just want to see results.

SharePoint enables customers and providers to work closely together; building prototypes that can be continually reviewed and refined in small steps. The system being built is under constant scrutiny and changes can be made in real time. This removes the need for large, verbose functional specifications that are often misinterpreted and constantly changed. Rather, time is spent implementing the actual system; each functional area may be refined until an agreed point is reached before moving onto the next requirement.

Your sponsors are happier; with a small, initial investment they can see immediate results. As confidence is gained further investments may be made. Your providers are happier; they are keen to produce good work to ensure future investments. A lower risk approach is common to both sponsors and providers and the burdens of missing fixed cost schedules and costings are considerably reduced.

At the end of each discrete iteration, a decision is made to continue with the project or call an end to the proceedings.

Of course, the whole concept of iterative development has existed for a good while and may be attributed to both successful and unsuccessful projects. For once, I have to hold up my hands and say that SharePoint (in its new 2007 livery) really has given me the opportunity to offer our customers a credible and advantageous alternative to fixed cost deliverables.

Please take the first, low risk step and give it a try - you may find that the shoe really does fit :-)