Tuesday, February 24, 2009

Presenter Tests 101

Want to write a test to make sure the presenter loads data from a service/model/repository into the view?

IFooEditView view;
IFooService service;

[TestInitialize]
public void MyTestInitialize()
{
    view = MockRepository.GenerateMock<IFooEditView>();
    service = MockRepository.GenerateMock<IFooService>();
}

[TestCleanup]
public void MyTestCleanup()
{
    //global assertion
    view.VerifyAllExpectations();
    service.VerifyAllExpectations();
}

[TestMethod]
public void CanInitialiseFooEditPresenter()
{
    //Arrange
    var id = 1;
    var record = new Foo(id);
    service.Expect(s => s.RetrieveFooRecord(id)).Return(record);
    view.Expect(v => v.LoadRecord(record));

    //Act
    var pres = new FooEditPresenter(view, service);

    //Assert - mock verification in tear down
}
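For context, a presenter that would satisfy this kind of test looks something like the sketch below. The RecordId property on the view is my invention (the test pins the id to 1); the point is just that constructing the presenter is what triggers the load:

```csharp
public class FooEditPresenter
{
    private readonly IFooEditView view;
    private readonly IFooService service;

    public FooEditPresenter(IFooEditView view, IFooService service)
    {
        this.view = view;
        this.service = service;

        // Constructing the presenter triggers the load - exactly the
        // interaction the test asserts. RecordId is an assumed view property.
        var record = service.RetrieveFooRecord(view.RecordId);
        view.LoadRecord(record);
    }
}
```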

NB: I really should stop posting code by writing straight into the Blogger create post screen. Just lazy...

PowerShell to save the day!

I have been doing a fair bit of build script stuff over the last couple of months. I guess it started when we were having big problems with the build process at my last contract. (I had been using NAnt for about a year prior, but it never really did anything other than clean, rebuild and run my tests. That's cool; it's all it needed to do.)
We really needed to look at our build process as it took about 3 hours to do a deploy, which we were doing up to 2 times a week… 6 hours a week of a London based .Net contractor: that is some serious haemorrhaging of cash. I started really looking into build servers and properly configuring build scripts. Most places I work at are very M$ friendly and not overly fond of OSS, so I tend to be stuck with MSBuild if it is a shared script. So goodbye NAnt.
Fast forward to a few weeks ago and I have moved country and company and am working with a great team of developers that are incredibly pragmatic and receptive to new or different ideas. We set up a build server and installed JetBrains TeamCity, pointed it at VSS and a basic MSBuild script that was a port of my NAnt script. It worked and did what we needed: take what was checked in, rebuild, test, send a zip of the output to a network folder, and let us know if the whole process succeeded or not. Simple and sweet.
Enter ClickOnce. Ahhh. OK, so ClickOnce is a great idea in that it manages your company's deployments of smart client software. No longer do you have to worry whether the users are using the correct version of your software; the latest will always be on their machine. Personally I think this is a great idea and can see why managers would love it. It's also really easy to deploy… if you are using Visual Studio… and if you only have one deployment environment. Unfortunately I don't want to use VS (I want to do this from a build server using a potentially automated process) and we deploy to Dev, Test, UAT and Prod. MSBuild really struggles when it comes to this… it basically just can't do it.
The biggest problem was I needed to be able to change assembly names so the ClickOnce deployments don't get mixed up (I want to be able to install Test and Prod on the same box). Changing the exe assembly name in MSBuild changes all the assembly names, which is not too good.
After struggling with MSBuild I realised I was hitting the limits of what MSBuild is supposed to do; it was either change my approach or enter hack town.
Initially I thought Boo, Python or Ruby would be my saviours… then I quickly rethought. Although they would be good in MY mind, other people have to use this, and those options are not real M$ friendly… yet. I don't know why I didn't think of it earlier, but PowerShell was the obvious answer. I downloaded PowerShell and after playing with it for a couple of minutes I was super impressed. All the stuff I was struggling with in my bat files or my MSBuild scripts was trivial in PowerShell.
Variable assignment, loops, switches etc. are all trivial. It is built on .Net, so you can handle exceptions and interact with web services, ADO.Net, Active Directory… the sky is the limit.
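As a taste, here is the sort of thing that is painful in a .bat file but trivial in PowerShell. The environment names and servers are made up; the point is the variables, loop, switch, exception handling and .Net interop all in a few lines:

```powershell
# Variables, loops, switches and .Net interop - all first-class in PowerShell
$environments = "Dev", "Test", "UAT", "Prod"

foreach ($env in $environments)
{
    switch ($env)
    {
        "Prod"  { $server = "prd-web01" }   # placeholder server names
        default { $server = "dev-web01" }
    }

    try
    {
        # Full access to the .Net framework from script
        $stamp = [System.DateTime]::Now.ToString("yyyyMMdd-HHmm")
        Write-Host "Deploying $env to $server at $stamp"
    }
    catch
    {
        Write-Host "Deploy to $env failed: $_"
    }
}
```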

Anyway, if you haven't played with PS, go download it, get the manual and get the cheat sheets.



And check out PSake from James on CodePlex if you are keen on incorporating PS into your build cycle.


NB: I hope to post my revised ClickOnce build strategy… as my last one was a bit of a failure. Sorry if I led anyone astray.

EDIT: Check out PowerShell GUI for a nice free IDE.

Please be aware of P&P guidance

First and foremost, I think the idea of a P&P team at M$ is a good thing. It's a team that is supposed to give guidance on how to use proven practices to build enterprise grade solutions. Unfortunately this is not always the case. Normally I wouldn't care if someone was giving dodgy advice; however, when it is a team that people follow blindly, it can be a bit aggravating.
I personally have nothing to add to the mess that is P&P. I have used various things that come out of P&P, including EntLib from 1 - 4.1 and guidance packages such as SCSF. Largely they are not best of breed but "enough to get you by". 99% of the time there is an OSS version that is better (Castle, Log4Net/NLog), and when there is not something I want, I create my own rather than using the out of the box products. Ironically, I use the GAX and GAT to create my own software factory, which I feel provides a much more usable version of the SCSF/CAB for the majority of users and projects out there.
Well, apparently they have done it again… I rarely even read what comes out from them now; my disillusionment is such that I don't think it is worth my time. Sebastien, AKA SerialSeb, has put out a post recently that highlights his concerns. Really a lot of it is semantics; however, when you have proclaimed to the world that you are an authority on a subject, as P&P have done, then those semantics actually matter. If you muddy the means of communicating your message, then your intent can be interpreted in many ways. In this case I don't think P&P are trying to be ambiguous or abstract; I just think they don't have the level of understanding of the problem space that is required when trying to give this type of advice. For example, I understand agile and I use it, but that does not make me an authority on the subject, and therefore it is not appropriate for me to give best practice advice on it. P&P guidance is perceived by the masses to be just that. My stance now, unfortunately, is to assume the worst and hope for the best when it comes to M$ or P&P guidance, because they have got it wrong so often.
How can this be changed?
I have to be honest: my understanding of the inner workings of P&P is pretty much nothing. I have met a bunch of the guys; they are ALL super nice, friendly and generally knowledgeable about M$ and .Net stuff.
But I don't want any of those things. All I want is for them to be super experts in architecture, design patterns and frameworks: specifically the field they are giving advice on. If they have not personally rolled out production code using those best practices and had it peer reviewed (a peer is not the guy you wrote it with), then how can it be Proven Best Practices?

*The following is all opinion; do not construe any of this as me stating facts*
Smart Client Software Factory to me was a mistake. It should have been preceded by a lightweight application framework, and the SCSF should have been kept for the M$ consultants. Huge red writing should be placed all over the download page telling you this is a big, bloated framework. Every project I have come across that has used it has failed. Miserably. Why? Because it was decided that M$ had "recommended" SCSF and therefore it was best practice. The fact that neither M$ nor P&P ever actually recommended it is beside the point; in fact, many times they recommend you seriously consider whether this is the best option. Unfortunately the people who make the decision on what software I use are not coders, don't like reading docs and have more faith in M$ than their own team. They also have egos and believe that their project is a big enterprise system that needs the biggest and best framework. I find this is not the exception, this is the rule. It usually takes months of "Good Behaviour" at a new contract for employers to have faith in my decision-making skills. By then we are usually up to our eyes in whatever technical decision management made for us many months ago. SCSF, EntLib and EF are a couple of pain points I have had to bear.

So if you are embarking on a new greenfields project and the mentality at your firm is still M$ == best of breed, please rethink and make an educated decision on your tool of choice. It may be the case that the M$ product is the best choice for you, but then again you won't know that unless you do a bit of research. As for P&P, it really is time to pull your socks up. Like anything in life, if you don't know, ask. Don't publish junk... please.

Thursday, February 19, 2009

Still Learning CI

Today some colleagues and I were discussing prospective deployment options. We have all worked on many projects; isn't it funny how often the deployment process, the most important part, is a complete afterthought? After mentioning it this morning to the PM/BA/non-techies, we realised there was no actual deployment procedure in place (this is my first real release at the current company). We decided to let them come up with a standardised plan for their end ("put last build into production" was an option we had to take away from them) while we sorted out our plan.

I am still very much learning about builds and CI, so this is by no means the authoritative answer. This post basically describes what we came up with.


One thing we are doing well is actually using ClickOnce, a technology that is perfectly suited to the large corporate environment we work in and deliver smart client apps to. We want to continue to do this but make sure it is done properly.

We have very loose processes outside of the people actually writing code. The non-techies are not au fait with agile; they are not too good at requirements, planning, resourcing... well, you get it. So we need to insulate ourselves from any curve balls that get thrown by these guys. We also need to seriously cover our ass, because when the proverbial hits the fan around here it slides down the ranks very fast; we want to make sure they have no way of letting it get to us, unless of course we actually deserve it.

We don't have full control over deployments. There is a system team that will copy our applications up to the next environment (ie Test -> UAT), and it has to be the same application that gets pushed up. This is fair enough; however, it doesn't work well for ClickOnce: by default, anyone testing on a lower level would not get updated*, so we actually need a separate application for each environment. We also have only one chance for deployments: if we mess up even one deployment, this process is no longer automatically approved and every change must go thru a 2 week change management process.

The Plan

In the time I have been at my new contract I have managed to get a couple of pretty big wins. We are now completely TDD, we have a build server, we do (close to) weekly deployments to a dev test environment and we are getting up to speed on scrum. What I really wanted next was "one click deployments".

The plan basically is:

  • We will decide a release date/time for moving from Dev to Test. We are currently doing 7 day sprints and trying (scrum is new) to deliver a working production-quality piece of software each week. ATM this is Wednesday 2pm (for a variety of reasons).
  • We run the Deploy Script**. The deploy script runs our standard build, which performs unit and integration tests, static analysis etc; then it modifies the config for the Test environment and publishes to the pre-deploy network location (then repeats for UAT and Prod).
  • We now have all the releases of each version in a known pre-deployment folder. From here the Test ClickOnce is copied to the real deployment folder.
  • The testers test away and of course there are no bugs [ ;) ] and they approve release of version x.y.z.b. Because we have all the releases for each environment produced at the same time, and they are the exact same build (other than 3 small config files), the system lads can do their Test -> UAT (or UAT -> Prod) deployment based on the version number that has been approved.

This means

  • We have every release in pre-deployment for Test, UAT and Prod
  • The testers can let the system guys know the exact version number that has been approved. It is now up to the system guys to copy the correct versions up to the next environment.
  • We can't modify the files, as doing so would break the manifest and render the app useless. This keeps the system guys happy.
  • We are removed from the deployment process, which means we don't have to be at work at 9pm when the deployment takes place.
  • Multiple versions of the application can be held on the user's workstation, each one assured it is the latest for its given environment. This keeps PMs, BAs, testers & UAT testers very happy.

This process takes about a minute. A lot happens, but it is totally repeatable and completely versioned. This certainly is a better/faster/more reliable option than the 3 hour deploys we did at my last place of work. To be honest I'm pretty happy with it. This should also work well with ASP.Net deployments; however, there would have to be a versioning folder "hand created"*** (I believe) to get the same effect.

So I haven't quite got my "one click deployments", but half a dozen clicks and some automated scripts that run in under 5 minutes (most of that is watching static analysis and tests run) is a bloody good start. Plus it's a good time for me to sit back and have a coffee; I'm almost looking forward to deployments :)


For you nosey bastards, the deployment part of the script looks like:

<Target Name="DeployClickOnce">
  <Message Text="**************BUILD_NUMBER = $(BUILD_NUMBER)*********************"/>
  <Message Text="OutputFile='$(ApplicationPropFolder)\AssemblyInfo.cs'" />
  <!-- AssemblyInfo task from the MSBuild Community Tasks; stamps the build number on the assemblies -->
  <AssemblyInfo CodeLanguage="CS"
                OutputFile="$(ApplicationPropFolder)\AssemblyInfo.cs"
                AssemblyCopyright="Copyright © YOURCOMPANY"
                AssemblyVersion="$(BUILD_NUMBER)"
                AssemblyFileVersion="$(BUILD_NUMBER)" />

  <Error Condition="!Exists($(RootClickOnceDeploymentLocation))" Text="The Root deployment location ($(RootClickOnceDeploymentLocation)) does not exist, publish can not occur."/>

  <MakeDir Directories="$(DevFolderLocation)" Condition="!Exists($(DevFolderLocation))"/>
  <MakeDir Directories="$(TestFolderLocation)" Condition="!Exists($(TestFolderLocation))"/>
  <MakeDir Directories="$(UatFolderLocation)" Condition="!Exists($(UatFolderLocation))"/>
  <MakeDir Directories="$(ProdFolderLocation)" Condition="!Exists($(ProdFolderLocation))"/>

  <!-- One publish per environment; the publish properties here are placeholders - adjust to your solution -->
  <MSBuild Projects="$(SolutionFile)" Targets="Publish" Properties="Configuration=Release;PublishDir=$(DevFolderLocation)\" />
  <MSBuild Projects="$(SolutionFile)" Targets="Publish" Properties="Configuration=Release;PublishDir=$(TestFolderLocation)\" />
  <MSBuild Projects="$(SolutionFile)" Targets="Publish" Properties="Configuration=Release;PublishDir=$(UatFolderLocation)\" />
  <MSBuild Projects="$(SolutionFile)" Targets="Publish" Properties="Configuration=Release;PublishDir=$(ProdFolderLocation)\" />
</Target>


NB: You will need the MSBuild Community Tasks download to update the AssemblyInfo.cs. The M$ one is a bit flakey (apparently). I used the community tasks for other things anyway, so I figured I would use what is there.

NB: the BUILD_NUMBER is being passed in as a parameter to the script.
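If you are running the script by hand, passing the build number in from the command line looks something like this; the project file name and version are placeholders (TeamCity does the equivalent using its own build counter):

```shell
msbuild Build.proj /t:DeployClickOnce /p:BUILD_NUMBER=1.2.0.345
```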

As my MSBuild skills are not the sharpest, I have left the repeated code in. I spent a couple of minutes trying to figure out how to prevent repeating myself with MSBuild, but to no avail. If you have any ideas (short of Rake) let me know. I am also still figuring out ClickOnce, so some of those URLs are probably not necessary; have a play yourself... this is just my first run.

*Because, for example, the Dev version number would be higher than Test, so when clicking on the latest Test ClickOnce application it would deem the application does not need to be updated (if they were being run on the same machine). The applications have to be different applications for each environment. We still need to confirm this is the case with today's changes. Yeah... a bit of a pain, but worth it I guess.

**We actually stop the build server for the project we are deploying, reset the build counter, increment the release version, run the script, then restart the server.

***MakeDir is not really hand created, but you get my drift ;)

Wednesday, February 18, 2009


I am sure someone out there may find this useful. It's an XSLT to transform the TRX file that is output by the MSTest runner. It's a slightly better visual depiction than my MSBuild output of thousands of lines of Courier New text...

This is very much "it works on my computer"; I am running VS 2008 Team edition. Let me know if this works for you.

<?xml version="1.0" encoding="utf-8"?>
<!-- The vs namespace below matches the VS 2008 TRX files on my machine; check it against yours -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:vs="http://microsoft.com/schemas/VisualStudio/TeamTest/2006">
  <xsl:template match="/">
    <body style="font-family:Verdana; font-size:10pt">
      <h1>Test Results Summary</h1>
      <table style="font-family:Verdana; font-size:10pt">
        <tr>
          <td><b>Run Date/Time</b></td>
          <td><xsl:value-of select="//vs:Times/@creation"/></td>
        </tr>
        <tr>
          <td><b>Results </b></td>
          <td><xsl:value-of select="//vs:Deployment/@runDeploymentRoot"/></td>
        </tr>
      </table>
      <a href="coverage.htm">Coverage Summary</a>
      <xsl:call-template name="summary" />
      <xsl:call-template name="details" />
    </body>
  </xsl:template>

  <xsl:template name="summary">
    <h3>Test Summary</h3>
    <table style="width:640;border:1px solid black;font-family:Verdana; font-size:10pt">
      <tr>
        <td style="font-weight:bold">Total</td>
        <td style="font-weight:bold">Failed</td>
        <td style="font-weight:bold">Passed</td>
      </tr>
      <tr>
        <td>
          <xsl:value-of select="//vs:ResultSummary/vs:Counters/@total"/>
        </td>
        <td style="background-color:pink;">
          <xsl:value-of select="//vs:ResultSummary/vs:Counters/@failed"/>
        </td>
        <td style="background-color:lightgreen;">
          <xsl:value-of select="//vs:ResultSummary/vs:Counters/@passed"/>
        </td>
      </tr>
    </table>
  </xsl:template>

  <xsl:template name="details">
    <h3>Unit Test Results</h3>
    <table style="width:640;border:1px solid black;font-family:Verdana; font-size:10pt;">
      <tr>
        <td style="font-weight:bold">Test Name</td>
        <td style="font-weight:bold">Result</td>
      </tr>
      <xsl:for-each select="//vs:Results/vs:UnitTestResult">
        <tr>
          <xsl:attribute name="style">
            <xsl:choose>
              <xsl:when test="@outcome = 'Failed'">background-color:pink;</xsl:when>
              <xsl:when test="@outcome = 'Passed'">background-color:lightgreen;</xsl:when>
              <xsl:otherwise>background-color:yellow;</xsl:otherwise>
            </xsl:choose>
          </xsl:attribute>
          <td><xsl:value-of select="@testName"/></td>
          <td>
            <xsl:choose>
              <xsl:when test="@outcome = 'Failed'">FAILED</xsl:when>
              <xsl:when test="@outcome = 'Passed'">Passed</xsl:when>
              <xsl:otherwise><xsl:value-of select="@outcome"/></xsl:otherwise>
            </xsl:choose>
          </td>
        </tr>
      </xsl:for-each>
    </table>
  </xsl:template>
</xsl:stylesheet>

Tuesday, February 17, 2009

Rhino Mocks: AAA vs Record-Playback

Rhino Mocks is one of my favourite pieces of open source software. It has, more than any other piece of code, changed the way I code, for the better, I hope.
Many moons ago I first played with it and liked the fact it was strongly typed; NMock2 was the mock framework I was using at the time and it is string based, which can lead to havoc when refactoring.
Back in those days Rhino Mocks was only record-playback and, to be honest, it never felt natural to me. Due to popular demand the framework was extended to allow for either the record-playback or, IMO, the more natural AAA syntax.

Arrange - Act - Assert

Arrange, Act, Assert to me helps break up the way I write my tests to make it very clear what I am trying to achieve. I even have code snippets that auto populate my test. I type "mstest" and I get

[TestMethod]
public void Can()
{
    //Arrange

    //Act

    //Assert
    Assert.Inconclusive("Test not completed");
}

I also feel this allows newcomers to see what is going on more clearly and also helps them write tests first.
Well, in my mind the hardest thing to do when starting TDD is knowing what to write! If you have the code stub with comments as above, it gives you a visual guide to nudge you into progress.

I also find it helps if n00bs actually write the ACT part first, not the ARRANGE. Typically this involves writing 2 lines of code:

  • create the object and

  • call the method you want to test


[TestMethod]
public void CanValidateCustomerFromCustomerAddPresenter()
{
    //Act - create the object, then call the method under test
    var presenter = new CustomerPresenter(view, service);
    presenter.ValidateCustomer(customer);

    Assert.Inconclusive("Test not completed");
}

The fact the above code won't even compile is irrelevant. It shows intent. Now the developer writing the test has a clear direction of what they need to do. Often this way of TDD fleshes out new tests. To me this (incomplete and fictitious) test is straight away crying out for complementary tests: e.g. CanNotValidateCustomerFromCustomerAddPresenterWithNullCustomer etc.
The fact that I have not even defined what a customer is means my mind is still open to possibilities.
On top of the benefits of writing the ACT first, I think AAA syntax makes the test more readable in terms of maintaining code bases, as it has the top down procedural look that coders are used to (even OO has top down).

[TestMethod]
public void CanValidateCustomerFromCustomerAddPresenter()
{
    //Arrange - Set up mocks (put in your TestInitialize)
    var view = MockRepository.GenerateMock<IView>();
    var service = MockRepository.GenerateMock<IService>();
    //Arrange - Set up your parameters & return objects
    var customer = TestFactory.CreateValidCustomer();
    //Arrange - Set up your expectations on your mocks
    service.Expect(s => s.Validate(customer)).Return(true);
    //Act
    var presenter = new CustomerPresenter(view, service);
    presenter.ValidateCustomer(customer);
    //Assert - mock verification in the tear down
}

Now, I have not run this thru a compiler, I just threw this down, but to me this is pretty readable. I used record-playback only for a few months and found it a little confusing; perhaps my pitiful little brain was maxing out on simple syntax, but hey.
If you are not using AAA, try it out; it works great with the C# lambda expressions too (as above), which, to me, means you have incredibly readable tests.
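For contrast, the same expectation in the old record-playback style would look something like this (again untested, and CustomerPresenter/Validate are the same fictitious types as above):

```csharp
// Record-playback style (Rhino Mocks pre-AAA): expectations are recorded
// up front, then the code under test runs inside the playback block.
MockRepository mocks = new MockRepository();
IView view = mocks.StrictMock<IView>();
IService service = mocks.StrictMock<IService>();
var customer = TestFactory.CreateValidCustomer();

using (mocks.Record())
{
    Expect.Call(service.Validate(customer)).Return(true);
}

using (mocks.Playback())
{
    var presenter = new CustomerPresenter(view, service);
    presenter.ValidateCustomer(customer);
}
```

To me the AAA version reads top-down like the production code it exercises, whereas here the "assertion" is buried in the Record block before the code under test ever runs.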

*Please ignore the fact the test is odd... I am trying to show readability as opposed to how to write a crap object ;)
**Is it incredibly obvious that I am writing MVP triplets? ;)

Monday, February 16, 2009

Technical Debt

Having first heard the term from a former colleague (it summed up the project we were on very well), this video by Ward Cunningham struck a chord with me.

Tuesday, February 10, 2009

Perth Alt.net: Tonight!

Just a reminder that we are meeting at 43below on Barrack Street in the city at 5:30 tonight (11th Feb 09).
Be there or... well, or don't be there.

Stuff to play with

More stuff I am looking at:

  • Android dev... very exciting, I will finally be able to do some real world dev in my OS X environment
  • I will be starting to use Git* due to Android dev with my Java mates... should be interesting. *This brings me up to using 4 different SCMs at the moment... bloody hell.
  • AutoMapper from Jimmy Bogard. This looks to be a great help with the mismatch between DTOs and domain objects. Used with the NH fluent interface for mapping, life could be significantly easier :)

Some links:
Fellow OzAlt.netter's post on Git: http://www.paulbatum.com/2009/02/im-starting-to-git-it.html

Monday, February 9, 2009

Real World Test Driven Development: Unit Testing Enterprise Solutions

Join us at the Perth .NET Community of Practice, Thursday March 5th, to hear Rhys Campbell present on the essentials of TDD and how it encourages good software design, as opposed to just having tests. Rhys will cover the differences between unit, acceptance and integration tests; why conventional unit test examples often do not work in the real world; what to test and what to mock; automating your tests; and coding examples of how to use Mocks, Stubs, Fakes, Dummies and Spies... what are they and how do they help me?

TOPIC: Real World TDD with Rhys Campbell
DATE: Thursday, March 5th, 5:30pm
VENUE: Excom, Ground Floor, 23 Barrack Street, Perth
COST: Free. All welcome

Rhys Campbell is a software developer currently contracting in Perth, WA. He recently returned from London, where he has been active in the .NET community, attending and speaking at the 2008 Seattle and London Alt.NET Open Spaces. Rhys is interested in design, architecture, patterns and bringing best practices from other communities to .NET. Rhys is a director of ArtemisWest.

There will be door prizes of a ReSharper license (courtesy of JetBrains) and T-Shirts (courtesy of Redgate).


Sunday, February 8, 2009

Singleton Pattern

I am not a fan of the singleton pattern. This may come as a surprise to some, as the very first thing that people may see when using my code is that the wrapper I have for my IoC container acts as a singleton.

So why do I, along with many others, not like the singleton? Because it is usually used incorrectly and is hard to test.

The first time I saw massive singleton abuse was when I had to go on a consulting gig to help "finish" a project, i.e. the final sprint prior to go live. The whole data access layer was made up of a mess of singletons. There was no need for it; none of the objects had state, let alone needed to hold state in a single instance, but they would not let us, the hired consultants, refactor it out. Bizarre*. Since then I have seen a singleton butcher-job at just about every contract I have had. It seems to be the first pattern people use and the first to be abused.

So when do I use a singleton? Well, when an object should only have one instance. The notion of singleton implies there is only one logical possible instance of that type that can be in creation at a time. I think this is the fundamental problem I regularly see: most times I see a singleton used, this is just not the case.

To highlight this even more, often the object itself does not even have state. If the type has no possible (instance) state, then there is surely no need for singular state! This is when the object should just be a static class. In the same way it is OK to use singletons, it is OK to use static classes; just make sure it is the right circumstance for your choice.

One annoyance is when singletons are used so they can be "thread safe" and then the construction of the object is not thread safe. Please investigate how to do this if it is actually a concern. Even better, use an IoC container! By using an IoC container the object becomes easily testable and your infrastructure concerns are hidden from the consumer. To me, this is a good thing. :)
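For what it's worth, if you really do need a singleton, a minimal thread-safe sketch looks like this (Configuration is a made-up type; the point is that the CLR guarantees a static initializer runs exactly once, so no hand-rolled locking is needed):

```csharp
public sealed class Configuration
{
    // The CLR runs the static initializer exactly once, even under
    // concurrent access - no double-checked locking required.
    private static readonly Configuration instance = new Configuration();

    // An explicit static constructor stops the compiler marking the type
    // beforefieldinit, making initialisation lazier and more predictable.
    static Configuration() { }

    private Configuration() { }

    public static Configuration Instance
    {
        get { return instance; }
    }
}
```

Even then, registering the type in your IoC container with a singleton lifestyle gives you the same single instance while keeping the class constructor-injectable and testable.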

*That project is still going, still not live and apparently still has singletons used inappropriately in the data access layer. Oh well.