Wednesday, December 23, 2009

Relearning WCF

Of late I have been playing with WCF again. We have some projects here at work that require some integration and we are desperately trying to move away from the old ASMX based services. Unfortunately I have not touched WCF the whole time I have been here (12 months now, wow! That has gone fast!) and I have found myself at a point where I really need to look at WCF again and basically relearn it... oh well.
Anyway, here is a bunch of stuff that we at work have found to be useful, that you may not otherwise be able to do with ASMX or may not be aware you could do with WCF.


You can in fact use IoC with WCF; there are some good blog posts and accompanying videos to show what to do, and if, like me, you just want one ready to go that uses the CSL, then The Code Junkie has done it for you!

Dynamic KnownType Resolution

It always irked me that I had to declare in the data contract that I knew of other types; it was really bad tight coupling*. There are a bunch of ways to declare known types, with the bottom example being a seemingly little-known alternative: a provider mechanism.

In config:

<system.runtime.serialization>
  <dataContractSerializer>
    <declaredTypes>
      ...
    </declaredTypes>
  </dataContractSerializer>
</system.runtime.serialization>

Data contract with attributes

[DataContract]
[KnownType(typeof(UrgentApprovalRequest))] // each subtype (here a hypothetical UrgentApprovalRequest) declared explicitly
public class ApprovalRequest

Knowntype provider

The way I have just found out about is declaring a known-type provider on the service contract:

[ServiceKnownType("GetKnownTypes", typeof(ApprovalRequestKnownTypesProvider))]
public interface IApprovalService

with the following class (change the implementation to suit yourself, this is from some of my demo code, it’s not recommended!)

internal static class ApprovalRequestKnownTypesProvider
{
    public static IEnumerable<Type> GetKnownTypes(ICustomAttributeProvider provider)
    {
        // collect and pass back the list of known types
        foreach (var module in Assembly.GetExecutingAssembly().GetLoadedModules())
        {
            foreach (var knownType in module.FindTypes(
                (t, f) => ((Type)f).IsAssignableFrom(t), typeof(ApprovalRequest)))
            {
                yield return knownType;
            }
        }
    }
}

With these two little nuggets I have been able to produce a pretty handy little broker service that acts as a very basic content-based router, keeps the client messages very clean and does not expose any implementation details (i.e. no passing of service or workflow names in the message header!).

*NB: to paraphrase Krzysztof: "Polymorphism is an OO term, not an SOA term, so I don't use it, and make my contracts explicit wherever possible." Be wary that you are using known types for the right reasons.

Thursday, November 5, 2009

Functional .Net : Tuple

Tuples really don't have a lot to do with functional programming; they are a common concept in many languages that for some reason are only making their way into .Net in version 4.0. One could argue that you could have always easily constructed your own Tuple class, but the unnecessary duplication of such a simple type has obviously become apparent to the BCL team. This is good. Simple classes like this should be present in the framework. :)

So what is a Tuple? It is basically a container for a finite list of values; a Point could be described as a list of 2 values, Tuple<int, int>, where the int values could be the X and Y coordinates of the point. A Date could be described as Tuple<int, int, int> with the values being Year, Month & Day. It is not a list in the sense that you would typically enumerate through the items; instead you reference them by position. You may ask how this is different to an array, e.g. int[3]? Well, with a Tuple the length is set at compile time, and so is the type of each position: Tuple<int, string, DateTime> forces you to always have the DateTime as the 3rd item. This is all pretty underwhelming to be honest, but like anything cool it's the simplicity that is its strength, and also why it pops up in functional styled programming a lot. Future demos are likely to include Tuples :)
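As a quick sketch of the compile-time shape a Tuple gives you (using the .Net 4.0 Tuple.Create factory and the ItemN properties):

```csharp
using System;

class TupleDemo
{
    static void Main()
    {
        // A date as a 3-tuple: the positions (Year, Month, Day) and their
        // types are fixed at compile time.
        Tuple<int, int, int> date = Tuple.Create(2009, 11, 5);
        Console.WriteLine("Year: {0}", date.Item1);  // Year: 2009
        Console.WriteLine("Month: {0}", date.Item2); // Month: 11

        // date.Item4 would not even compile; the length is part of the type.
    }
}
```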

Thursday, October 29, 2009

Functional .Net : Currying

Currying is another functional technique that is possible to achieve in C#. The technique basically allows the rewriting of a function that takes multiple arguments into one that takes a single argument and returns a function, which may in turn take more arguments; the basic premise is being able to build up composite functions by splitting functions down and reducing the number of parameters dealt with. I will be honest and say that I have found the language you use (C#, F#, Haskell etc.) is the biggest influence on your predisposition to using this technique, as it is with many of the functional patterns, and IMO C# does not lend itself nearly as well as, for example, F#. That being said, it still can be done, so let's look at a basic example. For starters, currying is not catered for explicitly out of the box in C#, but it can easily be done using extension methods, e.g.:

public static class CurryExtensions // extension methods must live in a static class (the name is arbitrary)
{
    public static Func<TArg1, Func<TArg2, TResult>> Curry<TArg1, TArg2, TResult>(this Func<TArg1, TArg2, TResult> func)
    {
        return a1 => a2 => func(a1, a2);
    }

    public static Func<TArg1, Action<TArg2>> Curry<TArg1, TArg2>(this Action<TArg1, TArg2> action)
    {
        return a1 => a2 => action(a1, a2);
    }
}

These extension methods now allow you to take a two-parameter delegate and split it into a one-parameter delegate that returns an Action or Func taking the remaining argument. This can obviously be extended to more parameters and helps facilitate the separation and composition of functions.

Now, from my understanding, currying is a specific form of partial application: currying splits a function down into single-argument delegates, while partial application makes no such claim, i.e. a three-argument function may be reduced to a single-argument function returning a two-argument delegate. As this is purely academic I don't really care; the principle is the same.
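To make the distinction concrete, here is a minimal sketch of the two side by side (a hand-rolled example; the function names are my own):

```csharp
using System;

class CurryVsPartial
{
    static void Main()
    {
        Func<int, int, int, int> add = (x, y, z) => x + y + z;

        // Curried: a chain of single-argument functions.
        Func<int, Func<int, Func<int, int>>> curried = x => y => z => x + y + z;
        Console.WriteLine(curried(1)(2)(3)); // 6

        // Partially applied: fix the first argument, keep a two-argument function.
        Func<int, int, int> addOne = (y, z) => add(1, y, z);
        Console.WriteLine(addOne(2, 3)); // 6
    }
}
```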

A trivial example of currying (I'm lazy, I stole it from Matt P; it uses the extension method above):

Func<int, int, int> multiply = (x, y) => x * y;
var curriedMultiply = multiply.Curry();
var curriedMultiplyThree = curriedMultiply(3);
var curriedMultiplyResult = curriedMultiplyThree(15);
Console.WriteLine("Result of 3 * 15 = {0}", curriedMultiplyResult);

Unfortunately the verbosity of C# when approaching this style of coding very quickly begins to put me off. The equivalent in F# is much more readable, but hey, it's what the language is strong at, so it really should be a nicer experience. Either way, it's good to know the facilities are there if one day I do ever need to use them.

Basically the take-away from this post is the extension methods at the top; without these there will be no curry love.

Links forwarding (yeah this was a lazy post):

Tuesday, October 13, 2009

Functional .Net : Closures

One of the more commonly used functional techniques available in C# is the closure, a technique that, if you are currently using lambdas, you may be using inadvertently. My understanding of closures may be different to others' as there seem to be so many subtly different definitions, especially when comparing languages. Anyway, in my mind the common description of Javascript closures best aligns with my understanding:

A closure is a delegate that references a variable that is not passed to it and is in a scope outside the delegate's immediate scope.

Like any delegate, its definition and execution are not the same thing: you can define a closure and never use it, or call it later.

A simple closure example I can think of is:

static void Main(string[] args)
{
    var timesToRepeat = 100;

    //Declare the Action
    Action<string> print = text => //text (string) is the only parameter
    {
        //using variable declared outside of the Action
        for (int i = 0; i < timesToRepeat; i++)
        {
            Console.WriteLine(text);
        }
    };

    timesToRepeat = 3; //Lets modify the variable
    print("Hello!");   //Call the action/evaluate the expression
}

Note that the timesToRepeat variable is declared outside of the declaration of the lambda statement. Think about this: the Action 'print' can be passed outside of this scope; it could be passed to another class which does not have visibility of the locally declared variable. The 'print' expression is bound to that variable declared outside of its scope. This obviously has ramifications in terms of holding a reference to that object. Please also note that the expression 'print', like all delegates, is evaluated when it is called, not when it is declared; stepping over the above code will not print anything at the declaration of the 'print' Action, only at the last line when it is called. One last thing to note is that the variable timesToRepeat is modified after defining the print Action, and this is honoured when we call 'print' in the last line; "Hello!" is printed 3 times, not 100 times as the variable's value at the point the closure was declared would imply.
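To show the "passed outside of this scope" point, here is a small sketch (the Runner class is invented for illustration; it has no visibility of the local variable the delegate captures):

```csharp
using System;

class Runner
{
    public void Run(Action action)
    {
        action(); // the captured variable travels with the delegate
    }
}

class ClosureDemo
{
    static void Main()
    {
        var greeting = "Hello from another scope!";
        Action speak = () => Console.WriteLine(greeting);
        new Runner().Run(speak); // prints: Hello from another scope!
    }
}
```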

You may have been using closures without knowing it. Javascript and its associated libraries like jQuery use this technique a lot, as do many open source .Net libraries such as TopShelf, MassTransit etc.

Monday, October 12, 2009

Functional .Net : First Class Functions

One thing I notice in .Net is that many developers do not think of functions as first class citizens. I guess in the OO world classes, or more appropriately their object instances, are the real heroes; however, in my mind, functions deserve much more appreciation than they perhaps get.

Delegates have been around since .Net 1.0 and I still think many developers do not fully understand how they work. I have previously made a post about delegates showing how they can be used in a real world way to save code duplication here. I guess one of the first steps to being comfortable with functional programming is being comfortable with functions as first class citizens; the best way for a typical C# developer to do this is to get comfortable with delegates. Before I continue on with my functional programming journey I want others following along with me to be on the same page. Please be sure you understand what a method & delegate are; I feel I describe them reasonably well in the previously mentioned post.
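A minimal sketch of what "functions as first class citizens" buys you (an example of my own, not from the earlier post): the same method is reused by passing different behaviour in as a delegate.

```csharp
using System;

class FirstClassFunctions
{
    static void Main()
    {
        // The function is a value: assign it, pass it, call it later.
        Func<int, int> square = x => x * x;
        Console.WriteLine(Apply(5, square));     // 25
        Console.WriteLine(Apply(5, x => x + 1)); // 6
    }

    // One method, many behaviours; the duplication-saving the post refers to.
    static int Apply(int value, Func<int, int> operation)
    {
        return operation(value);
    }
}
```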

Functional .Net: The Beginning

Of late I have (along with a few colleagues and friends) started to make a bit more of a concerted effort to upskill in the area of functional programming. I admit that my knowledge of functional programming is high level (at best), although I have inadvertently been using several of the core concepts due to the features of the C# language I use on a day to day basis.

What has spurred me on is the talks from Dr Erik Meijer on Channel 9 (the first of 13 can be found here). The talks plan on tackling functional programming by working through the benchmark functional book: Programming in Haskell.

I am keen to see how the series progresses, I am only up to episode 2 but am already seeing value, more in the "why" as opposed to the "how", which is fine for this early stage of my journey.

I also want a bit of commercial return on investment in relation to what I can do in my day to day job with functional programming. As I have mentioned, C# actually handles several of the functional paradigms (although perhaps not as elegantly as F# and the like) and, thanks to .Net's resident functional voice-to-the-masses, a bunch of functional programming samples in C# can be found here to download; cheers Matt! Along with the raw C# code he has a bunch of Wiki links to highlight what each example is actually doing; you may be surprised that you are inadvertently using some of these techniques!

Anyway, I will keep you posted as to how I progress as I set out on what is hopefully a fruitful journey!

Thursday, September 24, 2009

TDD Mind Shift

Ok, so I am not the fastest adopter, however recent situations have almost forced me into a new way of thinking. Recently I was going over a colleague's code and was about to add a test to one of his existing fixtures. He had previously mentioned that he felt there were a lot of tests; I told him not to worry, often when you are new to TDD it seems like a lot of extra code... I had underestimated his comments. He was right, there were a ton of tests, thousands upon thousands of lines of tests, in one fixture. I had clearly not been doing my job and should have been helping him out.

The tests were broken down into one fixture per class under test, using regions to separate out test groupings, typically one per method under test. There was some very basic common set up, however there was still a lot of set up in each test, and it became quite clear that certain common set ups were recurring. Although test-fixture-per-class is common, it is usually not the best way of keeping your tests together, especially if you are being thorough in your testing. What my refactoring produced was akin to something I have been following for a while but not really embraced: context based tests. The tests were broken up so those with common set ups were grouped in their own fixtures, i.e. those with the same defined stubs. Although in this case it was done to make maintenance easier, it highlighted for me the benefits of this approach.

Fixtures become specific to the context in which they apply, not blindly and solely to the class they are testing. This will lead to multiple test fixtures per class and even multiple test fixtures per method. This is what initially turned me off the approach; fixtures would become too fine grained. However, I have changed tack: although it may not always be appropriate, I can see it being beneficial in many situations. For example, you may emulate the Repository having no records in it (via a stubbed method call) and run a bunch of tests within that context. This encourages you to test not just method calls but to think in terms of a given scenario, something that I sometimes see unit tests missing in the pursuit of just achieving code coverage. The fixtures tend to have tests that are very small and very clear as to what they are asserting, often only a couple of lines per test.

Another thing that originally put me off the notion of context specification and BDD was the perception of tooling... RSpec etc. are not required; it is the thought patterns that I think are more important. Setting up a specification can be done using a basic setup method with each fixture defining a specific context. Test inheritance can be very helpful here too.
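As a rough sketch of what I mean (the fixture, repository and service names are invented; this assumes NUnit and Rhino Mocks, but any test framework and isolation tool would do): the shared context lives in a base class, and each fixture inherits it and asserts one scenario.

```csharp
// The context (an empty, stubbed repository) is defined once in a base class.
public abstract class given_an_empty_customer_repository
{
    protected ICustomerRepository repository;

    [SetUp]
    public virtual void EstablishContext()
    {
        repository = MockRepository.GenerateStub<ICustomerRepository>();
        repository.Stub(r => r.GetAll()).Return(new List<Customer>());
    }
}

// Each fixture is one scenario within that context; tests stay tiny and clear.
[TestFixture]
public class when_searching_for_customers : given_an_empty_customer_repository
{
    [Test]
    public void no_results_should_be_returned()
    {
        var results = new CustomerSearch(repository).Find("Smith");
        Assert.IsEmpty(results);
    }
}
```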

Although context specification and BDD are not the same thing, I believe they are movements in the same direction: moving away from blind testing towards defining the scenarios that we need to test. Tests become closer to the goal of being "readable documentation".

If you do want to read up a bit more, check out:

Article introducing BDD styled tests with the notion of Context Specifications :

A nice coding example showing one way of thinking in a context spec way:

MSpec with Boo looks to be very cool too: (requires git client):

Although the shift for me has not been great it has been significant, I would encourage you to at least investigate to consider if some of the principles can be applied in your testing.

Tuesday, September 15, 2009

When Easier is Better

Most of the people I currently work with, or have worked with, know that I have a strong preference for NHibernate as my persistence mechanism. I typically use a repository pattern, often with services hiding the repositories from the outside world. This is great, as I get enterprise-scalable domain-driven solutions up and running pretty quickly, and it helps me focus on fixing business problems, not spinning my wheels with infrastructure details. However, sometimes having DTOs, services, repositories, mapping files, anti-corruption/translation layers etc. is just overkill. This is where I would typically say "use Linq2Sql" if someone asked me what they should use, but it's probably not what I would use myself. I like the idea of having the flexibility of moving from a simple domain to a complex domain without too much trouble. Enter Castle ActiveRecord.

Active record is in no way a new concept (PEAA p160) but it is heavily underused in the .Net realm. AR is a great pattern when you are fleshing out a domain: you can very quickly start building up relationships and have screens up and running for a client very quickly. This is great for spikes but also for writing real code. The database can be generated from the code (Castle AR sits on top of NHibernate), so it is a great fit for fast moving agile projects, especially in the initial sprints. The thing I like most about it is that if I decide I want a more complex domain, all is not lost; I remove the Castle attributes and references, wrap a repository pattern around it and I am done. It really is that easy. All my existing domain unit tests should still pass. In a matter of hours you could switch from a 2-tier app to an enterprise-ready scalable architecture.
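To give a feel for it, here is a minimal Castle ActiveRecord entity sketch (the Customer type, table name and properties are invented for illustration): the mapping lives in attributes on the class, and persistence methods come from the base class.

```csharp
using Castle.ActiveRecord;

[ActiveRecord("Customers")]
public class Customer : ActiveRecordBase<Customer>
{
    [PrimaryKey]
    public int Id { get; set; }

    [Property]
    public string Name { get; set; }
}

// With the attributes carrying the mapping, persistence is a method call away:
// var customer = new Customer { Name = "Rhys" };
// customer.Save();
// Customer[] all = Customer.FindAll();
```

Dropping back to plain NHibernate later is then mostly a matter of moving this metadata into mapping files and hiding the static finders behind a repository.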

Basically AR is great if you are lazy (or need results now).

To prove all of this I plan on presenting Castle AR at an upcoming Perth Alt.Net meeting. Originally I had wanted to show NHibernate, however I think the progression from AR to NH will help show both of their benefits. This may also allow us to show the limits of both... time will tell.

As a side note, the Castle stack (as well as NHibernate) has had a "proper" release this year, so if you haven't had a look in a while check it out:

Links - Articles

All you wanted to know about Castle ActiveRecord - Part I

All you wanted to know about Castle ActiveRecord - Part II

Links - Videos

Ayende Rahien - Using Active Record to write less code (Oredev 2008) << Watch me I'm great!

Ayende & Hammet - Painless Persistence with Castle ActiveRecord (JAOO)

Tuesday, September 1, 2009

The Build XML Divorce - Part II

OK, so as I continue to play with Rake I am very quickly seeing what is happening: I am in effect building up a bunch of reusable scripts that can manage task dependencies but really are just orchestrating other applications and system admin tasks. The syntax is nice, Ruby is a very clean, fluent language, however it is becoming abundantly clear that all I am really doing is reorganising my build flow from PowerShell and MSBuild to Rake and MSBuild... It quickly dawned on me that I probably had not understood how PSake really works. I had briefly looked into PSake, only because James Kovacs (who I hold in very high regard) was the author, and quickly pushed it to the back of my to-do list as it looked more like a pet project whose only intention was to add to the variants of Make. The problem was I didn't really understand what it was supposed to do. At the time, in my mind, all PSake provided was a means to have a dependent task hierarchy written in PowerShell... that's it... but that's all it needs to be! It should be calling out to MSBuild or csc.exe to build assemblies; it should be calling out to your test runners and analysis tools. The (R/B/M/Ps)ake tool is (IMO) just a way to facilitate tasks and control their dependencies.

Ok, so why the big rant? Well, it was becoming obvious to me that the things I was trying to do in Rake last night were things that I could easily do in PowerShell. Not only easily, but arguably much more appropriately done in PowerShell; things like file and directory manipulations. My build process is really pretty basic and can be done completely in an XML based tool like NAnt or MSBuild. It's what I do after the most rudimentary clean/build/test that requires a bit more muscle, and this is where I have been using PowerShell anyway, so using PSake just makes sense. PSake is just PowerShell with a nice clean API to declare tasks with dependencies. Anything you can do in PowerShell you can do in PSake.

This is good news. So the next step is to refactor my PowerShell bootstrapper scripts into PSake tasks, pull some of my MSBuild tasks into PSake tasks and keep the MSBuild file down to the bare minimum of what MSBuild does well... namely: build. One thing that I would have assumed 6 months ago was that Rake/Ruby would make for so much cleaner, sexier code... but no, I actually think the PowerShell code is very nice and very well suited to these types of tasks. Sure, it's got a few bugs to iron out, but my affection for PowerShell continues.

Sorry Rake, it's been a fun 3 days, but it's over... it's not you, it's me.

Sunday, August 30, 2009

The Build XML Divorce

Like many .Net devs, I have been using NAnt and MSBuild a lot over the last few years to speed up my own local build and to create a suite of tasks for my build server to run when I check code in. If you have been doing the same, I am sure you will have run into issues as soon as you surpass the most basic clean->build->test->analyse type scripts.

For some reason I like to build deployable versioned packages for each environment when I deploy to Test. We only deploy to Test every day or 2, and I want to know that version x on Test will have the exact same compiled code as version x on UAT and Prod... it sounds obvious, however it's no surprise that this is not always the case in many software departments. Having these versioned packages ready to go also means that when it is time to push to UAT or Prod it's a matter of seconds before they could be live (not hours or days like some places I have worked at).

Doing this more detailed versioned pre-deployment packaging meant my XML based build soon became messy, and I turned to PowerShell to bootstrap some of the processes and loop through things like swapping out configs etc. This is fine, but it was becoming a little confusing for the other devs who had not been as involved in the process as any of us would have liked (especially as a bat file was kicking the whole thing off for local builds). It also means they now have to know MSBuild and PowerShell...

Ok, so this blog post is not going to be anything ground breaking for those out there who are au fait with the Ruby community; however, I have had an itch to check out Rake properly for a while now. I have finally got a home project that I am sinking my teeth into and I thought this is a great opportunity to bring Rake into the fold... finally!

Right, so a quick brief on Rake: it's loosely based on Make, it's a build tool written in Ruby, it's much cleaner than the XML based options & you are writing real code, so you can do what you want (including loops, which in Ruby are oh-so-clean)!

Here is a super simple skeleton rakefile.rb script below. The rake file should be somewhere in your solution directory structure; just calling rake from the cmd line in this directory will call the default task.

task :default => ["build:test"]

namespace :build do
  desc "Clean Solution"
  task :clean do
    puts "Cleaning..."
  end

  desc "Build Solution"
  task :buildsln => :clean do
    puts "Building..."
  end

  desc "Test Solution"
  task :test => :buildsln do
    puts "Testing..."
  end
end

namespace :deploy do
  desc "Publish Soln"
  task :publish do
    puts "Publishing..."
  end
end

So say this is in "c:\rhysc\rakefile.rb"; I just open a cmd window, change the dir to "c:\rhysc", type rake, and the following will be printed:

Cleaning...
Building...
Testing...
Right, so let's look at the above rakefile.rb:

  • First we define our default task, the thing that will run if the rake command is not given any parameters. This says the test task in the build namespace is the default task to run.
  • Next we define a namespace (build); this is standard Ruby.
  • Next we document our tasks. If we type "rake -T" we get to see the list of available tasks with their descriptions. Personally I think this is fantastic.
  • Next we define a task! These tasks are pretty silly, as they only print to the console what they should be doing, but it helps show the basic structure.

Note the build and test tasks have the => notation. This shows dependencies, i.e. test depends on build, which depends on clean; so calling test means the tasks that run (in order) are clean, build, then test.

Also note that we have quote marks in the default dependency (["build:test"]). My knowledge of Ruby is poor at best (I'll get there!), but this is required when referencing a task in a namespace. If test were not in a namespace the line could read:

task :default => [:test]

To call the publish task along with all the build tasks we would just call:

rake "build:test" "deploy:publish"

Clearly this is a very light taste of what Rake does. I intend on posting more scripts as I continue to build real scripts* to incorporate into my code base; however, for now the link below may be a good starting place... as well as reading the docs... I'm so looking forward to losing this XML bride ;)


(You obviously need Ruby installed... it's a one-click installer so it's pretty painless.)

*don't worry work colleagues; I don't intend on inflicting this onto you... yet.. we'll keep to our Bat File/MSBuild/PS cocktail for now ;)

Thursday, August 27, 2009

.Net - Rx

Rx in .Net 4.0 is looking pretty sweet. It also tackles something that has bugged me for quite a while: the notion of having to explicitly unhook from an event...

It is basically Linq to events. For more info see some cool stuff here:
InfoQ Article and Jafar Husain's blog posts here and here
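To illustrate the idea, here is a sketch based on the early Rx previews (the exact type and method names may differ from whatever eventually ships): events become sequences you can query with Linq, and unhooking is just disposing the subscription.

```csharp
// Turn a plain .Net event into an observable sequence.
var moves = Observable.FromEvent<MouseEventArgs>(form, "MouseMove");

// Query it like any other Linq source.
var leftHalf = from evt in moves
               where evt.EventArgs.X < form.Width / 2
               select evt.EventArgs;

IDisposable subscription = leftHalf.Subscribe(
    args => Console.WriteLine("Mouse at {0},{1}", args.X, args.Y));

// No explicit -= unhooking required; just dispose the subscription.
subscription.Dispose();
```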

Saturday, August 15, 2009

AOP with Delegates

In the past I have made mention of the notion of Aspect Oriented Programming (AOP) with regard to reducing the noise that can occur when cross cutting concerns, like logging, invade business logic. Unfortunately most of the posts I have made have been in reference to tools and the assistance they can offer. Tools like PostSharp, Unity, Castle etc. provide some "magic" to eliminate the code clutter. Unfortunately many of the people I talk to just do not use tools like this at the place they work and want a POCO option to deliver such results. Well, this is actually simpler than many people realise, and it also points to the widespread misunderstanding of delegates, anonymous methods and lambdas, as well as the huge amount of code reuse they can provide.

Firstly I will show an example of "typical" business code that has a lot of business noise. Secondly I will show how the code could look if we were to use AOP, and later on a clean version that mixes POCO with other forms of AOP.

class AopEnabledSampleService : ITransferable //from Wiki
{
    void Transfer(Account fromAcc, Account toAcc, int amount)
    {
        if (fromAcc.getBalance() < amount)
            throw new InsufficientFundsException();

        //...the actual transfer logic
    }
}

class NoAopSampleService : ITransferable
{
    private string OP_TRANSFER = "Transfer";
    private Database database = new Database();
    private Logger systemLog;

    void Transfer(Account fromAccount, Account toAccount, int amount)
    {
        if (!getCurrentUser().canPerform(OP_TRANSFER))
            throw new SecurityException();

        if (amount < 0)
            throw new NegativeTransferException();

        if (fromAccount.getBalance() < amount)
            throw new InsufficientFundsException();

        Transaction tx = database.newTransaction();
        try
        {
            systemLog.logOperation(OP_TRANSFER, fromAccount, toAccount, amount);
            //...more code
        }
        catch (Exception e)
        {
            tx.rollback();
            throw;
        }
    }
}


It is quite clear that the AOP code is much cleaner to look at; however, there is a lot potentially happening that we do not know about. You have to trust that the AOP injection or interception is catering for all of the things that the second example dealt with explicitly. This is a fundamental problem with AOP: it is not explicit. This can obviously make it very hard to debug and can be confusing for the developer maintaining the code. One way you can get around this is by marking up methods or classes with attributes; this at least gives the user of the code a hint as to what is going on, and many of the AOP providers allow for it. However, sometimes you are just shifting the noise from inside the method to an attribute. How you deal with this is up to you and your team; however, I will later on offer some ideas on how to manage it.

The purpose of this post is to show how we can achieve the functionality of the verbose code above with reduced noise, yet still be maintainable and somewhat explicit. What we will eventually be using is lambdas to achieve the same functionality. Many .Net devs use lambdas on a semi regular basis, but many do not know how to write a basic API that uses them, or even what is really going on when they are using a lambda. Bear with me now while we have a code school moment and cover methods, delegates, anonymous methods and lambdas (closures will be covered in another post). If you are comfortable with all of these then I don't really know why you are reading this post; you should know how to solve this problem already.
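As a taste of where this is heading (the wrapper name is my own, and Account, Transaction etc. are the same invented sample types as above): the cross-cutting checks live in one reusable method, and only the interesting business logic is passed in as a delegate.

```csharp
// A sketch only: the guard clauses from NoAopSampleService hoisted into a
// reusable wrapper, with the per-call logic supplied as an Action.
void WithTransferGuards(Account from, int amount, Action transferLogic)
{
    if (!getCurrentUser().canPerform(OP_TRANSFER))
        throw new SecurityException();

    if (amount < 0)
        throw new NegativeTransferException();

    if (from.getBalance() < amount)
        throw new InsufficientFundsException();

    transferLogic(); // only this part varies per caller
}

// Usage (transferTo is a hypothetical domain method):
// WithTransferGuards(fromAccount, amount,
//     () => fromAccount.transferTo(toAccount, amount));
```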


Methods

Right, we all know what a method is: it's a function, something that does something, typically a command or a query. You can pass in parameters and you can get something back from a method. The way we typically use a method is in the named sense, i.e. 5.ToString(); we are calling the ToString method on the integer object 5. The name of the method is "ToString".


Delegates

A delegate is to a method what a class is to an object: a class defines an object as a delegate defines a method. Typically most code will never need to define a delegate for a given method unless it is passing the method around like an object... read that again; you can pass methods around like objects. This is where delegates become powerful, and this is where the notion of delegates is often misunderstood and often not even known! We will cover more of this later... but for now here is how you define a delegate, and what a method would look like that adheres to a delegate.

public class UsingDelegates
{
    public delegate void MyDelegate();

    public void Main()
    {
        UseADelegate(MyMethod);
    }

    private void MyMethod()
    {
        Console.WriteLine("This is My Method!");
    }

    private void UseADelegate(MyDelegate myDelegate)
    {
        Console.WriteLine("Before using my delegate");
        myDelegate();
        Console.WriteLine("After using my delegate");
    }
    /*Output is:
    Before using my delegate
    This is My Method!
    After using my delegate*/
}

In this code we expose the public method Main, which calls the UseADelegate method passing in the address of the MyMethod method. Note that the parameter passed in to the UseADelegate method does not have the parenthesis typically associated with a method call; that is because we want to pass the method as a delegate, not the returned value of the method. This is hugely significant. You will also notice that the UseADelegate method takes in a variable of type MyDelegate; we defined MyDelegate as a delegate at the start of the class. When you define a delegate you are defining a signature of a method. The name does not matter (except for readability); the only things that matter are A) whatever uses it must be able to access it (an appropriate accessor) and B) the return type and parameter types are consistent with the methods that you intend to use as the delegate. To me this is similar to classes implementing interfaces: you don't care what the name of the class that implements the interface is, it just has to implement what the interface says to implement. Delegates are similar, however they are not explicit; a method does not say it implements a delegate in the same way a class says it implements an interface.

The syntax for defining a delegate is

[accessor] delegate [return type] [Custom Delegate Name] ([parameter list]);

e.g. public delegate List<Customers> CustomerFilterDelegate(string filter);

Now any method that returns a list of customers and takes in one string parameter is compliant with this delegate.

Right, now that I have told you how to define a delegate, I am going to throw a spanner in the works and tell you to never do so... sorry. The reason is that .Net now gives us reusable delegates in the form of Func<> and Action<>. Action specifies a delegate with a return type of void, so each of its generic parameters describes a parameter in the signature it is defining. Func is used the same way, however its last generic argument is the return type.

You can now define any reasonable delegate signature with these two generic delegate types. For example, the delegate we defined above would now be written as Func<string, List<Customers>> instead of CustomerFilterDelegate. See Framework Design Guidelines for more info.
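For example, the equivalent using the built-in Func type might look like this (the Customer class and the sample data are invented for illustration):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Customer { public string Name { get; set; } }

class FuncDemo
{
    static void Main()
    {
        var customers = new List<Customer>
        {
            new Customer { Name = "Smith" },
            new Customer { Name = "Jones" }
        };

        // Same shape as CustomerFilterDelegate, no custom delegate type required:
        Func<string, List<Customer>> filterCustomers =
            filter => customers.Where(c => c.Name.Contains(filter)).ToList();

        Console.WriteLine(filterCustomers("Smith").Count); // 1
    }
}
```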

Here is the above code rewritten to be guideline compliant:

public class UsingDelegatesCorrectly
{
    public void Main() { UseAnAction(MyMethod); }

    public void MyMethod() { Console.WriteLine("This is My Method!"); }

    public void UseAnAction(Action myDelegate)
    {
        Console.WriteLine("Before using my delegate");
        myDelegate();
        Console.WriteLine("After using my delegate");
    }
}

Anonymous Delegates

An anonymous delegate is a method without a name, i.e. it has a body but no name... hmm. As mentioned, the name of a method has no bearing on whether it adheres to a delegate definition; it is its signature that counts. Previously we were only using the method name as an effective pointer to the method body. What many people don't know is that you can create a method body without a name, commonly known as an "anonymous method", "anonymous delegate" or "inline method", e.g.:

Action myDelegate = delegate()
{
    Console.WriteLine("Hello, World!");
};
myDelegate(); // writes "Hello, World!" to the console

You can use an anonymous delegate anywhere you would typically use a named delegate, however you define the method at the point you wish to use it. The syntax for defining an anonymous delegate is

MyDelegateType x = delegate([parameter list]) { [body of method, including any return statement] };

(note that an anonymous method must be assigned to a declared delegate type such as Action or Func<>; the compiler cannot infer a delegate type for var)

Note that the return type is not declared; it is inferred from the presence and type of the return value in the body of the anonymous method. If there is no return value the delegate is considered to have a return type of void. Below is how the earlier code would be written using anonymous delegates:

public class UsingAnonymousDelegates
{
    public void Main()
    {
        UseADelegate(delegate() { Console.WriteLine("This is My Method!"); });
    }

    private void UseADelegate(Action myDelegate)
    {
        Console.WriteLine("Before using my delegate");
        myDelegate();
        Console.WriteLine("After using my delegate");
    }
}

This shows that we do not have to define a delegate signature (the built-in .Net Action type is suitable) and we do not even need to create a named method!
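As a small aside on the return-type inference mentioned above, here is a trivial sketch of my own: the anonymous method never declares its return type, the Func<int, int> variable it is assigned to carries it.

```csharp
using System;

public class AnonymousReturnDemo
{
    public static void Main()
    {
        // No return type on the anonymous method itself; Func<int, int> supplies it
        Func<int, int> doubler = delegate(int x) { return x * 2; };
        Console.WriteLine(doubler(21)); // prints 42
    }
}
```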


Anonymous delegates were great when they came out; they saved a lot of code rewriting and promoted better code reuse. However, the syntax was ugly: the majority of the signature still had to be declared and, worst of all, anonymous delegates were reasonably easy to write but almost impossible to read, making maintenance a PITA.

Introducing lambdas: lambdas are exactly the same as anonymous delegates in functionality, however they have a very different and more readable syntax. Lambdas basically allow the writer of the code to imply a lot about the method signature without explicitly declaring it. This works because the signature is usually already defined elsewhere, so the lambda can make use of it. Enough chat, let's see what the previous anonymous delegate would look like as a lambda:

Action myDelegate = () => Console.WriteLine("This is My Method!");

Ok, so not a huge difference; we have dropped the keyword "delegate" and added an arrow-looking thing. Perhaps I should show something a little more complex. First let's define a more realistic anonymous delegate using the method from the first example:

Action<Account, Account, int> transfer = delegate(Account fromAccount, Account toAccount, int amount)
{
    if (fromAccount.getBalance() < amount)
        throw new InsufficientFundsException();
    // ...perform the transfer
};

as a lambda:

Action<Account, Account, int> transfer = (fromAccount, toAccount, amount) =>
{
    if (fromAccount.getBalance() < amount)
        throw new InsufficientFundsException();
    // ...perform the transfer
};

As you can see the method body is the same; it is just the definition of the parameters that is different, and that is because the types are inferred. Again this may not seem like much at the moment, but the heavily reduced noise allows for much more readable framework usage. I would hate to think how my current tests would look in RhinoMocks if I was not using lambdas!

Couple of things I should mention:

  • When using a lambda expression that takes no parameters, use empty parentheses to signal this, e.g. () => //method body

  • If the method body is a one-liner you do not need the curly braces {}, but you do if there is more than one line!

  • You do not need the parentheses around the parameter name if there is only one parameter; you do if there is more than one.

  • If the return statement is a single expression without curly braces, you do not even need the return keyword!

(a) =>
{
    return "bob";
}
can be written as

a => "bob";
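Pulling those four rules together in one snippet (my own throwaway example, nothing from the post):

```csharp
using System;

public class LambdaShorthand
{
    public static void Main()
    {
        Func<string> greet = () => "hi";                 // no parameters: empty ()
        Func<int, int> square = x => x * x;              // one parameter: parentheses optional
        Func<int, int, int> add = (a, b) => a + b;       // more than one: parentheses required
        Func<int, string> name = a => { return "bob"; }; // braces bring back the return keyword

        Console.WriteLine(greet());   // hi
        Console.WriteLine(square(4)); // 16
        Console.WriteLine(add(2, 3)); // 5
        Console.WriteLine(name(7));   // bob
    }
}
```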

Just to keep things consistent here is the Console.WriteLine example using lambdas:

public class UsingLambdas
{
    public void Main()
    {
        UseADelegate(() => Console.WriteLine("This is My Method!"));
    }

    private void UseADelegate(Action myDelegate)
    {
        Console.WriteLine("Before using my delegate");
        myDelegate();
        Console.WriteLine("After using my delegate");
    }
}

Using Delegation to Achieve AOP-like Coding

Alright, the whole point of this post was to show how you can use plain .Net, without any other libraries, to do AOP-like activities.

Firstly, using lambdas is not as clean as interception, but it is a lot cleaner than the copy and paste (right-click inheritance) I see so often. I want to help create better code too, so here are some thoughts on where to use AOP and where to use delegation:

  • Use delegation when you want to be specific and explicit about your intentions (e.g. transactions)

  • Use interception/injection based AOP for things that are truly behind the scenes (e.g. logging)

  • Use attribute based (i.e. explicit) AOP when you want the developer maintaining your code to know that some aspect is taken care of (e.g. security) but you do not want it polluting the method body

Below is an example of what the first example could look like if using a combination of lambdas and AOP:

public class SampleService : BaseService, ITransferable
{
    void Transfer(Account fromAcc, Account toAcc, int amount)
    {
        TransactionWrapper(() =>
        {
            if (fromAcc.getBalance() < amount)
                throw new InsufficientFundsException();
            // ...perform the transfer
        });
    }
}

internal abstract class BaseService
{
    protected void TransactionWrapper(Action wrappedDelegate)
    {
        Transaction tx = database.newTransaction();
        try { wrappedDelegate(); tx.Commit(); }
        catch { tx.Rollback(); throw; } // "throw" (not "throw e") preserves the stack trace
    }
}


  • The logging is nowhere to be seen. I personally hate seeing logging code; it should be hidden away. To me it is pure noise. This would be taken care of by the AOP framework of choice.

  • Security is kept as subtle as possible without leaving it off the radar. This is not always possible, but if I can I keep it out of the method body and express it as an attribute.

  • The transaction is dealt with by a separate method that takes in a delegate. This method can now be reused, allowing any other method to take advantage of the pre-existing transaction handling. It can be pushed into a base class or, if a standard .Net transaction is being created, a static method that anything can use.
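As a sketch of that last bullet, here is what such a static wrapper might look like using the standard System.Transactions API (the Transactional class and Wrap method names are my own invention, not from the post):

```csharp
using System;
using System.Transactions;

public static class Transactional
{
    // Reusable wrapper: any block of code passed in as an Action
    // runs inside an ambient .Net transaction
    public static void Wrap(Action wrapped)
    {
        using (var scope = new TransactionScope())
        {
            wrapped();
            scope.Complete(); // an exception in wrapped() skips this, so Dispose rolls back
        }
    }
}

// usage: Transactional.Wrap(() => transferLogicHere());
```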

Personally I like the last example the most. However, implementing it requires a reasonably detailed knowledge of AOP (so interception can be done with or without attributes) and a basic understanding of delegation. Hopefully this post has helped with the latter. Next time you see repeated code in your code base, consider whether you could use delegation to clean your code up and start making it more reusable.


Monday, August 10, 2009

Explicit interfaces

Further to our team's discussions with Greg Fox, and following on from Colin Scott's blog post, I thought I would highlight this:

It is a compile-time error for an explicit interface member implementation to include access modifiers, and it is a compile-time error to include the modifiers abstract, virtual, override, or static.

Explicit interface member implementations have different accessibility characteristics than other members. Because explicit interface member implementations are never accessible through their fully qualified name in a method invocation or a property access, they are in a sense private. However, since they can be accessed through an interface instance, they are in a sense also public.
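A quick illustration of that dual nature (the types here are mine, just for demonstration):

```csharp
using System;

public interface IGreeter
{
    string Greet();
}

public class Greeter : IGreeter
{
    // Explicit implementation: no access modifier, and no virtual/override/static allowed
    string IGreeter.Greet()
    {
        return "hello";
    }
}

public static class ExplicitDemo
{
    public static void Main()
    {
        var greeter = new Greeter();
        // greeter.Greet();              // compile error: "private" in a sense
        IGreeter viaInterface = greeter; // but through the interface it is "public"
        Console.WriteLine(viaInterface.Greet()); // prints hello
    }
}
```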

For more information see the MSDN documentation.

That's all; interesting though...

Friday, August 7, 2009

Arguments in Stubs and mocks

Below is an informal email to workmates. Please note I am not the boss; I am a lowly contractor at the bottom of the heap. The devs I work with have a refreshingly open communication channel and I tend to have a bit more experience in testing/TDD.

Hey Guys
I think I have unfortunately let some bad habits of mine slip over to you guys.

Some basic rules of thumb:

-Stubs should be used by default to isolate dependencies; use mocks when you are mandating that the SUT is incorrect if it does not interact with the given dependency.
-I will often declare, in a test method's set up, a class level dependency as a mock (with an associated verify in the tear down); however, that mock does not need to have expectations. You can always use the stub method on a mock, meaning failure to call that method will not fail the test.
-Use correct arguments and return values. Returning null and using the .IgnoreArguments() method should be last resorts and are generally a sign (if it is my code) of laziness or haste. Don't do it unless it actually makes sense for the test.
-When returning null from a stub or mock, the SUT should handle it properly. I.e. what if that dependency actually did pass back a null? Is that even valid? Should we be handling it?

The major problem I find is the IgnoreArguments method; I abuse it far too much when mocking, and I see it creeping into others' work (not just ours but external code too!). Note: IgnoreArguments on stubs is not so bad, as a stub should not fail a test.

RhinoMocks has the ability to specify argument placeholders that do not have to be the exact reference that is being used, e.g.:

Mock(s => s.SaveNewTimesheet(
        Arg<TimesheetDto>.Is.Equal(expectedTimesheet))) // an Arg<T> placeholder, for example
    .Return(new TimesheetDetailsDto());

Which is much better than:

Mock(s => s.SaveNewTimesheet(null))


Be sure to override the Equals method to accurately reflect equality though, otherwise the default reference equality will still cause the test to fail!
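For example (a hypothetical DTO; the property names are mine), a value-based Equals means an expected argument constructed in the test still matches the one the SUT builds:

```csharp
using System;

public class TimesheetDto
{
    public int EmployeeId { get; set; }
    public DateTime WeekEnding { get; set; }

    // Value equality instead of the default reference equality, so the
    // argument built in the test matches the one built by the SUT
    public override bool Equals(object obj)
    {
        var other = obj as TimesheetDto;
        if (other == null) return false;
        return EmployeeId == other.EmployeeId && WeekEnding == other.WeekEnding;
    }

    public override int GetHashCode()
    {
        return EmployeeId.GetHashCode() ^ WeekEnding.GetHashCode();
    }
}
```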

Thursday, August 6, 2009

Coding Guidelines

Last night I presented to the Perth .Net Community on an upcoming tool called PEX. There were a couple of mentions in the talk of "allowable exceptions" backed up by mentions of the .Net Framework Guidelines.
I was asked by a few people afterward what the book was and whether I had made these guidelines up ;)
I was under the impression that this book was widely read, so it is clearly not as common knowledge as I may have thought.
Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries (2nd Edition)
is a must read for .Net devs who are writing code that is consumable by others (i.e. anything that uses public or protected accessors).

I would highly recommend it, as it also gives a lot of background as to the "why" behind the recommendations. It is also nice to read the comments from the authors of certain parts of the .Net framework as they point out many things, including their mistakes.

The book is made available online for free (not sure if it is in its entirety) at MSDN here

The allowable exceptions comment was in reference to section 7.3.5 (page 237), or a cut down version here

Oh, the links to the Pex stuff are here:

Thanks to everyone who came (especially those who bought beers afterwards) ;)

Friday, July 24, 2009

Manning Book Reviews

Thought I'd let you guys know of some books I have been reading that are pretty good. They are all from Manning, a publisher I am starting to like more and more.

jQuery In Action: within about 90 minutes of reading this book you will understand the fundamentals of jQuery and be ready to write basic, but powerful, jQuery code. If you are using JavaScript natively, seriously consider switching to jQuery and get this book. jQuery also has a test framework (QUnit) and a great suite of UI plugins (jQuery UI).

Art Of Unit Testing: easily the best unit testing book I have read (and I have read a few). Great for newbies and those still getting to grips with how to test anything more than the most trivial of examples. It is the book I would recommend to people looking to learn to do TDD well. Note the examples are in C#, but they really don't require in-depth knowledge of .Net, in the same way all the other books are in Java and I haven't written a line of coffee-flavoured code in a decade. In saying that, the tools are all .Net based, but I am sure there are Python, Ruby and Java equivalents available for most.

NHibernate in Action: pretty much the same as the Hibernate book but shows all the .Net stuff you can do. It is also a bit more up to date than the original Hibernate book (which has since had a second release). .Net devs using NH need* this book.

IronPython in Action: not a bad book... it does exactly what it intends; it teaches .Net devs about Python on the CLR. The question is: do you care? For me it was something of interest; I doubt I'll ever use it in production. As a side note, for the .Net kids I think the path of C# => Boo => Python => Ruby is the one to take for the typical C# developer**. It keeps the "barrier to entry" low for each next step, so you are picking up one new thing at a time (i.e. new syntax, dynamic language constraints, DSLs and other scripty weirdness), and by the end of the process you have four languages under your belt in about the same time it would take to do the C# => Ruby jump.

that's all


*OK, no one needs anything, especially as the NH docs are pretty good, but you will be severely hindered without it.
** VB.Net devs; you know you will never learn another language, you have had years to do so!

Wednesday, July 22, 2009

Unity with config free AOP

At my current place of work I have managed to introduce the notion of IoC and DI. As the team was using EntLib, I investigated Unity and found it to be a suitable replacement for Windsor or SM, considering how we were going to be using it.
We have just started a new project and I asked one of the lads to investigate AOP with Unity. What we found was a pretty simple solution for our initial requirement: logging the service calls using Unity.

Below are the three files that make up the spike. It is very trivial, but the available information on how Unity interception works is pretty average; there are samples (like this) but with little explanation of what is going on. David Hayden is probably the first port of call for more info (note you will need to reference Unity 1.2 and Unity Interception).
Note we are using the TransparentProxyInterceptor, not the InterfaceInterceptor, which I believe is broken as it does not handle inheritance; in an OO world that is not good enough.

class Program
{
    static void Main(string[] args)
    {
        IUnityContainer container = new UnityContainer();
        container.AddNewExtension<Interception>();
        container.RegisterType<ITalker, Talker>();

        // wire up interception via the Unity 1.2 fluent API
        container.Configure<Interception>()
            .SetDefaultInterceptorFor<ITalker>(new TransparentProxyInterceptor())
            .AddPolicy("Logging")
            .AddMatchingRule(new TypeMatchingRule(typeof(ITalker)))
            .AddCallHandler<LoggerHandler>();

        Console.WriteLine("This is the start");
        container.Resolve<ITalker>().Talk();
        Console.WriteLine("This is the end");
    }
}

public interface ITalker
{
    void Talk();
}

public class Talker : ITalker
{
    public void Talk() { }
}

public class LoggerHandler : ICallHandler
{
    public LoggerHandler()
    {
        Order = 0;
    }

    public int Order { get; set; }

    public IMethodReturn Invoke(IMethodInvocation input, GetNextHandlerDelegate getNext)
    {
        Console.WriteLine("** I'm in!**");
        var result = getNext().Invoke(input, getNext);
        Console.WriteLine("** Out I go :) **");
        return result;
    }
}
Which returns :
This is the start
** I'm in!**
** Out I go :) **
This is the end
*Sorry about the formatting, this is done without WLW*

Tuesday, June 16, 2009

Many to Many joins- Revisited

I did a Google search a year later & found my own post, and I don't like it.
There are several issues with that post:

Firstly, sending objects over the wire was always a bad idea; that post was trying to dodge it because the team did not want to move to DTOs, which I maintain would have saved us time in the end. The real fix is to map the entities to DTOs and send those over the wire, specific to the service call's needs.
Secondly, many-to-many joins are not cool. There are very few places where many-to-many actually exists. Hiding the join in the entity should have been done, not eliminating the mapping and the joining classes. Redoing this, I would have kept the joining class and mapping as a one-many <-> many-one relationship.

e.g. to expose a customer's favourite restaurants:

public class Customer
{
    public IEnumerable<Restaurant> GetFavouriteRestaurants()
    {
        foreach (var customerRestaurant in CustomerRestaurants)
        {
            if (customerRestaurant.IsValid()) // some check, if required
                yield return customerRestaurant.Restaurant;
        }
    }
}

This now hides the notion of a CustomerRestaurant entity from the outside world, as it can be contained within the realm of the domain entity classes (namely Customer and Restaurant).

Well, I guess it's good to review one's work. I'm not happy that this was a decision I made; however, acknowledging one's mistakes is an opportunity for growth.

Tuesday, June 2, 2009

Gallio : Why? When? How?

In a time when TDD and continuous integration are becoming commonplace, Gallio is a great tool to have in the tool belt. I have been a fan of the related MbUnit for several years, but only in the last six months have I really seen the light in the separation of the Gallio project and why it is such a good thing.

Let's back the truck up a bit and shed some light on what exactly (in my mind) Gallio really is; then we can talk about why you would want to use it.

Gallio is basically an interop facility that acts as a generic test runner. Sure, it can be much more than that, but at the end of the day 99.99% of the people using it will be using it as a means to execute tests. Gallio is actually a project that has broken away from the MbUnit project to provide a neutral test runner for other test frameworks.

So what the hell is a test runner? First we need to look at how we would normally run a (unit) test. We would typically choose a test framework to write tests in; the common APIs that fall into this category are NUnit, MbUnit, xUnit.net and MSTest. These allow us to write classes and methods with attributes that describe what and how we wish to test the system under test (SUT). Writing these tests does not run them; we still need something to kick the process off. This is where test runners come into play. TestDriven.Net, ReSharper, the Visual Studio test window and the various separate GUIs that come with the frameworks (e.g. the NUnit GUI runner and Icarus for MbUnit) allow us to select which tests we wish to execute. Unfortunately there is some degree of coupling present here: the Visual Studio test runner may or may not run your given test framework, and the NUnit GUI surely doesn't run xUnit.net tests. There is also the very large issue of being able to run these from the command line or a script, which is pretty important for continuous integration. This is where Gallio fits.

Gallio describes itself as a "neutral system for .NET that provides a common object model, runtime services and tools (such as test runners) that may be leveraged by any number of test frameworks." What this means is Gallio can sneak in between your chosen test runner and the test API, providing an abstraction between the two. When I first understood this I was underwhelmed... who cares? Well, apparently I do!

You see, at my current place of work we, like many .Net teams, use MSTest as our test framework. Being the good kids we are, we were keen to get CI up and running, and without TFS properly installed (at the time) we decided to use TeamCity as our build server. It's a great tool and I have no regrets in using it. Unfortunately, getting MSTest tests to run from a script is a little fiddly and requires an install of a version of Visual Studio that includes MSTest on the machine that runs the test script. Obviously we want our build server to run the tests for the solution too, so now we had to install Visual Studio onto our build server... this is not cool:

  1. It takes up a lot of space. We had to fight to get a VM created for us to have a build server, and installing VS took up most of the space we were given.
  2. We had just used up one of our licences of Visual Studio. VS is not cheap. Sure, I work for a huge company that haemorrhages cash, but wasting money is still wasting money.

Enter Gallio. With a minor adjustment* to our build script I can now use Gallio to run my MSTest tests from my MSBuild script. This is pretty cool. What this means is that I now have a test framework agnostic build script. If we converted all of our tests to MbUnit I would not have to change my build scripts; MbUnit is supported by Gallio, so I am covered. It also means I have nice reports generated for me without crazy MSTest stuff spewing all over my hard drive. The reports are clean, configurable and human readable. I can show my department manager (who may or may not be technical) the test reports for all of our projects and he can see what state they are in. Having a clean, readable report seriously helps in promoting our good work, something an MSBuild log file or nasty MSTest XML would not do so well.

OK so who should be interested in Gallio?

People who "do" CI: Having a free test runner on the build server may be saving you cash and is a big benefit, I would say however having a neutral runner means easier maintenance and is the biggest win here. The build scripts will all use the same syntax. Gallio works with the above mention test frameworks but also integrates with MSBuild, NAnt, NCover, PowerShell, CC.Net and TeamCity.

People who use (or may potentially use) more than one test framework: having Gallio in the mix means running NUnit from Visual Studio is very simple. Pick your poison; TD.Net, ReSharper and VS can all now run that or any other Gallio supported framework.

People who want good consistent Test Reports: This is certainly my opinion, but I really like the Gallio reports. They are clear, easy to navigate and if you are using multiple frameworks you can now have a consistent format to display your reports.

Something to get you started - an MSBuild template for using the Gallio.MSBuildTasks assembly:

<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <!-- This is needed by MSBuild to locate the Gallio task -->
  <UsingTask AssemblyFile="[Path-to-assembly]\Gallio.MSBuildTasks.dll" TaskName="Gallio" />
  <!-- Specify the tests assemblies -->
  <ItemGroup>
    <TestAssemblies Include="[Path-to-test-assembly1]/TestAssembly1.dll" />
    <TestAssemblies Include="[Path-to-test-assembly2]/TestAssembly2.dll" />
  </ItemGroup>
  <Target Name="RunTests">
    <Gallio IgnoreFailures="true" Assemblies="@(TestAssemblies)">
      <!-- This tells MSBuild to store the output value of the task's ExitCode property
           into the project's ExitCode property -->
      <Output TaskParameter="ExitCode" PropertyName="ExitCode"/>
    </Gallio>
    <Error Text="Tests execution failed" Condition="'$(ExitCode)' != 0" />
  </Target>
</Project>

Hopefully this helps shed some light on the Gallio project and how it may fit into your build and test process.

*The minor adjustment is actually cleaning up the script, which is also a good thing; it is much clearer what is happening. The MSTest hacks involved small amounts of wand waving.

Monday, May 25, 2009

MassTransit Host & Setup


The glue that ties all of MassTransit's moving pieces together has to be applied when starting your application. We need to configure the service to know what to start up, how to find it and in what context to run it.

MassTransit has split the host service into a separate project, namely TopShelf. You will see TopShelf being used to set up our MassTransit programs in the entry points of our applications, typically within the Program.Main(string[] args) method.

The basic set up steps for creating a runner configuration are:

  • Describe the service
  • Instruct how the service will be run
  • Configure the service

Once you have done this, the TopShelf Runner can host the service.

Describing the service means giving the service a name, a display name and a description. The display name and description are visible from the Service Control Manager, while the service name is intended for command line interactions.

Instructing how the service will be run: define any known dependencies (MSMQ, IIS, SqlServer etc), any actions that should be performed prior to running the service/host, and also how the service is to be run, i.e. what credentials the service will run under. We can also use UseWinFormHost<T>, where we supply the name of the WinForm that is the host. This is great for demos, but I am not sure if it is intended for production use... Chris and Dru may care to comment on this; either way it's handy when getting to terms with the stack.

Next we need to configure the service(s) we are hosting. Here we can define delegates for certain events in the service's life (WhenStarted, WhenStopped etc) and we can also weave some of our IoC voodoo magic by defining our service locator. Again the authors have decided to use Castle Windsor for the sample, however I believe you can use any of the CommonServiceLocator containers. As this method needs to return something that implements IServiceLocator, using the DefaultMassTransitContainer type makes life a little easier, as it does a fair bit of the plumbing for you, including setting the current service locator to itself.

private static void Main(string[] args)
{
    //from Starbucks.Barista.Program.Main(string[] args) - modified for readability
    var cfg = RunnerConfigurator.New(configurator =>
    {
        //Describe the service
        configurator.SetDisplayName("Starbucks Barista");
        configurator.SetDescription("A Mass Transit sample service for making orders of coffee.");

        //Instruct how the service will be run
        configurator.BeforeStart(a => { });

        //Configure the service(s)
        configurator.ConfigureService<BaristaService>(serviceConfigurator =>
        {
            serviceConfigurator.CreateServiceLocator(() =>
            {
                //Use MassTransit's built in container (Castle Windsor specific), described earlier
                IWindsorContainer container = new DefaultMassTransitContainer("Starbucks.Barista.Castle.xml");

                //Add the components to the container
                container.AddComponent("sagaRepository", typeof(ISagaRepository<>), typeof(InMemorySagaRepository<>));

                //Tracing - not super important in this context
                Trace.Listeners.Add(new TextWriterTraceListener(Console.Out));
                StateMachineInspector.Trace(new DrinkPreparationSaga(CombGuid.Generate()));

                //Return the current ServiceLocator, which was assigned in the DefaultMassTransitContainer ctor
                return ServiceLocator.Current;
            });

            //Define delegates (specifically service methods) to fire on given ServiceConfigurator events
            serviceConfigurator.WhenStarted(baristaService => baristaService.Start());
            serviceConfigurator.WhenStopped(baristaService => baristaService.Stop());
        });
    });
    Runner.Host(cfg, args);
}

Sunday, May 24, 2009

MassTransit End Points


Many people will be familiar with the notion of an "end point", especially those who use WCF or other web service frameworks. An end point is "the entry point to a service, a process, or a queue or topic destination". My WCF background has had the ABC (Address, Binding and Contract) drilled into me as the three things that basically define an end point. MT is pretty much the same. Also like WCF, the endpoints are a configuration aspect of the solution, so it seems valid to put this information in a config file. The MT boys are clearly Castle fans (although other IoC frameworks can be used) and they have chosen in most of the samples to use Castle Windsor to configure the endpoints.

SIDE NOTE: for those unaware of Castle Windsor (an IoC implementation), it allows you to write loosely coupled code and specify the concrete implementation details via config, a little bit like the Asp.Net Membership Provider, which is a plug-in pattern. Using MT without understanding IoC may prove to be difficult... in fact I would say you are almost certainly biting off more than you can chew. Look into the Castle stack, it really is a great OSS framework to help pick up good habits.

Moving on...

The defining of the endpoints should not be confused with the Castle implementation; it is just as easy to do this in code. Anyway, let's walk through a typical Castle config file for MT:

From the Starbucks Sample:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <facilities>
    <facility id="masstransit">
      <!-- the bus endpoint value here is an example; use your own queue URI -->
      <bus id="customer"
           endpoint="msmq://localhost/mt_client">
        <subscriptionService endpoint="msmq://localhost/mt_subscriptions" />
        <managementService heartbeatInterval="3" />
      </bus>
      <transports>
        <transport>MassTransit.Transports.Msmq.MsmqEndpoint, MassTransit.Transports.Msmq</transport>
      </transports>
    </facility>
  </facilities>
</configuration>
First and foremost this is a Castle config. The name of the file, "Starbucks.Customer.Castle.xml", is a pretty good hint, and I know "facilities" is a Castle concept. MassTransit has embraced the concept of facilities, which you can investigate here. MassTransit has its own facility, namely MassTransit.WindsorIntegration.MassTransitFacility, which helps us get up and running without having to know about all the plumbing. In this MassTransit specific facility we define the bus and the transports. The transports child node is equivalent to our "binding"; it is essential so we know what transport mechanism to use. You will see the standard .Net notation for expressing a type in XML, i.e. "Fully.Qualified.Namespace.TypeName, Assembly.Name". This type must implement the interface MassTransit.IEndpoint. Currently there are adapters for MSMQ, NMS, Amazon SQS and WCF.

The other child node in the facility defines the bus. Here we give the bus an identifier and its end point; these are both mandatory. The end point is the URI the bus will receive communication on when the application publishes a message. The id indicates that multiple buses can be configured, which they can. The bus can also have several child nodes, specifically:

  • controlBus

  • dispatcher

  • subscriptionService

  • managementService

The Control Bus is involved in managing the disparate system. For example, the Starbucks sample uses a control bus to manage the interaction amongst the server side consumers: the Cashier and the Barista. For more info on a control bus see page 540 of Enterprise Integration Patterns.

The Dispatcher is a means to control the use of threads. High volume message interaction can be handled using multithreading, specifically with the attributes maxThreads and readThreads, both of which are self-explanatory integer values.

The Subscription Service is the common service that provides an endpoint for subscriptions. The only value required is the end point attribute.

The Management Service allows for specifying a heartbeat monitor to check the health of your service's queue. The samples use the SubscriptionManagerGUI to show the queues that are being listened to and the health of the subscriptions.

I do not believe any of these bus child nodes are mandatory; from looking at the code, the only requirements are that the bus has an id and an end point, and that the facility has a defined transport.

There are a couple of notes for newcomers to Castle and MassTransit. Like most config files, the XML file shown above should have its build action set to "Content, Copy Always". The queues that each service uses also need to be set up (e.g. in MSMQ) before they can be used. Luckily the exception handling in MassTransit is pretty good and will let you know when a required endpoint is not set up; just be sure to read the queue name correctly. I spent about 15 minutes trying to figure out why a sample subscription was failing when the exception was saying I had not set up "mt_server1"; I thought it was saying "mt_server". If in doubt, read the exception! We will cover how the Castle config is tied up in the Host and End Points post.

End points and their configuration may be a bit tricky for newcomers, but if you break each piece down it becomes more manageable.

Thursday, May 21, 2009

OLE DB Oracle drivers

I have had issues in the past with the standard .Net OLE drivers for Oracle with regard to transactions; switching to the Oracle drivers fixed the issue. However, I have now found the reason why... the M$ one explicitly does not support nested transactions!

Microsoft's OLE DB Provider for Oracle: "...At this time, the provider does not support nested transactions, which is how it would expose save points."

first link here:

Note: navigating directly to the Experts Exchange site will not show the answer; Google has forced them to show answers at the bottom of the page, hence the indirect link.
This post is much more for me to find this link again.

Basically, if you are using .NET and Oracle, use the Oracle drivers.
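As a minimal sketch of the switch, using the System.Data.OracleClient provider that shipped with .NET at the time (since deprecated); the connection string is purely illustrative:

```csharp
using System.Data.OracleClient; // Oracle-specific ADO.NET provider

class OracleTransactionDemo
{
    static void Main()
    {
        // The MSDAORA OLE DB provider does not support nested
        // transactions (save points); the Oracle provider does.
        using (var con = new OracleConnection(
            "Data Source=MyDb;User Id=scott;Password=tiger")) // illustrative
        {
            con.Open();
            using (var tx = con.BeginTransaction())
            {
                // ... do transactional work ...
                tx.Commit();
            }
        }
    }
}
```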

Tuesday, May 19, 2009

MassTransit Publishers


So we feel we have something the world needs to know about; we have messages to publish. This is what kicks off the events that make up the Pub/Sub system. The IT division have told you they are sick of modifying the HR application to call a growing number of web services to let those services know about new or updated employee information. You decide this may be a good candidate for some Pub/Sub love. We will start with new employees. First we need to create a suitable message to publish, say "NewEmployeeNotificationMessage", which has all the relevant info in it. As part of the creation process, all we need to do is create a message of the given type and publish it.

var message = CreateNewEmployeeMessage();
_bus.Publish(message);

That is it. Well... it's not quite, but as far as the publishing code goes, that's all there is to it. There is a little bit of infrastructure set-up that goes on at start-up, but publishing a message really is that simple.

There are times where you may want to receive a response if a subscriber sends one; this can be done by setting a response address in a delegate as part of the publish, e.g.:

_bus.Publish(message, x => x.SetResponseAddress(_bus.Endpoint.Uri));

If a response is expected, then the publishing service should also be a consumer of the response message type; see the consumers post.

The bus is a MassTransit.IServiceBus that is injected into the service. We will cover setting up the bus later in the series.

*This may be a bit of an over-the-top example. If you are building enterprise-wide services and integrating systems, perhaps MT is a little too lightweight; judge for yourself. Personally I am angling at using it for intra-component messaging.

MassTransit Consumers/Subscribers


A messaging system does not make a lot of sense if no one and nothing is listening to, consuming, or subscribing to the messages sent. If you are interested in a particular event that a message represents, then you subscribe to that event.

Continuing on with the idea of a new employee at a company, let's assume that head office has decided all staff members must do a new online, intranet-based safety course, and any new employees must do the safety course as part of their induction. We can create this online application and send out the notifications to all existing staff, but how do we ensure all new staff do the course? Well, we know that HR publish a New Employee Notification when an employee joins the company, so we decide to subscribe to that message so our application can notify the new employee and their supervisor that this course must be completed as part of their induction.

Ok, so how do we do this in MassTransit?

Well one option is to create a consumer, a service that subscribes to the message and acts on it when it happens.

public class NewEmployeeService : Consumes<NewEmployeeNotificationMessage>.All, IDisposable
{
    private IServiceBus _serviceBus;
    private UnsubscribeAction _unsubscribeToken;

    public void Consume(NewEmployeeNotificationMessage message)
    {
        // Notify user and supervisor of course requirement
    }

    public void Start(IServiceBus bus)
    {
        _serviceBus = bus;
        _unsubscribeToken = _serviceBus.Subscribe(this);
    }

    public void Stop()
    {
        // call the delegate returned by Subscribe to unsubscribe
        _unsubscribeToken();
    }

    public void Dispose()
    {
    }
}

A couple of things to note here:

The NewEmployeeService implements the "Consumes<T>.All" interface. This means we are subscribing to any message published of type T, in this case NewEmployeeNotificationMessage. By doing so we must implement Consume(T message); this is the method that will be called when the message arrives. Start and Stop are methods we have defined that get called when the host starts up the hosting service (we will cover this in later posts). More importantly, and something that may not be obvious, is the _unsubscribeToken: when subscribing to the bus, the Subscribe method returns an UnsubscribeAction delegate that can be called when the subscription is no longer required. Calling this delegate when the service stops would therefore be a good idea :)

A service can subscribe to many messages by specifying and implementing more of the Consumes interfaces; as they are not base classes, you are not limited to single inheritance. So you may want to define the class as:

public class NewEmployeeService :
    Consumes<NewEmployeeNotificationMessage>.All,
    Consumes<EmployeeDetailsUpdatedNotificationMessage>.All // second message type is hypothetical, for illustration

It is also worthwhile to note that the message can be responded to:


This will send the message back to the response address specified by the client; see the Starbucks example: CashierSaga.ProcessNewOrder(..) and OrderDrinkForm. NB: the OrderDrinkForm also implements the consume interface for the response message, otherwise it would not know what to do with the reply.
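As a sketch, responding from inside a consumer looks something like the following; CurrentMessage here is MassTransit's static accessor for the message being consumed, and the reply message type is hypothetical:

```csharp
public void Consume(NewEmployeeNotificationMessage message)
{
    // Build a (hypothetical) reply and send it to the response
    // address the publisher set on the original message.
    var reply = new NewEmployeeAcknowledgedMessage(message.CorrelationId);
    CurrentMessage.Respond(reply);
}
```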

MassTransit Messages


Messages are the backbone of MassTransit; without them there would not really be a need for the solution. Messages, IMO, should be a verb: "Customer" is not a suitable message name as it has no intent, whereas "NewCustomerCreated" is a more suitable name. As far as MassTransit goes, a message just needs to be a class that is marked as [Serializable]. For most scenarios I have encountered I actually want to track a specific message, i.e. I want to know its identity (which we will cover soon), so I have my message implement the interface "MassTransit.CorrelatedBy<T>", which gives the message a correlation id so I can track it. It is probably a good time to mention that messages are immutable, dumb DTOs. I have worked on several systems now that tried to ignore this, and every time it has ended in trouble. The message is a trigger; it should never be the entity you are manipulating.

An Example from the MassTransit Pub/Sub Sample is below:

[Serializable]
public class RequestPasswordUpdate :
    CorrelatedBy<Guid>
{
    private readonly string _newPassword;
    private readonly Guid _correlationId;

    public RequestPasswordUpdate(string newPassword)
    {
        _correlationId = Guid.NewGuid();
        _newPassword = newPassword;
    }

    public string NewPassword
    {
        get { return _newPassword; }
    }

    public Guid CorrelationId
    {
        get { return _correlationId; }
    }
}

Using the correlation id means that later on, when I want to listen for associated messages, I can. This will be covered in [Consumers/Publishers].
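As a sketch of what that listening looks like: alongside Consumes&lt;T&gt;.All, MassTransit (of this vintage) exposes a correlated variant, Consumes&lt;T&gt;.For&lt;TKey&gt;, which only dispatches messages whose correlation id matches the consumer's. The class name below is illustrative:

```csharp
public class PasswordUpdateTracker :
    Consumes<RequestPasswordUpdate>.For<Guid>
{
    private readonly Guid _correlationId;

    public PasswordUpdateTracker(Guid correlationId)
    {
        _correlationId = correlationId;
    }

    // Only messages whose CorrelationId matches this value
    // will be dispatched to this consumer.
    public Guid CorrelationId
    {
        get { return _correlationId; }
    }

    public void Consume(RequestPasswordUpdate message)
    {
        // handle the correlated message
    }
}
```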

Getting started with MassTransit

Ok, so I continue to play with MassTransit and I really like it. Unfortunately I still think there is a small barrier to entry that is stopping people from using it. The guys who have written it have done a great job of building an easy-to-use stack, but as it grows it may feel a bit like you don't know where to start. What I aim to do here is break the whole thing down into easy-to-understand pieces (theory) and then put the pieces together (practice!)

MassTransit leans on the concept of Publish/Subscribe, or Pub/Sub: I can raise an event by sending a message (publishing), and any number of consumers that are interested in that message can listen in and consume it (subscribing). This means that as new subscribers become known, the publisher itself does not have to be aware of their existence; the bus (MT) will deal with it, providing a nice sense of loose coupling.

An example could be a new person starting at your place of work. His new boss goes into the HR system and creates a new employee request*. This goes off to HR where it is actioned, and a new employee notification* is made. A slew of processes are now kicked off that HR has no idea about, nor do they care. These could include the new employee's desk set-up, identification preparation, security clearance checks, various induction bookings... who knows? If a department or application is interested in that message, they just subscribe to it.

*These are the potential published messages.

Right, lets delve in to the specifics:





Host & Set up




For background on MassTransit, see my previous intro post here and some more here.