MvcPowerTools 1.0. Released

By Mike on 3 July 2014

After 6 months of work, I'm proud to announce to the Asp.Net Mvc community that MvcPowerTools (MvcPT), aka FubuMvc goodness for Asp.Net Mvc, is finally released. Aimed at the seasoned developer, MvcPT brings the power of conventions to the Asp.Net Mvc platform. What's the big deal about these conventions, you ask? Well, grab a be... chair and pay attention for 3 minutes.

Imagine that you want to write maintainable code. How do you define maintainability in a few words? "Easy to change". Simply put, you want the code to be changed easily, and that's why you write decoupled code, prefer composition over inheritance, use an event-driven architecture, respect SOLID principles etc. Using conventions is yet another tool for achieving maintainability. And not only that, it's about control. Do you want to force your app into a framework's constraints, or do you want to make the framework adapt to your style? Who's in control, you or the framework?

Using conventions means you decide the rules a framework uses to provide a behaviour. Take routing, for example. If the routes are generated based on conventions (rules), you can come up with any convention you want that will create one or more routes from a controller/action. Want to use attributes? OK! Want namespace-based routes? OK! Want to mix'n'match things based on whatever criteria? You get to do that! And best of all, want to change everything at once? Just change the relevant conventions.
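To make the idea concrete, here's a minimal sketch of a namespace-based route convention. All the names here are made up for illustration; this is not MvcPowerTools' actual API, just the shape of the idea: one rule turns any controller/action pair into a route.

```csharp
using System;

// Hypothetical sketch of a routing convention: a single rule that maps a
// controller/action pair to a route template. Illustrative names only.
public static class NamespaceRouteConvention
{
    public static string GetRoute(Type controller, string action)
    {
        // Use the namespace segment after "Controllers" as a url prefix, if any.
        var parts = controller.Namespace.Split('.');
        var idx = Array.IndexOf(parts, "Controllers");
        var prefix = idx >= 0 && idx < parts.Length - 1
            ? parts[idx + 1].ToLowerInvariant() + "/"
            : "";
        var name = controller.Name.Replace("Controller", "").ToLowerInvariant();
        return prefix + name + "/" + action.ToLowerInvariant();
    }
}

namespace MyApp.Controllers.Admin { public class UsersController { } }
```

Swap the rule (say, drop the namespace prefix) and every route in the app changes with it; nothing else needs touching.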

Let's talk about html conventions, i.e. rules to create/modify html widgets. When using the power of conventions, you can make bulk changes without touching anything BUT the convention itself. Think about forms. In a big business app you may have hundreds of forms. If the form fields are generated using html conventions, i.e. the code looks like "@Html.EditModel()", then making ALL of the fields from all the forms use bootstrap css amounts to defining a single convention (2-3 lines of code). Change that convention and all those hundreds of fields automatically change. It's simply DRY.
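The mechanics behind that can be sketched in a few lines. Everything below is invented for illustration (MvcPowerTools' real API differs), but the principle is the same: every generated field runs through a list of rules, so one rule styles them all.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Toy model of a form field tag; a real html conventions engine works on
// a richer tag/model-metadata object.
public class FieldTag
{
    public string Name = "";
    public List<string> CssClasses = new();
    public override string ToString() =>
        $"<input name=\"{Name}\" class=\"{string.Join(" ", CssClasses)}\">";
}

public class HtmlConventions
{
    readonly List<(Func<FieldTag, bool> Applies, Action<FieldTag> Modify)> _rules = new();

    public void Always(Action<FieldTag> modify) => _rules.Add((_ => true, modify));
    public void If(Func<FieldTag, bool> applies, Action<FieldTag> modify) => _rules.Add((applies, modify));

    // Every field passes through every matching rule:
    // change a rule and all fields change with it.
    public FieldTag Generate(string name)
    {
        var tag = new FieldTag { Name = name };
        foreach (var rule in _rules.Where(r => r.Applies(tag)))
            rule.Modify(tag);
        return tag;
    }
}
```

The "use bootstrap everywhere" change is then literally one line: `conventions.Always(t => t.CssClasses.Add("form-control"));`.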

Some devs don't like conventions because they do look like magic; I suppose it depends on the developer whether they want to control the magic or fear it. I do find it useful, for example, to see [Authorize] on top of a controller, but I get tired very quickly of writing that attribute everywhere I need it, and inheriting an AuthorizedController base class is just a bad solution. I just want to say directly: "All the controllers from Admin need to be authorized" or "All the controllers BUT these [] need to be authorized" or "All my WebApi controllers respond ONLY to Ajax requests".
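That "say it directly" style boils down to a predicate over controller types. Again, the names below are illustrative, not the actual MvcPowerTools API:

```csharp
using System;

// Sketch of convention-based authorization: instead of an [Authorize]
// attribute on every controller, one rule declares which controllers
// require it. Illustrative names only.
public static class AuthorizationConventions
{
    // "All the controllers from Admin need to be authorized"
    public static bool RequiresAuthorization(Type controller) =>
        controller.Namespace != null && controller.Namespace.Contains(".Admin");
}

namespace MyApp.Admin { public class ReportsController { } }
namespace MyApp.Public { public class HomeController { } }
```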

Conventions allow you to do mass control as opposed to micro-managing things. Tired of repeating yourself? DRY! Use a convention. Want to make bulk changes with minimum effort? Use conventions!

I'm not cheering for a new golden hammer, but for another tool that will help us write maintainable code. And if you liked Jimmy Bogard's "put your controllers on a diet" approach, you'll find a nice surprise built into MvcPowerTools.

Now, go read the docs, get the nuget and start writing more maintainable apps!

Introducing Make# aka MakeSharp

By Mike on 20 June 2014

I don't always have many needs when using a build automation tool (with C# as the scripting language), but when I do, I make a mess with the procedural approach. Static methods and global variables simply don't work if you want tidy, reusable and flexible functionality. At least in my case they don't work, since I'm so used to the object-oriented mindset that I always feel the need to define a class, extract some things into private methods etc.

So, I've decided to take my CSake (C# Make) project to the next level, i.e. I've rewritten it to be more compatible with the OOP mindset and to use the (not so) new scriptcs that everyone's raving about. Enter Make# (MakeSharp), the build automation tool where you can use C# in an OOP way (like God intended). And because I do like intellisense, all of my projects using Make# also have a Build project referencing the Make# executable and helper library, where I can code my build 'script' with all the VS goodness. And the script is in fact a bunch of classes (you can almost call it an app) which do the actual work.

You can read details and a generic example on the Make# github page, but in this post I want to show a bit more advanced usage. This is the build script for a project of mine called MvcPowerTools (well, in fact there are 2 projects, because it contains the WebApi tools, too). My needs are:

  • clean/build solution;
  • update nuspec files with the current version, as well as versions for some deps. One dependency is a generic library I'm maintaining and evolving, and many times I need some features which turn out to be generic enough to be part of that library. And since I don't want to make 3 releases a day for that library, I'm using it directly, hence its version isn't stable. So, I want to update the dep version automatically;
  • build packages as pre release, this means the project's version in nuspec would have the "-pre" suffix;
  • push packages.

I need all these steps for both projects, and I want my script to allow me to specify whether one or both projects should be built. This is the script (you can see it as one file here).

public class PowerToolsInit : IScriptParams
{
    public PowerToolsInit()
    {
        ScriptParams = new Dictionary<int, string>();
        Solution.FileName = @"..\src\MvcPowerTools.sln";
    }

    List<Project> _projects=new List<Project>();

    public IEnumerable<Project> GetProjects()
    {
        if (_projects.Count == 0)
        {
            bool mvc = ScriptParams.Values.Contains("mvc");
            bool api = ScriptParams.Values.Contains("api");
            bool all = !mvc && !api;
            if (mvc || all)
            {
                _projects.Add(new Project("MvcPowerTools",Solution.Instance){ReleaseDirOffset = "net45"});
            }
            if (api || all)
            {
                _projects.Add(new Project("WebApiPowerTools",Solution.Instance){ReleaseDirOffset = "net45"});
            }
        }
        return _projects;
    }

    public IDictionary<int, string> ScriptParams { get; private set; }
}

You see that I'm defining a class using init data: PowerToolsInit. This is how I override the default implementation of IScriptParams: I'm providing my own. In this class I'm deciding which projects are to be built, based on the script arguments. Solution and Project are predefined helpers of Make# (intellisense really simplifies your work). I have only one solution, so I'll be using it as a singleton.

public class clean
{
    public void Run()
    {
        BuildScript.TempDirectory.CleanupDir();
        Solution.Instance.FilePath.MsBuildClean();
    }
}

[Default]
[Depends("clean")]
public class build
{

    public void Run()
    {
        Solution.Instance.FilePath.MsBuildRelease();
    }
}

Self-explanatory. BuildScript is another predefined helper. MsBuildClean and MsBuildRelease are Windows-specific helpers (found in MakeSharp.Windows.Helpers.dll, which comes with Make#) implemented as extension methods.

[Depends("build")]
public class pack
{
    public ITaskContext Context { get; set; }

    public void Run()
    {
        foreach (var project in Context.InitData.As<PowerToolsInit>().GetProjects())
        {
            "Packing {0}".WriteInfo(project.Name); // another helper
            Pack(project);
        }
    }

    void Pack(Project project)
    {
        var nuspec = BuildScript.GetNuspecFile(project.Name);
        nuspec.Metadata.Version = project.GetAssemblySemanticVersion("pre");

        var deps = new ExplicitDependencyVersion_(project);
        deps.UpdateDependencies(nuspec);

        var tempDir = BuildScript.GetProjectTempDirectory(project);
        var projDir = Path.Combine(project.Solution.Directory, project.Name);
        var nupkg = nuspec.Save(tempDir).CreateNuget(projDir, tempDir);
        Context.Data[project.Name + "pack"] = nupkg;
    }
}


class ExplicitDependencyVersion_
{
    private readonly Project _project;

    public ExplicitDependencyVersion_(Project project)
    {
        _project = project;
    }

    public void UpdateDependencies(NuSpecFile nuspec)
    {
        nuspec.Metadata.DependencySets[0].Dependencies
            .Where(d => d.Version.Contains("replace"))
            .ForEach(d => d.Version = _project.ReleasePathForAssembly(d.Id + ".dll").GetAssemblyVersion().ToString());
    }
}

Now this is interesting. The Context property allows Make# to inject a predefined TaskContext that can be used to access script arguments (or in this case the init object) and pass values to be used by other tasks. BuildScript.GetNuspecFile is a helper returning a NuSpecFile object (predefined helper) which assumes there is a nuspec file with the project name available in the same directory as the build script. The GetAssemblySemanticVersion method of Project allows you to specify versioning details like pre-release or build meta as defined by Semver. For Nuget purposes any text suffix marks the package as pre-release.

In order to update the dependencies' versions, I've created a utility class (the default task discovery convention for Make# says that a class with the "_" suffix is not a task, i.e. just a POCO), and my convention to indicate in a nuspec that a package dependency's version needs to be updated is that the version contains "replace", like this

<dependency id="CavemanTools" version="0.0.0-replace" />
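As an aside, the "_" suffix rule mentioned above can be sketched like this. This is only my guess at the shape of the logic, not Make#'s actual discovery code:

```csharp
using System;

// Sketch of a task discovery convention (illustrative; Make#'s actual
// discovery logic may differ): a class whose name ends in "_" is a plain
// POCO helper, anything else exposing a Run() method is a task.
public static class TaskDiscovery
{
    public static bool IsTask(Type t) =>
        !t.Name.EndsWith("_") && t.GetMethod("Run") != null;
}

public class compile { public void Run() { } }
public class Helper_ { }
```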

Then I tell the NuSpecFile object to save the updated nuspec in the project's temp folder and invoke the predefined CreateNuget helper. Finally, I save the returned nupkg file path into Context.Data so that it can be used by the next task.

[Depends("pack")]
public class push
{
    public ITaskContext Context { get; set; }

    public void Run()
    {
        foreach (var project in Context.InitData.As<PowerToolsInit>().GetProjects())
        {
            var nupkg = Context.Data.GetValue<string>(project.Name + "pack");
            BuildScript.NugetExePath.Exec("push", nupkg);
        }
    }
}

Push doesn't do much. It gets the nupkg paths from Context.Data, then invokes the Exec helper to run nuget.exe and push the package. By default, NugetExePath assumes nuget.exe is in the "..\src\.nuget" directory. You can change that.

And here you have it: the implementation to build MvcPowerTools. I don't know about you, but I think this object-oriented approach is much easier to maintain than functions and global variables. MakeSharp is already available as a nuget package.

Domain Driven Design Modelling Example: Brain and Neurons

By Mike on 11 June 2014

I got this question on StackOverflow:

A brain is made up of billions of neurons, so if you have brain as your AR you potentially have a performance issue as its not feasible to retrieve all neurons when you want to activate some behaviour of a brain. If I make the neuron the AR, then its not trivial to validate its behaviour and there is the issue of creating new neurons within a brain. Which should be the AR, or should there be some other AR to encapsulate these

I found the use case quite interesting because of the fact that we're dealing with billions of "children". So is the Brain an Aggregate Root (AR)? Is the Neuron a Value Object (VO) or an AR? Assuming that we want to model things as closely as we can to how the brain actually works, I'll say it quick: Brain is an AR, Neuron is a VO, and the Brain will always require ALL the neurons (yes, billions of them) every time you ask for a Brain object from the repository. So I'm saying to load billions of rows any time you need a brain (sic). Maybe I need a brain too. Or I'm just a good Domain modeller. Here's how it is:

A brain can't function without neurons, a brain needs the neurons. But does it need all of them? Well, let's see... I'm telling the Brain to learn a poem. The Brain will use some neurons to store that information. I tell the Brain to solve a math problem. The Brain will use the same or other neurons to do that. Now, do I (the client using the brain object) know which neurons are used for what behaviour? Or is it an implementation detail of the Brain? That's the point, only the Brain knows how to use the Neurons. It groups them in regions and it might use 1 or 10 regions for one single task. But nobody outside the Brain knows these details. This is proper encapsulation.
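Here's an illustrative sketch of that encapsulation (toy behaviour and invented names, obviously not real neuroscience): the client only tells the Brain what to do; which neurons get used stays private.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Brain is the AR, Neuron a value object; only the Brain knows which
// neurons serve which behaviour.
public class Neuron
{
    public string Region { get; }
    public Neuron(string region) => Region = region;
}

public class Brain
{
    // The whole set of neurons is loaded with the aggregate;
    // clients never see or pick individual neurons.
    readonly List<Neuron> _neurons;
    readonly Dictionary<string, string> _memory = new();

    public Brain(IEnumerable<Neuron> neurons) => _neurons = neurons.ToList();

    public void Learn(string key, string information)
    {
        // Which regions/neurons get used is an implementation detail.
        var region = _neurons.Where(n => n.Region == "hippocampus");
        if (!region.Any()) throw new InvalidOperationException("no capacity");
        _memory[key] = information;
    }

    public string Recall(string key) =>
        _memory.TryGetValue(key, out var info) ? info : "";
}
```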

So when you need a Brain, you get it whole, with all neurons, no matter how many they are. You don't get only parts of it, because they don't make sense independently. Btw, the neurons probably are distinct for the Brain, but that doesn't mean they are automatically Domain Entities. After all, maybe the brain doesn't care which neurons are used. Maybe it cares more about regions and it just tells a region to supply a number of available neurons. Implementation details...

Should the Neuron be an Entity or even an AR? Maybe, but not in this Aggregate. Here, it's just something the Brain uses internally. If you make it an Entity or an AR, you'll have something with information that only the Brain can understand. And if you understand the neuron information, then you don't really need the Brain. In this aggregate, Neuron is at most a VO.

But what about the technicalities? I mean, there are still billions of neurons. It's a challenge to load them all, right? Isn't lazy loading the perfect solution here? It isn't, because the ORM doesn't know which neurons the Brain needs. The Brain would need some virtual properties (let's say the ORM works with private properties) which should load only specific neurons. But what do you do? Create one property for each possible behaviour so that you can load only those specific neurons? And if the Brain doesn't care about a specific Neuron but only about the signature of the data contained, how can you implement that for the ORM to work with? Lazy loading has just complicated things. And this is why lazy loading is an anti-pattern for Domain entities: it has to know domain details in order to know what data to retrieve. And not everything can be put in a virtual property (to allow the ORM to create the proxy), not if you want to make the object maintainable.

The solution here is to load everything the Brain needs to do its job, because it's how the brain works according to the Domain (it needs ALL the neurons) and it's a maintainable solution. Maybe not the fastest one, but you're dealing with a complex notion where maintainability matters more and it isn't like you're using the same object for queries, right? And it's not about the number of neurons, it's about brain's data size. Do be aware that in this case, the Brain and Neurons have a direct relationship in a way that one can't work or it doesn't make sense without the other.

Other cases might resemble this one, but the relationship can be one of association, not definition. In those cases, you have grouping criteria and items (forum and threads) or different concepts working together (table and chairs).

Unit Of Work is the new Singleton

By Mike on 4 June 2014

Let me start with the Singleton: it's a valid pattern for specific cases. For the other 99% of cases there are better ways. Simply put, the Singleton became an anti-pattern because people misused it.

Enter the Unit Of Work (UoW), a quite trendy pattern these days. Go read M. Fowler's description again. Notice an interesting word there: database. The UoW is a persistence-related pattern. It doesn't care about your Domain, it cares only about the database. And yet, you have this extremely harmful tutorial lying around. A lot of people learn from it, because it's from Microsoft. They learn the wrong way to use both the Repository and the UoW. Never mind that the business logic (as thin as it is) is shoved directly into the controllers. The point is that people learn that they should have a UoW class and that they can use it in the BL. Yes, they can use an abstraction, that's not the issue; the problem is that the BL becomes tightly coupled to an infrastructural concern like the UoW. Does the UoW have a Business/Domain meaning? I bet that in 99.99% of cases it doesn't. It's a technical thing; worse, it's a persistence detail that has no place in the BL, yet a lot of people are coupling their BL services to the UoW. At most, the BL knows about the Repository, because it's defined in that layer. But the UoW makes no sense there; it's a persistence pattern. It's great when you're developing an ORM, but that's it. The Domain/BL should never know about a UoW.

A little secret you might not be aware of is that when modelling a Domain or designing the Business Layer, you should not think like a programmer. A programmer thinks in algorithms and that's bad for your design. You need to think like an architect, i.e. to organize things. You need to think in domain concepts, using domain language. You don't write code, you express the domain using code. And a business transaction is not really a UoW, nor a db transaction.

Think about this real life scenario: you order something from Amazon. They misspelled your address, the product goes to another person. What happens then? If you think like a programmer, you'll say "Well, it's part of transaction so let's roll it back". How does that translate in the real world? Well, the package goes back to Amazon, they refund your money and ask you nicely to place that order again. Do you see that happening? No? Why not? Because it's stupid!

Delivering the product to a wrong address is a mistake, an error, but the business transaction is not rolled back. The mistake is corrected and the product goes to your address. And be sure that some heads will roll, because that mistake costs Amazon money. Businesses always deal with failure; it's not the end of the world, it's a common business scenario. It's annoying and you don't want it to happen often, but that's it. There's no rollback in the business world; that's a technical term, and in fairness it should remain at the db level.

While DDD doesn't say much about how you should implement your persistence, it does say an important thing: your repositories should work only with Aggregate Roots. That's because the AR ensures the aggregate's consistency. The repo always persists the whole AR, not just parts of it. Persisting may mean partial updates of the db model, but that's a technical persistence detail. A repository might unknowingly use a UoW implementation, but it's not a part of a UoW. The repo's responsibility is to save the specified AR; it isn't its concern to care about other ARs that might or might not be involved.

A UoW can't exist in a properly designed Domain. Take the domain changes A, B, C. They happen in a certain order and that's it. If C has a problem, everything goes forward and the past is not erased. There's no commit or rollback; there's only handling a business case. A business transaction might be a long process. There can be many ARs involved until a transaction is completed. If one AR doesn't accept changes, the process doesn't stop there, nor is it rolled back. The process continues and the business transaction finishes successfully or with a (business) failure. But all changes are persisted. Looking at our Amazon example: delivery failed, that event remains in the past, but now we do a corrective action which will allow the delivery to complete successfully.
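A tiny sketch of that forward-only flow (illustrative names; real code would use typed domain events, not strings):

```csharp
using System.Collections.Generic;
using System.Linq;

// Changes are only ever appended; a failure is just another event,
// followed by a corrective one. Nothing is ever rolled back or erased.
public class OrderProcess
{
    public List<string> Events { get; } = new();

    public void Apply(string domainEvent) => Events.Add(domainEvent);

    public bool Completed => Events.LastOrDefault() == "Delivered";
}
```

For the Amazon scenario above, the stream would read: OrderPlaced, DeliveryFailedWrongAddress, AddressCorrected, Delivered. The mistake stays in history; the correction moves the process forward.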

In the Domain, everything flows in one direction: forward. When something bad happens, a correction is applied. The Domain doesn't care about the database and UoW is very coupled to the db. In my opinion, it's a pattern which is usable only with data access objects, and in probably 99% of the cases you won't be needing it. As with the Singleton, there are better ways but everything depends on proper domain design.

I'm certain someone will ask me: "What if I have at least 2 business objects/ARs that need to be persisted together?". I can point you to the obvious solution: pass a UoW to your repositories, where UoW = db transaction. This works in the majority of cases without problems. And it's an easy to understand and implement solution, even easier if you're using an ORM. The downside is that it's a one-trick pony which works great as long as you're using one ACID-compliant db and everything takes place in one process. Once you have a distributed app or you need to work with different, incompatible storages, things become very complicated. If you get there, modelling the business transaction as it's really implemented in the business is the cleanest method (and that implies a Saga and domain events), because it's really scalable and doesn't care about specific databases or whether your app is distributed.
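That "pass a UoW to your repositories" option can be sketched like this. The types are stand-ins of my own invention; in real code the UoW would wrap an ADO.NET or ORM transaction rather than an in-memory list:

```csharp
using System;
using System.Collections.Generic;

// The UoW here is nothing more than a transaction boundary: repositories
// enlist their writes, and either everything is committed or nothing is.
public interface IUnitOfWork : IDisposable
{
    void Register(Action persist);
    void Commit();
}

public class InMemoryUnitOfWork : IUnitOfWork
{
    readonly List<Action> _pending = new();
    bool _committed;

    public void Register(Action persist) => _pending.Add(persist);

    public void Commit()
    {
        foreach (var persist in _pending) persist();
        _committed = true;
    }

    public void Dispose()
    {
        // "rollback": uncommitted work is simply discarded
        if (!_committed) _pending.Clear();
    }
}

public class Repository<T>
{
    public List<T> Store { get; } = new();
    public void Save(T item, IUnitOfWork uow) => uow.Register(() => Store.Add(item));
}
```

Two ARs saved inside one `using` block are then persisted atomically, which is exactly the one-trick pony described above: fine for one ACID db in one process, useless beyond that.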

My point is that, usually, there are 2 ways to approach the implementation of a business transaction: the programmer way or the domain way. The programmer way is the most straightforward approach; it uses the familiar db transactions and procedural thinking. This is where the UoW pattern makes sense, regardless of how hard or easy it is to implement. The domain way is more verbose: it requires understanding the business process, coming up with the correct semantics (domain events), using a service bus (or an event delivery mechanism), sagas, eventual consistency. It's very scalable and clean, but it requires more expertise and effort to implement compared to the first solution. But you don't need a UoW (well, persisting an AR might use a db transaction, but that's it).

So you have to make a decision: do you want the straightforward way, with the risk of being rigid and not very scalable, or the seemingly complicated way that is very flexible and matches the domain process? Basically, you have to decide when to pay the cost: at the beginning or later on. However, a lot of apps will never need to scale and they'll always use one db etc. In that case, it's obvious the first approach is the best one. And the UoW will always be the ORM implementation or a db transaction. But in any case, no matter how big or small the app is, the UoW has no place in the BL; it's always a persistence detail. Keep the UoW out of the BL.

The Repository Pattern For Dummies

By Mike on 2 June 2014

You read a lot of tutorials about the Repository pattern which seem to contradict themselves. You asked the question on StackOverflow and you also got conflicting answers. What can you do? Can anyone explain this pattern in a simple manner without throwing all kinds of code and pretentious buzzwords at you? Yes, I can!

Hello, I'm the Business Layer. I hold everything related to the application's Domain. I'm using a lot of business objects and I create a lot of new objects, too. I know that all these objects need to be persisted to or retrieved from some storage. So, I have these shelves where I put all the new objects. And every time I need to work with an object, I just get it from the shelf, do the work, then put it back. I don't know how these shelves work or how they're actually storing or retrieving my objects. I do know that I just put the business objects there and I get them back exactly how they were. It's like magic or something...

Technical people (aka people who like to complicate things) call these shelves "Repositories". As a Business Layer, I have no clue how they are implemented. I only know about some interface of a sort, like a user panel (they call it an abstraction), which accepts my new Business Objects and returns saved Business Objects. I really like this magical shelf, because I don't have to do any work myself. I just ask it to "Get object with Id 2". Or get me all the objects matching a criteria. And it just works!!!

For you technical people, it looks like this (C# code)

public interface IMyRepository
{
    BusinessObject Get(Guid id);
    void Save(BusinessObject item);
    IEnumerable<BusinessObject> Find(MyCriteria criteria);
}

Since I know ONLY about my business objects, my magic shelf (aka repository) will work only with those objects. And you can see that, amazingly enough, the interface-like panel is actually called an "interface" at least in some programming languages. But I don't know anything else about my magic shelf. Only that interface. And it's the only thing I need anyway.

Now let me tell you a bit about that criteria stuff. You see, in many use cases I need to use more than one business object; I actually need all the objects which, for example, have a LastUpdate older than a year. So, I'm telling the shelf to get me all objects matching that criteria. In technical terms, I'm creating a criteria object (which makes sense for me, the Business Layer) which I'll pass to the Repository. The Repository takes that object and then applies some black magic to it (I really don't know what the Repository does, since it's not MY concern) and voila... it returns exactly the objects I've asked for.

public class OlderThan
{
    public DateTime LastUpdate;
}

Of course, for this simple scenario, it's easier to just have a special method like this

public interface IMyRepository
{
    /* other methods */
    IEnumerable<BusinessObject> GetOlderThan(DateTime lastUpdate);
}

I'll keep the criteria class for scenarios where I need to pass more than one parameter, but it's not like there's a rule; it depends on what's easier or semantically better suited for that use case. It's also important to say that I decide the shelf's functionality. In tech speak, this means that the repository interface is designed by the business layer's needs. That's why all the repository interfaces reside in the business layer, while their concrete implementations are part of the Persistence Layer (DAL).
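To make that split concrete, here's a condensed sketch: the interface (the "user panel") belongs to the business layer, while one possible implementation lives in the persistence layer. A real one would talk to a db; this in-memory stand-in is just for illustration.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Business layer: the objects and the shelf's interface.
public class BusinessObject
{
    public Guid Id { get; set; }
    public DateTime LastUpdate { get; set; }
}

public interface IMyRepository
{
    BusinessObject Get(Guid id);
    void Save(BusinessObject item);
    IEnumerable<BusinessObject> GetOlderThan(DateTime lastUpdate);
}

// Persistence layer (DAL): one concrete shelf. The business layer never
// sees this class, only the interface above.
public class InMemoryRepository : IMyRepository
{
    readonly Dictionary<Guid, BusinessObject> _items = new();

    public BusinessObject Get(Guid id) => _items[id];
    public void Save(BusinessObject item) => _items[item.Id] = item;
    public IEnumerable<BusinessObject> GetOlderThan(DateTime lastUpdate) =>
        _items.Values.Where(i => i.LastUpdate < lastUpdate);
}
```

Swapping the in-memory shelf for a SQL-backed one changes nothing for the business layer; that's the whole point of the pattern.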

Oh, by the way, these shelves are ALL MINE! Only I can use them. I don't care about UI, Presentation, Reporting etc.; these shelves are only serving me (myself and I). If the UI wants to do some queries, let it define its own shelves. I won't share mine. I've created them ONLY for MY purposes and they have ONLY the functionality I need! I say that everyone should have their own shelves. Sharing is not caring, sharing is messing (around).

A special mention for DDD. When I'm called the Domain (ehe!), my shelves don't work with just ANY object, but only with Aggregate Roots (AR). And since these ARs model important (sometimes complex) concepts, it's a tremendous help that my shelves can handle them so easily. I mean, I can only imagine the work they do in order to retrieve such "interesting" objects. As for my 'work', nothing changes: I still tell the shelf to get me this or that. Business (ha! a pun) as usual.