DDD: Identifying Bounded Contexts and Aggregates, Entities and Value Objects

By Mike on 31 October 2014

Let me be clear about one thing concerning Domain objects: not every one of them is an Entity or a Value Object (VO). You can have simple objects in your Domain, and you can have objects which have a business meaning. Only an object representing a Domain concept can be classified as an Entity (it has an id) or a VO (it encapsulates a simple or composite value).

Bounded Context (BC)

The first thing to do before starting an app is to identify the Bounded Contexts. Luckily, this is quite simple. First of all, let's remember that a BC simply defines a model that is valid (makes sense) only in that BC. If you're designing your app as a group of autonomous components (and you should), aka vertical slices, aka microservices (for distributed apps), working together, then each component is a BC. For example, I'm working on a SaaS app and these are some of its components: Membership, Subscription, Invoicing, Api Server, Api Processor. Each is like a mini app in itself; each has its own domain and thus its own model, which makes sense only for that component (BC).

Since the components are autonomous, they don't depend on one another (at most they know about each other's abstractions), so they don't share the model, i.e. the model of one BC doesn't cross its boundaries to become available in another BC. But a subscription is related to an account, so it might seem that Subscription must know about the Membership model. Similarly, Invoicing is related to an account and needs data usually found in Subscription. How do we solve this without coupling the components (breaking the boundaries)?

Each BC has its very own definitions of concepts, even if they happen to share the same name. An Account is a fully defined concept in Membership, but it's just an id in Subscription, Invoicing or any other BC, and that's because the BC itself controls the definition. So the Account is not just an id because the Membership Account entity has an Id property, but because the other component said so. In practice, though, a concept which is an entity in one BC will at least be defined as an id in the other BCs. VOs are a different story, and each BC really has its own definitions. Of course, some concepts may be generic enough that their definitions will be identical. But that's just a coincidence and, in fairness, if a BC shares a lot of concepts and definitions with another, perhaps they should be merged.

What about the data that a BC might need from another? A BC should be like a black box: you don't see what's inside, you only know about input and output. In a message driven app, this means you send commands to be handled and you get back events. Those are DTOs and contain all the relevant input/output data for a use case. So we have our BC listening to another BC's events. If you really want to keep things decoupled, because you want a reusable component, you can use a mediator that will listen to one BC and then send the relevant data to the other one. That's the approach I've taken with the Invoicing component (so that it wouldn't know anything about Subscription).

So, a BC takes DTOs or commands as input and publishes events as output. A BC's model exists only in that BC, and all the DTOs are just flattened aspects of that model. Since an event is just a data structure with no business rules attached (consider it a read model ;) ), that data can travel to any other BC, which can extract the relevant information from it. In the end, BCs 'communicate' via simple data structures whose only role is input/output.
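As a sketch of this, here is what such cross-BC communication could look like. All the names (SubscriptionActivated, IHandleEvent, Invoice) are hypothetical, purely for illustration; the point is that only a flat data structure crosses the boundary:

```csharp
using System;

// The event is just data, no behaviour. It's the output of the Subscription BC.
public class SubscriptionActivated
{
    public Guid SubscriptionId { get; set; }
    public Guid AccountId { get; set; }   // Account is only an id outside Membership
    public decimal Amount { get; set; }
    public DateTime ActivatedOn { get; set; }
}

// A minimal handler contract, assumed for this sketch.
public interface IHandleEvent<T>
{
    void Handle(T evnt);
}

// Invoicing consumes the event and builds its OWN model from the flattened data.
// It never references Subscription's entities directly.
public class SubscriptionActivatedHandler : IHandleEvent<SubscriptionActivated>
{
    public void Handle(SubscriptionActivated evnt)
    {
        // hypothetical Invoicing concept, defined entirely inside Invoicing
        var invoice = Invoice.CreateDraft(evnt.AccountId, evnt.Amount);
        // ... persist the invoice
    }
}
```

A mediator, as mentioned above, would simply sit between the two: it subscribes to one BC's events and forwards only the relevant data to the other, so neither side knows the other exists.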

Aggregates

Identifying Aggregates is pretty hard. An aggregate is a group of objects that must be consistent together. But you can't just pick some objects and say: this is an aggregate. You start by modelling a Domain concept. For a non-trivial concept, you end up with at least one entity and some VOs. Regardless of how many entities, VOs or simple objects are involved, the important thing is that you'll have a 'primary' entity (named after the concept) and other objects that work together to implement the concept and its required behaviour. Together they form an Aggregate, and the 'primary' entity is the Aggregate Root (AR).

The purpose of an AR is to ensure the consistency of the aggregate; that's why you should make changes to the aggregate only via the AR. If you change an object independently, the AR can't ensure the concept (the aggregate) is in a valid state; it's like a car with a loose wheel.

Before rushing to name any entity an AR, first make sure you know the aggregate, then make sure the entity really is responsible for the aggregate's consistency. An entity acting as a mere container for other objects is not an AR. The consistency responsibility is what makes it an AR.
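A minimal sketch of this idea (an illustrative Invoice aggregate, not code from any real app): the root owns the lines and the total, so the two can never disagree.

```csharp
using System;
using System.Collections.Generic;

// VO inside the aggregate: immutable, validated at construction.
public class InvoiceLine
{
    public string Description { get; private set; }
    public decimal Amount { get; private set; }

    public InvoiceLine(string description, decimal amount)
    {
        if (amount < 0) throw new ArgumentException("amount must be non-negative");
        Description = description;
        Amount = amount;
    }
}

// The AR, named after the concept. All changes go through it.
public class Invoice
{
    private readonly List<InvoiceLine> _lines = new List<InvoiceLine>();

    public Guid Id { get; private set; }
    public decimal Total { get; private set; }

    // Because lines are only added here, Total always matches the lines.
    public void AddLine(string description, decimal amount)
    {
        var line = new InvoiceLine(description, amount);
        _lines.Add(line);
        Total += line.Amount;
    }

    // Expose lines read-only; handing out the mutable list would let callers
    // bypass the AR — the "loose wheel" from above.
    public IEnumerable<InvoiceLine> Lines { get { return _lines; } }
}
```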

Domain Entity

Modelling a Domain concept is more than coming up with a name and some properties. You have the name 'Foo', then what? If you start by thinking about its properties or behaviour, you'll end up with a data structure (resembling a table schema) with functions. That's because we tend to think about all the things we want to persist, not about how the concept is defined and used by the Domain. We want to come up with a model fit for everything, and that's the wrong approach.

In order to define Foo, you start with the use cases of Foo; better yet, implement those use cases as tests. This is how you identify the relevant properties and behaviour; that's how you design the entity. Once you have the interface defined, you can go wild with the implementation. In practice, you'll notice how you start with one entity and end up with a whole aggregate. And that's how it should be, anyway.
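For instance, a use case like "a suspended Foo cannot be renewed" can be written as a test first (Foo, Create, Suspend and Renew are hypothetical names; the assertions are what drive which members Foo actually needs):

```csharp
using System;
using NUnit.Framework;

// The test comes first and dictates Foo's interface.
// Only after it passes do we worry about persistence or implementation details.
[TestFixture]
public class FooUseCases
{
    [Test]
    public void a_suspended_foo_cannot_be_renewed()
    {
        var foo = Foo.Create("some name");

        foo.Suspend();

        // renewing a suspended Foo violates a business rule,
        // so the entity itself must enforce it
        Assert.Throws<InvalidOperationException>(() => foo.Renew());
    }
}
```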

MvcPowerTools In Action: Multilingual Sites

By Mike on 10 September 2014

Recently I "developed" a simple brochure site which had the "catch" that it must be bilingual. Since it was mainly a static site (no db whatsoever), I needed to write the texts directly in html. Also, for SEO purposes, the urls had to change according to the selected language.

Basically the plan was this: the layout is shared, with a localized partial for the meta description, and each site page is one view per language, with urls looking like this: "mysite.com/home", "mysite.com/accueil". There is no language identifier in the urls; instead, the route for each url contains a 'lang' parameter. The contact page contains a form whose labels should also be localized.

I had a Translator class, nothing fancy, just a dictionary and some helpers. Let's start with the urls. Using routing conventions, I came up with this convention:

    public class MultilingualRouting : IBuildRoutes
    {
        private MultilingualTexts _translator;

        public MultilingualRouting()
        {
            _translator = Translator.Instance.Data;
        }

        public bool Match(ActionCall action)
        {
            return true;
        }

        public IEnumerable<Route> Build(RouteBuilderInfo info)
        {
            var texts = _translator.TranslationFor(info.ActionCall.GetActionName());
            return texts.Select(t =>
            {
                var route = info.CreateRoute(t.Value.ConvertAccentedString().MakeSlug());
                route.Defaults["lang"] = t.Key;
                return route;
            });

        }
    }

The action name is the translation key. The translator returns a key/value array "language" => "text". So, for each available language, I generate a route whose url is the localized (link) text with accents stripped. Then the language is set as a default for that route, so every route carries its associated language.

Next, the view engine conventions. The Views directory structure looks like this

*(image: the Views directory structure — a subfolder per language under ~/Views)*


And the conventions

public class MultiViewEngine:BaseViewConvention
{
    public override bool Match(System.Web.Mvc.ControllerContext context, string viewName)
    {
        return true;
    }

    public override string GetViewPath(System.Web.Mvc.ControllerContext controllerContext, string viewName)
    {
        return "~/Views/{0}/{1}.cshtml".ToFormat(controllerContext.RouteData.Values["lang"], viewName);
    }
}

public class DefaultEngine : BaseViewConvention
{
    public override string GetViewPath(System.Web.Mvc.ControllerContext controllerContext, string viewName)
    {
        return "~/Views/{0}.cshtml".ToFormat(viewName);
    }
}

Not much to explain here. The current language is used as the name of the directory where the view should be; if it's not found there, the fallback is the "~/Views" folder.

Finally, the convention for our form labels

conventions.Labels.Always.Modify((tag, info) =>
{
    var translator = Translator.Instance.Data.GetTranslations(info.ViewContext.HttpContext.CurrentLanguage());
    return tag.Text(translator[info.Name]);
});

By default, the label text is the field name (the name of the model property), but with this modifier we use it as a translation key to get the localized text, which is then assigned to the label. Another approach would be to generate the label directly with the localized text, but I only care about changing the text; I don't want to re-write the whole label tag generation, however trivial it is. It's much simpler to just change the text.

As a rule of thumb, it's better to modify than to build (generate) if the tag already has a default, simple builder. For example, if I want all my text areas to be 7 rows, I'd write something like this:

conventions.Editors
    .ForModelWithAttribute<DataTypeAttribute>()
    .Modify((tag, info) =>
    {
        var att = info.GetAttribute<DataTypeAttribute>();
        if (att.DataType != DataType.MultilineText) return tag;
        tag.FirstInputTag().As<TextboxTag>().Attr("rows", 7);
        return tag;
    });

What about generating links in html? I want to use this

@(Html.LocalizedLinkTo(d => d.Contact()))

And here's the helper code

public static HtmlTag LocalizedLinkTo(this HtmlHelper html, Expression<Action<DefaultController>> action, string text = null)
{
    var name = action.GetMethodInfo().Name;
    var translations = html.GetTranslations();
    text = text ?? translations[name];
    return html.LinkTo("default", text, action: name, model: new { lang = translations.Language });
}

I have only one controller in this app (remember, it's a brochure site) so I only need the action name to use it as a translation key, then the current language is passed as a route value.

And this is how you can use MvcPowerTools to build multilingual sites.

Mixing the Domain

By Mike on 4 September 2014

With DDD becoming more popular, it looks like a "fight" between the old CRUD and the cool DDD. It's easy to assume that we have CRUD apps and DDD apps: this app is simple, therefore it's CRUD; this app seems to have a rich domain, therefore we should be using DDD. But in practice, some apps are 100% CRUD, while no app is 100% DDD.

Let's quickly define a CRUD app: it's a database UI, concerned with taking correct (valid) input from users and shoving it into the db. It doesn't care about business semantics and contains NO business logic, although it does contain some business rules in the form of validation.

But once you have one object/function encapsulating business logic, it's no longer a 100% CRUD app. Does it matter? A lot of devs treat ANY app as a CRUD one (especially web apps) because they don't know better. And it works, until the business logic becomes more complex and/or changes often. Then you have a maintainability problem.

Nowadays, anyone (loose term) knows that for a domain rich in behaviour, DDD is the way to go. But regardless of how complex the Domain is, there's no such thing as a domain made 100% of Rich Behaviour Objects (RBO); you have at least one domain concept which really is a data structure (CustomerProfile comes to mind). Actually there's more than one; I'd say a domain is at least 25% data structures.

Thing is, with DDD we usually use CQRS, Domain Events or Event Sourcing. But if a good chunk (actually one class is enough) of our domain is just data structures, in other words it's CRUDy, aren't we complicating our lives with at least 2 models (CQRS), or creating events which are pretty much the entity itself?

The problem appears when we decide on a solution up front and then force every problem into that solution. RBOs with a CRUD mindset are pain; data structures with Event Sourcing or CQRS are just a complication. How about we use the optimum solution for each problem? How about doing CRUD for the data structures in our Domain, and CQRS, Domain Events and Event Sourcing for the RBOs?
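To make the contrast concrete, here is a hypothetical sketch of both styles co-existing in one app (all names are illustrative):

```csharp
using System;

// CustomerProfile is CRUDy: it's just data. Validate the input, save it.
// No aggregates, no events, no second model.
public class CustomerProfile
{
    public Guid Id { get; set; }
    public string DisplayName { get; set; }
    public string Country { get; set; }
}

// Subscription is behaviour-rich: state changes happen only through methods
// that enforce business rules and are communicated as domain events.
public class Subscription
{
    public Guid Id { get; private set; }
    public bool IsActive { get; private set; }

    public SubscriptionCancelled Cancel(string reason)
    {
        if (!IsActive) throw new InvalidOperationException("Subscription is already inactive");
        IsActive = false;
        // the event is the output other components (e.g. Invoicing) care about
        return new SubscriptionCancelled { SubscriptionId = Id, Reason = reason };
    }
}

public class SubscriptionCancelled
{
    public Guid SubscriptionId { get; set; }
    public string Reason { get; set; }
}
```

The first class goes straight to a CRUD repository; the second benefits from CQRS/events. Neither approach is forced onto the other.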

We don't need to declare an app as CRUD or DDD. It's an app; let's just use the approach that makes the most sense for each bit of the Domain. It's not about using the latest cool trend/design pattern; it's about coming up with the most maintainable solution. The goal is to solve the problem while making our lives easier. For big apps, or where such flexibility is available, I'd say to also use the storage best fit for the problem.

This means that instead of "we're using Sql Server for everything", you can use a doc db for your RBOs, a stand-alone EventStore and a RDBMS in the same app. It's a harmful mindset to think: "we do things only this (one) way (CRUD, SqlServer etc) because this works and this is how we've done it until now". It's bordering on stupid to choose the same solution regardless of the problem. An app is not a showcase for a technology or a design pattern; it's an implementation of a service meant to bring value to its stakeholders.

As a side note, this is why I implement persistence last. Even a small app can be 75% data structures and 25% RBOs. I'll use CRUD for the data structures (which may involve a (micro)ORM) and CQRS for the remaining 25%, with the domain objects serialized, or stored as events for the objects that are fit for ES. But in order to know that, I have to design and implement the Domain first. Only afterwards can I start toying with the db. Anything before that is wasted time and leads to a poor design (as I'd be biased to fit the Domain design to the db structures).

In conclusion, your app is a place where you should use the solutions that fit the problems. All those design patterns can co-exist; the big mistake is to decide on a generic solution up front and then try to force all the problems into it.

MvcPowerTools 1.0 Released

By Mike on 3 July 2014

After 6 months of work, I'm proud to announce to the Asp.Net Mvc community that MvcPowerTools (MvcPT), aka FubuMvc goodness for AspNet Mvc, is finally released. Aimed at the seasoned developer, MvcPT brings the power of conventions to the asp.net mvc platform. What's the big deal about these conventions, you ask? Well, grab a be... chair and pay attention for 3 minutes.

Imagine that you want to write maintainable code. How do you define maintainability in a few words? "Easy to change". Simply put, you want the code to be easy to change, and that's why you write decoupled code, prefer composition over inheritance, use an event driven architecture, respect SOLID principles etc. Using conventions is yet another tool for achieving maintainability. And not only that, it's about control. Do you want to force your app into a framework's constraints, or do you want to make the framework adapt to your style? Who's in control, you or the framework?

Using conventions means you decide the rules a framework uses to provide a behaviour. Take routing, for example. If the routes are generated based on conventions (rules), you can come up with any convention you want that creates one or more routes from a controller/action. Want to use attributes? OK! Want namespace based routes? OK! Want to mix'n'match things based on whatever criteria? You get to do that! And best of all, want to change everything at once? Just change the relevant conventions.
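As a sketch of what a namespace based routing convention could look like, reusing the IBuildRoutes contract from the multilingual post above (the exact member names on ActionCall are assumptions, not the verified MvcPowerTools API):

```csharp
// Illustrative only: routes every controller in an .Admin namespace
// under an "admin/" url prefix.
public class AdminAreaRouting : IBuildRoutes
{
    public bool Match(ActionCall action)
    {
        // the convention applies only to controllers in an Admin namespace
        return action.Controller.Namespace.EndsWith(".Admin");
    }

    public IEnumerable<Route> Build(RouteBuilderInfo info)
    {
        // admin/{controller}/{action}
        var url = "admin/{0}/{1}".ToFormat(
            info.ActionCall.GetControllerName(),
            info.ActionCall.GetActionName());
        yield return info.CreateRoute(url);
    }
}
```

Want attribute based routes instead? Write a different Match/Build pair; the rest of the app never changes.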

Let's talk about html conventions, i.e. rules to create/modify html widgets. Using the power of conventions, you can make bulk changes without touching anything BUT the convention itself. Think about forms. In a big business app you may have hundreds of forms. If the form fields are generated using html conventions, i.e. the code looks like "@Html.EditModel()", then making ALL the fields in all the forms use bootstrap css amounts to defining a single convention (2-3 lines of code). Change that convention and all those hundreds of fields change automatically. It's simply DRY.
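Such a bootstrap convention could look something like this, following the conventions.Labels/.Editors pattern shown in the multilingual post above (AddClass is assumed here; treat the exact helper names as a sketch, not the verified API):

```csharp
// One modifier, applied to every generated editor in the app:
// each input gets the bootstrap "form-control" css class.
conventions.Editors.Always.Modify((tag, info) =>
{
    tag.FirstInputTag().AddClass("form-control");
    return tag;
});
```

Swap "form-control" for another css framework's class and every form in the app follows, with no view edited.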

Some devs don't like conventions because they do look like magic; I suppose it depends on the developer whether they want to control the magic or fear it. I do find it useful, for example, to see the [Authorize] attribute on top of a controller, but I get tired very quickly of writing that attribute everywhere I need it, and inheriting an AuthorizedController base class is just a bad solution. I just want to say directly: "All the controllers in Admin need to be authorized" or "All the controllers BUT these [] need to be authorized" or "All my WebApi controllers respond ONLY to Ajax requests".

Conventions allow you to do mass control as opposed to micro-managing things. Tired of repeating yourself? DRY! Use a convention. Want to make bulk changes with minimum effort? Use conventions!

I'm not cheering for a new golden hammer, but for another tool that will help us write maintainable code. And if you liked Jimmy Bogard's put your controllers on a diet approach you'll find a nice surprise built in MvcPowerTools.

Now, go read the docs, get the nuget and start writing more maintainable apps!

Introducing Make# aka MakeSharp

By Mike on 20 June 2014

I don't always have many needs when using a build automation tool (with C# as the scripting language), but when I do, I make a mess with the procedural approach. Static methods and global variables simply don't work if you want tidy, reusable and flexible functionality. At least they don't work in my case, since I'm so used to the object oriented mindset that I always feel the need to define a class, extract some things into private methods etc.

So, I've decided to take my CSake (C# Make) project to the next level, i.e. I've rewritten it to be more compatible with the OOP mindset and to use the (not so) new scriptcs that everyone's raving about. Enter Make# (MakeSharp), the build automation tool where you can use C# in an OOP way (like God intended). And because I do like intellisense, all of my projects using Make# also have a Build project referencing the Make# executable and helper library, where I can code my build 'script' with all the VS goodness. And the script is in fact a bunch of classes (you can almost call it an app) which do the actual work.

You can read the details and a generic example on the Make# github page, but in this post I want to show a slightly more advanced usage. This is the build script for a project of mine called MvcPowerTools (well, in fact there are 2 projects, because it contains the WebApi tools, too). My needs are:

  • clean/build solution;
  • update nuspec files with the current version, as well as versions for some deps. One dependency is a generic library I'm maintaining and evolving, and many times I need some features which turn out to be generic enough to be part of that library. And since I don't want to make 3 releases a day for that library, I'm using it directly, hence its version is not stable. So, I want the dep version updated automatically;
  • build packages as pre release, this means the project's version in nuspec would have the "-pre" suffix;
  • push packages.

I need all these steps for both projects, and I want my script to let me specify whether only one or both projects should be built. This is the script (you can see it as one file here).

public class PowerToolsInit : IScriptParams
{
    public PowerToolsInit()
    {
        ScriptParams = new Dictionary<int, string>();
        Solution.FileName = @"..\src\MvcPowerTools.sln";
    }

    List<Project> _projects=new List<Project>();

    public IEnumerable<Project> GetProjects()
    {
        if (_projects.Count == 0)
        {
            bool mvc = ScriptParams.Values.Contains("mvc");
            bool api = ScriptParams.Values.Contains("api");
            bool all = !mvc && !api;
            if (mvc || all)
            {
                _projects.Add(new Project("MvcPowerTools",Solution.Instance){ReleaseDirOffset = "net45"});
            }
            if (api || all)
            {
                _projects.Add(new Project("WebApiPowerTools",Solution.Instance){ReleaseDirOffset = "net45"});
            }
        }
        return _projects;
    }

    public IDictionary<int, string> ScriptParams { get; private set; }
}

You see that I'm defining a class holding init data: PowerToolsInit. This is how I override the default implementation of IScriptParams: I'm providing my own. In this class I decide which projects are to be built, based on the script arguments. Solution and Project are predefined helpers of Make# (intellisense really simplifies your work). I have only one solution, so I use it as a singleton.

public class clean
{
    public void Run()
    {
        BuildScript.TempDirectory.CleanupDir();
        Solution.Instance.FilePath.MsBuildClean();
    }
}

[Default]
[Depends("clean")]
public class build
{

    public void Run()
    {
        Solution.Instance.FilePath.MsBuildRelease();
    }
}

Self-explanatory. BuildScript is another predefined helper. MsBuildClean and MsBuildRelease are Windows-specific helpers (found in MakeSharp.Windows.Helpers.dll, which comes with Make#) implemented as extension methods.

[Depends("build")]
public class pack
{
    public ITaskContext Context { get; set; }

    public void Run()
    {
        foreach (var project in Context.InitData.As<PowerToolsInit>().GetProjects())
        {
            "Packing {0} ".WriteInfo(project.Name); //another helper
            Pack(project);
        }
    }

    void Pack(Project project)
    {
        var nuspec = BuildScript.GetNuspecFile(project.Name);
        nuspec.Metadata.Version = project.GetAssemblySemanticVersion("pre");

        var deps = new ExplicitDependencyVersion_(project);
        deps.UpdateDependencies(nuspec);

        var tempDir = BuildScript.GetProjectTempDirectory(project);
        var projDir = Path.Combine(project.Solution.Directory, project.Name);
        var nupkg = nuspec.Save(tempDir).CreateNuget(projDir, tempDir);
        Context.Data[project.Name + "pack"] = nupkg;
    }
}


class ExplicitDependencyVersion_
{
    private readonly Project _project;

    public ExplicitDependencyVersion_(Project project)
    {
        _project = project;
    }

    public void UpdateDependencies(NuSpecFile nuspec)
    {
        nuspec.Metadata.DependencySets[0].Dependencies
            .Where(d => d.Version.Contains("replace"))
            .ForEach(d => d.Version = _project.ReleasePathForAssembly(d.Id + ".dll").GetAssemblyVersion().ToString());
    }
}

Now this is interesting. The Context property allows Make# to inject a predefined TaskContext that can be used to access the script arguments (or, in this case, the init object) and to pass values to other tasks. BuildScript.GetNuspecFile is a helper returning a NuSpecFile object (another predefined helper), which assumes there is a nuspec file with the project's name available in the same directory as the build script. The GetAssemblySemanticVersion method of Project allows you to specify versioning details like pre-release or build metadata, as defined by Semver. For Nuget purposes, any text suffix marks the package as pre-release.

In order to update the dependency versions, I've created a utility class (the default task discovery convention of Make# says that a class with the "_" suffix is not a task, i.e. it's just a POCO), and my convention to indicate in a nuspec that a package dependency's version needs updating is that the version contains "replace", like this:

<dependency id="CavemanTools" version="0.0.0-replace" />

Then I tell the NuSpecFile object to save the updated nuspec in the project's temp folder and invoke the predefined CreateNuget helper. The returned nupkg file path is saved into Context.Data so that it can be used by the next task.

[Depends("pack")]
public class push
{
    public ITaskContext Context { get; set; }

    public void Run()
    {
        foreach (var project in Context.InitData.As<PowerToolsInit>().GetProjects())
        {
            var nupkg = Context.Data.GetValue<string>(project.Name + "pack");
            BuildScript.NugetExePath.Exec("push", nupkg);
        }
    }
}

Push doesn't do much. It gets the nupkg paths from Context.Data, then invokes the Exec helper to run nuget.exe and push the package. By default, NugetExePath assumes nuget.exe is in the "..\src\.nuget" directory. You can change that.

And there you have it: the implementation used to build MvcPowerTools. I don't know about you, but I think this object oriented approach is much easier to maintain than functions and global variables. MakeSharp is already available as a nuget package.