MvcPowerTools In Action: Multilingual Sites

By Mike on 10 September 2014

Recently I "developed" a simple brochure site which had one "catch": it had to be bilingual. Since it was mainly a static site (no db whatsoever) I needed to write the texts directly in the html. Also, for SEO purposes, the urls had to change according to the selected language.

Basically the plan was this: a common layout with a localized partial for the meta description, one view per language for each site page, and urls like "mysite.com/home", "mysite.com/accueil". No language identifier in the urls; instead, the route for each url contains a 'lang' parameter. The contact page contained a form whose labels also had to be localized.

I had a Translator class, nothing fancy, just a dictionary and some helpers. Let's start with the urls. Using routing conventions I came up with the following:

    public class MultilingualRouting : IBuildRoutes
    {
        private MultilingualTexts _translator;

        public MultilingualRouting()
        {
            _translator = Translator.Instance.Data;
        }

        // this convention applies to every action in the app
        public bool Match(ActionCall action)
        {
            return true;
        }

        public IEnumerable<Route> Build(RouteBuilderInfo info)
        {
            // the action name doubles as the translation key
            var texts = _translator.TranslationFor(info.ActionCall.GetActionName());
            return texts.Select(t =>
            {
                // one route per language: the localized link text becomes the url slug
                var route = info.CreateRoute(t.Value.ConvertAccentedString().MakeSlug());
                route.Defaults["lang"] = t.Key;
                return route;
            });
        }
    }

The action name is the translation key. The translator returns "language" => "text" pairs, so for each available language I generate a route whose url is the localized (link) text, stripped of accents and turned into a slug, and then set that language as the route's 'lang' default. Every route thus carries its associated language.
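
The Translator itself isn't shown in this post; here's a rough sketch of the shape the routing convention relies on, assuming the nested dictionary layout described above (all member bodies are illustrative, the real class is "just a dictionary and some helpers"):

using System.Collections.Generic;

public class MultilingualTexts
{
    // translation key => { language => localized text }
    private readonly Dictionary<string, Dictionary<string, string>> _data =
        new Dictionary<string, Dictionary<string, string>>();

    public void Add(string key, string language, string text)
    {
        Dictionary<string, string> perLanguage;
        if (!_data.TryGetValue(key, out perLanguage))
        {
            perLanguage = new Dictionary<string, string>();
            _data[key] = perLanguage;
        }
        perLanguage[language] = text;
    }

    // "language" => "text" pairs for a translation key, exactly what the
    // routing convention iterates over (the real class also exposes
    // GetTranslations(language), used later for labels and links)
    public IEnumerable<KeyValuePair<string, string>> TranslationFor(string key)
    {
        return _data[key];
    }
}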

Next, the view engine conventions. The Views directory structure looks like this

[image: the Views directory structure]
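
Roughly, it's one folder per language under ~/Views plus the language-neutral views; the folder and view names below are just for illustration (en/fr standing in for the site's two languages):

    Views/
        en/
            Home.cshtml
            Contact.cshtml
        fr/
            Accueil.cshtml
            Contact.cshtml
        Shared/
            _Layout.cshtml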


And the conventions

public class MultiViewEngine : BaseViewConvention
{
    public override bool Match(System.Web.Mvc.ControllerContext context, string viewName)
    {
        // applies to every view
        return true;
    }

    public override string GetViewPath(System.Web.Mvc.ControllerContext controllerContext, string viewName)
    {
        return "~/Views/{0}/{1}.cshtml".ToFormat(controllerContext.RouteData.Values["lang"], viewName);
    }
}

public class DefaultEngine : BaseViewConvention
{
    public override string GetViewPath(System.Web.Mvc.ControllerContext controllerContext, string viewName)
    {
        return "~/Views/{0}.cshtml".ToFormat(viewName);
    }
}

Not much to explain here. The current language is used as the directory where the view should be; for example, with lang set to "fr", the "Contact" view resolves to "~/Views/fr/Contact.cshtml". If it's not there, the fallback convention looks directly in the "~/Views" folder.

Finally, the convention for our form labels

conventions.Labels.Always.Modify((tag, info) =>
{
    var translator = Translator.Instance.Data.GetTranslations(info.ViewContext.HttpContext.CurrentLanguage());
    return tag.Text(translator[info.Name]);
});

By default, the label text is the field name (the name of the model property), but with this modifier we use that name as a translation key, fetch the localized text and assign it to the label. Another approach would be to generate the label directly with the localized text, but I only care about changing the text; I don't want to re-write the whole label tag generation, regardless of how trivial it is. It's much simpler to just change the text.

As a rule of thumb, it's better to modify than to build (generate) if the tag already has a default, simple builder. For example, if I want all my text areas to be 7 rows, I'd write something like this

conventions.Editors
    .ForModelWithAttribute<DataTypeAttribute>()
    .Modify((tag, info) =>
    {
        var att = info.GetAttribute<DataTypeAttribute>();
        if (att.DataType != DataType.MultilineText) return tag;
        // Attr mutates the input tag in place
        tag.FirstInputTag().As<TextboxTag>().Attr("rows", 7);
        return tag;
    });

What about generating links in html? I want to use this

@(Html.LocalizedLinkTo(d => d.Contact()))

And here's the helper code

public static HtmlTag LocalizedLinkTo(this HtmlHelper html, Expression<Action<DefaultController>> action, string text = null)
{
    var name = action.GetMethodInfo().Name;
    var translations = html.GetTranslations();
    text = text ?? translations[name];
    return html.LinkTo("default", text, action: name, model: new { lang = translations.Language });
}

I have only one controller in this app (remember, it's a brochure site) so I only need the action name to use as a translation key; the current language is then passed as a route value.
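
The CurrentLanguage() and GetTranslations() helpers used above are part of my Translator plumbing, not MvcPowerTools. Here's a hedged sketch of them, assuming the language always travels as the 'lang' route value; the LanguageTranslations shape is hypothetical, its only job is to expose the indexer and Language property the snippets above rely on:

using System.Collections.Generic;
using System.Web;
using System.Web.Mvc;

public static class LanguageExtensions
{
    // the 'lang' route value is set by the routing convention shown earlier
    public static string CurrentLanguage(this HttpContextBase context)
    {
        return (string)context.Request.RequestContext.RouteData.Values["lang"];
    }

    // shortcut used by the html helpers: translations for the current request's language
    public static LanguageTranslations GetTranslations(this HtmlHelper html)
    {
        var lang = html.ViewContext.HttpContext.CurrentLanguage();
        return Translator.Instance.Data.GetTranslations(lang);
    }
}

// hypothetical shape of what GetTranslations returns: an indexer over
// translation keys plus the language it was resolved for
public class LanguageTranslations
{
    private readonly IDictionary<string, string> _texts;

    public LanguageTranslations(string language, IDictionary<string, string> texts)
    {
        Language = language;
        _texts = texts;
    }

    public string Language { get; private set; }

    public string this[string key]
    {
        get { return _texts[key]; }
    }
}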

And this is how you can use MvcPowerTools to build multilingual sites.

Mixing the Domain

By Mike on 4 September 2014

With DDD becoming more popular, it looks like a "fight" between the old CRUD and the cool DDD. It's easy to assume that we have CRUD apps and DDD apps: this app is simple, therefore it's CRUD; this app seems to have a rich domain, therefore we should be using DDD. But in practice, some apps are 100% CRUD while no app is 100% DDD.

Let's quickly define a CRUD app: it's a database UI concerned with taking the correct (valid) input from users to shove it into the db. It doesn't care about business semantics and contains NO business logic. It does contain some business rules represented by validation.

But once you have one object/function encapsulating business logic, it's no longer a 100% CRUD app. Does it matter? A lot of devs treat ANY app as a CRUD one (especially web apps) because they don't know better. And it works, until the business logic becomes more complex and/or changes often. Then you have a maintainability problem.

Nowadays, anyone (loose term) knows that for a domain rich in behaviour, DDD is the way to go. But regardless of how complex the Domain is, there's no such thing as a domain made 100% of Rich Behaviour Objects (RBO); you have at least one domain concept which really is a data structure (CustomerProfile comes to mind). Actually there's more than one: I'd say a domain is at least 25% data structures.

Thing is, with DDD we're usually using CQRS, Domain Events or Event Sourcing. But if a good chunk of our domain (actually one class is enough) is just data structures, in other words it's CRUDy, aren't we complicating our life with at least 2 models (CQRS) or with events which are pretty much the entity itself?

The problem appears when we decide on a solution up front and then force every problem into it. RBOs with a CRUD mindset are pain; data structures with Event Sourcing or CQRS are just a complication. How about using the optimum solution for each problem? How about doing CRUD for the data structures in our Domain, and CQRS, Domain Events and Event Sourcing for the RBOs?

We don't need to declare an app CRUD or DDD. It's an app; let's just use the approach that makes the most sense for each bit of the Domain. It's not about using the latest cool trend/design pattern, it's about coming up with the most maintainable solution. The goal is to solve the problem while making our lives easier. For big apps, or wherever such flexibility is available, I'd also say to use the storage that fits each problem.

This means that instead of "we're using Sql Server for everything", you can use a doc db for your RBOs, a stand-alone EventStore and a RDBMS in the same app. It's a harmful mindset to think: "we do things only this (one) way (CRUD, SqlServer etc) because it works and this is how we've done it until now". It's bordering on stupid to choose the same solution regardless of the problem. An app is not a showcase for a technology or a design pattern; it's an implementation of a service meant to bring value to its stakeholders.

As a side note, this is why I implement persistence last. Even a small app can be 75% data structures and 25% RBOs. I'll use CRUD for the data structures (which may involve a (micro)ORM) and CQRS for the remaining 25%, with the domain objects serialized, or stored as events for the objects fit for ES. But in order to know that, I have to design and implement the Domain first. Only after that can I start toying with the db. Anything before is wasted time and leads to a poor design (as I'd be biased to fit the Domain design to the db structures).

In conclusion: your app is a place where the solution fit for each problem should be used and all those design patterns can co-exist. The big mistake is to decide on a generic solution up front and then try to force all the problems into it.

MvcPowerTools 1.0 Released

By Mike on 3 July 2014

After 6 months of work, I'm proud to announce to the Asp.Net Mvc community that MvcPowerTools (MvcPT), aka FubuMvc goodness for Asp.Net Mvc, is finally released. Aimed at the seasoned developer, MvcPT brings the power of conventions to the Asp.Net Mvc platform. What's the big deal about these conventions, you ask? Well, grab a be... chair and pay attention for 3 minutes.

Imagine that you want to write maintainable code. How do you define maintainability in a few words? "Easy to change". Simply put, you want the code to be easy to change, and that's why you write decoupled code, prefer composition over inheritance, use an event driven architecture, respect SOLID principles etc. Using conventions is yet another tool for achieving maintainability. And not only that, it's about control. Do you want to force your app into a framework's constraints, or do you want to make the framework adapt to your style? Who's in control, you or the framework?

Using conventions means you decide the rules a framework uses to provide a behaviour. Take routing, for example. If the routes are generated based on conventions (rules), you can come up with any convention you want that will create one or more routes from a controller/action. Want to use attributes? OK! Want namespace based routes? OK! Want to mix'n'match things based on whatever criteria? You get to do that! And best of all, want to change everything at once? Just change the relevant conventions.

Let's talk about html conventions, i.e. rules to create/modify html widgets. With the power of conventions, you can make bulk changes without touching anything BUT the convention itself. Think about forms. In a big business app you may have hundreds of forms. If the form fields are generated using html conventions, i.e. the code looks like "@Html.EditModel()", then making ALL the fields of all the forms use bootstrap css amounts to defining a single convention (2-3 lines of code). Change that convention and all those hundreds of fields change automatically. It's simply DRY.
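
To make it concrete, here's roughly what such a convention could look like, in the spirit of the label/editor conventions from the multilingual post above. I'm assuming an Always hook on Editors (mirroring Labels.Always; the exact entry point may differ) and AddClass from the HtmlTags library the tags are built with:

// sketch: one convention restyles every generated editor at once
conventions.Editors.Always.Modify((tag, info) =>
{
    // bootstrap expects form fields to carry the form-control class
    return tag.AddClass("form-control");
});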

Some devs don't like conventions because they look like magic; I suppose it depends on whether the developer wants to control the magic or fear it. I do find it useful, for example, to see [Authorize] on top of a controller, but I get tired very quickly of writing that attribute everywhere I need it, and inheriting an AuthorizedController base class is just a bad solution. I just want to say directly: "All the controllers from Admin need to be authorized" or "All the controllers BUT these [] need to be authorized" or "All my WebApi controllers respond ONLY to Ajax requests".

Conventions allow you to do mass control as opposed to micro-managing things. Tired of repeating yourself? DRY! Use a convention. Want to make bulk changes with minimum effort? Use conventions!

I'm not cheering for a new golden hammer, but for another tool that helps us write maintainable code. And if you liked Jimmy Bogard's "put your controllers on a diet" approach, you'll find a nice surprise built into MvcPowerTools.

Now, go read the docs, get the nuget and start writing more maintainable apps!

Introducing Make# aka MakeSharp

By Mike on 20 June 2014

I don't always have many needs from a build automation tool (with C# as the scripting language), but when I do, the procedural approach turns into a mess. Static methods and global variables simply don't work if you want tidy, reusable and flexible functionality. At least for me they don't work, since I'm so used to the object oriented mindset that I always feel the need to define a class, extract some things into private methods etc.

So, I've decided to take my CSake (C# Make) project to the next level, i.e. I've rewritten it to be more compatible with the OOP mindset and to use the (not so) new scriptcs that everyone's raving about. Enter Make# (MakeSharp), the build automation tool where you can use C# in an OOP way (like God intended). And because I do like intellisense, all of my projects using Make# also have a Build project referencing the Make# executable and helper library, where I can code my build 'script' with all the VS goodness. And the script is in fact a bunch of classes (you could almost call it an app) which do the actual work.

You can read the details and a generic example on the Make# github page, but in this post I want to show a slightly more advanced usage. This is the build script for a project of mine called MvcPowerTools (well, in fact there are two projects, because it contains the WebApi tools, too). My needs are:

  • clean/build solution;
  • update nuspec files with the current version, as well as the versions of some deps. One dependency is a generic library I maintain and evolve, and many times I need features which turn out to be generic enough to become part of that library. And since I don't want to make 3 releases a day for that library, I'm using it directly, hence its version isn't stable. So I want the dep version updated automatically;
  • build packages as pre release, this means the project's version in nuspec would have the "-pre" suffix;
  • push packages.

I need all these steps for both projects, and I want the script to let me specify whether one or both projects should be built. This is the script (you can see it as one file here).

public class PowerToolsInit : IScriptParams
{
    public PowerToolsInit()
    {
        ScriptParams = new Dictionary<int, string>();
        Solution.FileName = @"..\src\MvcPowerTools.sln";
    }

    List<Project> _projects=new List<Project>();

    public IEnumerable<Project> GetProjects()
    {
        if (_projects.Count == 0)
        {
            bool mvc = ScriptParams.Values.Contains("mvc");
            bool api = ScriptParams.Values.Contains("api");
            bool all = !mvc && !api;
            if (mvc || all)
            {
                _projects.Add(new Project("MvcPowerTools",Solution.Instance){ReleaseDirOffset = "net45"});
            }
            if (api || all)
            {
                _projects.Add(new Project("WebApiPowerTools",Solution.Instance){ReleaseDirOffset = "net45"});
            }
        }
        return _projects;
    }

    public IDictionary<int, string> ScriptParams { get; private set; }
}

You can see that I'm defining a class to hold the init data: PowerToolsInit. This is how I override the default implementation of IScriptParams: by providing my own. In this class I decide which projects are to be built, based on the script arguments. Solution and Project are predefined Make# helpers (intellisense really simplifies your work). I have only one solution, so I use it as a singleton.

public class clean
{
    public void Run()
    {
        BuildScript.TempDirectory.CleanupDir();
        Solution.Instance.FilePath.MsBuildClean();
    }
}

[Default]
[Depends("clean")]
public class build
{

    public void Run()
    {
        Solution.Instance.FilePath.MsBuildRelease();
    }
}

Self-explanatory. BuildScript is another predefined helper. MsBuildClean and MsBuildRelease are Windows-specific helpers (found in MakeSharp.Windows.Helpers.dll, which comes with Make#) implemented as extension methods.

[Depends("build")]
public class pack
{
    public ITaskContext Context {get;set;}

	public void Run()	
    {

	    foreach (var project in Context.InitData.As<PowerToolsInit>().GetProjects())
	    {
	        "Packing {0} ".WriteInfo(project.Name); //another helper
            Pack(project);
	    }
      
    }
	
   void Pack(Project project)
    {
        var nuspec = BuildScript.GetNuspecFile(project.Name);
        nuspec.Metadata.Version = project.GetAssemblySemanticVersion("pre");
	    
        var deps = new ExplicitDependencyVersion_(project);
        deps.UpdateDependencies(nuspec);
        
        var tempDir = BuildScript.GetProjectTempDirectory(project);
	    var projDir = Path.Combine(project.Solution.Directory, project.Name);
        var nupkg=nuspec.Save(tempDir).CreateNuget(projDir,tempDir);
	    Context.Data[project.Name+"pack"] = nupkg;
    }
}


class ExplicitDependencyVersion_
{
    private readonly Project _project;

    public ExplicitDependencyVersion_(Project project)
    {
        _project = project;
    }

    public void UpdateDependencies(NuSpecFile nuspec)
    {
        nuspec.Metadata.DependencySets[0].Dependencies
            .Where(d => d.Version.Contains("replace"))
            .ForEach(d => d.Version = _project.ReleasePathForAssembly(d.Id + ".dll").GetAssemblyVersion().ToString());
    }
}

Now this is interesting. The Context property allows Make# to inject a predefined TaskContext that can be used to access the script arguments (or, in this case, the init object) and to pass values on to other tasks. BuildScript.GetNuspecFile is a helper returning a NuSpecFile object (another predefined helper); it assumes a nuspec file named after the project is available in the same directory as the build script. The GetAssemblySemanticVersion method of Project lets you specify versioning details like pre-release or build metadata as defined by SemVer. For Nuget purposes, any text suffix marks the package as a pre-release.

In order to update the dependencies' versions, I've created a utility class (the default task discovery convention in Make# says that a class with a "_" suffix is not a task, i.e. just a POCO), and my convention to indicate in a nuspec that a package dependency's version needs to be updated is that the version contains "replace", like this

<dependency id="CavemanTools" version="0.0.0-replace" />

Then I tell the NuSpecFile object to save the updated nuspec in the project's temp folder and invoke the predefined CreateNuget helper. Finally, I save the returned nupkg file path into Context.Data so that it can be used by the next task.

[Depends("pack")]
public class push
{
    public ITaskContext Context { get; set; }

    
    public void Run()
    {
        foreach (var project in Context.InitData.As<PowerToolsInit>().GetProjects())
	    {
	        var nupkg=Context.Data.GetValue<string>(project.Name+"pack");     
            BuildScript.NugetExePath.Exec("push", nupkg);
	    }
      
        
       
    }
}

Push doesn't do much. It gets the nupkg paths from Context.Data, then invokes the Exec helper to run nuget.exe and push the package. By default, NugetExePath assumes nuget.exe is in the "..\src\.nuget" directory. You can change that.

And there you have it: the implementation used to build MvcPowerTools. I don't know about you, but I find this object oriented approach much easier to maintain than functions and global variables. MakeSharp is already available as a nuget package.

Domain Driven Design Modelling Example: Brain and Neurons

By Mike on 11 June 2014

I got this question on stackoverflow:

A brain is made up of billions of neurons, so if you have brain as your AR you potentially have a performance issue as its not feasible to retrieve all neurons when you want to activate some behaviour of a brain. If I make the neuron the AR, then its not trivial to validate its behaviour and there is the issue of creating new neurons within a brain. Which should be the AR, or should there be some other AR to encapsulate these

I found the use case quite interesting because we're dealing with billions of "children". So, is the Brain an Aggregate Root (AR)? Is the Neuron a Value Object (VO) or an AR? Assuming we want to model things as closely as possible to how the brain actually works, I'll say it quickly: the Brain is an AR, the Neuron is a VO, and the Brain will always require ALL the neurons (yes, billions of them) every time you ask the repository for a Brain object. So yes, I'm saying to load billions of rows any time you need a brain (sic). Maybe I need a brain too. Or maybe I'm just a good Domain modeller. Here's how it is:

A brain can't function without neurons, a brain needs the neurons. But does it need all of them? Well, let's see... I'm telling the Brain to learn a poem. The Brain will use some neurons to store that information. I tell the Brain to solve a math problem. The Brain will use the same or other neurons to do that. Now, do I (the client using the brain object) know which neurons are used for what behaviour? Or is it an implementation detail of the Brain? That's the point, only the Brain knows how to use the Neurons. It groups them in regions and it might use 1 or 10 regions for one single task. But nobody outside the Brain knows these details. This is proper encapsulation.

So when you need a Brain, you get it whole, with all the neurons, no matter how many there are. You don't get only parts of it, because they don't make sense independently. Btw, the neurons are probably distinct for the Brain, but that doesn't mean they're automatically Domain Entities. After all, maybe the brain doesn't care which neurons are used; maybe it cares more about regions and just tells a region to supply a number of available neurons. Implementation details...

Should the Neuron be an Entity or even an AR? Maybe, but not in this Aggregate. Here, it's just something the Brain uses internally. If you make it an Entity or an AR, you'll have something whose information only the Brain can understand. And if you could understand the neuron's information, then you wouldn't really need the Brain. In this aggregate, the Neuron is at most a VO.
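
To make the shape concrete, here's a minimal sketch; all names and members are illustrative (none come from the question). The point is only that the Brain is the single way in and the neurons never leak out of the aggregate:

using System.Collections.Generic;
using System.Linq;

// illustrative only: the Brain is the AR, the Neuron a value object it uses internally
public class Brain
{
    private readonly List<Neuron> _neurons;

    // the repository rehydrates the Brain with ALL its neurons
    public Brain(IEnumerable<Neuron> neurons)
    {
        _neurons = new List<Neuron>(neurons);
    }

    public void Learn(string poem)
    {
        // only the Brain knows which neurons serve which task;
        // the client just says Learn() and everything else stays hidden
        foreach (var neuron in _neurons.Where(n => n.IsAvailable).Take(100))
        {
            neuron.Store(poem);
        }
    }
}

// a VO in this aggregate: no identity the outside world can use
public class Neuron
{
    public Neuron()
    {
        IsAvailable = true;
    }

    public bool IsAvailable { get; private set; }

    internal void Store(string information)
    {
        // implementation detail of the aggregate
        IsAvailable = false;
    }
}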

But what about the technicalities? I mean, there are still billions of neurons; it's a challenge to load them all, right? Isn't lazy loading the perfect solution here? It isn't, because the ORM doesn't know which neurons the Brain needs. The Brain would need some virtual properties (let's say the ORM can work with private properties) which load only specific neurons. But what do you do, create one property for each possible behaviour so that you can load only those specific neurons? And if the Brain doesn't care about a specific Neuron but only about the signature of the data contained, how do you implement that in a way the ORM can work with? Lazy loading has just complicated things. This is why lazy loading is an anti-pattern for Domain entities: it has to know domain details in order to know what data to retrieve, and not everything can be put in a virtual property (to allow the ORM to create the proxy), not if you want to keep the object maintainable.

The solution here is to load everything the Brain needs to do its job, because that's how the brain works according to the Domain (it needs ALL the neurons) and it's the maintainable solution. Maybe not the fastest one, but you're dealing with a complex notion where maintainability matters more, and it's not like you're using the same object for queries, right? And it's not about the number of neurons, it's about the brain's data size. Do be aware that in this case the Brain and the Neurons have a direct relationship, in the sense that one can't work, or doesn't make sense, without the other.

Other cases might resemble this one, but the relationship can be one of association, not definition. In those cases you have grouping criteria and items (forum and threads) or different concepts working together (table and chairs).