Building Domain Driven Architecture in .NET – Part 1 (Overview)

Having read Vaughn Vernon’s book on DDD implementation, I decided to portgrade (port + upgrade) my old, clunky Windows-based N-tier desktop expense tracking application to an ASP.NET Core MVC application using some of the Domain Driven Design concepts I studied in the book.

In this post, I will describe the overall architecture of the new expense tracking application and how the various components fit together. In subsequent posts, I will describe domain modeling, repository design and application services design, and top it off with an ASP.NET Core MVC web application front end. I am not going to go into the actual implementation details of the application itself, because the point is to show my general approach, thought processes and philosophies towards building a DDD application.

This may not be a “by-the-book DDD” implementation, but it’s my sincerest interpretation of various concepts crystallised into a working application. I might have taken some liberties with some of the ideas and implemented them differently, so please take this with a pinch of salt. Here goes!

To visualise the general architecture of the application, I created this diagram, which is a layered version of the original Onion architecture or the Hexagonal Ports and Adapters design. I doubt most DDD applications would deviate very far from this fundamental architecture:

[Diagram: General DDD Architecture]

Instead of starting at the extremities, I start at the centre with the domain model. This is a set of C# libraries that contain the object-oriented model of the problem space, i.e. a bounded context in DDD speak. Within this realm, all classes and their relationships have a certain meaning that’s appropriate to the problem we are trying to solve, i.e. expense tracking and reporting. The classes are logically grouped together into cohesive structures called aggregates, or object graphs; physically, an aggregate is simply a folder in the assembly that groups the related classes together. Grouping this way ensures that a whole set of objects stays consistent within a transaction, which is why it’s important that an aggregate is not too big, i.e. doesn’t contain too many unrelated classes.

Domain model classes contain not only data that’s relevant to the domain but also behaviour, i.e. use case methods/functions that are appropriate to the domain. The calling code calls these methods, passing in all the necessary information, and the relevant class takes care of executing that use case at a lower level. This ensures that the aggregate (or the object graph) is consistent and valid after the use case executes.
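To make this concrete, here is a rough sketch of what such an aggregate could look like. The ExpenseSheet and ExpenseItem types are hypothetical stand-ins to illustrate the style, not the actual classes from my application:

```csharp
using System;
using System.Collections.Generic;

public enum ExpenseSheetStatus { Draft, Submitted }

public class ExpenseItem
{
    public string Description { get; private set; }
    public decimal Amount { get; private set; }

    public ExpenseItem(string description, decimal amount)
    {
        Description = description;
        Amount = amount;
    }
}

// Hypothetical aggregate root. Property setters are private, so state can
// only change through behaviour methods that keep the aggregate consistent.
public class ExpenseSheet
{
    private readonly List<ExpenseItem> _items = new List<ExpenseItem>();

    public Guid Id { get; private set; }
    public ExpenseSheetStatus Status { get; private set; }
    public IReadOnlyList<ExpenseItem> Items => _items.AsReadOnly();

    public ExpenseSheet(Guid id)
    {
        Id = id;
        Status = ExpenseSheetStatus.Draft;
    }

    // A behaviour method: the caller passes in everything needed and the
    // aggregate enforces its own invariants before mutating its state.
    public void AddExpense(string description, decimal amount)
    {
        if (Status != ExpenseSheetStatus.Draft)
            throw new InvalidOperationException("Cannot add expenses to a submitted sheet.");
        if (amount <= 0)
            throw new ArgumentOutOfRangeException(nameof(amount), "Amount must be positive.");

        _items.Add(new ExpenseItem(description, amount));
    }

    public void Submit()
    {
        if (_items.Count == 0)
            throw new InvalidOperationException("An empty sheet cannot be submitted.");
        Status = ExpenseSheetStatus.Submitted;
    }
}
```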

This way of thinking about the problem space & domain modeling has the following advantages:

  1. I can focus completely on the business domain and model it from inside out without worrying about databases, UIs etc.
  2. I can build the domain model and simultaneously test it in complete isolation using unit tests to prove that all the use cases work as expected. This will improve the overall quality of the code.
  3. Encourages loose coupling and encapsulation by putting only the relevant behaviours in the appropriate classes and hiding internal implementation detail from the calling code. For example, in my domain model all properties have private setters, so they can only be modified from within these behaviour methods, which lets the domain object guarantee its own consistency (as in the ExpenseSheet sketch above).

The next thing out from the centre, for me, is the repository. Every application needs some form of persistent storage to save its data for the long term; the problem is that there are so many databases out there. Which one should I use? Can I postpone that decision for now and just use something where I can dump the data and pretend it’s stored for the long term, just so I can prove that the application works?

It turns out, I can!

The repository pattern allows me to decouple the application from the database by introducing a layer of abstraction in the middle. This way the application only needs to talk to this abstraction (which I control), and at some point an implementation of this abstraction will do the low-level grunt work of actually saving the data somewhere. The calling code doesn’t need to worry about where or how the data is getting saved, as long as it can get the results it needs and store the data it needs.

The reason this is possible is a technique called Inversion of Control/Dependency Inversion. It’s the D in the SOLID principles of software design. I can create a C# interface (usually defined in the domain model layer, either generic or aggregate-specific) that exposes methods completely independent of any database concerns, which the calling code can call into; an implementation of this interface then provides the actual data-store-specific implementation of these methods to realise the persistence. This way my application can be completely persistence agnostic: I can swap out one data store for another if my business needs change in the future, and nothing in the rest of the application would need to change.
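As a sketch, such an abstraction might look like the following. The interface name and its members are illustrative, not the actual contract from my application:

```csharp
using System;
using System.Threading.Tasks;

// Defined in the domain model layer: the contract speaks only in domain
// types and exposes nothing about any particular database.
public interface IExpenseSheetRepository
{
    Task<ExpenseSheet> GetByIdAsync(Guid id);
    Task AddAsync(ExpenseSheet sheet);
    Task UpdateAsync(ExpenseSheet sheet);
}
```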

The other advantage this provides is that I can unit test my application without any real persistence behind it. I can either mock the repository out or use an in-memory implementation, e.g. one backed by a ConcurrentDictionary, which stores all the data in memory but is cleared when the application shuts down. This will help me build the rest of the application relatively unimpeded by database decisions until such a time as I feel the need to actually persist data somewhere more permanently.
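A minimal sketch of such an in-memory implementation, assuming the hypothetical IExpenseSheetRepository interface above:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Hypothetical in-memory implementation of the repository abstraction.
// Useful for unit tests and for deferring the database decision: data
// lives only for the lifetime of the process.
public class InMemoryExpenseSheetRepository : IExpenseSheetRepository
{
    private readonly ConcurrentDictionary<Guid, ExpenseSheet> _store =
        new ConcurrentDictionary<Guid, ExpenseSheet>();

    public Task<ExpenseSheet> GetByIdAsync(Guid id)
    {
        _store.TryGetValue(id, out var sheet);
        return Task.FromResult(sheet); // null if not found
    }

    public Task AddAsync(ExpenseSheet sheet)
    {
        _store[sheet.Id] = sheet;
        return Task.CompletedTask;
    }

    public Task UpdateAsync(ExpenseSheet sheet)
    {
        _store[sheet.Id] = sheet; // last write wins; fine for tests
        return Task.CompletedTask;
    }
}
```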

The repository assemblies reference the relevant domain model assemblies so that an implementation can persist objects of those types. For the expense tracking application I used SQL Server as the default data store with Entity Framework Core as the ORM, but just to show an example of flexibility I also built a repository implementation for Microsoft’s cloud-hosted NoSQL offering, Cosmos DB (formerly Document DB). It took me all of half an hour to do, including testing. Imagine the benefits of this kind of flexibility in a real enterprise-level application: it may take longer than 30 minutes, but it’s still much quicker with this kind of architecture than with the traditional tightly coupled applications of the past. Maintainability and extensibility are key to software success.
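For illustration, a rough EF Core-backed implementation of the same hypothetical interface might look like this. The ExpenseDbContext is assumed for the sketch, and the entity type configuration needed to map private setters and the backing field is omitted for brevity:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

// Assumed DbContext for the sketch; real mapping configuration omitted.
public class ExpenseDbContext : DbContext
{
    public ExpenseDbContext(DbContextOptions<ExpenseDbContext> options) : base(options) { }
    public DbSet<ExpenseSheet> Sheets { get; set; }
}

// Hypothetical SQL Server/EF Core implementation of the abstraction.
public class EfExpenseSheetRepository : IExpenseSheetRepository
{
    private readonly ExpenseDbContext _db;

    public EfExpenseSheetRepository(ExpenseDbContext db)
    {
        _db = db;
    }

    public Task<ExpenseSheet> GetByIdAsync(Guid id) =>
        _db.Sheets.SingleOrDefaultAsync(s => s.Id == id);

    public async Task AddAsync(ExpenseSheet sheet)
    {
        _db.Sheets.Add(sheet);
        await _db.SaveChangesAsync();
    }

    public async Task UpdateAsync(ExpenseSheet sheet)
    {
        _db.Sheets.Update(sheet);
        await _db.SaveChangesAsync();
    }
}
```

A Cosmos DB version would implement the same interface against the Cosmos SDK instead; nothing upstream of the interface would notice the difference.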

The next layer out is the application services. Not to be confused with WCF or any other web services, application services are simply C# class libraries that expose coarse-grained use case functions/APIs to the calling code. Application service assemblies reference both the repository and domain model assemblies, plus any other infrastructure-level components, e.g. logging, caching, etc. These use case functions might internally translate into one or more repository calls required to accomplish the use case. Application services can also interact with third-party external web-based APIs/services to get the data they need for the use case. The key is that all this chaos is encapsulated away from the client, i.e. the calling code, making sure that external dependencies don’t leak into it and tightly couple it to anything it doesn’t need to know about.

In the case of my expense tracking application, application services expose methods to my MVC controllers that, when called into, “coordinate the chaos” of the various domain objects and repositories to execute a use case in an atomic manner. For example, an app service method might load a domain entity from the repository, invoke a domain behaviour method against the entity to mutate its state, and then call back into the repository to persist those changes (this is the get->mutate->update pattern). Application services are only responsible for facilitating these use case transactions; they don’t themselves take part in business/domain logic enforcement, as that’s the job of the domain model.
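Using the hypothetical types from the earlier sketches, an application service method following that pattern might look roughly like this:

```csharp
using System;
using System.Threading.Tasks;

// Hypothetical application service showing get -> mutate -> update. It
// coordinates the repository and the domain model but enforces no
// business rules of its own; those live in the ExpenseSheet aggregate.
public class ExpenseSheetService
{
    private readonly IExpenseSheetRepository _repository;

    public ExpenseSheetService(IExpenseSheetRepository repository)
    {
        _repository = repository;
    }

    public async Task AddExpenseAsync(Guid sheetId, string description, decimal amount)
    {
        // Get: load the aggregate from persistence.
        var sheet = await _repository.GetByIdAsync(sheetId);
        if (sheet == null)
            throw new InvalidOperationException($"Expense sheet {sheetId} not found.");

        // Mutate: the domain behaviour method enforces the invariants.
        sheet.AddExpense(description, amount);

        // Update: persist the changed aggregate back.
        await _repository.UpdateAsync(sheet);
    }
}
```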

At this point, it’s largely a matter of choice and business requirements whether I go ahead and put an ASP.NET MVC web front end in front of this application and/or create RESTful API endpoints to allow external applications to consume its data and services.

I am quite literally free to put any presentation layer in front of this design, and the application would behave in more or less the same way apart from a few application-type-specific configuration changes that I might need to make. For example, ASP.NET Core comes with a Dependency Injection framework out of the box, but for a WinForms application I would need to set up the Dependency Injection in the composition root manually using some kind of third-party DI framework like Autofac, Ninject, etc. Once that’s done, all I need to do is take a dependency on the appropriate application service(s) and call into its use case APIs, and that’s it!
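With ASP.NET Core’s built-in container, the composition root for the hypothetical types above could be wired up roughly like this (the “Expenses” connection string name is made up for the example):

```csharp
// In Startup.ConfigureServices: the only place that knows which concrete
// repository implementation is in play. Swapping SQL Server for Cosmos DB
// would mean changing just these registrations.
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    services.AddDbContext<ExpenseDbContext>(options =>
        options.UseSqlServer(Configuration.GetConnectionString("Expenses")));

    services.AddScoped<IExpenseSheetRepository, EfExpenseSheetRepository>();
    services.AddScoped<ExpenseSheetService>();
}
```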

The screenshot below shows the way I structured my expense tracking application solution (application name has been redacted but the message is still valid):

[Screenshot: solution structure]

The solution folders represent the logical rings of the Onion architecture, and the structure is mirrored on disk as well for consistency.
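Since the names are redacted in the screenshot, a hypothetical layout along the same lines might look like this (all project names are illustrative):

```
ExpenseTracker.sln
  1. Domain
     ExpenseTracker.Domain              (aggregates, repository interfaces)
  2. Repository
     ExpenseTracker.Repository.Sql      (EF Core / SQL Server implementation)
     ExpenseTracker.Repository.Cosmos   (Cosmos DB implementation)
  3. ApplicationServices
     ExpenseTracker.Services            (coarse-grained use case APIs)
  4. Presentation
     ExpenseTracker.Web                 (ASP.NET Core MVC front end)
  5. Tests
     ExpenseTracker.Domain.Tests        (unit tests against the domain model)
```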

In part 2, I will tackle domain modeling in a bit more depth.

6 Replies to “Building Domain Driven Architecture in .NET – Part 1 (Overview)”

  1. Thanks for this series. I’ve been trawling for a decent practical example of DDD and this is the first one that I can relate to. Good intro, and nice diagram.

    1. You are welcome! I am glad you found this useful. I had been in a similar situation to yours, so I decided to port my old application to a DDD-esque architecture and blog my experiences. DDD is a very broad application design style and seems to have plenty of variations depending on the complexity of the domain.
