Friday, 26 March 2021

Outside-in Diamond 🔷 TDD #2 (anatomy of a style)

After talking about the WHY and the reasons that have motivated the emergence of this style of TDD over the years (basically, being compatible with people's psychology and helping them with their recurring misunderstandings about TDD), this second article will be about the HOW. To do this, we will see some examples and list the main characteristics of tests written with this style.                                                                                

Note: For those who would like to know more about it, you can refer to the talk I recently gave to DDD Africa, available here: https://youtu.be/djdMp9i04Sc or to the first article of this series (exploring the WHY)


Outside-In Diamond TDD


First, it's a workflow

Before we look at the various kinds of tests Outside-In Diamond 🔷 TDD cares about, I would like to focus on the dynamics of writing them.


As its name suggests, Outside-In Diamond 🔷 is a style where we draw the shape of our System-Service-API-Application from the beginning, based on our use cases and business needs (each of which turns into an acceptance test). The ability to drive everything from the outside (i.e., from the consumption of our System) allows us not to get lost along the way and to avoid coding useless things that are not directly necessary for one of our use cases. This is what makes this TDD style devilishly efficient. But nothing new here (or related to Outside-In Diamond 🔷). It's just classic old Outside-In bringing its intrinsic benefits.

Outside-In vs. Classicist TDD


A frugal style

Indeed, this outside-in dynamic (driven exclusively by external uses) combined with triangulation allows us to stick to the YAGNI principle (You Ain't Gonna Need It).

One may notice here that triangulation is more often associated with classicist TDD (i.e., the Inside-Out workflow) than with Outside-In TDD. And that's one of the reasons why it is often complicated for some TDD practitioners to realize that these are orthogonal topics.

As with the traditional double-loop presented by Nat PRYCE and Steve FREEMAN, we start by writing a first acceptance test against a black box (i.e., our System-Service-API-Application) which does not yet exist and whose outlines will be sketched from our interaction with it. 


For instance, if I'm coding a web API, my subject under test (the entry point of my black box) will usually be a web controller on which I'm going to call a public method (i.e., our very first Operation for this System-Service...).

Once our very first test is red (RED), I will generally turn it green as quickly as possible by hard-coding in my web controller the response expected by my test (GREEN). The refactoring phase will undoubtedly be an opportunity to do some design by bringing out a hexagon type (if I'm using a Hexagonal Architecture) or any facade for my domain (REFACTOR).
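To make this first loop concrete, here is a minimal sketch in Python (the author works in C#; the controller, method and expected values are all hypothetical names of mine): the very first acceptance test drives a controller that simply hard-codes the expected response.

```python
# Hypothetical first RED -> GREEN step: the SUT is the entry point of our
# black box (a web controller), and the quickest way to GREEN is to
# hard-code the response the test expects.

class RoomAvailabilityController:
    """Entry point of our System-Service-API-Application (our SUT)."""

    def get_available_rooms(self, hotel_id: str) -> list:
        return ["Suite 101"]  # hard-coded on purpose (GREEN as fast as possible)


def test_should_return_available_rooms():
    # First acceptance test, written against the (initially non-existing) black box
    controller = RoomAvailabilityController()
    assert controller.get_available_rooms("ritz-paris") == ["Suite 101"]


test_should_return_available_rooms()
```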


I will then continue by writing a second acceptance test (notice that we are still in the big loop), which will usually suggest another case for the same operation (RED). To turn green as quickly as possible, I will usually add another hard-coded value in my implementation code, while preserving my first test case using an if statement (GREEN).

The refactoring-design phase will then perhaps give me the opportunity to introduce a right-side port (in the hexagonal architecture sense) in order to start replacing the hard-coded value in my domain with a value found or computed from another hard-coded value returned by my right-side port. Usually, this will slightly impact the initialization code of the test by injecting the right-side port (interface) into our SUT/Subject Under Test. In our case: our web controller (REFACTOR).
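A sketch of what this REFACTOR step might look like, assuming a hexagonal-style right-side port (all names are hypothetical, and Python stands in for the author's C#): the port interface is injected into the controller, and the hard-coded value moves behind it.

```python
from typing import Protocol


class RoomRepository(Protocol):
    """Right-side port (in the hexagonal architecture sense)."""
    def booked_rooms_for(self, hotel_id: str) -> list: ...


class StubRoomRepository:
    """The hard-coded value now lives behind the port."""
    def booked_rooms_for(self, hotel_id: str) -> list:
        return ["Suite 102"]


class RoomAvailabilityController:
    def __init__(self, repository: RoomRepository):
        self._repository = repository  # port injected into the SUT

    def get_available_rooms(self, hotel_id: str) -> list:
        all_rooms = ["Suite 101", "Suite 102"]
        booked = self._repository.booked_rooms_for(hotel_id)
        # The response is now *computed* from the port's value
        return [room for room in all_rooms if room not in booked]


# The test initialization code is slightly impacted: it injects the port
controller = RoomAvailabilityController(StubRoomRepository())
assert controller.get_available_rooms("ritz-paris") == ["Suite 101"]
```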


One and a half loop... and not a mockist style

At this point, I still haven't written a fine-grained test (the one belonging to the small/inner loop), and I'm about to write my 3rd acceptance test (still in the big loop). This will add a new case to be handled by my current API operation (RED). And this is where triangulation comes in. 


In the context of TDD, triangulation is the practice of generalizing an implementation only from the second or third (hard-coded) case.


Let's see that in action.

Here we can make our 3rd test pass as quickly as possible by adding a second if statement and a third hard-coded value (GREEN). Then we can dedicate our design step to refactoring our implementation in baby-steps mode, through what looks like a strangler pattern strategy (as once pointed out to me by my friend and talented eXtreme Programmer Philippe BOURGAU).


This can be achieved by positioning our new domain material in the implementation code just before the existing if statements. Once our old implementation is eclipsed by the new one, we can remove all these hard coded values from our code (REFACTOR).
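A sketch of this eclipse-then-delete move, with invented values (the if chain below stands for the first three hard-coded GREEN steps): the generalized code is placed before the old if statements, which become unreachable and can then be removed.

```python
def vat_rate_for(country_code: str) -> float:
    # NEW: generalized implementation, introduced in baby steps
    rates = {"FR": 0.20, "DE": 0.19, "ES": 0.21}
    if country_code in rates:
        return rates[country_code]

    # OLD: the three hard-coded cases from our first GREEN steps.
    # They are now eclipsed (unreachable) and safe to delete (REFACTOR).
    if country_code == "FR":
        return 0.20
    if country_code == "DE":
        return 0.19
    return 0.21


# The three acceptance-test cases stay GREEN during the whole move
assert vat_rate_for("FR") == 0.20
assert vat_rate_for("DE") == 0.19
assert vat_rate_for("ES") == 0.21
```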

The idea is to move with baby steps and without breaking our tests (note: in C#, I'm working with NCrunch, a live test runner that automatically builds and runs all my tests in the background as soon as I change my code. This is really helpful and acts as gamification, helping me stay GREEN during the REFACTORING step through baby steps).


Having mostly Acceptance tests does not mean that we aren't taking baby steps! (far from it)


This is the appropriate moment to create intermediate fine-grained (unit) tests in what we call the double, small, or inner loop. My personal heuristic over time is to only write those tests if I feel the need for it (i.e., when facing a difficulty or when stuck in a "tunnel effect" for more than 10 minutes).


In the end, the Outside-in Diamond 🔷 style doesn't put too much pressure on writing lots of little fine-grained (unit) tests. It's à la carte (depending on the maturity of the people I'm mobbing or pairing with). 


One and a half loop - Outside-In Diamond TDD


Moreover, these intermediate tests are very often removed once the implementation is complete. A bit like removing the wedges and wooden battens that helped us assemble a concrete wall once it has dried.

I will probably come back to the Outside-in Diamond 🔷 workflow later (and how it fits with design decisions) in an upcoming live coding session. That should make it easier to illustrate all this.

Now let's see what the Outside-In Diamond 🔷 TDD acceptance tests look like.


Focus on our Acceptance tests

Yes, let's zoom-in on our favorite tests (the Acceptance ones). These are:

  • Short: no more than 7-15 lines of code per test. To achieve this we use builders to initialize the test context, fuzzers to quickly and randomly generate values, and helper methods for intention-driven assertions (expressed in 1 line). In addition to relieving our mental load when reading our tests, the advantage of having short tests lies in being as free of "implementation details" as possible and as "intention"-oriented as possible. Intentions are generally less fragile than implementations.
Sample of Outside-In Diamond TDD
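As an illustration only (the builder, fuzzer and assertion helper below are all invented for this sketch, and Python stands in for the author's C#), such a short test might look like the 4-line test at the bottom, everything else being reusable test infrastructure:

```python
import random


class Fuzzer:
    """Tiny stand-in for a real fuzzing library (e.g. Diverse in C#)."""
    def __init__(self, seed=None):
        self._random = random.Random(seed)

    def generate_first_name(self):
        return self._random.choice(["Ada", "Grace", "Linus", "Barbara"])


class Hotel:
    def __init__(self, available_rooms):
        self._available = list(available_rooms)

    def book(self, room, guest_name):
        if room in self._available:
            self._available.remove(room)
            return {"status": "CONFIRMED", "room": room, "guest": guest_name}
        return {"status": "DECLINED"}


class HotelBuilder:
    """Builder exposing domain intentions, hiding setup details."""
    def __init__(self):
        self._rooms = []

    def with_available_room(self, room):
        self._rooms.append(room)
        return self

    def build(self):
        return Hotel(self._rooms)


def a_hotel():
    return HotelBuilder()


def check_that_booking_is_confirmed_for(confirmation, guest):
    """Intention-driven assertion helper (1 line in the test)."""
    assert confirmation["status"] == "CONFIRMED" and confirmation["guest"] == guest


fuzzer = Fuzzer()


def test_should_confirm_booking_for_an_available_room():
    hotel = a_hotel().with_available_room("Suite 101").build()
    guest_name = fuzzer.generate_first_name()
    confirmation = hotel.book("Suite 101", guest_name)
    check_that_booking_is_confirmed_for(confirmation, guest_name)


test_should_confirm_booking_for_an_available_room()
```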


  • Domain-Driven: indeed, our tests should express Domain concerns with words belonging to the Context under consideration (see "Ubiquitous Language" and "Bounded Contexts" from DDD). We will particularly use builders to declare business intentions without getting lost in implementation details. Good test builders publicly expose domain intentions and behaviors, and fully encapsulate implementation details and stub configuration privately.
Ubiquitous Language DDD Outside-In Diamond TDD
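A sketch of such a builder (all names are illustrative, not from the article): its public API speaks the domain language while the stub wiring stays private.

```python
class AccountRepositoryStub:
    """Implementation detail: stub for a right-side port."""
    def __init__(self):
        self._balances = {}

    def balance_of(self, iban):
        return self._balances.get(iban, 0)


class BankingService:
    def __init__(self, repository):
        self._repository = repository

    def can_withdraw(self, iban, amount):
        return self._repository.balance_of(iban) >= amount


class BankingServiceBuilder:
    def __init__(self):
        self._stub = AccountRepositoryStub()  # hidden from the tests

    def with_account_balance(self, iban, balance):
        # Public, domain-driven intention; private stub configuration inside
        self._stub._balances[iban] = balance
        return self

    def build(self):
        return BankingService(self._stub)


service = BankingServiceBuilder().with_account_balance("FR76-1234", 500).build()
assert service.can_withdraw("FR76-1234", 300)
assert not service.can_withdraw("FR76-1234", 900)
```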


  • Blazing Fast: between sub-millisecond and 400 milliseconds max per test. To achieve this, we use stubs to avoid any I/O (always the most expensive part of a latency budget). Stubs will only be used for systems external to our System-Service-API-Application. Warning: Outside-In Diamond TDD is an outside-in style, but it is not a "mockist" style.

  • Isolated and autonomous: no use of member variables or mutable private fields belonging to the test suite|fixture. No initialization via [Setup] methods for the test suite|fixture either. Even if we use builders and fuzzers, every creation (with its intentions) must be declared from within the test and stored in local variables (inaccessible to other tests). This helps avoid the cognitive overload that occurs when people have to go elsewhere to check what is already prepared or initialized before each test. And please, don't get me started on the painful TestFixture|Suite inheritance and setup made in a TestSuite base class ;-)
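A tiny sketch of this autonomy (hypothetical names): each test declares and stores its own context in local variables, with no shared fields and no setup method.

```python
class Member:
    def __init__(self, status):
        self._status = status

    def discount(self):
        return 0.10 if self._status == "GOLD" else 0.0


class MemberBuilder:
    def __init__(self):
        self._status = "REGULAR"

    def with_status(self, status):
        self._status = status
        return self

    def build(self):
        return Member(self._status)


def a_member():
    return MemberBuilder()


def test_gold_member_gets_a_discount():
    # Local, self-contained context: nothing shared with other tests
    member = a_member().with_status("GOLD").build()
    assert member.discount() == 0.10


def test_regular_member_gets_no_discount():
    # Re-declared here, not inherited from a fixture or [Setup] method
    member = a_member().with_status("REGULAR").build()
    assert member.discount() == 0.0


test_gold_member_gets_a_discount()
test_regular_member_gets_no_discount()
```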

  • Deterministic: even if we intensively use fuzzers that propose random values, a means must be provided (in general by the fuzzing library) to replay the same test under exactly the same conditions (in general by reusing the same seed). This is essential in order to reproduce and understand any failure that occurred during a test execution (on the build server or on a dev workstation).
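The idea can be sketched with Python's seedable `random` module standing in for a real fuzzing library such as Diverse (the `Fuzzer` class and its methods are my invention): logging the seed on failure lets you replay the exact same values.

```python
import random


class Fuzzer:
    """Minimal seedable fuzzer sketch (a real library does much more)."""

    def __init__(self, seed=None):
        # Keep the seed so it can be logged when a test fails...
        self.seed = seed if seed is not None else random.randrange(1_000_000)
        self._random = random.Random(self.seed)

    def generate_age(self) -> int:
        return self._random.randint(18, 99)


first_run = Fuzzer(seed=42)  # seed logged during a failing build
replay = Fuzzer(seed=42)     # ...and reused to reproduce the failure
assert [first_run.generate_age() for _ in range(5)] == \
       [replay.generate_age() for _ in range(5)]
```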

  • Behavior-driven: we try to hide everything that is technical. These tests are always doing the same thing: we ask our black box (to do) something and we check that its answers suit our expectations. The checks (or assertions made) are generally encapsulated in test helper methods that allow us to be concise and business-oriented (regardless of the assertion library you use and the number of checks).
Outside-In Diamond TDD Acceptance test sample


  • Similar: we highly rely on the power of sameness (for instance, reusing the same variable names for the same domain concepts across tests) in our test code in order to smooth future test refactorings (e.g., to ease possible search and replace). We consider our test code to be production code. We improve it and refactor it regularly too.

  • Antifragile: by default, thanks to all the features mentioned, one can easily change and refactor our implementation code without breaking the contracts exercised by our tests. To put it another way, our tests age well and don't break when we change the internal structure of our code. They should only break if we introduce a bug or a regression.

  • Broad spectrum: even if they hide it well (thanks to builders and helpers), our Acceptance tests cover a broad spectrum of our code base and include the real adapters in case of hexagonal architecture (instead of stubbing them). This is the opposite of what people usually recommend, but it is the most effective testing strategy that I have ended up with over the past 8 years of putting hexagonal architectures in production (in different contexts and for different customers). Since this is no more than an argument from authority ;-) I will dedicate the next article to illustrating and explaining all these trade-offs.


Next episode: integration tests & hexagonal architecture tradeoffs

In order to write articles that are a little shorter than usual, I will not describe here the characteristics of our contract tests (i.e. our integration tests). These will be presented in detail in our next article dedicated to our test strategy in case of a hexagonal architecture.

Talk to you soon & Happy testing



PS: Since it has been asked many times, the fuzzer we use at work is Diverse.

Friday, 12 March 2021

Outside-in Diamond 🔷 TDD #1 - a style made from (& for) ordinary people

This article is the first of a series devoted to a style of TDD that I have gradually elaborated throughout my 16 years of TDD practice in very different contexts. Each article will look at a particular theme around this style (what concrete problems does it tackle? What are its main characteristics? What are the most frequent trade-offs and objections one could make? etc.). This first article takes up the reasons that have motivated the emergence of this model over the years: to be compatible with the psychology of dev people, regardless of their level or maturity.


Note: For those who would like to know more about it, you can refer to the talk I recently gave to DDD Africa, available here: https://youtu.be/djdMp9i04Sc





Sad Panda


Less than 2 years ago, I started to work for a large international hotel group. One of my missions was to transform 2 back-end developers (working on all web sites and mobile apps) into a real "API team" (there are 6 of us now). Our goal: to support the different business lines on every possible topic (marketing, distribution, finance, etc.).


When I joined this client, I found the people, the platform and the domain very fascinating, but their testing strategy was definitely not one of their strengths. Despite competent and motivated people, there was a whole bunch of suboptimal practices and lots of hesitation, ultimately all because there were not enough tests. Indeed, writing tests was far from being a regular practice, and the only person actually writing tests was applying a test-after strategy.


As a result, we had an infernal PR-based system of branches and quality-gate code reviews lasting several days (despite the fact that there were only 3 people working on the same code base). As you can guess, we were very far from continuous delivery… People also felt a little bit infantilized by the numerous remarks and round trips made with the tech lead during the gated code reviews. Due to the lack of pair programming and automated tests, this was mandatory for everyone before being able to push some code into production.


And if you wanted to add tests, you might be intimidated. Indeed, the overall design of the API was quite complex and our test suite was incredibly painful and complex too. We had never-ending test suite setups (more than 700 lines of test initialization in one test suite class, for instance), side effects everywhere (with initialization made through test suite private fields/members) and strange beasts all over the place (i.e., half concrete implementation / half stub) thanks to some dark magic with our mock framework (partial stubs… trust me, you don't want to know ;-). For a reason that I still don't get (because he's so smart otherwise), the tech lead had a very strong personal principle he insisted on with everyone: "one should never change our implementation design for testing reasons" (do not add a constructor for testing purposes, for instance…)


In short: when they existed, the tests were considered second-class code. They were complex and rarely refactored… Enough to tie knots in our brains on a daily basis, and not attractive at all for those who did not yet write tests… Every time we touched the code base, we needed both a code review and a huge QA end-to-end test campaign. Some people were a little bit discouraged and disengaged by this bureaucratic burden.


Typically, the kind of situation I have seen a dozen times here and there throughout my career. Because unfortunately, this is not an isolated case. Throughout all these years of practicing Test Driven Development, I have had to face the facts:



There are still very few people who practice TDD (unfortunately)


I've added "unfortunately" because when I see what we are able to do now with this API team on other components, when I think back to what this practice and way of thinking/coding has brought to me personally (in terms of serenity, efficiency and pleasure), I tell myself that it's a shame that TDD is still not shared more widely.


I already said it multiple times in articles or conferences in French: from a personal perspective, TDD has acted for me as a bulwark against many of my former biases (procrastination, blank page anxiety / analysis paralysis, doubts). Err… actually I still procrastinate sometimes but never while coding anymore 😉


So yes, I definitely think that it's a shame that most people have not tried it, nor experienced it in pleasant conditions. I talk about pleasant conditions because there are indeed many pitfalls related to this great practice (and I think I've fallen into every one of them over the years). One can even turn a nice situation into a quagmire when left alone, without help.


Which leads me to my second observation: those who have tried it have also often stopped along the way, instead of persevering in the face of certain implementation difficulties. TDD is one of the most trolled straw men in IT (and not only on Hacker News ;-) People talk about something they think it is, but which it actually isn't. Sad Panda.


But to be honest, some of these difficulties with TDD are ubiquitous. I have found them so many times, in so many different teams with various levels of maturity, in so many different types of companies, and in so many different domains (finance, health, hospitality, energy, transport…) during all these years.


Always the same ones, again and again. These 3 recurrent difficulties are:





Disclaimer: Some people may notice here that testing is just a side effect of TDD (which serves a lot more than that). Of course, that is true. But I think that testing is nonetheless an important chapter too.


That is why we will come back to each of these commonly encountered difficulties in the next article. In the meantime, I would like to zoom in today on what I find problematic for people's overall understanding.


A negative influence that I have observed over and over again on people: the mental model of the pyramid of tests (and the confusing term "unit test").



The pyramid of confusion



Pyramid of Tests (of confusion)


Okay. This is not really breaking news. Many people have already approached this theme over the last decade or more, warning everyone against the fuzziness of this mental model and the pitfalls of understanding one should not fall into with this metaphor (see Seb Rose in particular: https://cucumber.io/blog/bdd/eviscerating-the-test-automation-pyramid/ ).


But I will be rougher with this "monument" that the Pyramid of Tests has become. Indeed, all these years of noticing the harmful effects of this mental model on teams and people's code bases have finally made me think that this pyramid is really harmful for the vast majority. It's not "just" a "good opportunity to realize that one may write different kinds of tests". In most cases it's a cargo-cult factory.


Same applies to the term “unit test”.



Lost in translation


Unit test is confusing

The fact that we are still using this "unit test" terminology, despite the fact that we all know it is confusing for everyone, still pisses me off. Want to start an argument on Twitter? You just have to say something about "unit tests" and you will see how many different definitions and mental models people will suggest or impose on you (implicit for them).


The definition I like is not the one the majority of people I've met retain. I like the "unit test" definition from Kent Beck:


“tests that run in isolation from other tests”


And don’t miss that specific point: it’s about the isolation of the test, not the isolation of the topic/system under test (as brilliantly underlined by Ian Cooper in his great talk “TDD, Where Did It All Go Wrong”: https://www.youtube.com/watch?v=EZ05e7EMOLM).


But there is one thing for sure: when you ask people what a “unit test” is… 90% of the answers will be “a test of a tiny module, a type, a class, a method, a function…”


Hence, we can try whatever we want… it doesn’t seem to reach the mainstream over the years, despite the books, the conferences, the blog posts, the coaching tips, etc. I personally tried to foster Beck’s definition all around me over the last decade and more, but I have to admit: this is a game lost in advance. Definitely.


We aren’t talking about bullshit here, but it somehow reminds me of Alberto Brandolini’s law:

 

“The amount of energy needed to refute bullshit is an order of magnitude larger than to produce it”

 

This is why I stopped being precise with this concept of “unit test” over the years. I finally avoided, as much as possible, using the “unit test” term and referring to its bundled pyramid (unless I’m having a discussion with an expert on the topic).



And I’m not the only one. Other people have also avoided the “unit test” terminology over the years (like GeePaw Hill with his concept of “Microtests”, for instance). My strategy was to simply focus on Acceptance tests instead (which are Component/API tests). The very same coarse-grained “Acceptance tests” Nat Pryce and Steve Freeman talk about in their great GOOS book (http://www.growing-object-oriented-software.com/).


It’s just a pragmatic decision in order to avoid confusion and misunderstanding all around me again and again.



Anyway, as with the term “unit test”, the benefit-cost ratio of the Pyramid being really negative, I was looking for a better way to have discussions about testing strategies with teams and beginners.



Testing Gem


While I was looking for an alternative to this bloody pyramid a few years ago, I finally came up with the Diamond shape (see https://twitter.com/tpierrain/status/964018082434945024?s=20 ).


As a visual incentive, it fits perfectly with my invitation to write more acceptance tests (coarse-grained) than fine-grained ones (the latter being those that non-expert people continue to call "unit tests").


Outside-in Diamond TDD


The "diamond" shape largely clarifies things for beginners. Me and some friends had the confirmation of it many times in my experiences at work.



Make the implicit, explicit


In my experience, the

“do not test implementations, test behaviors instead”

recommendation that we used to bundle as a reading guide with the pyramid was not enough. It still left lots of beginners stranded (because it is too vague and too complicated to understand and translate into action).


On the other hand, I found the following hint much more readable and actionable for everyone:

“write more (coarse-grained) acceptance tests than fine-grained tests (what the majority of people, but not experts, call unit tests)”


At least that's what I've experienced around me for the last 10 years now.



BDD or not BDD? (that is a question)


Even if the diamond is more precise and more readable for the vast majority of people I have met, there is still something that bothers some of them. It usually comes up as one of these 2 questions:


  • “Are your acceptance tests written in Gherkin (a.k.a. Given-When-Then, a.k.a. “BDD” tests)?”
  • “Are your acceptance tests expressed in classic code, by and for dev people?”


My answer is rather yes to the latter. As far as I'm concerned, I've decided to only pay the cost of adaptation with a BDD technical Framework if I have business stakeholders who are interested in it. 


Which, in the end, is very, very rare.


I do BDD very often, but mostly the “discovery” part, rarely the “formulation” part (Gherkin land) and almost no longer the “automation” part (Cucumber, Specflow, etc.). But whatever the context in which I work, these 2 types of tests ("Acceptance" or "BDD" to put it simply) are coarse grained acceptance tests targeting the Behavior of a component, of a service or of an API.


They mainly differ in their expressiveness (because of their target audience) and in the additional implementation cost of the "BDD-Gherkin" support.


For the record, acceptance tests are what most experts consider to be true unit tests (whereas mainstream dev people continue to think of "unit tests" as type- or class-level fine-grained tests).



How my TDD practice evolved: the origins of Outside-in Diamond 🔷 TDD


Beyond the diamond model illustrating the overall testing strategy, Outside-in Diamond 🔷 TDD is both a style and a workflow: some important expression characteristics and a way of designing and writing software.


We will see all of this in the future articles of this series. But before that, I'd like to end this article by emphasizing how I converged on this style (and its compromises).

The diamond is just a marketing or mnemonic aspect of Outside-in Diamond 🔷 TDD.



Fueled with Empathy


All the trade-offs that I have adopted over the last decade have the same starting point: the psychology and the reactions of the dev people I have worked or discussed with (in meetup events, user groups, conferences).


When facing a recurrent problem, I tried to bend my practice so that it fixes the situation (instead of thinking that one can change people). Here are some examples:


-----

PROBLEM: The testing pyramid misleads people despite our warnings and disclaimers, year after year.


TRADE-OFF: I searched for another model that speaks more to the majority (and not only to the experts) => the Diamond


-----

PROBLEM: Mainstream people don’t share the same definition of “unit tests” and really struggle to understand what kind of tests to write.


TRADE-OFF: I interpret “unit test” like the vast majority does (i.e., fine-grained tests against classes, etc.) and I encourage people to talk about, and to write, (coarse-grained) Acceptance tests instead.

 

-----

PROBLEM: We have tiny bugs in technical code because, as devs, we love to write "business" tests BUT we struggle to put as much energy and care into slower and more technical contract / integration tests (i.e., “happy path” kingdom ;-)


TRADE-OFF: I stub only the last mile, just before the I/O, so that all the code before it may also be included in our blazing-fast Acceptance tests.

 

-----

PROBLEM: (variant) When using Hexagonal Architecture, our Adapters on the infrastructure side often have blind spots & silly bugs because it is said everywhere that we should cover the adapters only in Contract tests (slow, painful to write and thus often considered second-class citizens).


TRADE-OFF: I suggested another unconventional but successful testing strategy that includes the adaptation code of our adapters in our acceptance tests. These tests remain blazing fast and without I/O, and they nevertheless remain concise and Domain-driven.
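A sketch of this trade-off (hypothetical names, Python instead of the author's C#): the adapter's real mapping code is exercised by the acceptance test, and only the lowest-level I/O client is stubbed.

```python
class HttpClientStub:
    """The "last mile" before real I/O: the only thing we stub."""
    def get(self, url):
        return {"room_ids": ["101", "102"]}  # canned payload, no network call


class RoomCatalogAdapter:
    """Real adapter code (mapping external payloads to our domain),
    exercised by the acceptance test instead of being stubbed away."""
    def __init__(self, http_client):
        self._http = http_client

    def available_rooms(self):
        payload = self._http.get("https://api.example.org/rooms")
        return [f"Room {room_id}" for room_id in payload["room_ids"]]


adapter = RoomCatalogAdapter(HttpClientStub())
assert adapter.available_rooms() == ["Room 101", "Room 102"]
```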

 

-----

PROBLEM: Our test code and setup are too long and complex, and introduce mental burden and cognitive overload.


TRADE-OFF: I shortened and simplified Acceptance tests down to no more than 8-12 lines thanks to domain-driven helpers, fuzzers and builders. Everything is driven directly from the test (and not elsewhere).


Etc. etc.



A long Journey


The result of this process, which took years, is what my partner (Bruno Boucard) and I now call Outside-in Diamond 🔷 TDD.


The basis of this style has been there for 10 years, and it is only recently that I was able to solve some problems and reach a presentable form (with the help of fuzzers, in addition to the builders, for my acceptance tests, which remain short, antifragile & Domain-driven).


Now that we have covered the why, the next articles of this series will talk about the how and the associated trade-offs.


Stay tuned!


Note: for those who would like to dive deeper into this style of TDD, you can refer to the recent talk I gave on the subject for DDD Africa: https://youtu.be/djdMp9i04Sc


TDD is evolving


 

Sunday, 29 November 2020

Hexagonal or not Hexagonal?

TL;DR: hexagonal architecture is a fabulous pattern that has more advantages than the ones it was originally created for. One can hold an orthodox vision in which patterns do not evolve: that it is important to keep Alistair Cockburn’s pattern as it was described back in the day. Or one can think that some patterns may evolve, and that Hexagonal Architecture has more facets than we think. This article discusses and compares these 2 approaches, takes an example with one new facet related to DDD, and then asks an important question of its creator and the dev community at large (at least important for the author of this post ;-)

 

First Adapters were technological

Hexagonal architecture was initiated during a project Alistair Cockburn once worked on, related to a weather forecast system. His main objective seemed to be to support a great number of different technologies to receive weather conditions as inputs, but also to connect to and publish the results of their weather forecasts towards a huge number of external systems.

This explains why he came up with the concept of interchangeable plugins (now we say ‘configurable dependencies’, following Gerard Meszaros) for technologies at the borders of the app, and a stable application-business code inside (what Alistair called the “hexagon”).

The pattern had some traction at the time (see the amazing GOOS book: http://www.growing-object-oriented-software.com talking about it, for instance), but it was more than a decade later that a community brought this pattern back from its ashes ;-) and made it a new avenue to be explored deeper. This community gathered people who wanted to focus more on the business value of our software (rather than on pure tech fads): the Domain Driven Design (DDD) community.

 

Then came the DDD practitioners

As a member of this DDD community, I found the pattern very interesting for many other reasons, the main one being the capability to protect and isolate my business domain code from the technical stacks.

Why should I protect my domain code from the outside world?!?

I still remember that day in 2008 when I witnessed a bad situation where a major banking app had to be fully rewritten after we had collectively decided to ban a dangerous low-latency multicast messaging system (I was working for that bank at the time). We had taken that decision because we were all suffering from serious and regular network outages due to multicast NACK storms. We were in a bad and fragile situation where any slow consumer could break the whole network infrastructure shared by many other applications and critical systems of the bank. Sad panda.

Why couldn’t this dev team just switch from one middleware tech to another? Because their whole domain model was melted together with, and built around, that middleware’s data format (i.e., instances of XMessages everywhere in their core domain 😳).

Moreover, the entire threading model of this application was built upon the one from the low-latency middleware library. In other words: whether their code needed to be thread-safe or not depended on the premises and guarantees provided so far by the middleware lib. Once you removed that lib, you suddenly lost all these thread-safety guarantees and premises. The whole threading model of this complicated application would have vanished, turning the app unusable with tons of new race conditions, Heisenbugs and other deadlocks. When it’s an app that can lose millions of euros per minute, you don’t make this decision on a coin toss ;-)

I’m rarely a proponent of rebuilding from scratch and often prefer refactoring existing code that already brings value. But in that case, where everything was so entangled... it was really the best thing to do. But I’ll let you imagine how challenging it was to sell this kind of full reconstruction to the business...

Anyway. I wanted to tell you this story in order to emphasize that splitting your domain code from the technology is a real need. It has real implications and benefits for concrete apps. It’s not a simple architect’s coquetry. I’m not even talking about the interesting capability of switching from one techno to another in a snap, as needed by Alistair. No. Just a proper split between our domain code and the infra-tech one.

It’s crucial. But it’s just the beginning of this journey.


Models Adapters

Indeed. With his pattern, Alistair was trying to protect and keep his application domain code stable in the face of the challenge of having a huge number of different technologies all around (the reason why I talked about technological Adapters earlier). But splitting and preserving our domain code from the infrastructure one may not be enough.

As DDD practitioners, we may want to protect our domain code from more than that.

As DDD practitioners we may also want to protect it from being polluted by other models and external perspectives (one can call them other Bounded Contexts).

>>>>> The sub part in grey below explains the basics of DDD you need to understand the end of the post. You can skip it if you already know what are Bounded Contexts (BCs) and Anti-corruption Layers (ACLs).


Bounded Contexts?

A Bounded Context (BC) is a linguistic and conceptual boundary where words, people and models that apply are consistent and used to solve specific Domain problems. Specific problems of business people who speak the same language and share the same concerns (e.g. the Billing context, the Pre-sales context...)

DDD recommends that every BC have its own models, tailor-made for its own specific usages. For instance, a ‘Customer’ in a Pre-sales BC will only have information like age, socio-professional category, hours of availability and a list of products and services they already bought, etc. Whereas a ‘Customer’ in the Accounting-Billing BC will have more specific information such as address, payment method, etc. DDD advises us against having a single ‘Customer’ model shared by every BC, in order to avoid conflicts and misinterpretation of words, and to allow polysemy between 2 different BCs without incident.
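To make this concrete, here is a minimal sketch of two tailor-made ‘Customer’ models, one per Bounded Context (all class and field names below are illustrative, not from any real system):

```python
from dataclasses import dataclass, field
from typing import List

# presales/customer.py -- the 'Customer' as the Pre-sales BC needs it
@dataclass
class PreSalesCustomer:
    age: int
    socio_professional_category: str
    availability_hours: str
    purchased_products: List[str] = field(default_factory=list)

# billing/customer.py -- the 'Customer' as the Accounting-Billing BC needs it
@dataclass
class BillingCustomer:
    address: str
    payment_method: str

# Polysemy without incident: both are a 'Customer' within their own Context,
# but nothing forces them to share fields or to evolve together.
```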

The interesting thing with the Bounded Contexts of DDD is that they can be designed autonomously from the other ones (the reason why some people say that DDD is a way to do Agile architecture).

DDD also comes with lots of strategic patterns in order to deal with BCs relationships. One of them being the famous Anti-Corruption Layer (aka. ACL).

Anti-corruption Layer FTW

As I already explained in another post, the Anti-Corruption Layer is a pattern popularized by DDD which allows a (Bounded) Context not to find itself polluted by the inconsistency or the frequent changes of a model coming from another Context with which it must work.

>>>>> 

An ACL is like a shock absorber between 2 Contexts. We use it to reduce the coupling with another Context in our code. By doing so, we isolate our coupling in a small and well protected box (with limited access): the ACL.

Do you see where I am going with this?

ACL is a kind of Adapter. But an Adapter for Models, not purely technological stuff.

As I already wrote elsewhere, an anti-corruption layer can be implemented in various forms.
At least 3:

  • external gateway/service/intermediate API
  • dedicated in-middle database (for old systems)
  • or just an in-proc adapter within a hexagonal architecture

And this is the last version that will interest us for the rest of this post.

Indeed, the [technological] adapters of Alistair’s first description of his pattern can also be a sweet spot for the [models] adapters we need when we focus on languages and various models (like when we practice DDD).

And if we agree on the occasional need for Models Adapters (aka ACLs) in our Hexagonal Architectures, we can start discussing the options and the design tradeoffs.

 

ACLs in Hexagonal Architecture, OK. But where? 

The next question we may ask ourselves is: where should we put our ACL code when we want it within our Hexagonal Architecture (i.e. the last choice of the three presented above)? There have been debates about it on Twitter recently. And the main question was:

Should we put the ACL outside or inside the hexagon?

As an important disclaimer, I would say that there is no silver bullet nor unique answer to that question. As always with software architecture, the design choices and the tradeoffs we make should be driven by our context, our requirements and set of constraints (either technical, business, economical, sourcing, cultural, socio-technical...).

That being said, let’s compare these 2 options.


Option 1: ACL as part of the Hexagon

Spoiler alert: I’m not a big fan of it. To be honest: been there, done that, suffered a little bit with extra mapping layers (new spots for bugs). So not for me anymore. But since it has recently been discussed on Twitter, I think it’s important to present this configuration.

 

This is the ‘technological’ or, if I dare say, the orthodox option. The one saying that driven Ports and Adapters on the right side should only expose what is available outside as external dependencies, without trying to hide the number nor the complexity of the external elements we have to talk to or orchestrate with.

We usually pick that option if we consider that coping with other teams’ structural architecture is part of our application or domain code. Not their technical details, of course. But their existence (i.e. how many counterparts, APIs, DBs or messaging systems we need to interact with).

And for that, we necessarily need a counterpart model in our hexagon FOR EVERY ONE OF THEM!

Why? Remember, we don’t want our hexagon to be polluted by external technical DTOs or any other serialization formats. So, for every one of them, there will be an adaptation (in the proper driven Adapter) in order to map it to its non-technical version: the one our hexagonal code needs to deal with it. A Hexagonal-compliant model (represented in blue in my sketch above).

It’s important here to visualize that our ‘Hexagonal code’ in this option is composed of the Domain code + the ACL one (but an ACL that won’t have to deal with technical formats).
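As a minimal illustration of this first option (every name and DTO below is hypothetical), a driven Adapter maps the technical DTO of one external backend to its hexagon-compliant intermediate model; remember that this mapping has to be repeated for every single external dependency:

```python
from dataclasses import dataclass

# Inside the hexagon: the non-technical intermediate model (the 'blue' one)
# for ONE of the external dependencies; option 1 needs one per dependency.
@dataclass
class RoomAvailability:
    room_id: str
    is_free: bool

# In the driven Adapter: mapping the technical DTO (e.g. parsed JSON)
# to the hexagon-compliant model.
def to_hexagon_model(dto: dict) -> RoomAvailability:
    return RoomAvailability(room_id=dto["ROOM_ID"],
                            is_free=(dto["STATUS"] == "FREE"))

availability = to_hexagon_model({"ROOM_ID": "R-42", "STATUS": "FREE"})
```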

Why I abandoned this option over the years with my projects

To conclude with that first option, I would say that there are 2 main reasons why I abandoned it in lots of contexts:

  1. It forces us to create 1 non-technical intermediate model (in blue) for every external dependency (leading to 5 different models in our example). This is cumbersome and bug-prone. I saw lots of devs getting tired of all those extra layers for a very limited benefit (i.e. just to follow Hexagonal Architecture by the book)

  2. It opens the door for junior devs or newcomers to introduce technical stuff within our hexagon. “But... I thought it was ok since we already have an ACL in that module/assembly?!?” It blurs the clarity of the domain-infra code split.

These are the reasons why I progressively moved over the years towards another tradeoff. A new option which is one of my favorite heuristics now.

 

Option 2: ACL within a driven Adapter

This option consists of putting our ACL code into one or more adapters.

If we think it makes sense to merge 2 different Adapters into one ACL Adapter doing both the orchestration and the adaptation, we can even avoid coding the intermediate layers we previously had for every Adapter (in blue on the option 1 diagram). It means less plumbing code, less mapping and fewer bugs.


When something changes in one of the external backends used by the ACL Adapter (let’s say a pink square), the impact is even smaller compared to Option 1.

Indeed, all you have to change in that situation is the ACL code adapting this external backend concept to one of your domain code’s concepts (black circles on the diagrams).

With option 1, you would have more work: you would also have to change the corresponding intermediate data model in blue (with more risk of bugs in that extra mapping).
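Here is a sketch of this second option, with every name and payload invented for the example: a single ACL Adapter implements one domain-driven port and orchestrates two external backends behind it, keeping their DTOs out of the hexagon.

```python
# Driven port, shaped by the Domain's vocabulary only.
class RoomPricingPort:
    def price_for(self, room_id: str) -> float:
        raise NotImplementedError

# The ACL Adapter: two former Adapters merged into one.
class RoomPricingAclAdapter(RoomPricingPort):
    def __init__(self, rates_backend, taxes_backend):
        self._rates = rates_backend   # client of external backend #1
        self._taxes = taxes_backend   # client of external backend #2

    def price_for(self, room_id: str) -> float:
        # The external DTOs never leave this Adapter: no intermediate
        # hexagonal models (the blue ones of option 1) are needed.
        rate_dto = self._rates.get(room_id)          # e.g. {"base": 100.0, "zone": "EU"}
        tax_dto = self._taxes.get(rate_dto["zone"])  # e.g. {"rate": 0.2}
        return rate_dto["base"] * (1 + tax_dto["rate"])
```

If the rates backend changes its DTO, only this Adapter changes; the port and the domain code behind it stay untouched.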

 

Clarifications

As I am not a native English speaker (one may have noticed ;-P), I think it is worth clarifying several points before concluding:

  1. I’m not saying that one should always put an ACL in our hexagonal architecture

  2. I’m saying that if you need an ACL in your hexagonal architecture, you should definitely use the sweet spot of the Adapters to do so

  3. I’m saying that in some cases you can even merge 2 former Hexagonal Adapters into a unique one that will play the ACL role

  4. I always want my ports to be designed and driven by my own Domain needs and expressivity. I don’t want my domain code to use infrastructure concepts, or someone else’s external concepts that should not bother my domain logic. In other words: putting a driven port per domain concept, instead of a driven port per external system, is not a mistake. It’s an informed decision.



To conclude: hexagonal or not hexagonal?

When Alistair created his pattern, his main driver was to easily swap one technology for another without breaking his core domain code. Adaptability was his main driver and the big variance of technologies was his challenge.

I call this the "technological facet" of the pattern, recently confirmed by Alistair on Twitter:

“The Ports & Adapters pattern calls explicitly for a seam at a technology-semantic boundary” (https://twitter.com/totheralistair/status/1333088400459632640?s=21)

But one of the keys to the pattern's success, in my opinion, was Alistair's lack of detail in his original article. We all saw value in it, but almost everyone struggled to understand it. Almost all of us have had our own definition of what a port is and what an adapter is for years (you can open some books if you want to check that out ;-)

This fuzziness allowed some of us to play with it freely, and to discover new facets or properties of it.

One may discover that Ports and Adapters were top notch for testing (the "testability facet"). Another one may discover that they were truly awesome for properly splitting our domain code from the infrastructure one (the "tactical DDD facet"). Another one may find out that they help reduce the layering and complexity of our architecture (the "simplicity facet"). Another one may realize that the initial premise of the pattern was rarely the reason why people were using it; that the ports were often leaky abstractions preventing you from properly covering multiple technologies behind them (the devil is in the details). One may find the pattern interesting for having a good time to market and quick feedback about what is at stake (the "quick feedback facet"). One may find it interesting for postponing architectural decisions until the right moment (the "late architectural decisions facet"). One may find it interesting for adapting more than technologies: not only technologies but also external models, like I described here (the "strategic DDD facet").

Even Alistair evolved and changed his mind over the years about important things such as the symmetry, or absence of symmetry, of his pattern (now we all know that the left and right sides are asymmetrical ;-)

A multi-faceted pattern?

I personally think that the beauty of this pattern rests on our various interpretations and implementations. The fuzziness of the original Hexagonal Architecture article from Alistair also has something in common with Eric Evans' Blue Book: it’s so conducive to various interpretations that it ages really well.

Like the image of the hexagon itself, it’s a multi-faceted pattern. Maybe richer and more complex than even Alistair has realized so far.

My intent here is to ask Alistair and every one of you in the DEV community: should we keep talking about Hexagonal Architecture and its multiple facets, or should we start finding new names for some of those facets and awesome properties?

I’m more than keen to hear your answers.


 

Monday, 13 April 2020

Adapters are true Heralds of DDD

A few days ago, I posted an article to warn people about some pitfalls one should avoid when implementing a hexagonal architecture. One of these pitfalls is leaking part of our domain logic into one or more adapters (therefore onto the infrastructure-code side). While I'm convinced that this is something to be avoided at all costs (in order for our Domain code to remain coherent and not entangled with technical issues), I would like here to linger a little and revalue an area that I have myself gotten into the habit of slighting: the code outside the hexagon. Today, the passionate practitioner of Domain-Driven Design that I am wants to assert that the right-side adapters of our hexagonal architecture deserve much better than the treatment usually reserved for them. What if this infrastructure code could ultimately be just as important? What if it could be the place of major challenges for our systems?



The origin of DDD

Domain Driven Design brings together many things. But originally,

DDD started from an attempt by Eric Evans to give more autonomy to the people and different teams 

who worked - sometimes with too much friction - in the same organization where he was too (the finance industry).

To get there, Eric had the idea of bringing out his key notion of Bounded Contexts. A Bounded Context is a linguistic and conceptual boundary where the words, people and models that apply are consistent and used to solve specific Domain problems. Specific problems of business people who speak the same language and share the same concerns. We can therefore say that DDD militates for use-case-centric, specific models, modelled on groups of people.

Of course, all this is very specific to an organization and the business processes set up in it. In the same company we can find, for example, one or more Bounded Contexts related to "marketing", another to "accounting", another one specific to the "delivery service", etc. When doing DDD, we generally try to identify and characterize these different Contexts to know where we stand, whether a Context really brings business value or only relates to support functions. It will drive our effort and our ways of working.

In the process, we also try to identify the other Contexts with which we will have to collaborate. All this is usually illustrated in a diagram which represents Bounded Contexts (drawn like big potatoes) and which is called a Context Map (see the example below).

The entire strategic chapter of DDD describes these relationships between Contexts (human and technical) and provides us with patterns to manage this or that situation; to connect and communicate between Contexts given the power relations involved.

Here is a naïve example of a Context Map applied to a hotel group which distributes its hotel rooms through its own distribution platform (web and apps).

Context Map from the hospitality industry


SOA FTW!

As we read a lot of "different" things on the subject ;-) I would like to give here a few ideas of how this concept of Bounded Context relates to other, more tangible software artefacts you already know. To make the implicit explicit (which is another mantra of DDD). Also, I will quite naturally talk about service-oriented architecture here, since it is a style of architecture which I find particularly useful when done right (note: remove all those former SOAP things from your head ;-)

When practicing DDD, the size of a service does not matter; we are rather looking for its proper alignment with the business (and therefore with a Bounded Context).

This is why DDD may help a lot of people escape the micro-services quagmire (where people only focus on size...)

Within the same Bounded Context, we will often find one or more services (web APIs, in general, nowadays). Since Alistair's pattern is very handy for implementing a service, I will often use it to implement them.

The general case is to have one hexagon per process (which can be scaled out with multiple containers or VMs) but it is not mandatory.

In some cases, one can assemble multiple hexagons within the same process (in-proc). Every hexagon will interact with the others using the same ports, but with different Adapters in between that will only make in-proc calls (instead of network calls). We usually call this multiple-hexagon situation a Modular Monolith, which is a good thing actually (as opposed to old-school Monoliths or, even worse, distributed Monoliths / distributed big balls of mud ;-)
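A tiny sketch of that in-proc situation (all names here are made up): the driven port of a first hexagon is implemented by an Adapter that simply calls a second hexagon living in the same process, instead of going through the network.

```python
# Driven port of the first (e.g. Booking) hexagon.
class AvailabilityPort:
    def is_free(self, room_id: str) -> bool:
        raise NotImplementedError

# A second hexagon, assembled in the same process.
class InventoryHexagon:
    def free_rooms(self) -> set:
        return {"R-1", "R-7"}

# The in-proc Adapter: it implements the same port a network Adapter
# would implement, but the call stays inside the process.
class InProcAvailabilityAdapter(AvailabilityPort):
    def __init__(self, inventory: InventoryHexagon):
        self._inventory = inventory

    def is_free(self, room_id: str) -> bool:
        return room_id in self._inventory.free_rooms()
```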



One can have multiple hexagons or services in a Bounded Context.
Or even multiple hexagons in the same process (Modular Monolith)

A common DNA

Making autonomous models, splitting them by Context and preventing them from being entangled. Doesn't it ring a bell?

No surprise Alistair’s pattern suits DDD practitioners really well. The ports and adapters effectively allow us to prevent the DTOs of an external API (often coming from another Context, therefore) from parasitizing and entering our hexagon, which is supposed to only manage problems from the Bounded Context to which it belongs.

Without that, the coupling between those 2 different Contexts would be hellish, because we would have some bits of other Contexts within our business code. Imagine now the impact on our code if the other Context's dev team constantly changed their API or DTO contracts without asking our opinion. To protect ourselves from such a situation, the DDD toolbox offers us an interesting pattern: the Anti-Corruption Layer (ACL).



Anti-Corruption what?!?

The Anti-Corruption Layer is a pattern popularized by DDD which allows a (Bounded) Context not to find itself polluted by the inconsistency or the frequent changes of a model coming from another Context with which it must work.

It’s like a shock absorber between 2 Contexts. We use it to reduce the coupling with another Context. By doing so, we isolate our coupling in a small and well protected box (with limited access): the ACL.

Then, any untimely change of the external context will have very little impact on our business code which acts as a consumer of the ACL and no longer directly of the unstable code coming from the external context.


No ACL here, so the model of the other Bounded Context leaks everywhere within our hexagon


Here, our ACL Adapter prevents models of another Bounded Context from entering our hexagon


An anti-corruption layer can be implemented in various forms:
  • external gateway/service/intermediate API
  • dedicated in-middle database (for old systems)
  • or just an in-proc adapter within a hexagonal architecture.


Which Eurythmics to choose for your ACL?

Any good writing on DDD must now have a "Heuristics" moment ;-) Here is mine:

If you have several consumers originating from your Bounded Context (potentially several APIs or components) that need to interact, like you, with the same portion of the other Bounded Context outside, it is in your best interest to make a shared facade service (API) emerge that would act as an ACL.

If you only have one consumer for these interactions with another Bounded Context (i.e. your hexagonal domain code), you can just implement the ACL you need at the Adapter level of your Hexagonal Architecture.

It is very common to start by implementing an in-proc adapter here, and then to move out as a stand-alone API if the need arises.

An Anti-Corruption Layer can be hosted in an external dedicated process/API/gateway


An ACL often implies the orchestration of various calls towards the same or different external systems, in order for your domain code to get the behaviours or the answers it expects.

It is quite logical, because none of those existing external Bounded Context systems was made specifically for you and your needs. They usually speak their own language and do not directly answer the questions you ask yourself in your own Bounded Context. In most cases, the external systems you will need to operate with are rotten legacy systems, or a mix between an old system and a brand new API that only works in a limited number of cases.

The ACL-Adapter of your hexagonal architecture will therefore very often have to call these multiple sources to be able to offer a consolidated vision to your domain.


For instance:

  1. Calling a first service of another BC to retrieve incomplete data
  2. Translating or adapting part of its response into another data structure, ready to be used to forge an intelligible request to another external service (often belonging to the same second BC, but not always)
  3. Sending this second request to the second system or endpoint
  4. Getting the result of this second call and adapting the whole set to produce a response that will finally suit your own hexagon, with its Domain-level semantics

  Example of ACL-Adapter orchestrating different calls to different back-ends / APIs of another BC
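These 4 steps can be sketched as follows (every service name and payload here is invented for the illustration):

```python
class AclAdapter:
    """Orchestrates 2 external systems of another BC behind one domain port."""

    def __init__(self, first_service, second_service):
        self._first = first_service
        self._second = second_service

    def consolidated_view_of(self, key: str) -> dict:
        # 1. Call a first service of the other BC (returns incomplete data).
        partial_dto = self._first.get(key)                  # e.g. {"ref": "A-1"}
        # 2. Translate part of its response into a request intelligible
        #    to a second external service.
        request = {"reference": partial_dto["ref"].lower()}
        # 3. Send this second request to the second system.
        raw_result = self._second.query(request)            # e.g. {"val": 42}
        # 4. Adapt the whole set into a Domain-level answer for the hexagon.
        return {"id": key, "value": raw_result["val"]}
```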



My Domain is not your domain

In most cases, all this awareness of the other external Bounded Context systems and the discrepancies in how to ask them questions has nothing to do with your own Domain code. These implementation details must remain at your adapter level.

In other cases, you will find it important that the center of your hexagon explicitly goes looking for 2 different things/concepts via two different operations or two different right-side ports, before doing some work or even assembling them into something useful for you.
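When the Domain does legitimately care, that assembly can live in the hexagon itself, behind two domain-driven ports. A hypothetical sketch (none of these names come from a real codebase):

```python
# Two driven ports, one per concept the Domain explicitly wants.
class GuestProfilePort:
    def profile_of(self, guest_id: str) -> dict:
        raise NotImplementedError

class LoyaltyPort:
    def points_of(self, guest_id: str) -> int:
        raise NotImplementedError

# Domain code: the orchestration is done here because, in this scenario,
# it IS domain logic (not an implementation detail of an Adapter).
def welcome_offer(guest_id: str,
                  profiles: GuestProfilePort,
                  loyalty: LoyaltyPort) -> dict:
    profile = profiles.profile_of(guest_id)
    points = loyalty.points_of(guest_id)
    return {"name": profile["name"], "room_upgrade": points > 1000}
```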


There is no silver bullet

Do not expect me to give you a universal or absolute rule to answer this question. It's Design. And of course, it will depend on your context and your Domain.

The right question to be asked every time is: is it legitimate and useful for my domain code to be aware of these concepts or details?

If so, your hexagonal Domain code should explain these interactions and exchanges. If not (when we don't want to be polluted by the messy external BC for instance), this knowledge should rather remain at your right-side adapter level.


Mea culpa

In the light of these cases and explanations, you will understand why I really regret having written in my previous post that a "good adapter is a pretty stupid adapter". This is only the case when there is a simple 1-to-1 relationship between your hexagon and another external system.

But in practice it is quite often the opposite (and even more so if your adapter is an ACL).

And therein lies the paradox. For years I have only been focusing on the Hexagonal Domain code. With practice and all the non-trivial cases encountered in my career, I realized that


These Adapters are much more than just Extras. 

As such, they also deserve much better than just being stubbed in our hexagonal architecture acceptance tests

But on that testing strategy topic, I will come back very shortly with a new episode of this series devoted to hexagonal architecture.

Stay tuned!