Saturday, 31 December 2016

Don't be the fool looking at the master's finger

More and more, I hear people promoting either the Spotify model or the Google one. In a nutshell, they are saying: "if we copy them, we will be successful" (regardless of their own culture or context). Some people also think that you can change your own culture simply by adopting the practices of the web giants. Remember culture? It's that tiny thing that eats new strategies (and practices) for breakfast every morning ;-) (Peter Drucker, if you can hear us...) This reasoning reminds me of a Chinese proverb:
"When the wise man points at the moon, the fool looks at the finger"

Let me try to explain why. Spotify first:

The Spotify model

I won't detail that case too much and let you read Ben Linders' InfoQ post instead: Don't Copy the Spotify Model

My main point is that people should be inspired by the same driver and objective as Spotify (i.e. Aligned Autonomy) instead of copying and focusing on one strict organizational answer found by the Spotify guys during their Lean journey.

Here is the moon (i.e. the Aligned Autonomy objective):

And here is the finger our entire industry is focusing on instead ;-(

What bothers me even more in that whole Spotify consultancy feast is the lack of results from an end-user perspective (on that topic, see my previous post: (controversy) Is Spotify a model to follow actually?)

The Google bias

Google is probably the case that annoys me the most. And I won't talk here about their problematic business model and its impact on our privacy (the client is the one who pays; if it's free, then it's because you are the product... Sad Panda)

No, I will just talk here about the huge bias most of us have when we talk about that successful company. For many people, every post, practice, interview technique, tool, ... coming from Google is taken as gospel truth (regardless of everyone's context in terms of funding, activity, culture, constraints, etc.).

A little bit like with Spotify, most of us are looking at the master's finger, saying: "if we want to succeed, let's follow their advice and work like Google does!" (last example here: Why do some developers at strong companies like Google consider Agile development to be nonsense?).

Before you follow that dangerous path, let me tell you a few things about Google. Or let me sum it up with a picture:

For more details, see the article: Google Makes So Much Money, It Never Had to Worry About Financial Discipline—Until Now.

A Cash-Cow story

Yes, Google has a tremendous cash cow (i.e. advertising) and is desperately trying to find its next one by massively investing in whatever-is-cool-with-top-notch-engineers.

So, unless you are as rich as Donald Trump, don't try to mimic Google or to blindly follow their advice: almost all their projects are losing money and only work because they are funded by their advertising cash cow. If you don't want to be like the fools looking at the master's finger, don't listen to people saying that you should work exactly as Google does.

At best, you can be inspired by their striving for excellence (which is very impressive, mos def).

What they are currently building and lobbying for with all our data and the destruction of our privacy (with concrete consequences on our lives) will probably deserve another post...

Monday, 25 April 2016

Hexagonal != Layers

People around me know that I am a strong supporter of the hexagonal architecture (a.k.a. ports & adapters). Whether at work, during conferences or around coffee machines, I never miss an opportunity to highlight the value of this DDD-friendly architectural pattern created by Alistair Cockburn.

For me, it's a real mystery: why does almost no one actually leverage the hexagonal architecture, despite its huge ROI and value for projects?

During these discussions with people, a remark keeps coming again and again:
"... hexagonal and layers architectures are the same, right?"

Hell no! The aim of this article is to explain how different they are.

Some history about the "Layers" pattern

The concept of layers is not new in IT (e.g. the OSI model), but for me the best description of this architecture pattern can be found in the POSA reference book (vol. 1, published in 1996).

And like with any pattern, it is much more interesting to remember the intent / the problem to be solved than the solution. 

Otherwise, there is a risk that we use it because we know it, instead of using it because it fulfills our need...

I have seen many people using Layers out of habit, without being able to justify why they use it. Let's make a test: ask colleagues around you who use it why they do it ;-)

Before comparing Layers with Hexagonal, let's recall the intent of the Layers architecture pattern: "helps to structure applications that can be decomposed into groups of subtasks in which each group of subtasks is at a particular level of abstraction."

The main objective: managing the complexity of a large application by decomposing it.

Now let's have a look at the solution: "Structure your system into an appropriate number of layers and place them on top of each other (...) observe the following rule: a J layer provides services used by its top Layer J+1 and delegates subtasks to its bottom layer J-1 (...) The main structural characteristic of the Layers pattern is that the services of Layer J are only used by Layer J+1; there are no further direct dependencies between layers."

Respecting the latter rule is crucial; it allows us to limit the impact of replacing one layer with another (only the top layer, and possibly the bottom layer, being impacted).
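To make the rule concrete, here is a minimal sketch in Java (with hypothetical class names, not taken from POSA): each layer provides services to the layer directly above it and delegates to the layer directly below it, and nothing else.

```java
// Layer J-1: data access (the bottom layer)
class OrderRepository {
    String findOrder(int id) {
        return "order-" + id; // e.g. fetched from a database in real life
    }
}

// Layer J: business logic; only talks to the layer directly below
class OrderService {
    private final OrderRepository repository = new OrderRepository();

    String describeOrder(int id) {
        return "Processing " + repository.findOrder(id);
    }
}

// Layer J+1: presentation; only talks to the layer directly below,
// never to OrderRepository directly
class OrderController {
    private final OrderService service = new OrderService();

    String show(int id) {
        return service.describeOrder(id);
    }
}

public class LayersDemo {
    public static void main(String[] args) {
        System.out.println(new OrderController().show(42)); // Processing order-42
    }
}
```

Swapping `OrderRepository` for another implementation would only impact `OrderService`, which is the whole point of the rule.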

Although the pattern leaves the number of layers open, it is clear to me that the solution only becomes interesting with at least 3 layers (for J, J+1 and J-1). In fact, in real life you can usually count the number of layers within an application by knowing the number of tech leads previously involved on the project ;-) Leading to what we call a "Baklava architecture" (anti-pattern).

In the past, I've worked on projects with so many layers that it was difficult to understand and follow the application/business logic.

I initially became interested in the Hexagonal Architecture to fix the poor signal/noise ratio found in those Layers architectures.

Hexagonal architecture: origin and usefulness

Created by Alistair Cockburn in the 2000s, the hexagonal architecture was designed to prevent the infiltration of the business logic into the UI code (such infiltration leading to less testable and more difficult to maintain apps).

The proposed solution is to divide our software in 2 distinct regions:
  • the inside (i.e. the business application logic)
  • the outside (i.e. the infrastructure code like the APIs, the SPIs, the databases, etc.).

2 distinct zones. No more, no less. 

Two distinct areas of our code, with adapters positioned in what looks like a DMZ to protect the domain code from the infrastructure code. We just have to rely on the Dependency Inversion Principle (DIP) and the Repository pattern to prevent the domain code from being bound to the infrastructure code when it wants to get information from the outside, and voila!

Adapters? Ports?

There have been many misunderstandings and debates on this subject, but after having checked with Alistair: the "port" describes the intention (in C# or Java, a port is an interface belonging to your domain), whereas the "adapter" is the code that bridges the 2 separate worlds: the infrastructure code where the adapter belongs, and the domain code the adapter interacts with via a port.
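To illustrate (a minimal Java sketch with hypothetical names; the pattern doesn't prescribe any specific classes): the port is an interface owned by the domain, and the adapter is infrastructure code implementing it, plugged into the domain from the outside thanks to the DIP.

```java
// --- The inside: domain code ---
interface StockProvider { // the port: an interface owned by the domain
    int unitsInStock(String productId);
}

class OrderingService { // domain logic, expressed only in domain terms
    private final StockProvider stock; // knows the port, never the adapter

    OrderingService(StockProvider stock) {
        this.stock = stock;
    }

    boolean canShip(String productId, int quantity) {
        return stock.unitsInStock(productId) >= quantity;
    }
}

// --- The outside: infrastructure code ---
class InMemoryStockAdapter implements StockProvider { // the adapter
    @Override
    public int unitsInStock(String productId) {
        // in real life: a database query or a web-service call
        return "BOOK-42".equals(productId) ? 3 : 0;
    }
}

public class HexagonalDemo {
    public static void main(String[] args) {
        // Dependency inversion: the infrastructure is plugged into the
        // domain at composition time, not the other way around.
        OrderingService service = new OrderingService(new InMemoryStockAdapter());
        System.out.println(service.canShip("BOOK-42", 2)); // true
    }
}
```

Note that the dependency arrow points from the infrastructure towards the domain: the adapter depends on the port, never the reverse.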

Since I have already written a previous post about the Hexagonal Architecture, published a code sample of the hexagonal architecture, and also given a talk with cyriux (slides available here), I won't detail it further here.

How the onion case complicates things

There is a variant of the hexagonal architecture: the onion architecture from Jeffrey Palermo.

To be honest, I don't really like this version, which mixes the concept of layers (like an onion) with the extraction of the infrastructure code as explained by Alistair.

As people usually only remember the "Layers" part, I think Jeffrey's pattern is misleading, because it leads people to miss the most interesting part of the hexagonal pattern IMO (i.e. the segregation into 2 zones, which simplifies everything).

Still not convinced that Hexagonal is not the same as Layers?

For sure, you can mix both patterns by putting layers here and there within each of the 2 areas of the hexagonal architecture (like Jeffrey did), but I don't think you should. Indeed, this complicates the architecture of your system and has to be considered carefully (why would we do that? Does it bring value in our context?).

Such a mix would be our own hybrid choice; it is not part of Alistair's description of the hexagonal pattern.

Can we say that 2 patterns with different motivations and different solutions are equivalent? 

Not in my opinion. What bothers me most with this claim is that it prevents people from discovering the real power of the Hexagonal Architecture ("yeah, I don't see why I should look at this pattern in detail, cause it's already what we do with our layers..."). Sad Panda.

In my opinion, the hexagonal architecture has two main virtues:

  1. To prevent the domain code from being contaminated by the infrastructure code (DDD-friendly)
  2. To simplify the architecture of our systems (avoiding the accidental complexity associated with extra layers for no explicit reason)

Fostering simplicity is hard but valuable

Hexagonal Architecture helps us to simplify our applications' architecture (and thus their maintenance) by dividing it into 2 zones instead of a baklava of layers.

I'd like to finish this post with a tribute to Matteo Vaccari's ASCII art comparison between Layers and Hexagonal. These diagrams have helped me many times to explain those differences to colleagues at work.

Enough diagrams, let's see some code now!

I've published a few C# code samples of hexagonal architecture on my github space:

  1. On the (LunchBox) SimpleOrderRouting project as explained here
  2. In the CQRS (without ES) app sample published here (with its hexagon composition made here or like this in the acceptance tests)

Tuesday, 26 January 2016

(controversy) Is Spotify a model to follow actually?

Spotify is a major flagship for the "Agile consulting industry" (the one that sells certifications ;-) ... but I have to say that the music addict I am has been deeply disappointed by the Spotify offer. Having been a Spotify customer for many years, last year I decided to quit Spotify and join another platform (i.e. Deezer).

I don't know whether or not there is a link with the way they work and try to scale agile (tribes, guilds, etc.), but I found Spotify's ergonomics very poor as an end-user. More annoying: the lack of consistency between the various Spotify apps (on Mac, on Windows, on iOS, etc.), and the poor quality of Spotify's recommendations. In more than 5 years of Spotify usage, I almost never discovered a new artist/band based on my actual playlists and preferences (contrary to what I've discovered thanks to Deezer's recommendations).

As a software developer, that raises many questions for me about the relevance of the Spotify agile model when it comes to delivering value to its end-users/customers.

Sunday, 30 August 2015

Event Storming: my rookie mistakes

This week, I was happy to be asked by the teams to organize a second Event Storming session after our first successful experience last week (detailed in my previous post here).

Motivated but still a rookie in that field, I asked my mate Radwane (radwane_h) to help me co-facilitate this new session. Indeed, we planned to experiment with more advanced Event Storming concepts like Commands, Actors, Aggregates, etc., and I was happy to leverage his Event Storming experience for such a journey.

This session didn't go as I expected, and I thought it could be helpful -for any new Event Storming organizer- to share a few lessons learned here.

"Where is our wall?!?"

It started with a first difficulty: the audience was not at all prepared to face another empty wall... Indeed, since we don't have a war-room for the project (just the ability to use nice meeting rooms here and there), I had to trash the previous tablecloth after our first session. This seemed logical to me. After all, as I repeated many times to the audience the week before: "Event storming is not about producing a new model/deliverable. It's mostly a live brainstorming to efficiently distill business flow among various actors, to identify key concepts & get rid of ambiguities." (all of this being useful to impact our code & other official artifacts in a second step).

Yeah, this sounded logical to me. Indeed, most of the posts on Event Storming you can find on the internet advocate restarting from scratch every time. It allows you to master the flow, but also to leave the door open for better approaches (the first "solution" not always being the best option). This may require more effort during the first round(s), but it is a way to prepare our mental models to save time later.

As a consequence, my intention that Friday was to zoom in on a central part we had identified the week before, involving 2 bounded contexts and a few ambiguities in between (at least in my mind ;-). But facing the hostility of some of the actors (arms crossed, frowning eyebrows), I didn't succeed in motivating them to rework the same topic without our previous wall of stickies as a support. One said to me: "If we don't start with the previous wall, we are wasting our time here!". Of course, I had pictures of the previous wall on my phone, but I was afraid of turning it into a lonely exercise if I had to transcribe them onto the wall (with potential disengagement of the audience).

Even now, I'm deeply convinced that it would have taken us less than 10 or 15 minutes to put back all the stickies for this subpart of our business flow, and I still blame myself for not having convinced all of them to do it.

But the audience was not ready for such a reboot

It was as if I had made them waste their precious time (I think our Event Storming session came after a few days of other workshops without precise goals or obvious results; hence the impatience and the fidgeting).

Anyway. Facing the astonishment of 12 people in front of me (and an empty wall ;-), I finally asked them which topic they wanted to choose for the hour and a half to come. After a few seconds of awkward silence and minutes of collective hesitation... we picked a topic that we hadn't studied at all during the first round. Something that I will refer to here as the "Onboarding" (I can't say more for non-disclosure reasons). That was interesting, but as we event stormed I detected an overall disengagement from the audience (around 12 people). Apart from 2 very active contributors, the rest of the audience stood too far from the wall of stickies, behind a table that couldn't be moved. They were trying to contribute to the debate but were somehow helpless and disengaged more and more.

I realized -too late- that we had picked a topic with almost no domain expert for it in the room... Sad Panda.

In those conditions, the benevolent help of my friend Radwane was somewhat wasted (I was sorry for him). He helped me and the audience, but it was still an Event Storming without domain experts in the room... (for this topic, I mean).

The only positive outcome here was that we all realized we knew almost nothing about this "Onboarding" process (an important prerequisite for the rest of the project). We also identified some domain experts to be interviewed on the business side, and found other stakeholders related to this process, which we put on yellow stickies (handy for the months to come).

Half of our wall: we discovered a few things anyway

Nonetheless, I was truly disappointed to have detected annoyance from some domain experts. I felt that I had lost an opportunity to make us work on the core topic for the weeks to come.

I'll wrap up this debriefing with a few observations I made to myself last Friday:

  1. Even if I'm not able to bring back the gigantic tablecloth of the previous session, I'll bring some prints of the pictures I took on my phone. This will allow us to quickly and collectively reboot our session, even with people reluctant to redo the whole thing from scratch (it's sometimes hard to think outside the 'efficiency' box)
  2. Before starting to Event Storm with 12 people, I'll first check that we have at least one domain expert for the topic in the room... ;-)

Hope this helps.

Sunday, 23 August 2015

Event storming: Domain distillation under steroids

Last Friday, I organized an Event Storming session at work (one of the major investment banks in France). It was my first time as an organizer, and I must admit that I've been truly impressed by the efficiency of this way to distill a domain.

Created by Alberto BRANDOLINI, Event Storming is a fast way to explore a business domain or problem with many people in the room -including non-IT stakeholders. In a nutshell: you put all the domain experts in the same room with a few IT people, and you make them describe what happens or must happen in their business. To do so, you have everyone name and position domain events (stickies here) on a large, timeline-oriented paper on the wall.

"What do you mean by 'Domain Event'?" is probably the first question you'll face as an event storming organizer. To quote Mathias VERRAES (another DDD expert and event storming practitioner), a domain event is

"Something that has happened in the past that is of interest to the business"

We are NOT talking about IT details here, like applications, databases or any button click... (unless it's part of your core domain, like for Google or other ad-resellers). That's why I stated at the very beginning of our session: "Let's forbid ourselves to speak about IT or technical things for the couple of hours to come. Business only!".
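Translated into code (in a second step, well after the session), such a past-tense Domain Event is typically a small immutable object carrying the fact and when it happened. A minimal Java sketch with a hypothetical event (not one from our actual wall):

```java
import java.time.Instant;

// A Domain Event: an immutable fact, named with a past-tense sentence
// taken from the business language ("Order Executed").
final class OrderExecuted {
    private final String orderId;
    private final Instant occurredOn;

    OrderExecuted(String orderId, Instant occurredOn) {
        this.orderId = orderId;
        this.occurredOn = occurredOn;
    }

    // Only getters: a past fact never changes, so there are no setters.
    String orderId() { return orderId; }
    Instant occurredOn() { return occurredOn; }

    @Override
    public String toString() {
        return "Order Executed (" + orderId + " at " + occurredOn + ")";
    }
}

public class DomainEventDemo {
    public static void main(String[] args) {
        OrderExecuted event =
            new OrderExecuted("42", Instant.parse("2015-08-21T14:00:00Z"));
        System.out.println(event); // Order Executed (42 at 2015-08-21T14:00:00Z)
    }
}
```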

"It's developer's understanding, not your knowledge that becomes software!"

"What will be the outcome of our session?" is another legitimate question from the audience when it's the first time. In our case, the objectives were:

  1.  To share a common understanding of the core business concepts and of what to do within our project.

    The project I'm talking about is an ambitious greenfield IS reconstruction for an entire business line. The fact that we have to deal with all functions -front to back- is a real challenge. Indeed, there are many bounded contexts and teams involved in our case, everyone coming from various functional and technical backgrounds. As a Domain-Driven Design (DDD) practitioner, I was recently worried by the fact that we were talking a lot about technical data formats (mediums/vehicles) or ambiguous concepts during workshops, and less about the core business concepts at stake. Fortunately, we have lots of expertise here, tons of talented BAs & IT specialists. But to quote Alberto when he faces recalcitrant domain experts: "It's developer's understanding, not your knowledge that becomes software". In other words: it was time for non-domain-experts like me and a few other developers to catch up with the others.

  2. To fight vagueness and clarify various ambiguous concepts faced during our previous workshops. The outcome here would be to initiate a 'lexicon' for our project, taking care of the scope of the definitions (by indicating the bounded context to which a definition applies, for instance).

    For non-DDD practitioners: a Bounded Context is a context within which a model applies (e.g. marketing, negotiation, settlement, etc.). Although it is important for a model & language to be consistent within a given context, we can perfectly well have the same word indicating completely different concepts - or, most likely, different perspectives on the same core concept - depending on the context or usage we are talking about. Working without those contexts in mind can be really misleading. Moreover, sharing the language of our business (a per-bounded-context exercise) is also the starting point for what we call in DDD an Ubiquitous Language (i.e. always relying on the language of the business - including within our code). As a benefit, we avoid misunderstandings and "translation" issues, and we are also able to have efficient discussions with the business all along.
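A minimal Java sketch of the idea (hypothetical classes, using nested classes to stand in for two separate packages): the same business word, "Trade", backs two different models, one per bounded context, instead of one bloated class trying to serve everybody.

```java
// Hypothetical example: "Trade" means different things depending on the
// bounded context. Each context keeps its own focused model.
class Negotiation {
    static final class Trade { // what the front office cares about
        final String counterparty;
        final double price;

        Trade(String counterparty, double price) {
            this.counterparty = counterparty;
            this.price = price;
        }
    }
}

class Settlement {
    static final class Trade { // what the back office cares about
        final String settlementAccount;
        final java.time.LocalDate settlementDate;

        Trade(String settlementAccount, java.time.LocalDate settlementDate) {
            this.settlementAccount = settlementAccount;
            this.settlementDate = settlementDate;
        }
    }
}

public class BoundedContextsDemo {
    public static void main(String[] args) {
        // Two models for the same word, each consistent within its context
        Negotiation.Trade negotiated = new Negotiation.Trade("ACME Corp", 101.5);
        Settlement.Trade settled =
            new Settlement.Trade("FR76-XXXX", java.time.LocalDate.of(2015, 8, 25));
        System.out.println(negotiated.counterparty + " / " + settled.settlementDate);
    }
}
```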

By the way, I forgot to tell you that there were no people from the business side within our Event Storming. We were 11 people in the room, including plenty of domain experts in various contexts: profiles such as Business Analysts (mainly), domain-expert IT managers, some developers/tech leads & a use-case-driven architect ;-)

Modus operandi

First things first: the preparation. Having discovered Event Storming in 2013 during a DDD meetup in Paris with Mathias VERRAES, I've since read and watched a lot of things about it. Nonetheless, the fact that I was quite a noob in this field made me ask tons of questions before our real session at work. And I'd like to thank my friend Tomasz JASKULA (co-organizer of the Paris DDD meetup), but also Mathias, Alberto and Emilien PECOUL, for their time and availability when I asked questions on Twitter a few hours before my session.

Our Event Storming was scheduled between 2PM and 4PM that Friday. In terms of logistics, I finally decided to go to the nearby mall at lunch, in order to buy a roll of garden tablecloth to make what Alberto calls the "unlimited modeling space" on the wall.

Indeed, Alberto had warned me a few hours before on Twitter: "make sure that location, stickies availability and space aren't constraining the discussion." So I also came with a bag full of working markers and tons of stickies.

source: Event Storming recipes (A.BRANDOLINI)

Before pinning my tablecloth on the wall, I moved all the chairs into a dead-end corner of the room so that:

  • people would remain active during the workshop

  • I would be able to observe the body language of all the attendees (much more complicated when they are seated, as stated by Alberto).

Indeed, like when we are playing poker (I confess ;-), it's very important to observe weak signals, body language and the tells that betray possible disagreements among people (whether due to introversion or political agendas). As Tomasz said earlier on the phone, "one of the objectives of event storming is to identify possible disagreements on how we understand our domain". Either to settle them on the fly, or to park them for later (during another session, for instance).

Once the room was ready, people started to join me. It took me less than 8 minutes to explain the rules and the objective of this Event Storming to the eleven attendees before we put our first stickies on the wall.

There are many kinds of stickies and possible approaches for an Event Storming (I'll let you read the "Event stormers" Google Plus group or Alberto and Mathias' blogs for further information). But this Friday, we focused on "Domain Events" only (not the Commands, the Aggregates or the Actors). Thus, we almost exclusively used "orange" stickies with past-tense sentences on them (e.g. "Price Requested", "Order Executed", etc.).

source: Event Storming recipes (A.BRANDOLINI)

After an hour and a half, we wrapped up our session by identifying all the core concepts of our domains (reading what we had written on our stickies to do so), and started to collectively give definitions for a few of them on the left side of the tablecloth.

Takeaways from this first experience

  • Having used Reactive Programming and CQRS architectures for several years now, I am pretty comfortable with the notion of events ;-) But I was pretty impressed that everybody followed the "use past-tense sentences" rule (on Domain Event stickies). It felt like a really natural way for everybody to tell our business story.
  • It seemed important to clarify from the beginning that we weren't there to produce yet another official model/artifact for the project, but to interact instead. Indeed, we were there to get on the same page and to collectively explore the ambiguous parts of our domain. I had the feeling that it helped some of us relax and not fall into model perfectionism.
  • It was hard for some of the attendees to avoid talking about IT things, applications, existing systems (even for the Business Analysts). Nonetheless, we collectively succeeded in sticking to our initial objective ("Business only!")
  • Disagreements about some core concepts (e.g. trade done vs. order executed) were quickly resolved with the support of all the participants in the room, able to add perspectives and precision (including naive questions from the non-experts). Visually too, with the support of the other related Domain Events all around.
  • It was hard for some of us to realize that things currently done by existing IT systems correspond to real (business) Domain Events (e.g. the trading portfolio routing/attribution)
  • Some people (4/11) took back some chairs and sat down. I hesitated to ask them to stand up like the others, but after a few minutes of observation I realized that they were still actively contributing, answering questions from the others. At the end of the day, I had the feeling that they somehow needed this background position to be able to see the big picture on the wall.
  • We had a few sub-groups during the session (distilling a subpart of the wall in parallel for a few minutes here and there), but almost everyone actively followed the overall business story. Before the session, I feared some kind of post-it contention on the wall with 11 people (and disengagement as a consequence), but nothing like that happened. Event Storming is really a fluid way of working together. It's amazing how much bandwidth we had to discuss and get in sync with multiple people.
  • As stated by Emilien PECOUL on Twitter, it is important to let people do their thing and not constrain them too much: "Let parallel discussions emerge". Nonetheless, I found it interesting to make a few "attention calls" to everyone when we had to decide an important point or answer a key question.
  • To quote Alberto BRANDOLINI: "it is KEY that location, stickies availability and space aren't constraining the discussion". After this first Event Storming session at work, I can't agree more. The space, or lack of it, really impacts the discussion. BTW, during the session I had to pin another piece of paper on the right while everyone was busy on my left, dangerously close to the end of the tablecloth ;-)
  • Fortunately, I'd read Mathias' "Facilitating Event Storming" blog post before organizing this session: "Know when to step back. Don’t do the modelling, guide the modelling. Ask questions". It allowed me not to intervene too much in the modelling as the organizer of this Event Storming session. In a nutshell, I just tried to ask questions, to rephrase, and to identify blurry zones or implicit concepts.
  • As with any other workshop, it was crucial to ensure a benevolent atmosphere so that everyone felt comfortable and willing to contribute in public. We joked here and there during this 2-hour session, but it was really a positive co-construction mindset.
  • I waited almost one hour before suggesting that the group trace bounded contexts on the wall (between the stickies). In my opinion, if done too early, there is a risk of constraining the space and thus the discussion.

To conclude this first Event Storming experience at work, I would say that I've been truly impressed by the efficiency of this approach.

I would recommend it for any business domain exploration.

For this first experience at work, we focused only on the Domain Events (i.e. the "orange" stickies). It probably eased the ramp-up for everyone in the room. But I'm now very keen to retry an Event Storming with the other tools, such as querying the causality of the Domain Events (i.e. identifying Commands, People or External Systems), but also by clearly identifying the Aggregates involved (stickies of other colors).

But as I said on Twitter just after this session:

Stickies are just a pretext for high bandwidth discussions & ultra efficient domain distillation.

And for that, Event Storming is an awesome tool.

Monday, 8 December 2014

Static or dynamic types?

Last Friday, Gilles and I organized the second edition of the "Mate, you have to see this!" event.

Gilles Philippart is the head of the tech-coach unit within our company (30 passionate developer-coaches who share their skills & experience transversally across the organization). He is also a big, BIG fan of the Clojure language (and a co-organizer of the Paris Clojure User Group).

"Mate, you have to see this!" is an event that I recently launched within our organization. The concept (already described here) is pretty simple:

  1. We book a meeting room at noon
  2. We eat our sandwiches in front of a video of an inspiring talk that one of us has really liked
  3. And we all debate afterward (it's a 2-hour event)

For the first edition last month, I had chosen an excellent French talk from Gregory WEINBACH (related to DDD).

This time, I had set up a doodle poll, and we had reached a quorum for Rich Hickey's "Simple Made Easy" talk. At the end of the day, some of us recommended replacing it with a more seminal talk from Rich Hickey: "Are We There Yet?". I was not actually convinced by this choice (I find "Simple Made Easy" more inspiring & accessible), but we ended up with it for our session.


Did I tell you that Gilles was very pushy about Clojure adoption? ;-) Yeah? No?

Because this time, he had contacted two major French ambassadors of Clojure to join us to watch the video and debate afterward. Those French ambassadors are:

And to be honest, even if I really enjoy Rich Hickey's talks in general, the Q&A session at the end was far more interesting to me.

Let me sum up some of the questions and answers for you:

The language

"Why did you choose Clojure and not another more-or-less functional language (like Haskell, F#, Erlang or Scala)?"
Christophe GRAND -who started using Clojure just a few months after its creation- explained that he wanted to try something other than Java at the time, and that the vision & mindset of its creator were decisive. Rich Hickey is a pragmatist, and his programming language is part of a new breed of functional programming (FP) languages that are very pragmatic too (like F#). With their interoperability capabilities, we can use them in our day-to-day job (vs. Haskell).

Hiram explained that he wanted to exit his comfort zone and find an alternative to Java a few years ago. He was so amazed by Clojure (& its expressiveness) that he now tries to favor Clojure as much as he can for the applications he works on.

Clojure's toolset

"Compared to mainstream languages like Java or C#, how is the Clojure development environment today (IDE, test tools, etc.)?"
Well... it seems that all the reasons that made LISP a painful experience for some of us (remember your 'vi' sessions without any help or syntax highlighting?) are obsolete with Clojure & its relatively mature ecosystem.

Code-Designing Process

"Since FP removes the side effects we find within OO code, do you still need to write tests? If yes, when do you write them? And is TDD a valid approach with Clojure/FP?"
Ok, neither Christophe nor Hiram follow TDD to drive their design process, but both of them confirmed that they write tests for their Clojure applications. In fact, the REPL -Read-Eval-Print Loop- is the alternative here (i.e. the ability to interactively execute part or all of the lines of code from within a console). The usage of the REPL facilitates exploratory programming and debugging, something Uncle Bob calls "Eye Driven Development" (EDD). The developer in that case "plays" with his code within the REPL until he reaches what he wants. Once he has found a satisfying result, he can then put his code under a test harness (for non-regression, but also as a means of documentation). But even if both Christophe and Hiram don't do TDD in Clojure, it seems that other people do (like here, or Jérémie GRODZISKI -according to my mate Radwane HASSEN).

For someone like me who is not strongly comfortable with any FP language yet (lazy bast... I am), this is a very intriguing question. I mean, I can easily grasp the utility of the REPL for building and troubleshooting small functions. When I do TDD with tools such as NCrunch and NFluent (in .NET), I somehow transform my augmented IDE into a kind of REPL (to get immediate feedback on my baby steps). But I can barely grasp how to leverage the transient REPL for more than a few small functions -for bigger code bases, I mean. Won't I pollute my developer's head with tons of contextual details (such as: did I include all the relevant code parts for this experiment to be relevant)? For now, I guess I need to learn and play more with FP languages than I already have (in a casual mode so far).


"Since Clojure is hosted within a JVM, are you sometimes jeopardized by stop-the-world GCs?" 
As a disclaimer, I would add here that every time Rich Hickey talks about "good performance" in his talks, you can translate it as "good throughput". It's a fact that Clojure code -with all those immutable data structures- is made for parallelization. On the other hand, a heavy memory allocation profile under a GC may severely impact your application's latency.

Well... even if Clojure's immutability usually implies short-lived memory allocation -which is very GC friendly- Christophe admitted that for some "niche" usages, he has been forced to tweak the JVM, to hunt down hotspots and, in rare cases, even to rewrite or optimize parts of the corresponding Clojure code to reduce allocation. Fortunately this is not needed for all Clojure applications, only for the very demanding ones such as we sometimes find in financial front offices (i.e. a niche usage).

We also briefly talked about the concept of Software Transactional Memory (STM) that Rich Hickey addresses in his "Are we there yet?" talk (not the best part of his talk in my opinion, and a tricky topic actually).

But our conclusion for this performance "chapter" was that we need to measure -to measure everything- before going into production (confronting our systems with our real use cases and constraints, not with our intuitions ;-). In short, that means performance tests for everyone ;-P


"Do you have an example of a real Clojure application that has been in production for several years and that has been maintained by someone other than you?" 
was one of my questions. Haunted for many years now by this maintainability concern (i.e. what makes a system survive changes? how do we preserve original intentions despite turnover?), I am curious about this challenge with FP languages. Will the utter concision of some FP languages make this challenge even more complicated than with mainstream OO languages? Or just as complicated? Christophe told us he had an example of that: a program he started building for CERN several years ago, which has been upgraded and released by other people since Christophe left the site.

Static or dynamic types?

As I was explaining to the guys that I was currently hesitating between learning F# or Clojure, the major difference between those two pragmatic FP languages sparked the discussion.
Before this discussion, my gut feeling favored the static-type option (thus F#), but
according to Christophe, there is value in not relying on strong types too soon when you are designing your application. Indeed, when you start a program there are lots of undefined things, including the kind of data you will work with (usually coming from outside/integration constraints). Creating types too early may be painful, but it also hurts the reuse of your functions and limits your expressiveness.
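To illustrate that point with a hypothetical example (in Python, another dynamically typed language, rather than Clojure -this is my own sketch of the argument):

```python
# With dynamic typing, generic functions keep working while the shape of the
# incoming data is still unsettled (e.g. data coming from an external feed).
def total_quantity(orders):
    return sum(order["quantity"] for order in orders)

# Works on plain dicts parsed straight from JSON...
feed = [{"id": 1, "quantity": 10}, {"id": 2, "quantity": 5}]
print(total_quantity(feed))  # 15

# ...and keeps working later if we decide to introduce a richer type,
# as long as it stays map-like: no early type commitment required.
class Order(dict):
    pass

print(total_quantity([Order(id=3, quantity=7)]))  # 7
```

The function never had to commit to an `Order` type up front; the type could be introduced (or changed) once the integration constraints became clear.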

Radwane, who is currently looking at F# seriously, said that he really enjoys its static type system. According to him, it's so easy and fast to declare a type in F# (one or two lines) that creating them isn't really a problem. As a side effect ;-): "when your program builds, it works".

Well... I guess I'll have to play a while with those two languages before I can make my choice.

Anyway, I'd like to thank Christophe and Hiram again for having joined us and made this "Mate, you have to see this!" episode so interesting.

Wednesday, 3 December 2014

Will the Ultra Messaging guys wake up AMQP?

Conceived in 2003 by John O'Hara at JPMorgan Chase in London, the Advanced Message Queuing Protocol -which aims to be the wire-level standard specification for messaging interoperability (vs. API-level standardization like JMS)- is catching on rather slowly in the contemporary landscape.

To be honest, before yesterday I was very skeptical about the future of AMQP as a standard. Now, I'm still not enthusiastic, but a little bit less skeptical ;-) Reasons why:

With a version 1.0 that has been adopted half-heartedly at best by the various broker-based projects (see here for details), the next push for AMQP will probably come from a broker-less (and daemon-less) product: Informatica Ultra Messaging (a.k.a. IUM, formerly known as 29West).

Why did a broker-less & daemon-less solution decide to implement AMQP?

Started a few years ago as the fastest "low latency" messaging solution for finance, Ultra Messaging has been covering more and more of the standard Message-Oriented Middleware (MOM) use cases year after year: adding persistence, HA support, load balancing (with stickiness!) and even once-and-only-once guaranteed delivery.

With its "nothing in the middle" overall design approach -allowing us to pick the transport we need on a per-use-case basis (i.e. reliable multicast, reliable unicast, TCP, RDMA, IPC, SMX, ...)- Ultra Messaging was already familiar with the concept of adding components alongside its peer-to-peer architecture. Not in the middle, as in a broker-based architecture, but beside the critical path of data distribution.

A full AMQP 1.0 solution

With its upcoming version 6.8, to be released December 10th 2014, IUM will support both a new AMQP transport and a fully compliant AMQP 1.0 broker. This won't change the core, existing transports of Ultra Messaging, but simply provides a new feature (AMQP) if needed.

In fact, Ultra Messaging should deliver the first AMQP broker-based solution supporting High Availability (HA) with a fully automated fail-over mechanism.

No NIH syndrome

Having initially started to work on their own queue-based architecture & component a few years ago (i.e. UMQ), the Ultra Messaging guys were asked by their customers to support interoperability scenarios with existing commercial or open source solutions. You know, the kind of solutions already deployed for the guaranteed-delivery use cases: TIBCO EMS, Apache ActiveMQ, IBM WebSphere MQ, Pivotal RabbitMQ, etc.

Instead of continuing to implement their own UMQ solution, the Informatica guys wisely decided to leverage an existing one -Apache ActiveMQ- and to improve it. In fact they forked it in order to provide the product expected by their customers (including HA). To do so, they added a part that is currently missing from the AMQP space & standard: broker-to-broker interactions.

At the end of the day, it's a nice occasion for the Ultra Messaging team:

  1. To provide their customers with a full HA-compliant AMQP broker (a relatively mature solution)
  2. To provide their customers with a new AMQP transport within the IUM toolset, allowing interoperability scenarios with other AMQP 1.0 brokers (for AMQP idealists?)
  3. To contribute back to the Apache ActiveMQ project (with their pull requests)
  4. To wake up the AMQP standardization initiative by suggesting something for the broker-to-broker interactions part

The truth about AMQP...

I would like to end this post with an anecdote. While I was at React Conf last April in London, thanks to my Adaptive friends, I had the opportunity to dine with the speakers. On the last night, I was seated right next to Pieter Hintjens, creator of ZeroMQ.

I had a few drinks that night (London, you know... ;-) but I can clearly remember Pieter telling me that AMQP was originally his idea... and also what triggered him, a few months later, to create ZeroMQ (when he saw AMQP dragged into a political nest). Fortunately, I recently found this old post confirming that I wasn't drunk when Pieter explained that to me. Extract:

"the AMQP spec which Red Hat read in 2007 was largely my invention: I dreamed up exchanges and bindings, hammered them into shape with my team and the guys at JPMorganChase, explained how they should work, explained how to extend AMQP with custom exchanges, wrote thousands of pages of design notes, RFCs, and diagrams that finally condensed — by my hand and over three years — into the AMQP spec that Red Hat read in 2007."
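To make the "exchanges and bindings" vocabulary concrete, here is a toy in-memory model of AMQP-style direct routing (a conceptual sketch only, not real broker code):

```python
# Toy model of AMQP routing: publishers send to an *exchange*; *bindings*
# decide which queues receive a copy of each message.
class DirectExchange:
    def __init__(self):
        self.bindings = {}  # routing key -> list of bound queues

    def bind(self, routing_key, queue):
        self.bindings.setdefault(routing_key, []).append(queue)

    def publish(self, routing_key, message):
        # The publisher never addresses a queue directly.
        for queue in self.bindings.get(routing_key, []):
            queue.append(message)

orders, audit = [], []
exchange = DirectExchange()
exchange.bind("order.created", orders)
exchange.bind("order.created", audit)            # one message, several queues
exchange.publish("order.created", {"id": 42})
exchange.publish("order.cancelled", {"id": 43})  # no binding: message dropped

print(orders)  # [{'id': 42}]
print(audit)   # [{'id': 42}]
```

Decoupling publishers from queues through this exchange/binding indirection is precisely the design idea Pieter claims paternity of.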

I have never seen any mention of Pieter's contribution on the AMQP site or in the standard, but I thought it was important here to render unto Caesar the things that are Caesar's.

Sunday, 9 November 2014

Adapt, connect, innovate or die...

Last Tuesday evening, I was invited by Thomson Reuters to attend one of their special events dedicated to innovation in finance.

The location was quite prestigious (i.e. the "Maison Blanche" in Paris) and both the organization and the speakers were truly excellent.

To be honest, I was expecting much more of a marketing and sales speech from Thomson Reuters than what we actually got. But mind you, I am not complaining ;-)

The event started with a short but brilliant 20-minute talk from Navi Radjou. He is an innovation & leadership strategist, and also co-author of the "Jugaad Innovation" book (a frugal and flexible approach to innovation for the 21st century).

We then continued with an excellent 45-minute panel about HFT, algo trading and innovation in finance. The panel was moderated by Guillaume Thouvenel (Executive Coach & IT Emergency Manager), and the speakers were Jean-Marc Bouhelier (CEO, Celoxica), Dominique Ceolin (CEO, ABC Arbitrage), Olivier Martin (« Unified Platform » COO, Thomson Reuters), Philippe Musette-Sykes (Senior Advisor to the Board, Kepler Cheuvreux) and Riadh Zaatour (Quant Analyst, McKay Brothers).

After this panel and a conclusion by Rémy Granville (Thomson Reuters), we all networked around a buffet... in front of the illuminated Eiffel Tower.

Yes the panel was very interesting, but I'd like to focus here on Navi's intervention.

Innovation by Navi RADJOU

The topic was "how to reinvent financial services in the 21st century", and its core subject was really innovation.

Navi first tackled some common CEO beliefs/myths: "Our group is invincible/irremovable", "our group is universal", and... another belief which I can't remember, sorry ;-(


In fact, the average lifetime of a big company has dropped from 75 years (in 1940) to 15 years (in 2014)... Too big to fail? Nah... Navi took a few examples of disruptive effects.

For instance: Marriott & Accor are really big companies in the hotel business. But Airbnb jumped out of the blue in 2008. They are now able to offer more rooms to end users than Accor and Marriott can. They can even do it in some emerging locations where the old groups aren't able to build hotels or take market share.

Music, taxis, ... in fact, the concept of impermanence seems to be generalizing across every business domain.

Wealth management is another digital disruption victim for instance.

Gulliver and Lilliputians ...

Because everything is possible now with crowd-funding. It's a new deal. For everyone (e.g. the recent $41M raised for the RSI video game... 41 million dollars!).


By the way, did you know that Nigeria has emerged as the 3rd most profitable stock market in the world?!?

Navi then tackled the "Universal" belief of CEOs (i.e. "our activity or business model is universal"). His main point was that lots of innovations and disruptive models are emerging from all around the world: Africa, Asia, South America, ...

Let's take M-PESA, for instance. This phone-based money transfer solution has dramatically transformed the lives of Kenyans (no need to take the risk of walking around with money in their pockets anymore). It has also revolutionized the way Kenya does business. Without banks.

According to Navi, there are lots of things to learn from what currently happens in those emerging countries & markets.

So. How to adapt, change and innovate to succeed and survive in that moving world?

Agility advocacy

Agility is a must for all modern companies. Indeed, when everything is changing all the time, companies have to be ready to embrace those changes -quickly enough.

The GAFA threat

Google-Apple-Facebook-Amazon. If you don't change and embrace new technologies & ways to make business, those giants will do it for you.

And there is a pattern here. Google is emblematic, but all those successful companies have built their own ecosystem to speed up & ease any of their new moves now. For financial services (and any other industries), there is probably something to learn from.

Transcend the Business/IT silo to innovate

According to Navi, the solution to innovate and be successful nowadays is to transcend the Business/IT silo. At last, I would say.

Open & connect with the others

Another key to success for companies is to open themselves to others:
  •  To open closer discussions with their clients (through social networks, or via prototypes)
  •  To open up to co-construction with their partners
  •  To open up to co-innovation with their technology suppliers

"Innovation": a modern bias?

Navi also warned us about a bias that most companies have nowadays: they create dedicated teams for innovation, dedicated awards, dedicated labs... Openness is the real path to follow. You have a lab? Why not. But don't let dedicated people play the innovation game alone. Open it to everyone, including external partners.

In the past, innovation came from inside the four walls of the company. Innovation is now happening outside those four walls: among partners, thanks to the active networking that companies need to foster.

Navi then introduced us to the 4 roles of innovation: the Inventor, the Transformer, the Sponsor, the Connector.

IT guys should be the Connectors for innovation. Not the inventors!

Instead of desperately trying to be Inventors, successful IT divisions now act much more as Connectors.

IT guys must inform their business of what's going on with technologies.

You don't need to invent to innovate! Knowledge is NOT power. Finding and sharing knowledge IS power.

And technology watch is not sufficient. We should also watch new usages  for inspiration. This is exactly what BBVA is doing with its Cross-Country Emerging Markets Unit.


Navi left us with a few takeaways:

  1. IT guys must move beyond the pure back-office function and become more of a strategic partner to the business
  2. IT needs to reinvent its relationship with its suppliers, focusing more on (technological) partnerships and co-construction
  3. IT needs to foster and rely on an "Agile Architecture" to sustain its business and to continuously innovate.

Agile Architecture?

Yes, with mostly 3 drivers/qualities:
  1. Simplification
  2. Openness (towards 3rd parties)
  3. Evolution

Talking about simplification, you definitely have to see this!

Yes, I'd like to conclude this post with something that my mate Cyrille had shown me. It's a short, but exceptional TED talk from Yves MORIEUX:

> As work gets more complex, 6 rules to simplify <.

I strongly advise anyone working in a big organization to take 12 minutes to watch it. You won't regret it. A truly engaging & inspiring contribution.

Friday, 22 August 2014

Raising the bar

We have been organizing and attending various kinds of Software Craftsmanship events at work for a while now.

Usually at noon, we leverage our big-company infrastructure (meeting rooms, town halls, ...) to gather more and more people, month after month.

Our objectives: to meet, share, connect, learn, debate, code, hack...

After a few years of difficulties (we faced some issues while our organization was scaling), we have now started to raise the bar collectively by improving our overall mindset and curiosity here and there.

There are still lots of things to improve -mos def- but I have the feeling that we have triggered some kind of snowball effect here. A positive snowball effect.

Let me describe the kinds of events that continue to help us on that journey.

Brown Bag Lunches

The concept of Brown Bag Lunch (a.k.a. BBL) was probably the spark that ignited the fire of that positive dynamic. And this is still a very popular kind of event here. Probably the most popular one.

Introduced within our company by Romain LINSOLAS -our local Huggy Bear- from an original idea of David GAGEOT (a famous French software craftsman), the concept is pretty simple: we contact a speaker -usually an external one- and welcome him for a free talk within our walls during lunch time. It's free and open to every developer (first-come, first-served for seats). The topic may be a craftsmanship technique, an open source library, a NoSQL db, a kind of architecture (e.g. reactive, distributed, lambda, hexagonal), a low-latency middleware, etc.

Everyone comes with his own brown bag (for the sandwich), and the organizer generally pays for the speaker's lunch.

Usually lasting between 1 and 2 hours depending on the topic and the Q&A session, this is the perfect spot for anyone to learn stuff (even for the "I have no time for market watch" guys).

For the speaker, it's a good way to prepare and rehearse before a more official conference (as we did for our DEVOXX sessions). It's also an opportunity for him or her to meet and chat with other craftsmen, or to be the first one the company will call when expertise in that area or product is needed.

Since we have a lot of speakers working within our walls, as well as a crowded audience of developers, we recently started to organize internal BBL sessions (with internal speakers). Some of us are also now "baggers", able to come to your company to talk (as I did with my mate Cyrille DUPUYDAUBY at Betclic, for instance).

If you are in France and want to organize BBLs, don't hesitate to consult the official site to find speakers near your office. You won't regret the experience!

Coding Dojos

Widely known, the coding dojo is an excellent way to learn and discover other concepts or techniques. It's also a great occasion to connect and have lots of fun with other passionate developers.

The concept? We meet at noon (10-20 people in the same meeting room) and try to solve, in an hour or two, a small problem (the code kata) proposed by the chair of the session. Fifteen minutes before the end of the session, we all stop and hold a public retrospective to share and challenge our various approaches. It is also the occasion for the chair to give us more hints about the kata, which he has usually done before (a few times, in other contexts).

Regarding logistics, almost everyone brings a personal laptop. But since we usually pair, even those who didn't bring a laptop can code. Depending on the kata, everyone picks the language of his choice to work it out (Java, C#, JavaScript, F#, Scala, Clojure, Haskell, ...).

First introduced by my friend and eXtreme programmer Philippe BOURGAU, the coding dojos were then truly institutionalized as a regular event by Cyrille MARTRAIRE and Gilles PHILIPPART. Indeed, even if this summer was pretty quiet, last spring there was almost one coding dojo per week (usually nicely chaired by Eric LE MERDY). But I must admit that I miss the fun, creative and very informative sessions chaired by Jean-Laurent DE MORLHON (now working elsewhere). A true mindset inspiration for me when I organized some of them afterwards.

By the way, here is a hint if you want to launch this kind of event within your working environment: don't hesitate to find an incentive to break the ice and get new developers to join the movement.

The chicken and the egg situation

Indeed, someone who has never attended a coding dojo will often be reluctant to join one, but those who have tasted a coding dojo once will usually become recurring attendees.

It's a fact that a coding dojo may at first appear intimidating if you have never attended one before. "Will I be able to pair with guys I don't know yet?", "Will I be able to code as fast as the other attendees?" ... are common questions silently un-asked. But once you've attended a first session, the result is always the same: we all realize that it is not only easy, but truly fun! (and BTW, katas aren't about complex algorithms, nor about any given tech stack you wouldn't know)

To reach some .NET developers who weren't used to contributing to such events (and were probably a little bit shy or anxious, since they had lots of excuses to refuse again and again), my solution was to make a deal with our purchasing department and Microsoft, and to offer temporary MSDN account activations (i.e. the ability to download tons of MS products for free) to every developer attending their first coding dojo.

Whether you are a contractor or an internal employee, you can install Visual Studio on your personal laptop -but only if you attend a coding dojo first. Here, I have to thank again our purchasing division mates and our contact at MS for that. Because with that... no more chicken-and-egg problem for dojo attendance.

I won't detail coding dojos much more -there is already lots of literature in that area- but simply give you some of my favorites: Gilded Rose (legacy code refactoring), the ("office") code carpaccio (my personal adaptation of the code carpaccio kata, i.e. how to slice your work in order to add business value in every 8-minute iteration), Mars Rover (perfect for TDD design), the Cash Register (how to avoid being blocked by fragile tests when business requirements change blazing fast), etc.
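As a taste of what a session produces, here is one possible starting point for the Mars Rover kata (a minimal Python sketch of my own; the kata only specifies behavior, so the shape is up to each pair):

```python
# Minimal Mars Rover: a position, a heading, and commands to turn or move.
HEADINGS = ["N", "E", "S", "W"]  # clockwise order
MOVES = {"N": (0, 1), "E": (1, 0), "S": (0, -1), "W": (-1, 0)}

class Rover:
    def __init__(self, x=0, y=0, heading="N"):
        self.x, self.y, self.heading = x, y, heading

    def execute(self, commands):
        for command in commands:
            if command == "L":    # rotate counter-clockwise
                self.heading = HEADINGS[(HEADINGS.index(self.heading) - 1) % 4]
            elif command == "R":  # rotate clockwise
                self.heading = HEADINGS[(HEADINGS.index(self.heading) + 1) % 4]
            elif command == "M":  # move one step in the current heading
                dx, dy = MOVES[self.heading]
                self.x, self.y = self.x + dx, self.y + dy

rover = Rover()
rover.execute("MMRM")
print(rover.x, rover.y, rover.heading)  # 1 2 E
```

In a dojo, each command would typically be driven by a failing test first -that's what makes this kata "perfect for TDD design".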


Organized by our company and hosted by Ecole 42, this hackfest mobilized lots of developers during an entire weekend (too bad it wasn't during the week), around the theme: "create new kinds of collaboration tools for distributed dev teams". A nice initiative that will surely be reproduced in the future.

Some other ideas we started to launch recently

"Hack da cafeteria"

During a recent discussion with Bernard NOTARIANNI (co-founder of the Paris #XProPa), we had an idea for a new kind of lunch-event.

Indeed, while I was explaining my proposal of organizing dinners in Paris for XP practitioners -wine, food and craftsmen (explanations here, in French)- he stopped me and said: "Nice idea! But why don't we start here, within this company, by hacking the cafeteria at noon to debate development topics?". "That's right! Why don't we?" ... The concept of the Hack da cafeteria event was born.

The concept is pretty simple: we go to the cafeteria with a topic to debate and share with other real practitioners (e.g. "How to convince skeptical devs and project managers to allow pair programming?", "Mob programming: how does it work for you?", "CQRS & event sourcing in action, with concrete cases", "How DDD practices helped us within our projects", etc.).

We split into tables of 6 to lunch and debate the topic of the day. After an hour of lunch-debate, all the tables merge for coffee in order to share their 2 or 3 highlights with the others.

At the end of the day, short minutes may be forwarded to anyone interested in the topic (especially people who are not practitioners yet, or who are located on the other side of the globe ;-)

The very first session is already scheduled and will take place in early September. I'll surely post something about it afterwards.

All those events are easily affordable, and are perfect occasions to break silos and to connect people from different cultures, teams and habits.

"Mate, you have to see this!"

This is another thing to do at lunch time. Something I have wanted to organize for a long time, but that I will actually schedule 2 or 3 weeks from now (I've run my poll, and it seems there is appetite for this too ;-)

The idea: to screen, somewhere at noon, the video of a talk, conference, quickie, ... that strongly impacted one of us and that we want to share with peers. Not only to share its content, but also to debate it all together afterwards.

You don't have time for market watch? Then leverage other mates' best discoveries! Simple, easy, and an opportunity to open some technical debates, even if you can't attend the meetup evening events in Paris (what I call my geek evenings).

I've read about pretty much the same concept in Sandro's book, with tons of tips on logistics when you don't have access to ideal rooms/theaters (even so, I'll try to borrow our official theater at work if this initiative is successful). And we already have lots of videos (and debates) in mind for the first screenings...

And last but not least, I'm proud to introduce you

The Lunch-box mob

How would we properly implement event sourcing to add more business value and audit-trail capabilities to an Order Management System (OMS)? Would we be able to leverage the LMAX Disruptor without being forced to reimplement the entire associated LMAX stack in .NET (i.e. resource pooling, cache-friendly collections, etc.)? How would we implement a Smart Order Routing (SOR) system with decent low-latency and throughput performance, but without sacrificing code readability & maintainability for non-experts? Which programming paradigm should we choose for this kind of reactive system: LMAX Disruptor-based? In-house sequencer-based? Rx? F#? 

... were some of the questions that led Tomasz JASKULA (DDD and F# Paris co-founder) and me to pair at noon with our laptops in order to spike and build stuff (yeah: enough said! time to code ;-)

That's why we created the Lunch-Box GitHub organization and started to work on an open source Simple Order Routing project (a kind of Smart Order Routing financial system, but without the smart algorithm part). The #SORLunchBox project was born! (see details on the project's wiki)
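For readers unfamiliar with the event-sourcing idea mentioned above, it can be sketched in a few lines (illustrative Python of my own, not the actual #SORLunchBox code, which is .NET-based):

```python
# Event sourcing in a nutshell: state is never overwritten; instead we append
# immutable events and rebuild the current state by replaying them.
class Order:
    def __init__(self, events=()):
        self.quantity = 0
        self.status = "NEW"
        self.events = []
        for event in events:  # replaying history == rebuilding state
            self.apply(event)

    def apply(self, event):
        kind, payload = event
        if kind == "OrderPlaced":
            self.quantity = payload
            self.status = "PLACED"
        elif kind == "OrderExecuted":
            self.status = "EXECUTED"
        self.events.append(event)  # the audit trail comes for free

history = [("OrderPlaced", 100), ("OrderExecuted", None)]
order = Order(history)
print(order.status, order.quantity)  # EXECUTED 100
print(len(order.events))             # 2: the full audit trail
```

The appeal for an OMS is exactly that second print: every state the order ever went through remains inspectable, which is the audit-trail capability mentioned in the questions above.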

We have since been joined by some mates who have both the SOR functional knowledge and the desire to collaborate. And so our mob programming crew was born! (with 4 people for 1 keyboard and a big screen)

This is just the beginning, but since we don't work on the same teams, we decided to meet at least twice a week for 2 hours -at noon- to work on it. We split the project into various journeys, and will probably keep a logbook to share what we discover, alongside our code, which is open source.

This kind of experimentation has lots of benefits. It has already taught us various things (functional programming, mob programming organization, etc.) and brings lots of fun too! And who knows: maybe this spike open source project will help other teams, but also help us (through the feedback of people elsewhere).

Experiment, talk, learn, debate, build, and share

What I'm saying, as a conclusion, is that you always benefit from experimenting, talking, learning, debating, building, and sharing. Yes, sharing your passion, your failures and your discoveries is good, and helpful. For you, and for others.

Most of the events and occasions I talked about in this post are free and very easy to organize. Don't wait for your company or structure to help you start. Also, don't expect to have lots of people with you from the start. Start with 2 people, communicate about it, have fun... and let the others join the initiative.

Be the spark that will ignite the fire of Software Craftsmanship within your organization! 

Friday, 20 December 2013

Debunking the stupid myth that performance is a technical concern

I often hear people saying that latencies and response times are not business topics. I strongly disagree with that vision, and I like the punch-line used by Gojko here: “Debunking the stupid myth that performance is a technical concern”.

Indeed, it’s a fact that extra latency or bad response times strongly impact the business. As a customer, I can’t stand waiting ages (even minutes ;-) until someone (a salesman, a web site, …) is able to answer my questions, or to help me quickly buy and check out the product I have already identified myself. This is even more true nowadays, when our expectations have been dramatically raised by our mobile phone usage.

Asking our business questions like “what are your objectives in terms of performance?” is rarely productive. With such an approach, we usually end up with average values for our implementation. Clearly the kind of setup full of implicit assumptions, leading to unclear situations, inappropriate architecture choices, and often crises and strong tensions when we experience fires in production. Sad panda ;-(

On the other hand, by asking our business questions such as “OK, you want an average response time of 1 second, but is it OK for some things to take more than 10 minutes once a day?”, we can start to obtain reactions, deeper involvement, and answers that will help us build and validate the service level expectations we need to leverage on.

By leveraging on, I mean: verified via a continuous performance test harness, and carefully monitored in production (capacity management). A much more mature way of supporting our clients’ business.

I’d like to end this post with a reference to the excellent presentation by Gil TENE: “How NOT to measure latency”. In particular, Gil shares with us the typical kinds of questions he asks his clients in order to establish performance requirements/service level expectations.

The outcome of such an exercise is something like:
  • 50% better than 20msec
  • 90% better than 50msec
  • 99.9% better than 500msec
  • 100% better than 2 seconds
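Such percentile-based expectations are directly verifiable in a continuous performance test harness. A minimal sketch of such a check (illustrative Python of my own, with made-up sample data):

```python
# Check measured latencies against percentile targets such as
# "99.9% better than 500 ms" (i.e. at most 0.1% of samples may exceed 500 ms).
def meets_target(latencies_ms, percentile, limit_ms):
    within = sum(1 for latency in latencies_ms if latency <= limit_ms)
    return within / len(latencies_ms) >= percentile / 100.0

# Hypothetical measurement: 10,000 samples with a long tail
samples = [12] * 9000 + [45] * 900 + [400] * 99 + [1500]

print(meets_target(samples, 50, 20))     # True: at least 50% under 20 ms
print(meets_target(samples, 99.9, 500))  # True: only 1 in 10,000 over 500 ms
print(meets_target(samples, 100, 2000))  # True: worst case is 1.5 s
```

Note that averaging those samples would hide the 1.5-second outlier entirely, which is precisely Gil's point about percentile-based requirements.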

The entire presentation is worth watching, but the typical interview questions are shown from slide #98 onwards.

Very useful…

Saturday, 14 December 2013

Simple Binary Encoding (SBE): the next protocol buffer?

Even if it's still in beta, the SBE ultra-fast marshalling API (available in C++, Java & .NET) seems very promising. Designed from a financial (FIX Trading community) standard, this first implementation is fully open source, and made by -wait for it ...

  • Martin Thompson (former LMAX CTO) 
  • Todd Montgomery (former 29West CTO)
  • and Olivier Deheurles (for the .NET part)
If you want to discover the big picture, the history, and how you can use SBE, check out the nice post written by my friend Olivier.

Another really interesting part to read is the description of the SBE design principles chosen by the authors and published on the SBE wiki. A clearly mechanical sympathy approach ;-)

No doubt this has to be closely followed...

Sunday, 8 December 2013

Enterprise Architecture, TOGAF, and the ol' pitfall of big upfront design...

Executive summary:

Enterprise architecture answers real concerns & challenges for large-scale companies. Having just been TOGAF certified, I don't think this approach, "applied by the book", is the proper answer to those challenges in any situation where software development is concerned. The reasons why are detailed here (note: allow 27 hours of reading :-)


This week, I followed a training course, validated by 2 exams at the end (theory & practice), in order to become TOGAF (9.1) certified. This course, now mandatory for almost every architect in my company, was about Enterprise Architecture (TOGAF stands for The Open Group Architecture Framework).

Wait a minute! What does enterprise architecture stand for?

Good question! And as a DDD practitioner, I think it is worth defining our ubiquitous language for the rest of this post. Especially if you are an IT guy like me, used to building and dealing with software: you should not confuse the Enterprise Architect with what you think a Software/Technical Architect is. We are not talking about the same thing here... Not at all. You should also not confuse Enterprise Architecture with IT Urbanism (a kind of autistic & accountability-less French interpretation of Enterprise Architecture, as explained by our TOGAF trainer ;-)

Before defining Enterprise Architecture, TOGAF defines the Enterprise as "any collection of organizations that has a common set of goals". From a TOGAF perspective, an enterprise has a strategy and some capabilities to support this strategy (e.g. to sell its products in EMEA, to produce its products in China, ...). And capability is a keyword here.

Enterprise Architecture aims to guide organizations in identifying, specifying and assessing the changes necessary to execute their strategies. These changes are usually related to the management of their capabilities (did I tell you before?): whether adding a new capability (e.g. to be present in a new market), changing an existing one (e.g. to comply with a new regulatory constraint) or dismantling an existing one (e.g. to stop the production of a given product).

Of course, since "software is eating the world" now, most of these changes will be supported by software development or integration, but having an Enterprise Architecture capability within a company is also important to support non-IT projects, like adding the "capability to build a production factory for our products in Italy in less than 2 years", for instance...

Ok. That makes sense to me. This being said, time to talk about TOGAF now.

The TOGAF vision

According to TOGAF, allowing companies to execute their strategies in an efficient and safe manner is the job of highly skilled people who don't need to have IT knowledge: the Enterprise Architects.

All they have to do is leverage technical experts from various domains to do their job (note from me: if they think they need to...).

What skills should Enterprise Architects have according to TOGAF? Leadership, teamwork, oral and written communication, logical analysis, risk and stakeholder management. Lots of qualities, huh? Simply too bad that TOGAF doesn't mention humility, empathy, and a continuous learning/improvement mindset...

By the way, I explained the acronym but I haven't explained yet what TOGAF is. TOGAF is a framework providing "the methods and tools for assisting in the acceptance, production, use, and maintenance of an enterprise architecture".

There are 3 main pillars in TOGAF:
  1. The Architecture Development Method (ADM) which is a methodology cycle
  2. The Architecture Content framework (describing the needed artefacts) 
  3. The Architecture Capability framework (i.e. the description of what is needed to have an architecture capability within a company, including the governance aspects). 
But the ADM is clearly the most important part of this huge 633-page document. The ADM describes the 10 possible phases (steps and deliverables included) of every architecture project according to TOGAF.

ADM cycle and phases:

What I liked during this training

I must admit that I really enjoyed the introduction of this 5-day course, with statements coming from the (very smart) instructor like:
  • "Enterprise Architecture considers and supports the enterprise from the angle of its capabilities" (read: use-case driven)
  • "the map is not the territory; we should always stay humble regarding any kind of modeling" 
  • "TOGAF is an Utopia vision, it's mandatory to pragmatically bend this framework to our needs and entreprise's context"
  • "Architects are not here to be simply conceptual: we are here to support people (i.e. our sponsors), and to demonstrate value. We need to deliver! We should not turn ourselves into IT urbanists (i.e. ivory tower guys with no focus on use cases ;-)"
  • "we should always know and remind ourselves who we are working for (i.e. the sponsor of the enterprise architecture initiative)"
  • TOGAF favors a "just enough" approach. If what you are doing is not linked to a requirement, a risk mitigation or an impact, stop working on it! ;-)
  • Non-testable requirements are not acceptable
  • "Using your veto card means that you've already lost as an architect"
  • "There should be no architect within the enterprise architecture board! Indeed, the enterprise architects aren't the big persons (i.e. the executive decision-makers with the money). They are simply here to help the big persons make good decisions regarding the objectives of the company (which they've usually defined beforehand)."
But it's important to notice here that most of those statements came from the instructor's experience, and not directly from TOGAF itself.

Regarding TOGAF now, I also appreciated that its ADM cycle is requirements-centric. The ADM dedicates most of its phases to working on the "architecture" (phases A to D), architecture meaning here "the description of the requirements to be taken into account". The "solution" arrives quite late in the process (mostly from phase E onwards).

Since it's a common pitfall for everyone to be solution-driven instead of use-case driven, I was quite pleased to find that mindset in TOGAF.

The last 2 things I really appreciated within TOGAF were:
  1. The clear expression of the business/architecture principles (including their implications)
  2. The requirements related to good governance, including the importance of the transparency of the architecture board's decisions (minutes) to all the stakeholders (everyone within the company)

What bothers me about TOGAF? It's clearly not compatible with agile and continuous delivery approaches...

Not really surprising, since TOGAF was initiated in the early 1990s. But even if TOGAF indicates that almost all phases of architecture projects should be executed and reviewed iteratively, I fully disagree with its big upfront design approach. Indeed, the TOGAF ADM cycle is basically saying: Enterprise Architects will specify and prepare everything BEFORE the implementation project is launched and handed over to the involved PMO and dev teams.

That doesn't match at all with the agile approach, which says: confront your vision with reality and gather feedback on your work as soon as possible... (remember: the map is not the territory...)

All the phases, steps and deliverables indicated within the TOGAF ADM cycle really reminded me of the old Unified Process (UP). For those who are too young ;-) UP was the 1990s answer to the major tunnel effect of the earlier waterfall software methodologies. OK, UP was far better than the waterfall-based processes, but its bureaucratic pitfalls were one of the reasons why the agile manifesto was created at the time (too many deliverables per UP phase and iteration).

Also, the segregation TOGAF makes between the data & application concepts (see phase C) doesn't seem relevant to me. It reminds me of the obsolete French Merise method ;-) In my opinion, considering the data apart from the applications' use cases is the best way to fall into the pit of the "mini-model the world" anti-pattern, i.e. the kind of situation where people lose their time on useless data/topics, simply to improve parts of the model that won't even be used afterwards (on that topic, see the excellent French presentation by Gregory Weinbach).

There are some good ideas within TOGAF (most of them not new), but it clearly doesn't match today's ways of building IT solutions (whether agile, lean, continuous delivery or lean start-up).

For non-IT needs on the other hand (e.g. "to build a factory from scratch in another country"), TOGAF seems quite applicable & interesting.

But the worst part for me is...

... the caricatural description of the actors involved that you can find within TOGAF (see the matrix presented in chapter 52.5, or the extract below). Basically, it explains that the Enterprise Architects have every possible skill you can imagine. Even far more skills than the architecture board members (probably not the smartest communication move by the Open Group guys... 'Cause remember? The architecture board members are the executive directors of the company who sponsor the enterprise architecture initiative ;-)

And finally, Enterprise Architects have much, much sharper skills - of course - than the lame "IT designers" (solution architects? tech leads?). Luckily for the Enterprise Architects' ratings, TOGAF doesn't retain "humility" as a criterion... ;-)

By the way, this lack-of-humility pitfall reminds me of an excellent old tweet from Martin Thompson (probably one of the finest technical architects you can find), saying:

How to be an architect:
1. Stop programming
2. Stop learning
3. Convince yourself nothing has changed since you did #1 and #2.

And finally, even if it's highly recommended to bend TOGAF and the ADM to your context and organization, the huge number of outputs, tools and steps per phase (10 phases at most, with 8-15 steps per phase) can clearly lead us into a dogmatic and bureaucratic hell if the people in charge of setting up the architecture capability within our enterprises are not mature or pragmatic enough.


More than TOGAF itself, the practice of enterprise architecture seems key to ensuring that large-scale organizations are able to align themselves with their strategy and face the challenges related to the management of their capabilities.

TOGAF has some good ideas, but it is intrinsically not adapted to agile best practices (nor to continuous delivery, for instance). Thus, we will need to find out how to bend TOGAF hard enough to make it continuous-delivery compliant, or - more likely - to find something other than TOGAF for our modern enterprise architecture practice.

But whatever the methodology, and since the ivory tower syndrome threatens every architect, it seems crucial to me to set up some risk mitigation/protection mechanisms within those organizations, in order to avoid situations where architects without a continuous improvement mindset (and unaware that they need to seek assistance from other people's expertise) are the ones holding the keys to achieving the enterprise's objectives.

When a developer stops learning and improving himself as a professional, bad things happen. But when an architect falls into the same laziness, VERY bad things happen ;-(

Wanna start brainstorming to find solutions to those questions?