Sunday, 9 November 2014

Adapt, connect, innovate or die...

Last Tuesday evening, I was invited by Thomson Reuters to attend one of their special events dedicated to innovation in finance.

The location was quite prestigious (i.e. the "Maison Blanche" in Paris) and both the organization and the speakers were truly excellent.


To be honest, I was expecting far more marketing and sales talk from Thomson Reuters than what we actually got. Mind you, I am not complaining ;-)

The event started with a short but brilliant 20-minute talk by Navi Radjou. He is an innovation & leadership strategist, and also co-author of the book "Jugaad Innovation" (a frugal and flexible approach to innovation for the 21st century).

We then continued with an excellent 45-minute panel about HFT, algo trading and innovation in finance. This panel was moderated by Guillaume Thouvenel (Executive Coach & IT Emergency Manager), and the speakers were Jean-Marc Bouhelier (CEO, Celoxica), Dominique Ceolin (CEO, ABC Arbitrage), Olivier Martin (« Unified Platform » COO, Thomson Reuters), Philippe Musette-Sykes (Senior Advisor to the Board, Kepler Cheuvreux) and Riadh Zaatour (Quant Analyst, McKay Brothers).

After this panel and a conclusion by Rémy Granville (Thomson Reuters), we all networked around a buffet... in front of the illuminated Eiffel Tower.

Yes, the panel was very interesting, but I'd like to focus here on Navi's talk.


Innovation by Navi RADJOU


The topic was "how to reinvent financial services in the 21st century". And its core theme was really innovation.

Navi first tackled some common CEO beliefs/myths: "Our group is Invincible/Irremovable, our group is Universal, and" ... another belief which I can't remember here, sorry ;-(

Irremovable?


In fact, the average lifetime of a big company has dropped from 75 years (in 1940) to 15 years (in 2014)... Too big to fail? Nah... Navi gave a few examples of disruptive effects.

For instance: Marriott & Accor are really big companies in the hotel business. But Airbnb jumped out of the blue in 2008. They are now able to provide more rooms to end-users than Accor and Marriott can. They can even do it in some emerging locations, where the ol' groups aren't able to build hotels or take market share.

Music, taxis, ... in fact, the concept of impermanence seems to be spreading to every business domain.

Wealth management, for instance, is another victim of digital disruption.

Gulliver and Lilliputians ...


Because everything is possible now with crowdfunding. It's a new deal. For everyone (e.g. the recent $41M raised for the RSI video game... 41 million dollars!).

Universal?


By the way, did you know that Nigeria has emerged as the 3rd most profitable stock market in the world?!?

Navi tackled the "Universal" belief of CEOs (i.e. our activity or business model is universal). His main point was that there are lots of innovations and disruptive models emerging from all around the world: Africa, Asia, South America, ...

Let's take M-PESA for instance. This phone-based money transfer solution has dramatically transformed the lives of Kenyans (no need to risk walking around with cash in their pockets anymore). It has also revolutionized the way Kenya does business. Without banks.

According to Navi, there are lots of things to learn from what currently happens in those emerging countries & markets.


So. How to adapt, change and innovate to succeed and survive in that moving world?

Agility advocacy


Agility is a must for all modern companies. Indeed, when everything is changing all the time, companies have to be ready to embrace those changes quickly enough.

The GAFA threat


Google-Apple-Facebook-Amazon. If you don't change and embrace new technologies & ways to make business, those giants will do it for you.

And there is a pattern here. Google is emblematic, but all those successful companies have built their own ecosystems to speed up and ease every new move they make. For financial services (and any other industry), there is probably something to learn from that.

Transcend the Business/IT silo to innovate


According to Navi, the solution to innovate and be successful nowadays is to transcend the Business/IT silo. At last, I would say.

Open & connect with the others


Another key to success for companies is to open themselves to others:
  •  To open direct discussions with their clients (through social networks, or via prototypes)
  •  To open up to co-construction with their partners
  •  To open up to co-innovation with their technology suppliers

"Innovation": a modern bias?


Navi also warned us about a bias that most companies have nowadays: they create dedicated teams for innovation, dedicated awards, dedicated labs... Openness is the real path to follow. You have a lab? Why not. But don't let dedicated people play the innovation game alone. Open it to everyone, including external partners.

In the past, innovation came from inside the four walls of the company. Innovation is now happening outside those four walls. It's happening among partners, thanks to the active networking that companies need to foster.

Navi then introduced the 4 roles of innovation to us: the Inventor, the Transformer, the Sponsor, the Connector.



IT guys should be the Connectors for innovation. Not the inventors!


Instead of desperately trying to be Inventors, successful IT divisions now act much more as Connectors.

IT guys must keep their business informed of what's going on in technology.

You don't need to invent to innovate! Knowledge is NOT power. Finding and sharing knowledge IS power.

And technology watch is not sufficient. We should also watch new usages for inspiration. This is exactly what BBVA is doing with its Cross-Country Emerging Markets Unit.


Takeaways

Navi left us with a few takeaways:

  1. IT guys must move beyond a pure back-office function and become a more strategic partner to their business
  2. IT needs to reinvent its relationship with its suppliers, focusing more on (technological) partnerships and co-construction
  3. IT needs to foster and rely on an "Agile Architecture" to sustain its business and to continuously innovate.

Agile Architecture?

Yes, with mostly 3 drivers/qualities:
  1. Simplification
  2. Openness (towards 3rd parties)
  3. Evolution


Talking about simplification, you definitely have to see this!


Yes, I'd like to conclude this post with something that my mate Cyrille had shown me. It's a short, but exceptional TED talk from Yves MORIEUX:

> As work gets more complex, 6 rules to simplify <.

I strongly advise anyone working in a big organization to take 12 minutes to watch it. You won't regret it. A truly engaging & inspiring contribution.




Friday, 22 August 2014

Raising the bar

For a while now, we have been organizing and attending various kinds of Software Craftsmanship events at work.

Usually at noon, we leverage our big-company infrastructure (meeting rooms, town halls, ...) to gather more and more people, month after month.

Our objectives: to meet, share, connect, learn, debate, code, hack...


After a few years of difficulties (we faced some issues while our organization was scaling), we have now started to raise the bar collectively by improving our overall mindset and curiosity here and there.

There are still lots of things to improve -mos def- but I have the feeling that we have triggered some kind of snowball effect here. A positive snowball effect.

Let me describe the kinds of events that continue to help us on that journey.


Brown Bag Lunches

The concept of Brown Bag Lunch (a.k.a. BBL) was probably the spark that ignited the fire of that positive dynamic. And this is still a very popular kind of event here. Probably the most popular one.


Introduced within our company by Romain LINSOLAS -our local Huggy Bear- from an original idea of David GAGEOT (famous French Software Craftsman), the concept is pretty simple: we contact a speaker -usually an external one- and we welcome him for a free talk within our walls during lunch time. It's free and open to every developer (first-come, first-served policy for seats). The topic may be a craftsmanship technique, an open source library, a NoSQL db, a kind of architecture (e.g. reactive, distributed, lambda, hexagonal), a low-latency middleware, etc.

Everyone comes with their own brown bag (for the sandwich), and the organizer generally pays for the speaker's lunch.



Usually lasting between 1 and 2 hours depending on the topic and the Q&A session, this is the perfect spot for anyone to learn stuff (even the "I have no time for market watch" guys).



For the speaker, it's a good way to prepare and rehearse before a more official conference (like we did for our DEVOXX sessions). It's also the opportunity for him or her to meet and chat with other craftsmen, or to be the first one the company will call when expertise in that area or product is needed.


Since we have a lot of speakers working within our walls, but also a crowded audience of developers, we recently started to organize internal BBL sessions (with internal speakers). Some of us are also now "baggers", capable of coming to your company to talk (as I did with my mate Cyrille DUPUYDAUBY at Betclic, for instance).


If you are in France and want to organize BBLs, don't hesitate to consult the official site: http://www.brownbaglunch.fr/ to find speakers near your office. You won't regret the experience!

Coding Dojos

Widely known, the coding dojo is an excellent way to learn and discover other concepts or techniques. It's also a great occasion to connect and have lots of fun with other passionate developers.

The concept? We meet at noon (10-20 people in the same meeting room) and we try to solve, in an hour or two, a small problem (the code kata) proposed by the chair of the session. Fifteen minutes before the end, we all stop and hold a public retrospective to share and challenge our various approaches. It's also the occasion for the chair to give us more hints about the kata, which he has usually done before (a few times, in other contexts).


Regarding logistics, almost everyone brings a personal laptop. But since we usually pair, even those without a laptop can code. Depending on the kata, everyone picks the language of their choice to work it out (Java, C#, JavaScript, F#, Scala, Clojure, Haskell, ...).

First introduced by my friend and eXtreme Programmer Philippe BOURGAU, coding dojos were then truly institutionalized as a regular event by Cyrille MARTRAIRE and Gilles PHILIPPART. Indeed, even if this summer was pretty quiet, last spring there was almost one coding dojo per week (usually nicely chaired by Eric LE MERDY). But I must admit that I miss the fun, creative and very informative sessions chaired by Jean-Laurent DE MORLHON (now working elsewhere): a true mindset inspiration for me when I organized some of them afterwards.

By the way, here is a hint if you want to launch this kind of event within your working environment: don't hesitate to find an incentive to break the ice and get new developers to join the movement.

The chicken and the egg situation


Indeed, someone who has never attended a coding dojo will often be reluctant to join one, but those who have tasted a coding dojo once will usually become recurring attendees.



It's a fact: a coding dojo may at first appear intimidating if you have never attended one before. Will I be able to pair with guys I don't know yet? Will I be able to code as fast as the other attendees? ... are common questions silently left unasked. But once you've attended a first session, the result is always the same: we all realize that it is not only easy, but truly fun! (And by the way, katas aren't about complex algorithms, nor about some given tech stack you wouldn't know.)

To reach some .NET developers who were not used to contributing to such events (and probably a little shy or anxious, since they had lots of excuses to refuse again and again), my solution was to make a deal with our purchasing department and Microsoft, and to offer temporary MSDN account activations (i.e. the ability to download tons of MS products for free) to every developer attending their first coding dojo.

Whether you are a contractor or an internal employee, you can install Visual Studio on your personal laptop, but only if you attend a coding dojo first. Here, I have to thank our purchasing division mates and our contact at MS again for that. Because with that in place... no more chicken-and-egg problem for dojo attendance.



I won't go into much more detail on coding dojos -there is already a lot of literature in that area- but simply give you some of my favorites: Gilded Rose (legacy code refactoring), the ("office") Code Carpaccio (my personal adaptation of the code carpaccio kata, i.e. how to slice your work in order to add business value in every 8-minute iteration), Mars Rover (perfect for TDD design), the Cash Register (how to avoid being blocked by fragile tests when business requirements change blazingly fast), etc.

Hackathons

Organized by our company and hosted by Ecole42, this hackfest mobilized lots of developers during an entire weekend around the theme: "create new kinds of collaboration tools for distributed dev teams". A nice initiative that will surely be repeated in the future.






Some other ideas we started to launch recently



"Hack da cafeteria"

During a recent discussion with Bernard NOTARIANNI (co-founder of the Paris #XProPa), we had an idea for a new kind of lunch-event.

Indeed, while I was explaining my proposal to organize dinners in Paris for XP practitioners -wine, food and craftsmen (explanations here in French)- he stopped me and said: "nice idea! But why don't we start here, within this company, by hacking the cafeteria at noon to debate development topics?". "That's right! Why don't we?" ... The concept of the Hack da cafeteria event was born.


The concept is pretty simple: we go to the cafeteria with a topic to debate and share with other real practitioners (e.g. "How to convince skeptical devs and project managers to allow pair programming?", "Mob programming: how does it work for you?", "CQRS & event sourcing in action with concrete cases", "How DDD practices helped us within our projects", etc.).

We split ourselves into tables of 6 to lunch and debate the topic of the day. After an hour of lunch-debate, all the tables merge for coffee in order to share their 2 or 3 highlights with the others.

At the end of the day, short minutes may be forwarded to anyone interested in the topic (especially people who are not practitioners yet, or who are located on the other side of the globe ;-)

The very first session is already scheduled and will take place in early September. I'll surely post something about it afterwards.


All those events are easily affordable, and are perfect occasions to break silos and to connect people from different cultures, teams and habits.



"Mate, you have to see this!"

... is another thing to do at lunchtime. Something I have wanted to organize for a long time, but that I will actually schedule 2 or 3 weeks from now (I've run my poll, and it seems there is appetite for this too ;-)

The idea: to screen, somewhere at noon, the video of a talk, conference session, quickie, ... that strongly impacted one of us and that we want to share with peers. Not only to share its content, by the way, but also to debate it all together afterwards.



You don't have time for market watch? Leverage your mates' best discoveries! Simple, easy, and an opportunity to open some technical debates, even if you can't attend evening meetup events in Paris (what I call my geek evenings).

I've read about pretty much the same concept in Sandro's book, with tons of tips for the logistics when you don't have access to ideal rooms/theaters (even if I'll try to borrow our official theater at work, if this initiative is successful). And we already have lots of videos (and debates) in mind for the first screenings...


And last but not least, I'm proud to introduce to you...


The Lunch-box mob

How would we properly implement event sourcing to add more business value and audit-trail capabilities to an Order Management System (OMS)? Would we be able to leverage the LMAX Disruptor without being forced to reimplement all the associated LMAX stacks in .NET (i.e. resource pooling, cache-friendly collections, etc.)? How would we implement a Smart Order Routing system (SOR) with decent low-latency and throughput performance, but without sacrificing code readability & maintainability for non-experts? Which programming paradigm should we choose for this kind of reactive system: LMAX Disruptor-based? In-house Sequencer-based? Rx? F#?

... were some of the questions that led Tomasz JASKULA (co-founder of the DDD and F# Paris groups) and me to pair at noon with our laptops in order to spike and build stuff (yeah: enough said! Time to code ;-)

That's why we created the Lunch-Box GitHub organization and started to work on an open source Simple Order Routing system (a kind of Smart Order Routing financial system, but without the smart algorithm part). The #SORLunchBox project was born! (see details on the project's wiki)





Since then, we've been joined by some mates who have both the SOR functional knowledge and the desire to collaborate. And so our mob programming crew was born! (4 people for 1 keyboard and a big screen)

This is the very beginning, but since we don't work on the same teams, we decided to meet at least twice a week for 2 hours -at noon- to work on it. We split our project into various journeys, and will probably keep logbooks to spread what we discover, alongside our open-sourced code.

This kind of experimentation has lots of benefits. It has already taught us various things (functional programming, mob programming organization, etc.) and brings lots of fun too! And who knows: maybe this open source spike project will help other teams, but also help us (through the feedback of people elsewhere).


Experiment, talk, learn, debate, build, and share


What I'm saying, as a conclusion, is that there are always lots of benefits in experimenting, talking, learning, debating, building, and sharing. Yes, sharing your passion, your failures and your discoveries is good and helpful. For you, and for others.

Most of the events and occasions I talked about in this post are free and very easy to organize. Don't wait for your company or organization to help you start. Also, don't expect to have lots of people with you before starting to do things. Start with 2 people, communicate about it, have fun... and let the others join the initiative.


Be the spark that will ignite the fire of Software Craftsmanship within your organization! 





Friday, 20 December 2013

Debunking the stupid myth that performance is a technical concern

I often hear people saying that latencies and response times are not business topics. I strongly disagree with that vision, and I like the punch-line used by Gojko here: “Debunking the stupid myth that performance is a technical concern”.

Indeed, it's a fact that extra latency or bad response times strongly impact the business. As a customer, I can't stand waiting ages (even minutes ;-) until someone (a salesman, a web site, ...) is able to answer my questions, or to help me quickly buy and check out a product I have already identified myself. It's all the more true nowadays, when our expectations have been dramatically raised by our mobile phone usage.

Asking our business questions like: "what are your objectives in terms of performance?" is rarely productive. With such an approach, we usually end up with average values for our implementation. Clearly the kind of setup loaded with implicit assumptions, leading to unclear situations, inappropriate architecture choices, and often crises and strong tensions when we experience fires in production. Sad panda ;-(

On the other hand, by asking our business questions such as "OK, you want an average response time of 1 second, but is it OK for some things to take more than 10 minutes once a day?", we can start to obtain reactions, deeper involvement, and answers that will help us build and validate the service level expectations we need to rely on.

By rely on, I mean: verified via a continuous performance test harness, and carefully monitored in production (capacity management). A much more mature way of supporting our clients' business.

I'd like to end this post with a reference to the excellent presentation by Gil TENE: "How NOT to measure latency". In particular, Gil shares the typical questions he asks his clients in order to establish performance requirements/service level expectations.

The outcome of such an exercise is something like:
  • 50% better than 20msec
  • 90% better than 50msec
  • 99.9% better than 500msec
  • 100% better than 2 seconds
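
Such a specification is also directly actionable. Here is a minimal sketch (illustrative TypeScript, not from Gil's talk) of how these percentile targets could be checked by a continuous performance test harness:

```typescript
// Percentile-based service level expectation (the figures above).
interface PercentileTarget { percentile: number; maxMillis: number; }

const targets: PercentileTarget[] = [
  { percentile: 0.50,  maxMillis: 20 },
  { percentile: 0.90,  maxMillis: 50 },
  { percentile: 0.999, maxMillis: 500 },
  { percentile: 1.00,  maxMillis: 2000 },
];

// True when every target holds for the measured latencies of a test run.
function meetsTargets(latenciesMillis: number[], targets: PercentileTarget[]): boolean {
  const sorted = [...latenciesMillis].sort((a, b) => a - b);
  return targets.every(({ percentile, maxMillis }) => {
    // Nearest-rank percentile: the sample below which 'percentile' of values fall.
    const index = Math.min(sorted.length - 1, Math.ceil(percentile * sorted.length) - 1);
    return sorted[index] <= maxMillis;
  });
}

// Example with a tiny sample of measured latencies (in ms):
console.log(meetsTargets([5, 10, 15, 30, 45], targets)); // true
```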

The entire presentation is worth watching, but the typical interview questions are shown from slide #98.

Very useful…

Saturday, 14 December 2013

Simple Binary Encoding (SBE): the next protocol buffer?

Even if it's still in beta, the SBE ultra-fast marshalling API (available in C++, Java & .NET) seems very promising. Designed from a financial standard (from the FIX Trading community), this first implementation is fully open source, and made by -wait for it...

  • Martin Thompson (former LMAX CTO) 
  • Todd Montgomery (former 29West CTO)
  • and Olivier Deheurles (for the .NET part)
If you want to discover the big picture, the history, and how you can use SBE, check out the nice post written by my friend Olivier.

Another really interesting part to read is the description of the SBE design principles chosen by the authors and published on the SBE wiki. A clear mechanical-sympathy approach ;-)

No doubt this has to be closely followed...

Sunday, 8 December 2013

Enterprise Architecture, TOGAF, and the ol' pitfall of big upfront design...

Executive summary:

Enterprise architecture answers real concerns & challenges for large-scale companies. Having just been TOGAF certified, I don't think this approach "applied by the book" is the proper answer to those challenges in any situation where software development is concerned. The reasons why are detailed here (note: allow 27 hours of reading :-)

----------

This week, I followed a training course validated by 2 exams at the end (theory & practice) in order to become TOGAF (9.1) certified. This course, now mandatory for almost every architect in my company, was about Enterprise Architecture (TOGAF stands for The Open Group Architecture Framework).


Wait a minute! What does enterprise architecture stand for?

Good question! And as a DDD practitioner, I think it's worth defining our ubiquitous language for the rest of this post. Especially if you are an IT guy like me, used to building and dealing with software: you should not confuse the Enterprise Architect with what you think a Software/Technical Architect is. We are not talking about the same things here... Not at all. You should also not confuse Enterprise Architecture with IT Urbanism (a kind of insular & accountability-free French interpretation of Enterprise Architecture, as explained by our TOGAF trainer ;-)

Before defining Enterprise Architecture, TOGAF defines the Enterprise as "any collection of organizations that has a common set of goals". From a TOGAF perspective, an enterprise has a strategy and some capabilities to support this strategy (e.g. to sell its products in EMEA, to produce its products in China, ...). And capability is a keyword here.

Enterprise Architecture aims to guide organizations in identifying, specifying and assessing the changes necessary to execute their strategies. These changes usually relate to the management of their capabilities (did I tell you before?): whether adding a new capability (e.g. to be present in a new market), changing an existing one (e.g. to comply with a new regulatory constraint) or dismantling an existing one (e.g. to stop the production of a given product).

Of course, since "software is eating the world" now, most of these changes will be supported by software development or integration, but having an Enterprise Architecture capability within a company is also important to support non-IT projects, like adding the "capability to build a production factory for our products in Italy in less than 2 years", for instance...

Ok. That makes sense to me. This being said, time to talk about TOGAF now.


The TOGAF vision

According to TOGAF, allowing companies to execute their strategies in an efficient and safe manner is the job of highly skilled people who don't need to have IT knowledge: the Enterprise Architects.

All they have to do is leverage technical experts from various domains to do their job (note from me: if they think they need to...).

What skills should Enterprise Architects have according to TOGAF? Leadership, teamwork, oral and written communication, logical analysis, risk and stakeholder management. Lots of qualities, huh? Simply too bad that TOGAF doesn't mention humility, empathy, or a continuous learning/improvement mindset...

By the way, I explained the acronym but I haven't explained yet what TOGAF is. TOGAF is a framework including "the methods and tools for assisting in the acceptance, production, use, and maintenance of an enterprise architecture".

There are 3 main pillars in TOGAF:
  1. The Architecture Development Method (ADM) which is a methodology cycle
  2. The Architecture Content framework (describing the needed artefacts) 
  3. The Architecture Capability framework (i.e. the description of what is needed to have an architecture capability within a company, including the governance aspects). 
But ADM is clearly the most important part of this huge document of 633 pages. ADM describes the 10 possible phases (steps and deliverables included) of every architecture project according to TOGAF.

ADM cycle and phases:





What I liked during this training

I must admit that I really enjoyed the introduction of this 5 days course, with statements coming from the (very smart) instructor like: 
  • "Entreprise Architecture considers and supports the entreprise under the angle of its capabilities" (understand: use case driven)
  • "the map is not the territory; we should always stay humble regarding any kind of modeling" 
  • "TOGAF is an Utopia vision, it's mandatory to pragmatically bend this framework to our needs and entreprise's context"
  • "Architects are not here to be simply conceptual: we are here to support people (i.e. our sponsors), and to demonstrate value. We need to deliver! We should not turn ourselves into IT urbanists (i.e. use-case-focusless ivory tower guys ;-)"
  • "we should always know and remind us who we are working for (i.e. the sponsor of the entreprise architecture initiative)"
  • TOGAF favors a "just enough" approach. If what you are doing is not linked to a requirement,  a risk mitigation or an impact. Stop working on it! ;-)
  • The non-testable requirements are not acceptable
  • "Using our veto card means that you've already lost as an architect"
  • "There should be no architect within the entreprise architecture board! Indeed, the entreprise architects aren't the big persons (i.e. the executive decision takers with money). They are simply here to help big persons to take the good decisions regarding the objectives of the company (they've usually defined before)."
But it's important to note that most of these statements came from the instructor's experience, not directly from TOGAF itself.

Regarding TOGAF itself now, I also appreciated that its ADM cycle is requirements-centric. ADM dedicates most of its phases to working on the "architecture" (from phase A to D), architecture meaning here "the description of the requirements to be taken into account". The "solution" arrives quite late in the process (mostly from phase E).

Since it's a common pitfall for everyone to be solution-driven instead of use-case driven, I was quite pleased to find that mindset in TOGAF.

The last 2 things I really appreciated within TOGAF were:
  1. The clear expression of the business/architecture principles (including their implications)
  2. The requirements related to good governance, including the importance of the transparency of architecture board decisions (minutes) to all the stakeholders (everyone within the company)


What bothers me about TOGAF? It's clearly not compatible with agile and continuous delivery approaches...

Not really surprising, since TOGAF was initiated in the early 1990s. But even if TOGAF indicates that almost all phases of an architecture project should be executed and reviewed iteratively, I fully disagree with its big upfront design approach. Indeed, the TOGAF ADM cycle basically says: Enterprise Architects will specify and prepare everything BEFORE the implementation project is launched and handed over to the involved PMO and dev teams.

That doesn't match at all with the agile approach, which says: confront your vision with reality and gather feedback on your work as soon as possible... (remember: the map is not the territory...)

All the phases, steps and deliverables indicated within the TOGAF ADM cycle really reminded me of the old Unified Process (UP). For those who are too young ;-) UP was the 1990s answer to the major tunnel effect of the earlier waterfall software methodologies. OK, UP was far better than the waterfall-based processes, but its bureaucratic pitfalls were one of the reasons the agile manifesto was created at the time (too many deliverables per UP phase and iteration).

Also, the segregation TOGAF makes between the data & application concepts (see phase C) doesn't seem relevant to me. It reminds me of the obsolete French Merise method ;-) In my view, considering the data apart from the applications' use cases is the best way to fall into the pit of the "mini-model the world" anti-pattern, i.e. the kind of situation where people lose their time on useless data/topics, simply to improve parts of the model that won't even be used afterwards (on that topic, see the excellent French presentation by Gregory Weinbach).


There are also some good ideas within TOGAF (most of them not new), but it clearly doesn't match today's ways of building IT solutions (whether agile, lean, continuous delivery or lean startup).

For non-IT needs on the other hand (e.g. "to build a factory from scratch in another country"), TOGAF seems quite applicable & interesting.


But the worst part for me is...

... the caricatural description of the actors involved that you can find in TOGAF (see the matrix presented in chapter 52.5, or the extract below). Basically, it explains that Enterprise Architects have every possible skill you can imagine. Even far more skills than the architecture board members (probably not the smartest communication move by the Open Group guys... Because remember? The architecture board members are the executive directors of the company who sponsor the enterprise architecture initiative ;-)

And finally, Enterprise Architects have much, much sharper skills -of course- than the lowly "IT designers" (solution architects? tech leads?). Luckily for the Enterprise Architects' ratings, TOGAF doesn't retain "humility" as a criterion... ;-)



By the way, this lack-of-humility pitfall reminds me of an excellent old tweet from Martin Thompson (probably one of the finest technical architects you can find), saying:

How to be an architect:
1. Stop programming
2. Stop learning
3. Convince yourself nothing has changed since you did #1 and #2.
 
;-) 


And finally, even if it's highly recommended to adapt TOGAF and ADM to your context and organization, the huge number of outputs, tools and steps per phase (10 phases at most, with 8-15 steps per phase) can clearly lead us into a dogmatic and bureaucratic hell if the people in charge of setting up the architecture capability within our enterprises are not mature or pragmatic enough.


Conclusion

More than TOGAF itself, the practice of enterprise architecture seems key to ensuring that large-scale organizations are able to align themselves with their strategy and to face the challenges related to the management of their capabilities.

TOGAF has some good ideas, but is intrinsically not adapted to agile best practices (nor continuous delivery, for instance). Thus, we will need to find out how to bend TOGAF hard enough to make it continuous-delivery compliant, or -more likely- to find something other than TOGAF for our modern enterprise architecture practice.

But whatever the methodology, and since the ivory tower syndrome threatens every architect, it seems crucial to me to set up some risk mitigation/protection mechanisms within those organizations, in order to avoid situations where architects without a continuous improvement mindset (and unaware that they need to seek assistance from other people's expertise) are the ones who hold the keys to achieving the enterprise's objectives.

When a developer stops learning things and improving as a professional, bad things happen. But when an architect falls into the same laziness, VERY bad things happen ;-(

Wanna start brainstorming to find solutions to those questions?

Saturday, 5 October 2013

The (French) case against the NIH syndrome

Warning: the stunts in this presentation were performed by professionals; do not, under any circumstances, try to reproduce them at home...

That is, in essence, the closing comment I would have liked to read about the very pleasant evening spent at our friends from ABC Arbitrage, for a top-flight Alt.NET meetup entitled "SOA and Service Bus", this Thursday, 3 October.

Why? Because that evening, I really had the feeling of going completely against the current of the general enthusiasm around the messages delivered by the speakers (though it must be said that the excellent quality of their presentations, as well as their very friendly mindset, must have helped).

But before detailing the reasons for my skepticism here, let's first talk about the things I liked:

  • Their pragmatic SOA. In short, what matters with SOA is not using this or that protocol, or this or that message format; what matters first is thinking of your information system in terms of cohesive building blocks providing services with loose coupling (SOA bringing the same benefits at the architecture level as encapsulation does at the OOP level)
  • Their progressive and pragmatic split of their former monolithic platform into a distributed, service-oriented model. You start with one service, you gather feedback and experience... and you continue little by little.
  • Their attention to reducing technical debt as they go, project after project
  • The coherence of their ecosystem and of their APIs, which seemingly aim for simplicity (the "pit of success" approach, which is also one of my hobby-horses ;-) The fact that this coherent ecosystem eases the integration and productivity of newcomers (who are generally given, as a first task, the implementation of tools completing the offering)
  • Their use of open source products, and their "polyglot persistence", i.e. not limiting themselves to a single persistence strategy (i.e. SQL Server), depending on their needs.
  • The quality of their slides and of their presentations that evening. Very pleasant and very effective for the audience.
  • The positive, open, dynamic and very friendly mindset of the whole ABC crew

That's already quite a lot of good things, you may say ;-)

Right. Now let's get to what I liked less:

  • The omnipresence of technical challenges, and a near-total absence of the business in their speech. I don't recall hearing about their users, the interactions with them, the challenges they had to address, etc. As a result, it reinforced the impression of a dev team having a bit of fun on its own (consistent with the NIH syndrome I'll come to later). I'm not saying that's their case -no trial of intentions here- I'm just saying it reinforced that impression in me during their presentations. Because even if I love technology and particularly appreciate mechanical sympathy, I have a really hard time with it when it is not driven by real use cases
  • The vagueness of what can justify the creation of a service in their shop (what are the criteria for creating/adding one). This point, combined with the atrophied place of the business in their speech, almost gave me the impression that they were always talking about "Windows services" and not "services with a functional scope" (although the latter was probably the case, in my opinion). I think that walking through one or two concrete examples of what a service was for them could have given more depth and less ambiguity to their message (at least for me).
  • The naivety of certain choices: like Cassandra for "reliable" persistence. Just because Cassandra is set up as a cluster doesn't mean it is reliable and consistent (on this subject, I invite you to read Aphyr's recent post which explains all this, and to patch your Cassandra versions as soon as possible while you're at it ;-)

Now let's get to what I did not like at all (not at all ;-):

  • Worse than justifying it: the evangelization speech around the Not Invented Here (NIH) syndrome (dressed up here with the adjective 'pragmatic'). As a reminder, it is the tendency of a company or a team to re-develop something that already exists, on the pretext that it was not designed or built in-house. That ABC Arbitrage's particular context (size, budget, quality of its IT) led them to make this or that architecture or implementation choice is understandable (and debatable too, given the quasi-systematic nature of the approach there: hosting, deployment, monitoring, middleware, ...). But promoting this approach with a certain naivety; I was very, very uncomfortable with that speech. Why do I talk about naivety here?!? Well, it's related to the 2 following points:
  • Understating the total cost of such an undertaking (i.e. making the audience believe that implementing a middleware is simple, cheap and fast). Indeed, implementing a messaging solution cannot be improvised, and it certainly won't be mature and usable on the first attempt (and it's also time not spent answering the needs of the business). When I asked during the session about the total cost of this solution, I was given the initial cost (2 months for 4 developers, if I remember correctly). Well, I would bet that the serious, total cost of the v1 (over all those years, and across all those projects) was not really taken into account when they decided to go for an implementation of their v2 (vs. buying an off-the-shelf solution)
  • The total absence of capacity management in such an undertaking (we're talking about developing an in-house middleware, after all!). For me, this was the most troubling part. Indeed, how do you know the objective (latencies at or below one millisecond, with market data distribution to about 50 clients) is met if you don't put concrete measurement instruments in place? How do you make sure that a surge in market data volume won't completely saturate the network (or the publishers' latency first ;-), given the chosen tcp-unicast strategy (vs. multicast), if you don't measure what happens on the network? In short: how do you know that the company's whole business is not being endangered by certain architecture choices? Well, if you don't measure, you can't know! (Except by finding out the hard way in production --> I could tell you dozens of war stories on that subject...). On this culture of measurement, which now seems indispensable to me to have and to put in place, I strongly recommend the excellent and very punchy presentation by Coda Hale: "Metrics, Metrics everywhere")

In conclusion

As you will have understood, I'm far more reserved about the very positive assessment almost everyone made of the content of that evening. As it stands (and without more information), I can understand the interest for a company like ABC Arbitrage in having resorted to such a systematic build strategy (given the size of the structure and the excellent quality of its developers), but I would find it very dangerous to generalize this approach to other contexts (which is what I could hear while chatting with various people that very evening, or while reading some very enthusiastic feedback afterwards). The biggest risk with this NIH approach is wasting your energy in side battles, and not focusing on what truly brings value to our users (a tendency we developers must all fight against).

For those who still doubt the complexity of such an undertaking (i.e. developing your own low-latency middleware), I recommend reading a few documents produced by the Jimi Hendrix of middlewares, namely Todd L. Montgomery (this one for the architecture, but above all this one for the implementation).

In any case, and as usual, it was an excellent ALT.NET evening, conducive to many exchanges and encounters between passionate people.

Thanks to Rui and the ABC Arbitrage folks for all that!

Wednesday, 11 September 2013

Lost with web technologies and protocols? Let me help you to clarify differences between WebSockets, Google SPDY, HTTP 2.0 & Co

Web technologies are moving so fast today. Or maybe that's a feeling I've had recently because, except for a few REST services here and there, I haven't had to handle many web technologies at my low-latency finance work since 2005 (whereas the web was my specialty at the time). And now what?!? Even the core web protocols are changing?!? Incredible...

If I now ask you 'what are the differences between WebSockets, Server-Sent Events and HTTP 2.0?', would you be able to answer me easily? If yes, you have better reading to do (this other post for instance). If not: sit down and relax... I'm gonna save you some time.

Nothing new in my post, since there is lots of information available out there. Just a synthetic view that may be helpful to grasp the differences between WebSockets, Google SPDY (pronounced "SPeeDY") and HTTP 2.0 without googling for hours like I did...

But before that, let's refresh our minds with the core basics:


  • TCP: guaranteed in-order delivery, bi-directional and full-duplex, TCP is useful in lots of contexts, but it is also the transport-layer protocol on which HTTP is built
  • HTTP 1.0: a request-response application-level protocol built upon TCP to exchange structured text documents that use logical links. HTTP is said to be 'stateless' because a separate TCP connection to the same server is made for every resource request (the client initiates a new TCP connection with the server, the client makes one full HTTP request, the server gives its full HTTP response, and then the underlying TCP connection is closed). I don't know if the 3-second goldfish memory is a myth, but I can tell you that HTTP's memory beyond this simple Q & A roundtrip would be peanuts without the existence of post-its... Er... I mean: cookies. Name/value pair message headers are also attached to transmit metadata and information between client & server (with the X- prefix indicating non-standard headers)
  • HTTP 1.1: maintains the HTTP 1.0 request-response paradigm, but improves latency by reusing a connection multiple times to download images, scripts, stylesheets, etc. after the page has been delivered. HTTP 1.1 avoids much of the overhead cost of TCP connection establishment, but maintains the one-response-per-request paradigm
  • Short polling: in the standard HTTP model, a server cannot initiate a connection with a client. Therefore, in order to receive asynchronous events as soon as possible, the client needs to poll the server periodically for new content (with a JavaScript timer for instance). However, the frequency of the poll requests can cause an unacceptable burden on the server, the network, or both (by forcing HTTP roundtrips even if no data is available on the server side). It can also be inefficient because it reduces the responsiveness of the application, since data is queued on the server side until the server receives the next poll request from the client 
  • Long polling: a kind of hack on top of HTTP (1.x) in order to support data push from the server to the client. With long polling, the client requests a page from the server in a way similar to short, standard polling; however, if the server has no information available for the client, then instead of sending an empty response, the server holds (its breath and) the request -responding partially with headers to the client request, for instance- and waits for information to become available (or for a suitable timeout event) before sending a complete response to the client. The client then typically sends a new long-poll request, either immediately or after a pause. The short duration of the pause between a long-poll response and the next long-poll request avoids the closing of idle connections (when HTTP 1.1 is used). A minimal client-side sketch follows this list
  • HTTP streaming: a set of technologies or techniques where the server keeps a request open indefinitely; that is, it never terminates the request or closes the connection, even after it pushes data to the client. This mechanism significantly reduces network latency because it avoids paying the cost of the underlying TCP connection establishment. It may work with either HTTP 1.0 (using EOF as a streaming mechanism) or HTTP 1.1 (using either chunked transfer or EOF to stream) as the underlying protocol. The HTTP streaming mechanism is strongly based on the capability of the server to send several pieces of information in the same response, without terminating the request or the connection. Warning: some network intermediaries (proxies, gateways, ...) involved in the transmission between the server and the client may seriously hurt or prevent this streaming experience from working (by buffering the data until the entire response is published by the server)
  • Comet: a generic term which covers both the long polling and the HTTP streaming techniques used to support push-based interactions from the server. Third-party Comet libraries usually support multiple techniques and fallback strategies to try to maximize cross-browser and cross-server support
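
As promised, here is a minimal client-side long-polling sketch in TypeScript (the endpoint URL is hypothetical, and the fetch API is used for brevity):

```typescript
// Long polling: the server holds each request open until data is available
// (or a timeout elapses); the client then immediately re-issues the request.
async function longPoll(url: string, onData: (data: string) => void): Promise<void> {
  while (true) {
    try {
      const response = await fetch(url);   // resolves only when the server finally answers
      if (response.ok) {
        onData(await response.text());     // deliver the server-pushed data
      }
    } catch {
      // network error: back off briefly before retrying
      await new Promise(resolve => setTimeout(resolve, 1000));
    }
    // loop immediately: the short gap between a response and the next request
    // keeps the connection from being closed as idle (with HTTP 1.1)
  }
}

longPoll("https://example.com/notifications", data => console.log("pushed:", data));
```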

In the end, there were lots of reasons why we couldn't stick with the old HTTP 1.x family... But rather than being negative here, let's instead detail the use cases and objectives of those new protocols and technologies.

Indeed, the technologies we will describe below were mainly introduced with the following objectives:
  1. To speed up interactions over the web by reducing latency (because responsiveness is nowadays a must for end-users, especially phone users)
  2. To allow servers to natively push data to their clients (to support event-driven interactions)
  3. To allow web clients and web servers to exchange more than HTML pages (the initial purpose of HTTP, remember?)
  4. To secure the interactions over the web

OK then... thanks for the overview. But can we have a look at the new stuff now?!?

Ok, ok: but just calm down pal...  Here they are:
  • WebSockets: coming both as an IETF protocol and a W3C API, WebSocket is a technology allowing full-duplex communication channels over a single TCP connection. WebSocket is a transport layer built on TCP, but offering an HTTP-friendly upgrade handshake. As a client, you initiate an HTTP connection, before requesting the server to upgrade the session from HTTP to the WebSocket protocol. Once this bootstrapping is done: there is no more HTTP between the client and the server! Old-fashioned socket interactions are handled instead, where client and server are equal peers with the same capability to send messages at any time, and to handle their reception asynchronously
  • Server-Sent Events: the W3C standardization of an API for opening an HTTP connection to receive push notifications from a server (using a text/event-stream MIME type) in the form of DOM events. The browser API is called the EventSource API, and is part of so-called HTML5... (see the sketch after this list)
  • SPDY: Google's already-implemented and available proposal to extend HTTP with a more efficient wire protocol (still based on TCP) while maintaining all the former HTTP semantics (encoding, headers, cookies, request/response). SPDY replaces some parts of HTTP, but mostly augments it. In other words: SPDY = {HTTP 1.x headers and methods + Google connection management & data transfer format}. The name SPDY captures speed, but also the idea of compression. Some SPDY implementation details:
    • Usage of TCP as the underlying transport layer, so it requires no changes to existing networking infrastructure (easing worldwide deployment)
    • Multiplexed requests: many concurrent HTTP requests run across one TCP session
    • Usage of TLS (over TCP) as the standard transport protocol. OK, it improves the security of our web exchanges. But as a nice side effect, it also helps the transparent deployment of SPDY. Indeed, no one on the network path between the server and the client will have access to, or be able to mess with, the SPDY bits (protected by TLS tunneling) 
    • Prioritized requests: to prevent high-priority requests from being blocked by non-critical resources
    • Compressed HTTP headers: to save latency and bandwidth for pages with tons of sub-requests
    • Server push: the capability for a server to push data to clients via the X-Associated-Content header (informing the client that the server is pushing a resource to it before the client has asked for it)
    • Server hint: the capability for the server to suggest that the client should ask for specific resources (done via the X-Subresources header)
  • HTTP 2.0: the next planned version of the HTTP network protocol. Currently a working draft by the IETF (the working group last call is scheduled for April 2014), HTTP 2.0 is derived from SPDY -used here as a starting point- and defines an upgrade handshake and data framing very similar to the WebSocket standard. The main idea of this draft is to provide asynchronous connections, multiplexing, header compression and request-response pipelining (with possible prioritized streams) while preserving full backwards compatibility with the transaction semantics of HTTP 1.1. One major difference between HTTP 2.0 and SPDY, though, is that HTTP 2.0 won't force the usage of TLS
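
To make the difference between the two push styles above tangible, here is a minimal browser-side sketch (TypeScript over the standard DOM APIs; the endpoint URLs are hypothetical):

```typescript
// WebSocket: full-duplex after the HTTP upgrade handshake;
// both peers can send messages at any time.
const ws = new WebSocket("wss://example.com/quotes");
ws.onopen = () => ws.send("subscribe EURUSD");                     // client -> server
ws.onmessage = (e: MessageEvent) => console.log("tick:", e.data);  // server -> client, async

// Server-Sent Events: one-way server push over a long-lived HTTP response
// (text/event-stream); the browser reconnects automatically if the connection drops.
const sse = new EventSource("https://example.com/stream");
sse.onmessage = (e: MessageEvent) => console.log("event:", e.data);
```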

Hmm, interesting... But what kind of impact does this have on our website implementations today?


I would say that using SPDY right now, until HTTP 2.0 is finished and implemented, seems an interesting path.

Indeed, SPDY is already supported by Chrome, Firefox and Opera on the client side (today), and by the Apache web server, nginx, Jetty, Netty, Node.js, etc. on the server side. This interesting path has also already been taken by some tiny web players such as Twitter, WordPress, Facebook (you can Google them if you want to find out what those companies are ;-) and Google.* of course!

On the other hand, you'd better -as usual with web technologies- cover your back with fallback strategies. Because it's most unlikely that corporate infrastructure components (such as proxies, reverse proxies, firewalls, etc.) will be compliant any time soon with those HTTP 2.0-like (and of course WebSocket!) technologies. I know it's a shame, but it's a fact that classic corporations (like banks, for instance) aren't as innovative as the web giants...

Anyway, here are some hints if you decide to rely on SPDY (and on HTTP 2.0 in the future).

You should not do domain sharding anymore to improve end-user response time (domain sharding being the mechanism used to minimize HTTP 1.x round-trip times by parallelizing downloads across various hostnames). With stream multiplexing and request-response pipelining, this is not useful anymore (and worse: it degrades the end-user experience). As a consequence, set a switch on your server saying: if the request comes through SPDY, don't bother with sharding.
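
As an illustration, here is what such a switch could look like (a hedged sketch with hypothetical host names and a hypothetical isSpdy flag; how you detect SPDY depends on your server stack):

```typescript
// Domain sharding only makes sense for HTTP 1.x clients; SPDY multiplexes
// all requests over a single connection, so one host is enough (and better).
const SHARDED_HOSTS = ["static1.example.com", "static2.example.com"]; // hypothetical

function assetHost(isSpdy: boolean, assetIndex: number): string {
  if (isSpdy) {
    return "static.example.com";  // one host: let SPDY multiplex everything
  }
  return SHARDED_HOSTS[assetIndex % SHARDED_HOSTS.length]; // HTTP 1.x: parallelize downloads
}
```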

Also, avoid image spriting (i.e. combining multiple images within the same file to load them without too many HTTP 1.x roundtrips); this is not useful anymore, and even counter-productive (just like domain sharding).

Last but not least: let's finish this post with some real web mechanical sympathy

As you may have noticed, the web is full of resources explaining how to configure your server components and how to design your web applications in order to rely efficiently on SPDY. But I'd like to end this post with an excellent reference: a QCon session by Roberto Peon -one of the inventors of SPDY- where he explains a lot about HTTP 1.x limitations, SPDY and HTTP 2.0. A very clear session, available here on InfoQ (.pdf slides included).

I hope that this post clarified a little bit this jungle of terms and concepts. Time for us to speed up the end-user responsiveness of our web apps ;-)

Happy coding!

Monday, 19 August 2013

A zoom on the hexagonal/clean/onion architecture


A few weeks ago, I conducted an important study related to the middlewares we use across the entire pre-trade perimeter of the bank I work for. One of the analysis criteria was the dependency and coupling of every application towards the messaging technologies it uses. Even though lots of legacy applications were involved in this study, I was still surprised to see how strong the coupling was and how far the various messaging technologies had spread within a huge number of applications (with messaging data structures present in the core business layers in the worst cases). The consequence?!? Some very costly (and risky) situations for those applications when we need to replace a former messaging technology with a better (and less risky) one.

This is when I realized it was time for me to communicate more on the strengths and benefits of the hexagonal architecture, which I had also recently fostered successfully for another project (just before the middleware study).

Apart from being a new buzzword, what's the hexagonal architecture concretely?

Also known as 'Ports and Adapters', or clean, or onion architecture (variants I won't detail here), the hexagonal architecture is an application-architecture style that helps us focus on our business goals without being tied down or jeopardized by our technical framework or infrastructure choices.

In that model there is no such thing as 'front-end' (user interactions) or 'back-end' (db) anymore, but two primary areas instead: the inside (with applicative use-case handlers and business domain code) and the outside (with all our infrastructure code: db access, messaging & communication bindings, etc.). If you now combine this model with the dependency inversion principle -which states that high-level modules should not depend on low-level modules; both should depend on abstractions- you can easily infer that this model dictates that dependencies can only point inwards the hexagon/circle (infrastructure being the low-level stuff in that context).

Interactions between those two areas (in and out) are achieved by ports and adapters (P/A in the diagram below). In a nutshell, events or client requests arrive from the outside world at a port (i.e. a plug for a technology), and the technology-specific adapter converts them into a usable procedure call or message and passes it to the application layer.
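
To give a feel for it, here is a minimal sketch in TypeScript (all names are illustrative, not taken from a real project): a use-case handler on the inside that depends only on a port abstraction, and a technology-specific adapter on the outside:

```typescript
// The inside: domain code and a port (an abstraction owned by the application).
class Order {
  constructor(public readonly id: string, public readonly quantity: number) {}
}

interface OrderRepository {            // a port: the inside dictates the contract
  save(order: Order): void;
}

class PlaceOrderHandler {              // an applicative use-case handler (inside)
  constructor(private readonly repository: OrderRepository) {}
  handle(id: string, quantity: number): void {
    this.repository.save(new Order(id, quantity)); // business logic only, no infrastructure
  }
}

// The outside: a technology-specific adapter implementing the port.
class InMemoryOrderRepository implements OrderRepository {
  private readonly orders = new Map<string, Order>();
  save(order: Order): void { this.orders.set(order.id, order); }
}

// Wiring at the periphery: swap this adapter for a SQL or messaging one
// without touching the inside of the hexagon.
const handler = new PlaceOrderHandler(new InMemoryOrderRepository());
handler.handle("42", 100);
```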

Ok, it's all about focusing on the real topics after all, right?

Indeed, an important point with the hexagonal architecture is that we put all our frameworks, drivers and infrastructure-related code at the periphery of our system. Because those should only be details for our applications. Nothing more!



Strengths and benefits of the hexagonal architecture (to impress your peers during geek meetings)

  • Sustainability / Timelessness: by decoupling our application-business code from the tools we are using (i.e. the libraries and frameworks), we make it less vulnerable to the erosion of time and IT fads
  • Testability: the usage of ports and adapters to communicate with all our infrastructure (e.g. db, messaging systems, etc.) eases the usage of mocks in order to test our applicative services and domain code. Tests can even be written for our application service layer before we decide which technology will be plugged in via its corresponding port/adapter (whether REST, SOAP, a specific messaging system, a db, etc.)
  • Adaptability / Time to market: adding a new way to interact with your application is very easy: you just add a new port/adapter to support the new technology and that's it! You can usually have multiple ways or technologies to interact with your application
  • Understandability: rather than having a solution where use cases are completely lost or mixed in with all the technical stuff, this architecture style promotes the emergence of an applicative use-case layer (with all your use-case handlers in a dedicated module): the proper location to make our functional intentions scream
  • Use-case driven & DDD compliance: indeed, with this architecture style, we design our applications with our use cases in mind; not the number of persistence technologies or binding types we will need to support! A typical project may start without deciding what kind of database it needs for its persistence, and pick the proper technology based on the real data manipulated and the usages discovered after several iterations (which is what we did in a recent project, even if it took me some effort to explain to the project manager that I was suggesting that strategy on purpose, and not out of laziness ;-) Because, remember? A pragmatic architect usually defers decisions about the choice of frameworks or tools.

OMG: is the hexagonal architecture a silver bullet?!?

Calm down. Please... This is not the Xmas architecture, and there is still no such thing as a silver bullet ;-) But as Vaughn Vernon says in his excellent Implementing Domain Driven Design book (iDDD):
'The hexagonal architecture forms the strong foundation for supporting any and all those additional architectural options' (i.e. SOA, REST, event-driven, event-sourcing, CQRS, etc.)

I don't understand a word of this post. May I read other stuff on this topic in real & solid English?

Sure. Even if that's not very kind to me... Anyway, if you want to dig a little into the matter, I highly recommend reading (at least chapter 4 of) the excellent iDDD book. You can also read the original pattern description by Alistair Cockburn, or the more recent explanations by Robert C. Martin about the clean architecture (here as a post, or here in this fabulous video session). By the way, I have no doubt that the next book by Uncle Bob (after Clean Code & The Clean Coder) will be something like: The Clean Architecture.


Enough said: time to try it in action in your own projects. You won't regret it!

Saturday, 17 August 2013

Metrics, Metrics everywhere...

... is a truly awesome presentation by Coda Hale. Beyond its really interesting content (in a nutshell: improve your decisions and time to market by measuring your code as it runs), I really like the format of his presentation. 

Very funny, fluent, and built like an anaphora (repetition, repetition, repetition), his presentation is a real model of the genre in terms of efficiency at spreading a message. 

Available on YouTube, Coda's presentation is only 30 minutes long (with 15 minutes of Q&A at the end); it's definitely worth watching:


'As developers we have a mental model of what our code does ... we spend so much time inside our heads, it's very easy to mistake what's inside of our heads for reality (i.e. to mistake the map for the territory)... we can't know until we measure it'

+1

(thanks to my mate Cyrille for this resource)

Friday, 16 August 2013

The technology behind an equity trade

I highly recommend watching the excellent presentation 'The technology behind an equity trade' given by John O'Hara at QCon London 2013.

Crystal clear and very informative, John gives a good introduction to a bank's main functional blocks, before explaining the challenges every bank will have to face in the coming years (starting now ;-)

Finally, he ends by explaining why technology will be so crucial in that industrial (and digital) transformation. Very nice session!

Thursday, 18 July 2013

The reactive manifesto

Having worked in financial front office for several years now, I'm used to building reactive systems. 

In a nutshell, they are systems that are ready to respond to stimuli (whether market data updates, client requests for prices, spread modifications, position changes, etc.), in order to react as quickly as possible (in milliseconds or less), in a world where you must constantly protect yourself against waves or throughput peaks of data coming into your systems.

To achieve that, we mostly build systems that are event-driven, stream-oriented, scalable (vertically, but also horizontally), performant (with conflation enabled, GC-less strategies, ...), and resilient (i.e. recovery-oriented & production ready).
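
As an aside, here is a minimal sketch (illustrative TypeScript, not production code) of the conflation technique mentioned above: when updates arrive faster than a consumer can drain them, only the latest value per key is kept:

```typescript
// Conflation: intermediate updates for a key are superseded by the latest one,
// so a slow consumer never sees a growing backlog, only the freshest values.
class ConflatingQueue<K, V> {
  private readonly latest = new Map<K, V>();

  publish(key: K, value: V): void {
    this.latest.set(key, value);       // overwrite: older pending updates are dropped
  }

  drain(): Array<[K, V]> {             // at most one (freshest) value per key
    const snapshot = Array.from(this.latest.entries());
    this.latest.clear();
    return snapshot;
  }
}

// A burst of market data ticks for the same instrument conflates to one entry:
const queue = new ConflatingQueue<string, number>();
queue.publish("EURUSD", 1.3612);
queue.publish("EURUSD", 1.3615);       // supersedes the previous tick
console.log(queue.drain());            // [["EURUSD", 1.3615]]
```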

Today, one of my mates showed me this new reactive manifesto, which is all about the kind of systems we build. I found it very readable and informative. Check it out for yourself:


And welcome to our reactive world!

Wednesday, 17 July 2013

Understandability Driven Development

Thought you knew every kind of possible ...DD?

You should probably read the excellent white paper explaining Raft -the new consensus algorithm for clusters of distributed machines, recently invented at Stanford- to add a new one to your list (UDD ;-)

Apart from a few things, their article is very clear and informative. But what I really like more than anything else is their approach. Indeed, they designed their algorithm with understandability as a core driver. The idea was to make it possible for a large audience to understand their algorithm comfortably.

Always interested in the challenge of delivering durable solutions -and even if we are talking about an algorithm here (not an implementation)- this is the first time I've read someone explaining that understandability is not a nice-to-have... but a must-have.

Pretty cool!

Update (July, 26th 2013):