
Saturday, 17 August 2013

Metrics, Metrics everywhere...

... is a truly awesome presentation by Coda Hale. Beyond its really interesting content (in a nutshell: improve your decisions and your time to market by measuring your code while it runs), I also really like the format of his presentation.

Funny, fluent, and built like an anaphora (repetition, repetition, repetition), his presentation is a textbook example of how to spread a message efficiently.

Available on YouTube, Coda's presentation lasts only 30 minutes (plus 15 minutes of Q&A at the end); it is definitely worth a look:


'As developers we have a mental model of what our code does ... we spend so much time inside our heads, it's very easy to mistake what's inside of our heads for reality (i.e. to mistake the map for the territory)... we can't know until we measure it'

+1

(thanks to my mate Cyrille for this resource)

Sunday, 18 November 2012

wiked! the solution to DRY up all our projects' KM

Since my latest post (It's really time to DRY our apps' Knowledge Management!), I have had the opportunity to discuss this topic with several mates, and thus to discover that the maven site plugin was perfectly suited to what I intended to do (thanks to Christophe LALLEMENT & Alexandre NAVARRO, by the way).

Thus, no need to reinvent the wheel (which is perfectly fine by me).

So I started to play with maven (quite new territory for the .NET expert I am), and especially with the maven site plugin ecosystem. In the end, it took me several hours to fully understand how to use it properly, with questions like:
  • Which maven plugins (and versions) should I configure within the pom.xml in order to generate the web site for my project (which is not a java project, by the way)?
  • Which minimal pom.xml could I use in order to generate a web site with the mvn site command line (important for non-java projects, and to KISS)? See the sketch after this list.
  • Where should I put images and other resources in order to reference them properly from my markdown pages?
  • Why and when should I call `mvn clean` before the `mvn site` command line in some rare cases (like when changing the project name within the pom.xml file)?
  • How do I give a better look and feel to the overall generated web site?
  • Which markdown syntax was working, and which was not well supported by the tools (e.g. the atx-style headers)?
  • etc.
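Here is the minimal pom.xml sketch mentioned above (the project coordinates are placeholders and the versions are only indicative, not necessarily the ones used by wiked!); the markdown pages go under src/site/markdown and the images/resources under src/site/resources:

    <project xmlns="http://maven.apache.org/POM/4.0.0">
      <modelVersion>4.0.0</modelVersion>
      <!-- placeholder coordinates: this does not even need to be a java project -->
      <groupId>com.acme</groupId>
      <artifactId>my-project-km</artifactId>
      <version>1.0-SNAPSHOT</version>
      <packaging>pom</packaging>

      <build>
        <plugins>
          <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-site-plugin</artifactId>
            <version>3.2</version>
            <dependencies>
              <!-- the doxia module that renders src/site/markdown/*.md as site pages -->
              <dependency>
                <groupId>org.apache.maven.doxia</groupId>
                <artifactId>doxia-module-markdown</artifactId>
                <version>1.3</version>
              </dependency>
            </dependencies>
          </plugin>
        </plugins>
      </build>
    </project>

With that in place, a simple `mvn site` generates the web site under target/site.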
I finally decided to package something that would help me (and others) to quickly set up such a web site generation solution for every new project's Knowledge Management.

Thus, I'm proud to present the wiked! solution, now available on GitHub.

More than a simple bootstrapper that will save you some time when setting up such a solution for your project, wiked! also offers a structure and some content for all your projects' web sites (see the values and practices section, for instance).




And since the DRY principle was one of my drivers, I won't fully describe here what wiked! is all about. If you are interested, have a look at the wiked! README.md.

Hope this helps.

Monday, 12 November 2012

It's really time to DRY our apps' Knowledge Management!

While reading the amazing Pragmatic Programmer book several years ago, I really enjoyed the "Keep Knowledge in Plain Text" principle (tip 20).

Unfortunately, I never really took the time to apply this principle seriously to the Knowledge Management (KM) of the various projects I worked on. Every time, I lazily put all the team/application-related information either in a wiki, or in MS Office documents stored on a SharePoint... (sad panda ;-)


Recently, while having lunch with a colleague of mine (a well-known and passionate French craftsman), I realized how lazy I had been so far regarding this KM topic. With only 2 or 3 words (read-only wiki, Markdown documentation, SCM), Cyrille made the dormant craftsman in me starry-eyed.

Indeed, since:

  • wiki web sites for DEV teams usually end up badly (e.g. lots of different wikis/sections for the same component/team, no clear knowledge of what is still relevant and what is deprecated, structural mess, etc.)
  • documentation created separately from the code or components it describes is less likely to be correct and up to date
  • it's not easy to compare/diff MS Office documents
  • it's important for every piece of knowledge to have a single, unambiguous, authoritative representation within a system (DRY principle)
  • text documentation has become more important than ever since the advent of the blue book (to express the ubiquitous language of our projects, to clarify some large-scale design intents, etc.)

It would be a good idea for us to get rid of the traditional team wiki site, and instead to write all our documentation as:
  • text files written in Markdown format (so we can benefit from any existing Markdown tool/editor)
  • stored within the SCM of our project (e.g. SVN, git)
  • with our Software Factory (re)generating a static web site (like a read-only wiki) every time those Markdown text files change within our SCM.

As a result, our team/application documentation will be:
  • always up to date and versioned
  • easily diff-able (text files in Markdown format)
  • compliant with the DRY principle (with our SCM as its golden source)
  • easily browsable by everyone (DEV, QA, support teams, ...) through a read-only but readable wiki-like web site
  • easily modifiable by team members in a well-known and official location (as easy as creating/modifying a text file in the SCM)

Of course, the idea here is not to reinvent the wheel, but to reuse existing Markdown renderers & static web site generators instead (the step beyond the github Readme.md file). But if I can't find something and get it working quickly, I will probably start an open source project to implement such a system.


Cyrille was right: it's probably time for us to kill wikis and to reinvent them!

Last but not least: a very interesting post by Cyrille about Collaborative Artifacts as Code.



BREAKING NEWS: I found a solution for my use cases, described here.


Friday, 1 July 2011

Continuous delivery at Facebook

In a video, Chuck Rossi explains how release management is handled at Facebook, allowing them to push daily updates to their site without production outages or service interruptions. The video is very informative and the announced KPIs are very impressive...

watch the video here

While we are talking about release management, I also highly recommend digging into the "Continuous Delivery" book and the blogs of Jez Humble and Dave Farley.

Release management is definitely not only about process and tools; it is about changing our culture.

Tuesday, 28 June 2011

Handle your technical debt with SQALE!

You should try SQALE: a generic, language- and tool-independent method for assessing the quality of source code.

In a nutshell, SQALE aims to manage the technical debt of your projects; its classification allows you to analyze the impact of the debt and to prioritize code refactoring/remediation activities. SQALE is also pragmatic and result-oriented: it is a requirement model, not a set of best practices to implement. Based on this requirement model, SQALE also allows you to produce a rating for each of your projects (from A to E).
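To give a rough idea of what such a rating means, here is a toy C# sketch of a SQALE-like rating computed from a debt ratio (estimated remediation cost divided by estimated development cost); the thresholds below are only illustrative, not the official SQALE grid:

    // Toy illustration only: a SQALE-like rating derived from the debt ratio
    // (estimated remediation cost / estimated development cost of the code base).
    // The thresholds are illustrative, not the official SQALE grid.
    public static class SqaleLikeRating
    {
        public static char Rate(double remediationCostInDays, double developmentCostInDays)
        {
            double debtRatio = remediationCostInDays / developmentCostInDays;

            if (debtRatio <= 0.05) return 'A';
            if (debtRatio <= 0.10) return 'B';
            if (debtRatio <= 0.20) return 'C';
            if (debtRatio <= 0.50) return 'D';
            return 'E';
        }
    }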

Even though SQALE is tool-independent, the method is designed for automated implementation, and you can already find some concrete solutions (see the SQALE plugin for SONAR, for instance).

You should definitely have a look at SQALE.

Tuesday, 4 January 2011

Low latency systems in .NET

Not new, but since I'm currently in a low latency mood ;-)

Back in June 2009, Rapid Addition, one of the leading suppliers of front office messaging components to the global financial services industry, published a white paper with Microsoft on how to build an ultra-low latency FIX engine with the .NET 3.5 framework...

Their white paper (and their RA Generation Zero framework) mostly explains how to prevent gen2 garbage collections.

I really like their description of a managed low latency system with a startup phase (where memory resource pools are preallocated), a forced full GC phase, and then a continuous operation phase (where we avoid triggering garbage collections by using and recycling the resource pools).
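Here is a bare-bones C# sketch of this three-phase pattern (the FixMessage type, the pool size and the single-threaded usage are assumptions of mine, not code from their white paper):

    using System;
    using System.Collections.Generic;

    // Hypothetical message type: the point is that instances are allocated once and recycled.
    public sealed class FixMessage
    {
        public void Reset() { /* clear fields before reuse */ }
    }

    // Phase 1 (startup): preallocate every object we will ever need.
    public sealed class MessagePool
    {
        private readonly Stack<FixMessage> _pool;

        public MessagePool(int capacity)
        {
            _pool = new Stack<FixMessage>(capacity);
            for (int i = 0; i < capacity; i++)
                _pool.Push(new FixMessage());
        }

        // Phase 3 (continuous operation): rent and return, so no new allocations
        // (and therefore no gen2 collections) occur on the critical path.
        // Note: this naive pool assumes a single-threaded critical path.
        public FixMessage Rent() { return _pool.Pop(); }

        public void Return(FixMessage message)
        {
            message.Reset();
            _pool.Push(message);
        }
    }

    public static class Startup
    {
        public static MessagePool WarmUp()
        {
            var pool = new MessagePool(100000);   // sized for the worst case (arbitrary here)

            // Phase 2: force a full GC once, before going live, so that the long-lived
            // pooled objects are promoted and the GC has nothing left to do afterwards.
            GC.Collect();
            GC.WaitForPendingFinalizers();
            GC.Collect();

            return pool;
        }
    }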

In my opinion, this white paper demonstrates that Rapid Addition really know what they are talking about.

In another press release, they said that their FIX engine breaks the 10-microsecond average latency barrier with a 12,000 messages/second throughput (still with the .NET 3.5 based solution).

Very informative.

Sunday, 2 January 2011

A new perspective on ultra low latency, high performance systems

I just watched an awesome presentation by Martin Thompson and Michael Barker. They explained how they implemented their ultra low latency (and high throughput) systems for the London Multi-Asset eXchange (LMAX), and it's pretty impressive: 100,000 transactions per second at less than 1 millisecond latency, in Java...

Since I have been working on FX and low latency systems in general for several years now, I was very interested by their teaser (100K...). I have to admit that I was thrilled by their presentation.

For those who don't have an hour to kill watching their video, here is a summary:

---HOW TO DO 100K TPS AT LESS THAN 1ms LATENCY----------------------------
  1. UNDERSTAND YOUR PLATFORM
  2. CHECK YOUR PERFORMANCE FROM THE BEGINNING
  3. FOLLOW THE TIPS
---------------------------------------------------------------------------------------------------------------


UNDERSTAND YOUR PLATFORM
  • You have to know how modern hardware works in order to build ultra low latency systems
  • The advent of multi-core CPUs has brought bigger and smarter caches (do you really know how CPU cache synchronization works? the drawbacks of false sharing? etc.)
  • OK, the free lunch is over (for GHz), but it's time to order and use more memory!!! (144 GB servers with 64-bit addressing, for instance)
  • Disk is the new tape! (fast for sequential access); use SSDs instead for random, multi-threaded access
  • The network is not slow anymore: 10GigE is now a commodity and you can get sub-10-microsecond local hops with kernel-bypassing RDMA
  • (Not hardware, but) understand how the GC and the JIT work under the hood


CHECK YOUR PERFORMANCE FROM THE BEGINNING
  • Write performance tests first (see the naive sketch after this list)
  • Run them automatically and nightly, so you can detect when you should start optimizing your code
  • Still, no need for early and premature performance optimizations
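As an illustration of the write-performance-tests-first idea, here is a naive C# sketch (the OrderMatcher component and the thresholds are hypothetical; a real nightly job would also record the figures over time to spot trends):

    using System;
    using System.Diagnostics;

    // Hypothetical system under test and its input (stubs for the sake of the example).
    public sealed class Order { public decimal Price; public long Quantity; }

    public sealed class OrderMatcher
    {
        public void Match(Order order) { /* business logic under measurement */ }
    }

    // Naive nightly throughput check: fails loudly when the component gets too slow.
    public static class OrderMatcherPerfTest
    {
        private const int Iterations = 1000000;
        private const double MinThroughputPerSecond = 100000.0;   // arbitrary target

        public static void Main()
        {
            var matcher = new OrderMatcher();
            var order = new Order();                // reused instance to avoid GC noise

            var stopwatch = Stopwatch.StartNew();
            for (int i = 0; i < Iterations; i++)
            {
                matcher.Match(order);
            }
            stopwatch.Stop();

            double throughput = Iterations / stopwatch.Elapsed.TotalSeconds;
            Console.WriteLine("Throughput: {0:N0} ops/s", throughput);

            if (throughput < MinThroughputPerSecond)
                throw new Exception("Performance regression: time to start optimizing.");
        }
    }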


FOLLOW THE TIPS
  • Keep the working set in memory (data and behaviour co-located)
  • Write cache-friendly code (mind cache-line synchronization): the rebirth of arrays ;-)
  • Choose your data structures wisely
  • Queues are awful for concurrent access; use ring buffers instead, like the toy sketch after this list (no stress: we just said we bought lots of memory ;-)
  • Use custom cache-friendly collections
  • Write simple, clean & compact code (the JIT always does better with simpler code; shorter methods are easier to inline)
  • Invest in modeling your domain. Also respect the single responsibility principle (one class does one thing, one method does one thing, ...) and the separation of concerns
  • Take the right approach to concurrency. Concurrent programming is about two things, mutual exclusion and visibility of changes, which can be implemented following two main approaches: i) a difficult locking approach (with context switches to the kernel), and ii) a VERY difficult lock-free approach based on atomic, non-blocking (user space) instructions (remember how optimistic locking mechanisms are implemented within databases). You should definitely choose the second one
  • Keep the GC under control. Because the GC may pause your application, avoid triggering it by preallocating (circular) buffers and by using a huge amount of (64-bit addressable) memory
  • Run the business logic on a single thread and push the concurrency into the infrastructure, because trying to put concurrency within the business model is far too hard and too easy to get wrong. You also get the OO programmer's dream: code that is easy to write, testable and readable. As a consequence, it should improve your time to market
  • Follow the disruptor pattern, a system-level pattern that tries to avoid contention wherever possible (even with the business logic running on a single thread)
---------------------------------------------------------------------------------------------------------------
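To give the flavour of the ring buffer and lock-free tips, here is a toy C# sketch (my language of choice) of a single-producer / single-consumer ring buffer over pre-allocated slots; the real disruptor is in Java and far more sophisticated (batching, cache-line padding, wait strategies, etc.):

    using System;
    using System.Threading;

    // Toy single-producer / single-consumer ring buffer over pre-allocated slots.
    // No locks: only atomic reads/writes of the two sequence counters.
    public sealed class ToyRingBuffer<T> where T : class, new()
    {
        private readonly T[] _slots;
        private readonly long _mask;        // capacity must be a power of two
        private long _published = -1;       // last slot made visible to the consumer
        private long _consumed = -1;        // last slot the consumer has finished with

        public ToyRingBuffer(int powerOfTwoCapacity)
        {
            _slots = new T[powerOfTwoCapacity];
            _mask = powerOfTwoCapacity - 1;
            for (int i = 0; i < powerOfTwoCapacity; i++)
                _slots[i] = new T();        // allocate once, recycle forever
        }

        // Producer side: wait for a free slot, fill the pre-allocated entry in place, publish it.
        public void Publish(Action<T> fill)
        {
            long next = _published + 1;     // only the producer writes _published
            while (next - Interlocked.Read(ref _consumed) > _slots.Length)
                Thread.SpinWait(1);         // buffer full: busy-spin, no kernel call

            fill(_slots[next & _mask]);
            Interlocked.Exchange(ref _published, next);   // publish with a full fence
        }

        // Consumer side (e.g. the single business-logic thread): process what is available.
        public void ConsumeAvailable(Action<T> handler)
        {
            long available = Interlocked.Read(ref _published);
            for (long sequence = _consumed + 1; sequence <= available; sequence++)
                handler(_slots[sequence & _mask]);

            Interlocked.Exchange(ref _consumed, available);
        }
    }

The business logic thread just calls ConsumeAvailable in a loop: it never takes a lock and never allocates anything on the critical path.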

OK, this presentation leaves us with lots of questions about how they implemented their systems. However, I found this video very refreshing and interesting on several points: the need for huge amounts of RAM with 64-bit addressing, the interest of kernel-bypassing network technologies such as RDMA, the fact that queues are bad at handling concurrency properly, that clean and simple code doesn't prevent you from getting excellent performance, that concurrency should stay out of the business logic, that separation of concerns allows you to have a very competitive time to market,... and of course that you need a write-performance-tests-first approach.


For explanations of the disruptor pattern and much more, see the entire video of this presentation here: http://www.infoq.com/presentations/LMAX

Note: don't watch this video in full-screen mode, otherwise you will lose the benefit of the slides displayed below it ;-(

Cheers, and happy new year to everyone ;-)

Wednesday, 28 March 2007

My personal motto for software development

Here are my four commandments for software development. Nothing really new here, but these are the basic criteria I try to follow each time I code a class:
  1. Make it testable
  2. Make it simple
  3. Make it readable
  4. Make it cohesive
Make it testable: because tests act like a specification and help us to focus on design before writing our implementation code (cf. Test Driven Development).

Make it simple: because complexity has several costs (harder for newcomers to get into the code, painful debugging sessions, slower modifications, ...).

Make it readable: because source code must be human-readable (whereas binaries must be computer-executable), and because we usually do not work alone!
The choice of our method/variable names, the adoption of common naming and coding guidelines, and the quality of our source code comments (which must expose our intentions) are crucial.

Make it cohesive: because cohesion (the measure of how strongly related and focused the responsibilities of a single class are) brings reliability, reusability, and understandability. It is essential to be able to formalize the responsibilities of a class in order to ensure its cohesion (I usually ask my teammates to do their best to write a precise and clear "summary comment" at the top of each of them). Highly cohesive classes are also usually easy to test (cf. Make it testable).
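As a (deliberately trivial and made-up) illustration of what I mean by a clear summary comment and a cohesive, easily testable class:

    using System;

    /// <summary>
    ///   Computes the mid price of a currency pair from its best bid and best ask.
    ///   One responsibility only: no I/O, no logging, no caching in here.
    /// </summary>
    public sealed class MidPriceCalculator
    {
        public decimal ComputeMid(decimal bestBid, decimal bestAsk)
        {
            if (bestBid <= 0m || bestAsk <= 0m || bestAsk < bestBid)
                throw new ArgumentException("Inconsistent bid/ask prices.");

            return (bestBid + bestAsk) / 2m;
        }
    }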


Like the Test-Driven Development motto (Red, Green, Refactor), I think this kind of reminder may be useful in our daily work.

Saturday, 27 January 2007

Juval Lowy's white papers

The well-known .NET expert Juval Lowy (author of the must-read "Programming .NET Components") has recently published some white papers such as a C# coding standard, SOA design guidelines, Windows Communication Foundation (WCF) Essentials, etc.

This is available in the Resources section of the IDesign web site.

I recommend using the first part of the "IDesign C# Coding Standard, for development guidelines and best practices" for any new .NET project you start.

On the other hand, the multi-threading guidelines part of this document is bullsh#%^$ - sorry - I mean: it isn't suited to every kind of development (and especially not to real-time development!!!)
=> It would have been more logical to attach it to the ASP.NET section, for instance...