James Crisp

Software dev, tech, mind hacks and the occasional personal bit

The Fallacy “Best of Breed” in Layered Solutions

Imagine you are designing a layered solution where data enters through a GUI and passes through several layers of transformation and processing before being written to a database. Everyone knows that layering is a good way to do decomposition, right? It means you can work on each layer separately, without affecting any other layer. It even means each layer can be handed off to a separate person/group/company and run on its own hardware. This is all looking so good; now we can choose a “best of breed” solution for each layer. If each layer chooses the technology and implementation group that is the very best for that sort of work, it must lead to the very best solution overall, right?

Well, data needs to flow through all of the layers in this sort of design. Let’s take an imaginary example: say one layer in the middle has a data field length limit of 255 characters. Every other layer is then limited to that length too; otherwise the data will be truncated or rejected on the way to/from storage. Instead of getting the advantages of each layer’s solution, you end up limited to the lowest common denominator of all the layers.
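To make the lowest-common-denominator effect concrete, here is a minimal sketch. The layer names and limits are hypothetical, not from any real system; the point is simply that the usable field length for the whole pipeline is the minimum of every layer’s limit:

```python
# Hypothetical per-layer field length limits for illustration only.
LAYER_LIMITS = {
    "gui": 1000,        # front end happily accepts long input
    "middleware": 255,  # middle layer caps fields at 255 characters
    "database": 4000,   # database column could hold far more
}

def effective_limit(limits):
    """The pipeline is constrained by its most restrictive layer."""
    return min(limits.values())

def passes_all_layers(value, limits):
    """Data survives only if it fits every layer it flows through."""
    return all(len(value) <= limit for limit in limits.values())

print(effective_limit(LAYER_LIMITS))               # 255
print(passes_all_layers("x" * 300, LAYER_LIMITS))  # False
```

Even though two of the three layers could handle far longer fields, the 300-character value is lost in the middle, so the capacity of the “better” layers buys you nothing.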

A further problem is staffing and team structure. If each layer has chosen a very different “best of breed” technology, it will be difficult to find one team/company/group that can handle all of the layers (eg, Java front end, BizTalk middleware, Mainframe backend) and do vertical slices of functionality. Of course, you need the “best of breed” for the staffing too! Hence, implementation is often split between different teams/companies (horizontal slicing of teams), each of which is known for skills in a particular layer. Although each team may be “best of breed”, we end up with the lowest common denominator again. Methodologies are likely to differ between teams (eg, waterfall vs agile), so communication and planning are limited to the area of overlap between methodologies. The same applies to project goals. For example, one team may focus on user experience and another may focus on building an enterprise-wide data model. It is only where these goals intersect that the project can progress efficiently.

What can we do to defuse this sort of architectural design in its infancy? Questions to ask:

  • How many times is the same data transformed, and does each transformation add value?
  • Can multiple layers be hosted in the same process rather than split between different machines/processes?
  • Integration is always time consuming. Do the “best of breed” advantages of a solution for a particular layer outweigh the cost of cross process or cross technology integration?
  • Can one co-located, multi-disciplinary team be formed to build the solution?
  • By comparison, how many people would be required, and how long would it take to build the application with the simplest architecture that could possibly work?
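On the second question above, layers do not have to mean separate processes or machines. Here is a hedged sketch (the layer functions and data are purely illustrative) of hosting all the layers in one process, where “integration” is just a function call, with no serialization, network hops, or cross-technology glue:

```python
# Each layer is just a function; composing them in one process keeps
# integration down to plain function calls. All names are illustrative.

def validate(record: dict) -> dict:
    if not record.get("name"):
        raise ValueError("name is required")
    return record

def transform(record: dict) -> dict:
    return {**record, "name": record["name"].strip().title()}

def persist(record: dict, store: list) -> dict:
    store.append(record)  # stand-in for a real database write
    return record

def pipeline(record: dict, store: list) -> dict:
    # The "layers", run in-process, in order.
    return persist(transform(validate(record)), store)

db = []
pipeline({"name": "  ada lovelace "}, db)
print(db)  # [{'name': 'Ada Lovelace'}]
```

The same decomposition benefits (each layer is a separate, testable unit) are still there; what has been avoided is the cross-process integration cost the questions above warn about.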



4 Comments

  1. That’s why many companies come up with architectures based on a vertical slice concept, where each developer/team delivers end-to-end functionality. Each developer writes his own UI, stored procs, services, etc., and all the plumbing is autogenerated by an approved templating tool (to adhere to all standards, etc.).

    Disadvantages? The main one is code duplication, where the same problem is often solved repeatedly by different developers. While that’s an obvious inefficiency, in practice it’s often not much of an overhead.
    Imagine horizontal slicing. A service consumed by Team1 and Team2 is developed for them by a service team. If Team1 needs to change it later, it has to go through endless consultations with the service team and Team2. Often there is too much risk in changing it and too much regression testing to perform, hence a BIG NO to any change.
    In a vertical world, Team1 just changes it and that’s the end of it. While not pure, it delivers real-life results for the business.

    And if a service isn’t working, it affects a particular part of the application developed by Team1, not a range of features developed by many teams.

  2. Alex just beat me to this one. I’d also suggest having a vertical team structure. It works better and it’s easier to scale. What will you do when you need more people? Add more layers? When you need to work on new modules, another team can just take over and work independently of the other teams.

  3. I’m with you both. I think developing and delivering in vertical slices is the best way to go. It lets you deliver production-ready business functionality in each iteration, stop at any time, and get working software in front of the customer as soon as possible. Not to mention cutting down on communication overhead and avoiding arguments over the differing goals and priorities of different layers.

    It’s not always possible to go that way though, for organisational and political reasons. One such reason is the “best of breed” mentality (really a lowest common denominator approach) to the design, where the technology choices (eg, Java dev and mainframe dev) and team choices (eg, different consultancies or different geographic locations) preclude one team handling all the layers in a vertical slice. This means you end up with layered teams. Even in this case it is still worth aiming for vertical slices (all teams working on the same slice of functionality at the same time), but methodological differences, differences in goals and differences in development speeds between the teams in different layers can make this very difficult or even impossible. This is the situation you want to avoid!

  4. You have raised many good points and questions to consider in this post. I will try to add something helpful to the conversation:

    1. With regard to the “Best of Breed” fallacy, I think many would agree with you that local optimisations can often have a negative effect on global system performance. An example I can recall that is similar to your 255 character example: I remember reading in Lee Iacocca’s autobiography about his observation that Ford shipped 2 cars per rail car. After some measurements, he went back to the engineers and asked them to make the cars 2 cm shorter. The end result was that 3 cars could now fit into one rail car. A 50% increase in throughput from taking 2 cm off the length of the cars.

    2. With regard to architectural design considerations, communication between all the stakeholders is the biggest challenge (IMHO). With good communication, the technology usually develops over time to a satisfactory level. The vertical slicing approach I would classify as a communication strategy rather than a technology strategy; a bit like a special ops unit within the army, a smaller group that moves faster and has a reduced set of stakeholders to answer to. Such an approach gives local optimisations, but I don’t think you can win a war or feed an army(/enterprise) with loosely coupled sets of special ops groups. 🙂

    I am more inclined to favour an enterprise architecture approach. There are many, but I like some of the thinking behind the Zachman Framework, and it has stood the test of time. Having a taxonomy that maps business strategy down through an organisation and into IT infrastructure helps everyone sing from the same hymn sheet.

    P.S. I admire your posting frequency – well done 🙂

