Imagine you are designing a layered solution where data enters through a GUI and passes through several layers of transformation and processing before being written to a database. Everyone knows that layering is a good way to do decomposition, right? It means you can work on each layer separately, without affecting any other layer? It means each layer can even be handed off to a separate person/group/company and run on its own hardware? This is all looking so good: now we can choose a “best of breed” solution for each layer. If each layer picks the technology and implementation group that is the very best for that sort of work, it must lead to the very best solution overall, right?
Well, data needs to flow through all of the layers in this sort of design. Let’s take an imaginary example: say one layer in the middle has a data field length limit of 255 characters. Every other layer is then effectively limited to 255 characters as well, otherwise data will be truncated or rejected on its way to or from storage. Instead of getting the advantages of each layer’s best-of-breed solution, you end up limited to the lowest common denominator of all the layers.
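To make that lowest-common-denominator effect concrete, here is a minimal sketch (the layer names and limits are made up for illustration) showing that the usable field length for the whole stack is simply the minimum of every layer’s limit:

```java
import java.util.List;

public class EffectiveLimitDemo {
    // Hypothetical per-layer maximum field lengths, in characters.
    record Layer(String name, int maxFieldLength) {}

    public static void main(String[] args) {
        List<Layer> layers = List.of(
                new Layer("GUI",        4000),
                new Layer("Middleware",  255),  // the constrained middle layer
                new Layer("Database",   2000));

        // The data must survive every layer, so the usable limit is the minimum.
        int effectiveLimit = layers.stream()
                .mapToInt(Layer::maxFieldLength)
                .min()
                .orElse(Integer.MAX_VALUE);

        System.out.println("Effective field limit: " + effectiveLimit); // prints 255
    }
}
```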
A further problem is staffing and team structure. If each layer has chosen a very different “best of breed” technology, it will be difficult to find one team/company/group that can handle all of the layers (e.g. Java front end, BizTalk middleware, mainframe back end) and deliver vertical slices of functionality. And of course, you want “best of breed” staffing for each layer too! Hence implementation is often split between different teams/companies (horizontal slicing of teams), each known for its skills in a particular layer. Although each team may be “best of breed”, we end up with the lowest common denominator again. Methodologies are likely to differ between teams (e.g. waterfall vs agile), so communication and planning are limited to the area of overlap between methodologies. The same applies to project goals: for example, one team may focus on user experience while another focuses on building an enterprise-wide data model. It is only where and when these goals intersect that the project can progress efficiently.
What can we do to defuse this sort of architectural design in its infancy? Questions to ask:
- How many times is the same data transformed, and does each transformation add value?
- Can multiple layers be hosted in the same process rather than split between different machines/processes? (See the sketch after this list.)
- Integration is always time-consuming. Do the “best of breed” advantages of a solution for a particular layer outweigh the cost of cross-process or cross-technology integration?
- Can one co-located, multi-disciplinary team be formed to build the solution?
- By comparison, how many people would be required, and how long would it take to build the application with the simplest architecture that could possibly work?
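On the question of hosting multiple layers in the same process: here is a minimal sketch, using hypothetical layer interfaces, of what “layers” look like when they are just in-process method calls rather than cross-process integrations. There is no serialization, network hop, or cross-technology glue to pay for, which is often what the “simplest architecture that could possibly work” looks like:

```java
public class SingleProcessPipeline {
    // Hypothetical layer boundaries, kept as plain interfaces in one process.
    interface Validation  { String validate(String input); }
    interface Transform   { String transform(String input); }
    interface Persistence { void save(String record); }

    public static void main(String[] args) {
        Validation validation   = input -> input.trim();
        Transform transform     = input -> input.toUpperCase();
        Persistence persistence = record -> System.out.println("saved: " + record);

        // The layers are still separable units of work, but crossing them
        // is just a method call rather than a cross-process integration.
        persistence.save(transform.transform(validation.validate("  order-42  ")));
    }
}
```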