*You can read Todd's blog at: http://www.biske.com/blog/?p=585*
*Gervas*
<<A common theme that comes up in architecture discussions is the
elimination of redundancy. Simply stated, it's about finding systems
that are doing the same thing and getting rid of all of them except one.
While it's easily argued that there are cost savings just waiting to be
realized, does this mean that organizations should always strive to
eliminate all redundancy from their technology architectures? I think
such a principle is too restrictive. If you agree, then what should the
principle be?
The principle that I have used is that if I'm going to have two or more
solutions that appear to provide the same set of capabilities, then I
must have clear and unambiguous policies on when to use each of those
solutions. Those policies should be objective, not subjective. So, a
policy that says, "Use Windows Server and .NET if your developer's
preferred language is C#, and use a Java platform if your developer's
preferred language is Java," doesn't cut it. A policy that says, "Use C#
for the presentation layer of desktop (non-browser) applications, and
use Java for server-hosted business-tier services," is fine. The
development of these policies is seldom cut and dried, however. Two
factors that must be
considered are the operational model/organizational structure and the
development-time values/costs involved.
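To make the distinction between objective and subjective policies concrete, here is a minimal sketch (all names and the rule table are hypothetical, not from Todd's post) of encoding selection policies as data keyed on observable solution characteristics rather than on developer preference:

```python
# Hypothetical policy table: keyed on objective characteristics of the
# solution (tier, deployment target), never on who is building it.
POLICIES = {
    ("presentation", "desktop"): "C#/.NET",
    ("business", "server"): "Java",
}

def select_platform(tier: str, target: str) -> str:
    """Return the mandated platform for an objective (tier, target) pair."""
    try:
        return POLICIES[(tier, target)]
    except KeyError:
        # An uncovered combination means the policy set is incomplete --
        # exactly the gap that sends project teams back for direction.
        raise ValueError(f"No policy for tier={tier!r}, target={target!r}")

print(select_platform("presentation", "desktop"))  # C#/.NET
```

The point of the table form is that a request either matches a rule or it does not; there is no room for a team to argue preference.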
On the operational model/organizational structure side of things, there
may be a temptation to align technology choices with the organizational
structure. While this may work for development, the engineering and
operations teams are frequently centralized, supporting all of the
different development organizations. If each development group is free
to choose their own technology, this adds cost to the engineering and
operations team, as they need expertise in all of the platforms
involved. If the engineering and operations functions are not
centralized, then basing technology decisions on the org chart may not be
as problematic. If you do this, however, keep in mind that organizations
change. An internal re-organization or a broader merger/acquisition
could completely change the foundation on which policies were defined.
On the development side of things, the common examples where this comes
into play are environments that involve Microsoft or SAP. Both of these
solutions, while certainly capable of operating in a heterogeneous
environment, provide significant value when you stay within their
environments. In the consumer space, Apple fits into this category as
well. Their model works best when it's all Apple/Microsoft/SAP from
top-to-bottom. There are certainly other examples; these are just the
vendors people associate most strongly with this pattern. Using
SAP as an example, they provide both middleware (NetWeaver) and
applications that leverage that middleware. Is it possible to have SAP
applications run on non-SAP middleware? Certainly. Is there significant
value-add if you use SAP's middleware? Very likely. If your
entire infrastructure is SAP, there are no decisions to be made. If not,
now you have to decide whether you want both SAP middleware and your
other middleware, or not. Likewise, if you've gone through a merger, and
have both Microsoft middleware and Java middleware, you're faced with
the same decision. The SAP scenario is a bit more complicated because of
the applications piece. If we were only talking about custom
development, the more likely choice is to go all Java, all C#, or all
-insert your language of choice-, along with the appropriate middleware.
Any argument about value-add of one over the other is effectively a
wash. When we're dealing with out-of-the-box applications, it's a
different scenario. If I deploy a SAP application that will
automatically leverage SAP middleware, that needs to be compared against
deploying the SAP application and then manually configuring the non-SAP
middleware. In effect, I create additional work by not using the SAP
middleware, which now chips away at the cost reductions I may have
gained by only going with a single source of middleware.
So, the gist of this post is that a broad principle that says,
"Eliminate all redundancy" may not be well thought out. Rather, strive
to reduce redundancy where it makes sense, and where it doesn't, make
sure that you have clear and unambiguous policies that tell project
teams how to choose among the options. Make sure you consider all use
cases, such as where the solution may span domains. Your policies may
say "use X if in domain X, use Y if in domain Y," but you also need to
give direction on how to use X and Y when the solution requires
communication across domains X and Y. If you don't, projects will either
choose what they want (subjective, bad), or come back to you for
direction anyway.>>
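Todd's closing point about spanning domains can be sketched the same way: the policy must name a platform not only per domain but also for each cross-domain path. The names below (domains "X" and "Y", the middleware labels, and the choice of which platform carries cross-domain traffic) are purely illustrative assumptions:

```python
# Hypothetical per-domain platform assignments.
DOMAIN_PLATFORM = {"X": "middleware_X", "Y": "middleware_Y"}

# Explicit direction for solutions that cross domains, so teams never
# have to make a subjective call. The pick of middleware_X for both
# directions is illustrative only.
CROSS_DOMAIN = {("X", "Y"): "middleware_X", ("Y", "X"): "middleware_X"}

def choose(source_domain: str, dest_domain: str) -> str:
    """Return the mandated platform, covering the cross-domain case."""
    if source_domain == dest_domain:
        return DOMAIN_PLATFORM[source_domain]
    return CROSS_DOMAIN[(source_domain, dest_domain)]

print(choose("X", "Y"))  # middleware_X
```

If `CROSS_DOMAIN` were left out, a project spanning X and Y would have exactly the gap the post describes: it would either pick for itself or come back asking for direction.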