My initial reaction was the same as Nick's and Michael's--is this an 
example of SOA? What other principles are in play here?

But, I'll let that thread run its course separately. I'd like to 
pursue a different aspect.

I'm a proponent of SO being applied at any level. SOA is not just a 
BA or an EA thing. Is this an example of SO applied to an application 
architecture (AA)? Let's assume that it is, for the most part anyway. 
Can we examine the complexity aspect in more detail?

I guess the first step is to make sure we're on the same page as far 
as understanding the before and after pictures.

Before:

A web-based app comprising many JSPs and EJBs.
A data store, replicating data from a data warehouse. The data 
warehouse was populated with data from various transactional 
applications.
Presumably a batch job of some sort to periodically pull data from 
the DWH into the application data store.
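To make the replication step concrete, a minimal sketch of what that batch job might look like — the table name, schedule, and in-memory stand-ins for the DWH and app data store are all my assumptions, not details from the case study:

```python
import time

# Hypothetical stand-ins for the data warehouse and the app's local store.
warehouse = {
    "call_detail_records": [
        {"id": 1, "caller": "555-0100", "minutes": 12},
        {"id": 2, "caller": "555-0101", "minutes": 95},
    ]
}
app_store = {"call_detail_records": []}

def replicate(table):
    """Copy an entire table from the DWH into the app's data store.

    This is the extra moving part in the "before" picture: a full copy
    that is stale between runs and must be stored, scheduled, monitored,
    and reconciled. Returns the number of rows copied.
    """
    app_store[table] = list(warehouse[table])
    return len(app_store[table])

copied = replicate("call_detail_records")
print(f"copied {copied} rows at {time.strftime('%H:%M:%S')}")
```

The point of the sketch is that even a trivial replication job drags in scheduling, storage, and staleness concerns that have nothing to do with fraud detection itself.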

After:

A web-based app comprising 1 JSP, 1 SWF, and BPEL-specified 
invocation of web services.
The BPEL processes exposed as services.
Data services invoked by the BPEL processes. The services 
access "backend data sources" directly.
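By contrast, the "after" shape replaces the replicated copy with per-request calls to the backend. A rough Python sketch of that shape — the service facade, its operations, and the fraud rule are hypothetical; the real system orchestrates web services from BPEL:

```python
# Hypothetical backend source, queried in place rather than replicated.
backend_cdrs = [
    {"id": 1, "caller": "555-0100", "minutes": 12},
    {"id": 2, "caller": "555-0101", "minutes": 95},
]

class CallDetailService:
    """Assumed data-service facade over the backend data source."""

    def records_for(self, caller):
        # Each request goes to the system of record: no batch copy and
        # no staleness window, but now a runtime dependency per call.
        return [r for r in backend_cdrs if r["caller"] == caller]

def check_for_fraud(caller, threshold_minutes=60):
    """Stand-in for what a BPEL process would do: fetch, then decide."""
    records = CallDetailService().records_for(caller)
    return any(r["minutes"] > threshold_minutes for r in records)

print(check_for_fraud("555-0101"))
```

Note what moved: the data-copying complexity is gone, but the app now depends on the backend service being available and fast at request time — which is exactly the kind of trade the complexity questions below are probing.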

Unknown items:

Does "service" equal "web service"?
Do any of the services have multiple interfaces? Multiple operations? 
Multiple versions?
The components included in the SWF.
Number of BPEL processes.
Number of data services.
The implementation of the "backend data" services.
Where the backend data services are hosted. Which group owns them.
How the services are discovered.
How the services are managed as independent entities (development, 
versioning, testing, deployment, etc.).

I have no doubt that the fraud detection application is simpler in 
many ways. Certainly, the elimination of the replicated data is a big 
win. What are the ways in which this solution is now more complex? 
What is the net complexity delta and how do we objectively measure 
that?

Some candidate metrics:

Number of independent components.
The internal complexity of each component.
The effort needed to interact with an independent component, relative 
to interacting with an internal component.
Administrative tasks required.
Troubleshooting aspects.
What others?
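One crude way to make the "net complexity delta" concrete: score each candidate metric before and after, weight it, and take the signed sum. Every number and weight below is invented purely for illustration — the exercise is the point, not the values:

```python
# Invented figures: (metric, before, after, weight). Higher = more complex.
metrics = [
    ("independent components", 3, 12, 1.0),          # more moving parts after
    ("avg internal complexity per component", 8, 2, 1.0),  # far less handwritten code
    ("cross-component interaction effort", 2, 5, 0.5),
    ("administrative tasks", 6, 4, 0.5),
]

def net_complexity_delta(metrics):
    """Weighted sum of (after - before); negative means simpler overall."""
    return sum(w * (after - before) for _name, before, after, w in metrics)

print(f"net complexity delta: {net_complexity_delta(metrics):+.1f}")
```

Whether such a weighted sum is even meaningful is part of the open question — the metrics need agreed scales and weights before anyone can claim a net delta objectively. With the invented numbers above, the growth in independent components actually outweighs the drop in per-component complexity, which illustrates why the question is worth asking at all.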

I'm hoping we can be more exhaustive in evaluating the complexity 
aspects of this case without violating NDAs or exposing confidential 
info.

-Rob

--- In [email protected], David 
Chappell <david.chapp...@...> wrote:
>
>  
> 
>   _____  
> 
> From: Rob Eamon [mailto:rea...@...] 
> Sent: Wednesday, January 14, 2009 3:25 PM
> To: [email protected]
> Subject: [service-orientated-architecture] Re: IBM's Carter on 
> Selling SOA to the CEO
> 
> 
> 
> 
>  ... 
> 
> > If we were to look at the average complexity of the before and 
> > after, we see that it goes down substantially. There are many 
> > metrics that can be used to prove that (cyclomatic complexity, for 
> > one), but the KPI we see improving is the time it takes to make a 
> > change or add functionality to the ecosystem.
> 
> Does anyone have anything other than anecdotal evidence of 
> complexity decreasing "substantially?" 
> 
>  
> 
> [DC] - In my blog posting from Earth Day of last year, 
> http://blogs.oracle.com/davidchappell/2008/04/roi_by_the_ton_going_green_wit.html
> 
> I highlighted a case study about Verizon Wireless going green by 
> rewriting their fraud detection application using SOA, EDA, and Web 
> 2.0, and as a result eliminating 6 tons of hardware from their 
> datacenter.
> 
> The old application, which was based on J2EE, replicated the 
> entire data warehouse of call detail records for use by the fraud 
> detection application. It also had a lot of procedural custom code 
> that was hand written by 5 FTE over 2 years, some stuff that was 
> ported from Forte to J2EE, and 100s of JSPs feeding (circa 1995) html 
> 3.0 pages with data.
> 
> The new implementation is 0.5% of its original size. They replaced 
> 100s of JSPs and EJBs with 1 JSP and 1 SWF file for UI and BPEL 
> processes which access call detail records from the backend systems 
> directly (via service level interfaces). They also went from 5 FTE 
> over the course of 2 years down to 1 consultant who wrote several 
> BPEL processes over the course of 1 year.
> 
> The old architecture had to replicate the call detail records and 
> operate against the replicated data in the event of a suspicious 
> activity. The new architecture uses BPEL processes (which are 
> themselves exposed as services) to call directly to the backend data 
> sources via service level interfaces to get access to the call detail 
> records.
> 
> The best line of code is one that you never have to write. The 
> complexity decreased substantially in terms of the amount of 
> handwritten code to be maintained, in lieu of BPEL processes which 
> can be declaratively modified using visual modeling tools. Aside 
> from the approach to language and tooling, the sheer volume of code, 
> data, and hardware was also dramatically reduced, which no matter how 
> you look at it will dramatically reduce the complexity.
> 
> Dave  
>

