> Ian Griffiths wrote:
> >.NET remoting requires both ends to share type
> > information.  This entails a degree of coupling
> > between your systems that is likely to be highly
> > undesirable.
 
Frans Bouma replied:
> Everybody who has written a remoting setup knows that
> this is easily circumvented by defining an assembly
> with just interface definitions which are implemented
> by the server and consumed by the client.

That mitigates the problem, but it doesn't solve it.  Sooner or later
you're going to want to evolve your remote interface, which involves
getting new versions of the interfaces to both ends.

In a situation where the same team owns both ends of the link and can
coordinate the releases of clients and servers, this is not a problem.
But not everyone gets to work in that scenario.  In large organizations,
there's every chance that even though the client and server may both be
running .NET, the software on each box may be owned by different teams.
Coordinating updates is a real headache here when you have to share type
information.

It's not exactly a piece of cake with web services either of course.
Evolving public APIs is hard.  But web services certainly make it a lot
easier, because there isn't that hard dependence on a specific assembly
being available at both ends.


> You don't need to store type implementations on server and
> client, just the interfaces.

As I said, you need to share type information - interfaces are types
too.  You don't need to share implementations, but then I never said
that you did.  The fact remains that you need to get the interface
definitions to both ends.  This involves having equivalent .NET
assemblies at both ends.  (The fact that the types are all interfaces
doesn't make that any easier - it's a file that has to be there either
way.)
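
To make "equivalent assemblies at both ends" concrete, here's a minimal
sketch of the usual arrangement (the type names and URL are invented for
illustration):

    // Shared.dll - this assembly has to be present at BOTH ends
    public interface IOrderService
    {
        decimal GetOrderTotal(int orderId);
    }

    // Server only - the implementation never leaves the server
    public class OrderService : MarshalByRefObject, IOrderService
    {
        public decimal GetOrderTotal(int orderId) { /* ... */ return 0; }
    }

    // Client - needs nothing but Shared.dll, but it *does* need it
    IOrderService svc = (IOrderService)Activator.GetObject(
        typeof(IOrderService),
        "tcp://someserver:8080/OrderService.rem");

Any change to IOrderService means a new Shared.dll, and that new file
has to turn up on every box that talks to the service.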

The reason for using interfaces is to minimize coupling between client
and server, with the hope being that you can evolve both relatively
freely whilst minimizing the number of changes you need to make to your
shared interface component.  But sooner or later, you're likely to find
that you want to change something in that interface component, and
that's where the trouble starts.

(Of course, to do some of the things you were talking about earlier -
passing cyclic object graphs with polymorphic members - you *are* going
to need class implementations on both ends.  But I'd strongly recommend
against such a design in most remoting scenarios.)


> Why is it highly undesirable that systems are coupled?

The same reason you avoid putting concrete classes in your shared
interface DLL.

The more tightly coupled any pair of systems are, the harder it is to
evolve them independently.  (In fact, that's one definition of 'tightly
coupled'...)  In extreme cases, it is impossible to update anything
without updating everything - your distributed system must be deployed
monolithically.  The people responsible for keeping a company's IT
systems running are usually (rightly) reluctant to do such big-bang
deployments, because it's a highly risky thing to do.  But if you have
low coupling, there's a much better chance that you'll be able to update
components independently.

The main reason for defining all your remoting APIs in a separate DLL
and making them interfaces is to reduce coupling - it increases your
chances of being able to update either end of the connection in
isolation.
However, the success of this ploy depends on being able to keep those
shared interfaces stable.  Web services take the decoupling one step
further by observing that it's not actually necessary to share the
interface types at all.


> Webservices are also coupled. these types are also known
> at the client!!

I disagree with the implied view of the relationship between the XML
documents that web services and clients exchange, and the types those
services and clients use internally to represent those documents.  You
appear to be assuming that both ends must be using the same types.  They
don't have to, and that freedom is one of the fundamentally important
advantages of web services.

The fact is that with web services, the types the client code uses don't
need to be the same as the types the server code uses.  All that matters
is that the XML documents they exchange look right.  And there are any
number of ways of representing an XML document in code.  (As an extreme
example, the client can use the type "text string" as its input to the
web service.  I've seen that done extremely successfully in fact - for
simple cases StringBuilder can be a great expedient way of building a
web service client...)
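
As a sketch of what I mean (the service URL, message shape and SOAPAction
are made up for the example, and real code would want error handling),
the "string in, string out" approach is roughly this much work:

    // using System.IO; using System.Net; using System.Text;
    StringBuilder soap = new StringBuilder();
    soap.Append("<soap:Envelope xmlns:soap=");
    soap.Append("\"http://schemas.xmlsoap.org/soap/envelope/\">");
    soap.Append("<soap:Body><GetQuote xmlns=\"urn:example\">");
    soap.Append("<Symbol>MSFT</Symbol>");
    soap.Append("</GetQuote></soap:Body></soap:Envelope>");

    HttpWebRequest req = (HttpWebRequest)WebRequest.Create(
        "http://someserver/QuoteService.asmx");
    req.Method = "POST";
    req.ContentType = "text/xml; charset=utf-8";
    req.Headers.Add("SOAPAction", "\"urn:example/GetQuote\"");

    byte[] body = Encoding.UTF8.GetBytes(soap.ToString());
    req.ContentLength = body.Length;
    using (Stream s = req.GetRequestStream())
    {
        s.Write(body, 0, body.Length);
    }
    WebResponse resp = req.GetResponse();
    // Read the response with XmlTextReader, XmlDocument, or even plain
    // string matching - whatever the client finds most convenient.

No .NET types are shared with the server; all that matters is that the
XML on the wire is what the server's schema expects.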

Even if the client uses something more sophisticated, so long as the
XML documents it sends to the server conform to the schema the server
requires, and the returned documents conform to the schema the client
expects, all will be well.  The server doesn't care what types the
client uses internally; it only cares what the XML it receives looks
like.  This is still a sort of coupling of course, but a significantly
less restrictive kind of coupling than the need to share type
information.  (This doesn't even necessarily require schema to be
shared.  With careful design, it is possible for the server to evolve
its schemas for incoming documents without breaking existing clients.
As long as the new schema accepts a superset of the documents acceptable
under the old schema, it doesn't matter that the two ends are
different.)
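
For instance (the element names here are invented for illustration),
adding an optional element to an existing content model is exactly this
kind of safe, superset-style change:

    <!-- original schema -->
    <xsd:element name="Order">
      <xsd:complexType>
        <xsd:sequence>
          <xsd:element name="CustomerId" type="xsd:int"/>
        </xsd:sequence>
      </xsd:complexType>
    </xsd:element>

    <!-- revised schema: documents valid against the original are still
         valid, because the new element is optional -->
    <xsd:element name="Order">
      <xsd:complexType>
        <xsd:sequence>
          <xsd:element name="CustomerId" type="xsd:int"/>
          <xsd:element name="PriorityCode" type="xsd:string"
                       minOccurs="0"/>
        </xsd:sequence>
      </xsd:complexType>
    </xsd:element>

Existing clients keep sending the old documents and never notice; only
the client that wants to send PriorityCode has to change.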

I'm surprised you've not come across this concept.  Lots of people in
the remoting and web services space have been talking about this for
ages. For example, the blogs of Don Box and Steve Maine spring to mind
(that's http://www.gotdotnet.com/team/dbox/ and
http://hyperthink.net/blog/ respectively).


> In fact, VS.NET regenerates classes for those types at the
> server when you add a web reference,

Not everyone consumes web services that way though, even if they are
using .NET at both ends.  And even if they are using the wrappers VS.NET
generates, this doesn't necessarily preclude the kind of evolution I'm
talking about, if you get your schema designs right.  (But it's crucial
to be in control of your schemas.  If you regard schemas as being an
XML-flavoured definition of the concrete types you are using, then your
chances of success here are low, so you may as well just use .NET
remoting.)


> but these regenerated classes can often be a big pain
> as they don't contain deeper logic

That's by design.  And it's a design choice that was informed by the
lessons learnt in the 1990s with CORBA, RMI, and, to a lesser extent,
COM.  You really don't want to be sharing class implementation across
remoting boundaries in most applications.  (It definitely makes no sense
in a web service scenario, because sharing implementations across
remoting boundaries is a very highly coupled design approach.)


> > IIS ... it's the only supported mechanism for getting
> > integrated security
> 
> Integrated authentication can be 'good' at first, but once
> you use a remoted service, you should protect the datastream
> anyway, which doesn't make non-IIS remoting a bad solution
> per se. If 2 applications just have to communicate on a
> binary level, why on earth do we need this big stack of
> overhead services to get that going? :)

Why indeed?  I agree this is bad, but it's where we are today.  I
understand the plan is to fix this in Whidbey.  But I don't have the
luxury of deploying Whidbey applications just yet, what with it being
almost a year away.  :-)

For certain kinds of applications this is really important.  (And for
others, it's completely irrelevant.  It all depends on what you are
doing.)  If it's important to you, your options are either to use an
unsupported channel, or to host in IIS.


> Could you please provide some argumentation why the close
> binding of a server interface with the client is so bad?

Yes.  I'll provide two examples.

First example: Suppose I have a service accessible via .NET remoting
that has been up and running for 6 months now. It exposes several
different endpoints, and it has a number of client systems using these
various endpoints in various different ways throughout my organization.
Suppose I want to add a new method to one of the remote interfaces it
exposes.  Suppose this method is for the benefit of one particular
client application, while lots of other clients out there are already
using the existing service but won't use this new feature.  (I know I'm
always berating you for not being concrete enough,
so I'll be a little more specific here.  One place I've had to do this
was when the management and monitoring API for a service needed
modifying but its main operational interfaces did not.)

Conceptually, there's no good reason for most of the clients to be
disturbed. I'm augmenting a feature that most of them don't even use -
at most I should only need to update the clients that rely on the
interface being changed.  (And in an ideal world, I should only need to
update the clients that are going to *use* the new functionality -
there's no real reason for existing clients to be affected by the
addition of features they're not going to use.  That shouldn't be a
breaking change.)  But I'm going to end up with a new version of my
remote interfaces DLL.  Would you really be happy if your clients were
all running with a different version of the remote interfaces DLL from
the version on the server?  (Indeed it might not even work - the change
in version numbers can be enough to make .NET remoting give up.  But
even if it did plough on, it's not the kind of setup you really want on
your production systems.)  So you're going to have to update *all* your
clients, even though nothing really changed for many of them.
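
In code terms, the change that triggers all this can be tiny.  The
interface and member names here are invented, but it was something of
this shape:

    // RemoteInterfaces.dll, v1.0 - deployed to the server and to every
    // client that uses any of the service's endpoints
    public interface IServiceMonitor
    {
        string GetStatus();
    }

    // RemoteInterfaces.dll, v1.1 - one extra member for one client...
    public interface IServiceMonitor
    {
        string GetStatus();
        void SetTraceLevel(int level);  // ...but it's a new assembly,
                                        // so every client gets an update
    }

Nothing about the existing GetStatus contract changed, yet every client
needs the new RemoteInterfaces.dll before you can safely move the server
forward.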

(Or you're going to have to start introducing multiple remoting DLLs.
That might be an expedient solution, but seems like the start of a descent
into chaos - how are your systems going to look after a year of that
kind of thing?  In any case, this only allows isolated updates if you
put every interface in its own DLL to start with.  If you're starting
from a position of having one remote interface DLL, going to multiple
DLLs doesn't help - you're still going to have to update every client
the first time you do this kind of split.)

Compare this with web services.  There is no requirement for shared type
information.  The only requirement is that the documents are valid
according to the schemas in play.  It is possible to accommodate
evolution of the remote API without updating every client every time -
introducing a brand new element definition into an existing schema isn't
going to cause documents that were valid according to the old schema to
become invalid all of a sudden.  (Yes, I know, you could contrive such a
thing through careful use of xsd:anyType in the original document.  But
for a large and useful set of cases, such modifications are safe.)


Another example: suppose that some of my clients are running v1.0 of the
.NET framework and some are running v1.1.  I've had trouble connecting
v1.0 and v1.1 systems together with .NET remoting.  (Even if you stick
to the rules for avoiding problems with the security changes v1.1 made
to remoting serialization, I've still seen problems.)  It's
not always entirely clear what the degree of support is for
deserializing byte streams serialized by a different version of the .NET
framework.  One way of looking at this is that even with the .NET
Framework at both ends, you don't necessarily have the same platform at
both ends...  Use of remoting can introduce a new type of coupling that
requires me to be running the same *version* of the platform at both
ends (or that at least makes it impossible to be completely confident
that it's going to work when you have different versions).  Again, this
kind of thing doesn't tend to go down well with the IT ops people - even
with side-by-side .NET frameworks, you're still going to end up with a
big-bang update of all the various clients when you move the server to
the new version.

With web services, versions of the .NET framework are a non-issue,
because the .NET type system isn't involved at all.  A given XML
document is either valid or invalid for a given XML schema, regardless
of the nature of the runtime you use to generate or consume that
document.



> Talking about myths, I think the fear for having coupled
> systems is based on a big myth. If coupling of systems
> is so bad, every tier in an n-tier system should be
> completely abstract and only be communicating
> with others through xml (both commands and data).

Of course there will always be some degree of coupling.  It would
obviously be idiotic to claim that zero coupling is a goal, so please
don't mischaracterize my argument like that.  At the minimum, a client
presumably has some expectation of what a service will do when sending
it a request.

All I'm saying is that the higher the degree of coupling, the harder it
is to change anything without changing the entire system.  For some
systems that's just fine.  If that's acceptable on the systems you're
working on, then you don't need to worry.  But it's not OK for all
systems.

So this comes back to my original point: sometimes web services will be
a more appropriate choice than .NET remoting.  Not always, but
sometimes.  That's all I'm saying.  Is there a performance hit for the
flexibility?  Of course - XML is going to be more verbose than binary
serialization.  But that doesn't mean that web services are always
wrong.  It just means there's a tradeoff - are you prepared to sacrifice
the flexibility to improve the efficiency?  The answer to that question
will depend on the requirements of individual applications.  And that's
why I disagree with your claim that web services are always the wrong
way to connect .NET systems together.  I would say that they are only
*sometimes* the wrong solution.
 

> > Sometimes it is.  But you seem to be saying that web services
> > are always the wrong choice, and I really can't agree with that.
> 
> As long as XmlSerializer is REQUIRED, webservices are the wrong
> choice,

Now that you've recanted some of your comments on XmlSerializer
performance in another email, I'm not sure if you're still standing
behind this particular comment.  I still disagree that it has anything
to do with XmlSerializer.  Web services have an unavoidable price of
entry that is to do with the use of XML, rather than specifically
XmlSerializer - replacing XmlSerializer with some other XML mechanism
doesn't get around that cost of entry.


> especially in situations where you are communicating between
> 100% .NET applications.

Now that I've put forward the two examples above, is this still what you
think?


> The XmlSerializer is severely broken as it can't produce Xml for
> simple classes with interface typed members,

That's a good thing IMO.  If you're trying to pass *objects* across a
remoting boundary, then I believe you're making a mistake.
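
For anyone following along, this is the behaviour in question (the type
names are invented; the shape is what matters):

    // using System.Xml.Serialization;
    public interface IPaymentMethod
    {
        decimal Amount { get; }
    }

    public class Order
    {
        public int OrderId;
        public IPaymentMethod Payment;   // interface-typed member
    }

    // Throws InvalidOperationException up front: XmlSerializer won't
    // guess how an interface-typed member should look as XML.
    XmlSerializer ser = new XmlSerializer(typeof(Order));

The serializer deals in document shapes, not object graphs, so it makes
you say what the XML should look like instead of quietly inventing a
representation for you.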


> Aren't you agreeing with me that if you have two boxes,
> A and B, and on both an application is running and
> communicating with the other one on a binary level, that
> it is completely insane to run this communication through
> a webserver/http client, xml serializer/deserializer, soap
> envelopes etc. ? :)

No, I don't agree.  I think that *under certain circumstances* it can be
insane, but I can also see circumstances in which it's entirely the
right thing to do.  It all depends on two things: (1) what the
operational costs of the relatively high coupling of a .NET Remoting
solution are for your particular application and deployment scenario;
and (2) how much you really need the efficiency gains offered by
remoting.

For scenarios where the efficiency benefits justify the cost, then yes,
I agree, it would be wrong to pay all the web service overheads.  For
situations where you need parts of the system to be able to evolve
independently, then I think it's not so clear cut.  For applications
where the web service overheads are not the bottleneck (and I've worked
on systems where the bottleneck is elsewhere) then again, I think it's
not clear cut.  It's entirely possible that the extra flexibility you
can obtain with web services will be worth the overhead.


-- 
Ian Griffiths - DevelopMentor
(RSS: http://www.interact-sw.co.uk/iangblog/ )
