Jan Algermissen wrote:
> If you put semantics in the message that are not covered by the MIME
> type, then you introduce an undesired coupling between client and
> server. If you send something in MIME type X, then the meaning of that
> message should be understood by any component handling that MIME type.
> (That is why, for example, application/xml is a bad choice: it has no
> semantics beyond "this is XML".)
> If you send serialized Java objects, you surely should do that with the
> proper MIME type, even if it is only standardized within your
> administrative domain.
The application needs particular data with particular meaning. The MIME type
matters to the targeted application because it controls the processing of the
"type" of information returned. I don't see where this part of the discussion
is going.
> > and I want to send this over HTTP, I might write a GET URL as
> >
> > ...?v1=23&v2=66&v3=223&v4=help%20me&v5=my%20name&v6=my%account
> >
> No, you do not send stuff as part of the query string and you do not
> send stuff via GET. You send stuff via PUT or POST.
This is a query for a set of information, not a store of information. The
parameters to the GET are "form" parameters. I don't understand why I would
use PUT or POST for that.
> > If I was returning this to the client, because I am using Java and
> > I can use
> > java.util.Properties.load(InputStream), I might send back
> >
> > Content-Type: text/plain
> > Content-Length: ...
> >
> > v1=23
> > v2=66
> > v3=223
> > v4=help me
> > v5=my name
> > v6=my account
> >
> > As an aside, if I was using XML, the same issues below would apply.
>
> Not necessarily. If you use a specialized MIME type for your
> semantics, the receiving component can very well check for allowed
> element names (e.g. HTML allows a title element, but no grupswush
> element).
But this is all overhead in application design. With Java and an RMI
programming model, all the type checking is done for me, and all of the data
is sent without any extra effort on my part, because it already has a mobile
wrapper, instead of having to be converted to an alternate form.
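To make the contrast concrete, here is a minimal sketch (class and method
names are mine, not from the thread) of the manual work being described: the
text/plain key=value payload has to be parsed and each field converted to its
real type by hand, where an RMI call would have delivered a typed object
directly.

```java
import java.io.IOException;
import java.io.StringReader;
import java.io.UncheckedIOException;
import java.util.Properties;

public class ManualParse {
    // Extract one field from a text/plain key=value body and convert it
    // to the type the application actually needs.
    static int intField(String body, String key) {
        Properties p = new Properties();
        try {
            p.load(new StringReader(body));      // parse the key=value lines
        } catch (IOException e) {
            throw new UncheckedIOException(e);   // cannot happen for a StringReader
        }
        return Integer.parseInt(p.getProperty(key));  // manual type conversion
    }
}
```

Every field needs a line like this; the RMI stub generates the equivalent
marshalling for free.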
> > I could
> > apply some other XML tools to help me get the "object" converted to
> > a document
> > for return or POST, but the "get" would still require manual work.
> > Using those
> > tools is extra knowledge and extra expense and time.
>
> You'll need to do verification on input data in any distributed
> environment, so what?
But that's value-based verification, not "presence"-based verification. In
the case of conversion from native objects to any other representation for
transport, if I do that transformation manually, then I have a lot of extra
work to do.
> >> Now wait - HTTP does not provide the ability to type the content that
> >> is exchanged between clients and servers? What else are MIME types
> >> for then? Surely your PDF reader can verify if some message is indeed
> >> application/pdf. Or am I missing your point?
> >
> > I'd like to remind you of the classic problem with http servers
> > everywhere.
>
> Hmm...what do HTTP servers everywhere have to do with using HTTP as
> your application protocol?
My example was that you can't depend on the content and the MIME type making
sense for any particular application. If I wrote a download manager, how
would I know how to treat text/plain-typed data that was actually binary
information? All of the web browsers that I've used always open text/plain
documents for me to view instead of prompting to download them. I've seen
many people then select "File->Save" and end up with a corrupted file.
> Can it be that the only thing that bothers you is that you do not get
> compile time type checking???
> (In the age of late binding and dynamic typing, who still wants that
> anyhow?)
I'm saying that everything about HTTP breaks down because of the lack of
controlled structure in the system. There are countless examples of things
that you can't depend on. So, if I can't count on the system to behave right
at the application level, I can only really depend on the transport layer.
> >> It is all about enabling independent evolution of your components.
> >
> > As does the use of Java interfaces for client/server interactions.
>
> But you have API coupling, don't you? When you use a uniform
> interface, you do not.
I have a coupling to the use of that interface that is no stronger than the
coupling you have to the content of a retrieved HTTP-transported response
"document". In Java, we have the opportunity to evolve the interface
trivially through subtyping. If I have an interface such as:

    public interface MyFunctions extends Remote {
        public void oper1( int val ) throws IOException;
    }

and I need a new function for another customer and I don't want to create
problems for my existing customers, I can do

    public interface MyMoreFunctions extends MyFunctions {
        public void oper2( String val ) throws IOException;
    }

and then have my "service" implement this subinterface. Now, everyone using
the old version has no issues with the new version; they get the same
interface. But, as time goes by, they can switch to the new implementation
if they want. And, as I've said here before, with a smart proxy, I can hide
the implementation of oper1(int) behind a client-local translated call to
oper2(String) if that makes my service easier to maintain.
Thus, the clients and the servers evolve completely independently.
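The smart-proxy translation mentioned above can be sketched as follows
(names beyond MyFunctions / MyMoreFunctions are mine; java.rmi.Remote is
omitted so the sketch compiles and runs without an RMI registry, but a real
Jini proxy would keep it):

```java
import java.io.IOException;

// The interfaces from the message above, minus the Remote marker.
interface MyFunctions {
    void oper1(int val) throws IOException;
}

interface MyMoreFunctions extends MyFunctions {
    void oper2(String val) throws IOException;
}

// Hypothetical smart proxy: old clients keep calling oper1(int), and the
// proxy translates that locally into the newer oper2(String) before the
// call ever leaves the client.
class SmartProxy implements MyFunctions {
    private final MyMoreFunctions backend;

    SmartProxy(MyMoreFunctions backend) {
        this.backend = backend;
    }

    public void oper1(int val) throws IOException {
        backend.oper2(Integer.toString(val));  // client-local translation
    }
}
```

The client's view of the service never changes; only the proxy's
implementation of the old operation does.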
> So far, I think only compile time type checking in a single process
> situation can do that. What prevents some socket from returning garbage
> to a Jini client?
Everything about TCP prevents "garbage". But more importantly, there are a
multitude of things in place to keep you from getting anything different from
what you tested with.
1. The interface keeps the functional semantics from changing.
2. The httpmd: protocol keeps the server or a third party in transit
   from changing the JAR of downloaded code without your knowledge.
   If you don't need httpmd:, though, you don't have to use it.
3. When you receive mobile code, proxy verification provided in the
   JERI implementation lets you know that the proxy object you have
   is in fact the correct software for you to use to talk to the server.
   No third-party injection of code is possible here.
4. JERI authentication and authorization tells you that you are talking
   to the expected service(s).
5. Object serialization version ids make sure that the object's
   representation is valid for your JVM.
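Point 5 can be illustrated in a few lines (the class names here are mine):
the serialized stream carries the class's serialVersionUID, and
deserialization fails with an InvalidClassException on a mismatch instead of
handing the application a silently wrong object.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class VersionCheck {
    static class Record implements Serializable {
        // Bump this on any incompatible change; old streams then fail fast.
        private static final long serialVersionUID = 1L;
        int value;
        Record(int v) { value = v; }
    }

    // Serialize and deserialize in-process; the version check happens
    // inside readObject().
    static Record roundTrip(Record r) throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(r);
        out.close();
        ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()));
        return (Record) in.readObject();
    }
}
```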
> > HTTP is everywhere if you live in a document world. I live in a
> > "data" world
> > where the data is down inside of very small devices using very
> > limited computing
> > environments.
>
> I have never evaluated that, but I doubt that there is a prohibitive
> overhead introduced by HTTP on top of sockets.
Most of these devices have very limited memory. Any free memory is used to
store data. The data is the vital enterprise item.
> I am very interested in providing HTTP in small or embedded devices.
> Does anyone have any figures about this?
There are countless HTTP servers on different types of embedded devices. I've
implemented HTTP servers on the aJile native-Java microcontrollers. Others
have implemented them on other, smaller devices using very customized TCP/IP
stacks that basically generate static packets onto the Ethernet without any
real TCP functions available.
> > I think that there are several issues. If all you need is to "get"
> > something, then HTTP works. If you need to exchange things and make a
> > multi-faceted transaction, HTTP breaks down. HTTP is a publishing
> > technology. It doesn't do as well as a conversational technology
> > because of the lack of type validation and associated control of
> > content.
>
> Hmm...I just have totally contrary experience about this. And I do
> not see how you can be sure that whatever you get over the network or
> from another process on the same machine is in any way correct.
The items I listed above make the focus on what the application has to handle
in terms of errors very limited. The JERI stack will keep failures of many
different types from propagating into the application layer as "okay" data
that is actually bad.
> OTOH, REST's architectural properties of message self-descriptiveness
> and visibility make it possible to check the validity of a message at
> any point, by any component.
But my continued question is: if you have a 10K document structure whose
"content" you need to validate (as opposed to the structure, which DTDs or
schemas help with), how hard is that? How hard do you have to work to make
sure that an expected integer value isn't now a floating-point value that
some client's parser is going to fail on?
Gregg Wonderly