IMO there are a variety of issues involved in the "component revolution"
that come back to the promise of "code reuse" made by all of the "object
oriented" technologies.

Component interfaces and communication:

The various component communication mechanisms are in competition - CORBA,
DCOM, XML-RPC, etc. - and none is yet standard. When the components reside
on the same system, the efficiency hit is substantial. For a single
processor we have such things as DLLs, which are mostly packaging, and
OLE, which allows an application to export most of its object model and
interface.

Using the interfaces:

No matter what mechanism is used to interface to a component, there is still
an interface to be learned. It may be termed an API, a COM interface, an
object model, or whatever. The fact remains that every component provides
some object structure and some ability to operate on that structure, all of
which requires understanding the nature of the component. This is true
whether the interface is a language (or dialect) of some sort, a programming
language or set of function calls, or an object model exported by
OLE/COM/DCOM/CORBA.

XML-RPC attempts to help by providing a DTD that defines the grammar of the
interface language and a definition of any particular interface. This is
another part of the approach of allowing one component to ask another for
its interface. It is really analogous to asking for and receiving a grammar
for the interface language and a dictionary for it.
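For concreteness, a typical XML-RPC method call is just a small XML
document; the method name and parameter here are invented for
illustration:

    <?xml version="1.0"?>
    <methodCall>
      <methodName>inventory.getCount</methodName>
      <params>
        <param><value><i4>41</i4></value></param>
      </params>
    </methodCall>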

Even given this, the programmer still has to understand the interface of the
component being used, at least in broad terms. This is independent of any
interface communication standards and, to my mind, inescapable. What grammar
definitions do help with is changing interface definitions and such things
as parameter type validation for remote invocations.

If we realize interfaces as dialects, we still need to know what the
dialect offers and how it is used. Rebol/View offers several internal
dialects, but programming in any of them will always require a knowledge of
the dialect.
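For instance, this minimal REBOL/View script uses the VID dialect; words
such as layout, text, and button mean nothing until you have learned the
dialect:

    view layout [
        text "Hello from the VID dialect"
        button "Quit" [quit]
    ]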

The concept of "little languages" has been used with great success in parts
of the Unix community for years. When these languages are implemented as
true grammars, they can be very useful and very powerful, since they allow
the programmer to address a problem domain in a language that is suited to
the domain. Languages such as REBOL and FORTH make heavy use of this idea -
the general concept of a dialect.
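REBOL's parse function makes implementing such a little language as a
true grammar quite cheap. Here is a sketch of a toy drawing dialect - the
command words are invented for illustration:

    draw-commands: [
        some [
            'circle set x integer! set y integer! set r integer!
                (print ["draw circle at" x y "radius" r])
            |
            'line set x integer! set y integer! set x2 integer! set y2 integer!
                (print ["draw line from" x y "to" x2 y2])
        ]
    ]

    parse [circle 10 20 5 line 0 0 100 100] draw-commands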

Indexing reusable components:

Another overwhelming problem is indexing and retrieval of components once
created. Given a universe of accessible components that you could
potentially use, how do you locate those that might be appropriate and how
do you find out what they can really do? If you think that a new component
is needed, how can you find out whether much of it (or perhaps all of it,
or something even better) has already been implemented and is available to
build upon?

The best current answer that we have is that components of various sorts are
advertised or promoted in various ways, discussed in email lists and
newsgroups, featured on web sites, and found by accident or by design with a
web search engine.

Learning the interface is done as it has always been done - the bright
ones read the manual, try things, and ask for help when what the manual
seems to say conflicts with their understanding and the behavior that they
observe. If the component is one that must be purchased prior to use, the
decision to spend the money has to come from the various discussions of
the benefits and pitfalls of using the tools.

Notice that this problem doesn't change whether we are talking about
stand-alone applications, programming languages, development tools, or
components in any of their various forms.

While we as an industry have made some progress on the technical side of
reusable and even distributed components, we have hardly a glimmering of
an approach to this problem of indexing. We haven't solved it with regard to
books and other natural language communications, and we haven't solved it
for components.

What now?:

Having said all that, it is clear that components have a real place in the
scheme of things.

Being able to describe the structure and format of the actual interface
with a grammar, as is done with XML-RPC, is a good step toward generality.

The ability to create dialects within a language that make component and
interface features easy to use is harder to achieve, but still useful.

Whether these "little languages" are small enough to be realized by
something like dialects in REBOL or are more appropriately constructed using
parser generators so as to produce true compilers or interpreters for the
languages would seem to depend on the nature of the component interfaces
involved. That REBOL has succeeded with so many of the internet protocols is
an indication of the value of the approach. If these protocols had been
developed from a language perspective to start with, the job would have
been simpler. If the email protocol had been defined in terms of an
internal object architecture, then the protocol itself would consist of
little more than appropriately marshalling and unmarshalling the object,
which is the effect that the REBOL dialect (or Perl library, or Python
library, or . . . ) has.
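REBOL's built-in protocol dialects already demonstrate the effect; the
addresses and credentials below are placeholders:

    send luke@example.com "One-line message"            ; SMTP in one call
    messages: read pop://user:pass@mail.example.com     ; fetch a mailbox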

FWIW,

Garold (Gary) L. Johnson
DYNAMIC Alternatives
[EMAIL PROTECTED]


-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]On Behalf
Of [EMAIL PROTECTED]
Sent: Monday, July 17, 2000 1:52 PM
To: [EMAIL PROTECTED]
Subject: [REBOL] where are all the components?

Hi all,

Here's a longish rant I originally sent to some friends this weekend
about where I see REBOL having a lot of promise in "the big picture."
I'm sure I'm not saying anything that RT isn't already thinking of, but
considering how long it's taken me to really GET what REBOL may mean for
the industry, I hope this might be useful for some readers of this list
who are still new to REBOL.. and of course, if I'm way off base in my
logic, I'd like to hear about it.  :-)

I remember reading some very optimistic books by a team of authors
(Orfali, Harkey, etc.) extolling the virtues of CORBA, The Distributed
Objects Survival Guide being the most typical.  In these books, they
painted this very compelling image of a wonderful world in the near
future where all software would be made out of a bunch of components,
and companies and end-users would be free to either write their own or
buy (or in the case of open source, download) ready-made and tested
components that others had written.  They saw CORBA as the horse most
likely to win the race of being first to market with a workable
component architecture that could bring about the "component
revolution."

So, where are all the components?  Why is it that, outside of things
like VBX controls and JavaBeans that have been very successful within
very limited domains, we still haven't progressed beyond, essentially,
shared libraries?!  I take that back:  MS has done a good job of
implementing new Windows technologies in terms of COM components, but,
even on Windows, it doesn't seem like ISVs are embracing COM within
their own applications.  And, on UNIX and Mac and BeOS, neither COM nor
CORBA seems to have taken off, and even though GNOME apps are linking
with an ORB now, I really don't see GNOME doing anything with CORBA
that it couldn't just as easily have done without it.

Long story short, I think one of the answers might just lie with a
concept REBOL is pushing, dialects.  Think about some of the most
highly successful Internet protocols we use today:  SMTP, HTTP, FTP,
NNTP.  While not an "Internet" protocol, let's add SQL to the list.
What do they have in common?  Instead of using CORBA or COM or some
binary packet format, all client/server communication is in the form
of domain-specific ASCII text commands!  Why can't we take that
architecture and apply it to creating components that communicate
within a single machine?  In some sense, we're already doing it in a
very static and primitive way with things like /etc files and
command-line arguments.
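For example, a classic hand-typed SMTP exchange looks something like
this (numbered lines are server replies; the host names are made up):

    220 mail.example.com ESMTP
    HELO client.example.com
    250 mail.example.com
    MAIL FROM:<alice@example.com>
    250 OK
    RCPT TO:<bob@example.com>
    250 OK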

Another piece of the puzzle is the notion of minimizing the number of
incompatible namespaces on the system.  Plan9 took this to an
extreme:  everything's a file in Plan9, even more so than UNIX, and so
it's easy for shell scripts to do things like act as TCP/IP clients just
by manipulating the right magic files.  More on this below.

So, why dialects?  First, I think they're much easier to design and
document than object interfaces.  They've traditionally been much
harder to IMPLEMENT, which is why people don't go that route unless
they absolutely have to, but that's where REBOL comes in (as I'll
expand on in a bit).  They're also MUCH easier to debug, as you can
just open a port and start typing in commands and see what's going on!
Think about how many mail problems have been diagnosed by sysadmins
telnetting to port 25.  Why shouldn't you be able to, say, telnet to a
port and type a command to open a window, another one to draw a circle
in it, etc., and see all the mouse, keyboard, and other events come
back to you as text messages too?  In fact, like any good RAD
environment, I can picture a very clean design/implement/debug cycle
where you add a new command, document it, and debug it, all with quick
turn-around time compared to designing a class, implementing all the
get/set accessor methods, realizing it still sucks, etc.
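As a sketch of what such a text-command service might look like in
REBOL (the command set and port number are invented; the TCP idioms
are the standard REBOL 2 ones):

    listen: open/lines tcp://:9090        ; listen for line-oriented commands
    conn: first wait listen               ; accept one connection
    forever [
        line: first conn                  ; read one command line
        either parse line ["circle" to end] [
            insert conn "OK circle drawn"
        ][
            insert conn "ERR unknown command"
        ]
    ]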

Dialects are much easier to script, as any random scripting language
that has the ability to open the port (which, in Plan9 was done in the
same way as opening any other file) can spew stuff to it with print
statements, and get responses back with read statements.  Of course a
language like REBOL makes these two tasks particularly easy, so it
helps on both sides of the fence.  Finally, dialects are TRIVIAL to
extend:  unlike binary protocols where it's easy to screw up and not
leave yourself room to elegantly add new capabilities, with a dialect,
you simply add a new command word!  You don't even have to worry about
proprietary extensions screwing up clients, because there's natural
namespace protection built into a dialect, you simply do something
like:

X-MyCompany-WeirdCommand: foo

and if the server doesn't understand, it gives you an error in a
well-established (by the particular dialect) way.
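Scripting that from REBOL is then only a couple of lines (the port
number is invented, matching the hypothetical service above):

    port: open/lines tcp://localhost:9090
    insert port "X-MyCompany-WeirdCommand: foo"
    print first port                      ; the server's one-line reply
    close port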

Efficiency seems to be the big reason more stuff hasn't gone this way,
and to be fair, I'm certainly not proposing that we throw away shared
libraries!  Things like printf() will always be best handled through
that sort of mechanism.  However, what exactly would be the impact if
the computer industry were to embrace dialects and take them to their
logical conclusion?  What if, sitting at your PC, you could connect to
any of 100 different ports, each one of which would supply you (as
well as the applications which depend on it) with a set of
well-defined services available through a simple, domain-specific
language?

I can think of a few obvious implications.  First, security!  While
it's useful to think in terms of "telnetting to a port", it's clear
that, if this is the primary way to add functionality to a system that
was previously available through shared libraries, the DEFAULT
security should be localhost-only, with some secure way of
establishing the identity of the user.  Something like UNIX domain
sockets would make sense, at least on UNIX (in fact, 4.4BSD has a
rather nice feature in that one app can send unforgeable credentials
to another over a UNIX domain socket).  Clearly it would be trivial to
write a telnet-style debugger tool to connect and log in to one of
these sockets, and the fact that they sit in the filesystem namespace
means that they'd be much easier to find, with collisions easier to
avoid, than numeric TCP ports.  The other nice thing is that the
needed infrastructure is already in the UNIX kernel and is
well-tested, efficient, and reliable.

Of course UNIX domain sockets eliminate the possibility of remote
access when you do desire that sort of thing, so it'd be nice if this
solution supported TCP as well.  The funny thing is that, by
communicating in terms of ASCII commands, there's no need for anything
like IIOP, and it's probably just as efficient, if not potentially
even better!

You might argue that this will create a "Tower of Babel" with each
language being completely different from all the others, but I think
that, by planning for the future and creating a sensible set of
"Language Design Guidelines" (much like the famous Mac UI Guidelines),
you can eliminate 99% of this.  Ideally, each language would consist
of no more than 10-20 commands, and all the boilerplate stuff (like
returning error codes for commands not recognized, etc.) would be
handled in an identical way for all of them.

The real issue that's kept this from happening so far, and this is
where I think REBOL can come in, is in the difficulty (up until now)
of writing reliable parsers for these domain-specific languages.  What
if we assume a single multithreaded REBOL server to wait on all of
these UNIX domain sockets and TCP ports, parse each of the hundreds of
domain specific languages, and then call down to a mixture of native
and REBOL code to implement the functionality?  This presumes a lot of
REBOL, and clearly current versions of REBOL are probably not up to
the task, but I think future versions could easily be, and I'm pretty
sure the clever people at REBOL are already looking forward to such a
day.  It's just that you have to read between the lines and really
THINK about the nature of software "components" to make the connection
that maybe REBOL has a real alternative worth pursuing, because
clearly CORBA has had its chance, and so far has not ushered us into
"component heaven", so what do we have to lose?
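To make that concrete, the dispatch core of such a server might look
like this sketch - the dialect names and grammar rules are invented:

    dialects: [
        "draw" ["circle" to end]          ; one parse rule per dialect
        "mail" ["send" to end]
    ]

    handle: func [name [string!] line [string!] /local rule] [
        rule: select dialects name        ; look up the dialect's grammar
        either all [rule parse line rule] [
            "OK"
        ][
            "ERR unknown command"
        ]
    ]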

-Jake
