On Mon, Aug 31, 2009 at 9:36 AM, erik quanstrom<quans...@quanstro.net> wrote:
>
> i can see in principle how this could be a good idea (no more
> comments, though).  could you elaborate, though.  i have found
> editing /lib/ndb/local works well at the scales i see.
>

I think the main issue with just editing /lib/ndb/local is a
combination of scale and number of writers.  Writing static config
files could work fine in a "typical" Plan 9 network of hundreds of
machines, even with multiple admins.  I have a feeling it starts to
break down with thousands of machines, particularly in an environment
where machines are appearing and disappearing at regular intervals
(clouds, HPC partitioning, or Blue Gene).  With hundreds of thousands
of nodes behaving this way it's probably impractical.  Of course,
this won't affect the casual user, but it's something that affects
us.  It's also possible that such a dynamic environment would better
support a 9grid-style environment.
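For reference, the kind of entry an admin hand-edits into
/lib/ndb/local looks something like this (the names and addresses
here are made up):

```
ipnet=example ip=192.168.0.0 ipmask=255.255.255.0
	fs=fs.example.net
	auth=auth.example.net
	cpu=cpu.example.net
ip=192.168.0.10 sys=cpu1 dom=cpu1.example.net
```

Every new cpu server means another line like the last one, written by
hand, on every file server that carries a copy -- which is exactly
what stops scaling when nodes come and go by the thousands.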

>
> i also don't know what you mean by "transient, task specific services".
> i can only think of things like ramfs or cdfs.  but they live in my
> namespace so ndb doesn't enter into the picture.
>

There are the relatively mundane configuration examples of publishing
multiple file servers, authentication servers, and cpu servers.
There's a slightly more interesting example of more pervasive sharing
(imagine exporting portions of your acme file system to collaborate
with several co-authors, or for code review).  The more applications
that export synthetic file systems, the more opportunity there is for
sharing, and the more a broader pub/sub interface is needed.

There is another option here which I'm side-stepping because its
something I'm actively working on -- which is instead of doing such a
pub/sub interface within ndb and CS, extending srv to provide a
registry for cluster/grid/cloud.  However, underneath it may still be
nice to have zeroconf as a pub/sub for interoperation with non-Plan 9
systems.
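To make the srv-registry idea concrete, here's a rough sketch of the
semantics I have in mind: servers publish themselves with ndb-style
attribute/value pairs, and clients query by attribute instead of
reading a static file.  None of these names are an actual Plan 9
interface -- it's purely illustrative.

```python
# Illustrative sketch of a srv-style service registry.  Services
# publish attribute/value pairs when they boot and withdraw them when
# they go away; clients query by attribute.  Names are hypothetical.

class Registry:
    def __init__(self):
        self.services = {}          # name -> attribute dict

    def publish(self, name, **attrs):
        """A booting server announces itself and its attributes."""
        self.services[name] = attrs

    def unpublish(self, name):
        """A disappearing node withdraws its entry -- no stale config."""
        self.services.pop(name, None)

    def query(self, **want):
        """Return names whose attributes match all requested pairs."""
        return [n for n, a in self.services.items()
                if all(a.get(k) == v for k, v in want.items())]
```

The point is that registration and withdrawal happen as a side effect
of a node booting or dying, rather than as an edit to a file somebody
has to remember to make.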

>
>> When I publish a service, in the Plan 9 case primarily by exporting a
>> synthetic file system.  I shouldn't have to have static configuration
>> for file servers, it should be much more fluid.  I'm not arguing for a
>> microsoft style registry -- but the network discovery environment on
>> MacOSX is much nicer than what we have today within Plan 9.  An even
>> better example is the environment on the OLPC, where many of the
>> applications are implicitly networked and share resources based on
>> Zeroconf pub/sub interfaces.
>
> sounds interesting.  but i don't understand what you're talking about
> exactly.  maybe you're thinking that cpu could be rigged so that
> cpu with no host specifier would be equivalent to cpu -h '$boredcpu'
> where '$boredcpu' would be determined by cs via dynamic mapping?
> or am i just confused?
>

Actually, the idea would be that cpu's default behavior would be to
grab $boredcpu -- but that's only part of the idea.  It would make
adding cpu servers as easy as booting them: they'd publish their
service with zeroconf and everyone would automatically pick it up and
be able to query them for additional attributes related to
utilization, authentication, or even fee structures.  Of course, as I
alluded to earlier, I think it's much more interesting in the presence
of pervasive network services exported through file systems.  I'd
suggest those who haven't go grab "sugar on a stick" from the OLPC
folks and run it under vmware or whatever.  I'm not broadly endorsing
every aspect of their environment, but I really liked their loosely
coupled sharing/collaboration framework built on zeroconf and their
mesh networks (it may be a bit hard to see this in sugar on a stick,
but in the past there were several public OLPC zeroconf servers you
could attach to and play around with).
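As a toy illustration of the $boredcpu resolution, cs could simply
pick the published cpu server reporting the lowest utilization.  The
attribute names here (address, load) are made up for the sketch:

```python
# Hypothetical sketch of how cs might resolve $boredcpu: choose the
# published cpu server with the lowest reported load.  The (address,
# load) representation is illustrative, not a real Plan 9 interface.

def boredcpu(published):
    """published: list of (address, load) pairs gathered from
    zeroconf-style announcements; returns the least-loaded address."""
    if not published:
        raise LookupError("no cpu servers published")
    address, _ = min(published, key=lambda p: p[1])
    return address
```

So `cpu` with no host specifier would amount to
`cpu -h boredcpu(announcements)`, with the candidate list maintained
dynamically as servers appear and disappear.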

       -eric
