Hmm, we did that for FS processes on Plan B; I mean, we kept a dynamic version of a registry that held the list of available volumes in a central place.
I think it can be used as is on Plan 9, without changes. There was a program (I think it was called adsrv; not sure, it's in the Plan B man pages) where file servers could keep an open file as long as they were alive. We didn't do load balancing, but it shouldn't be hard to add that to this program. If there's interest I can dig it out of our worm (although it should also be on sources). A rough sketch of the idea is at the bottom of this message.

On Mon, Aug 31, 2009 at 4:25 PM, Eric Van Hensbergen<eri...@gmail.com> wrote:
> On Mon, Aug 31, 2009 at 9:04 AM, erik quanstrom<quans...@quanstro.net> wrote:
>>
>> given the database= option, if one could confine rapid changes to
>> smaller files, one could teach ndb to only reread changed files.
>>
>
> Why not have a synthetic file system interface to ndb that allows it
> to update its own files? I think this is my primary problem.
> Granular modification to static files is a PITA to manage -- we should
> be using synthetic file system interfaces to help manage and gate
> modifications. Most of the services I have in mind may be transient
> and task specific, so there are elements of scope to consider and you
> may not want to write anything out to static storage.
>
>>> registration/discovery mechanism to existing applications. When I
>>> export, a flag should make that export visible to zeroconf resolution,
>>> etc.
>>
>> what do you mean by export?
>>
>
> When I publish a service, in the Plan 9 case primarily by exporting a
> synthetic file system. I shouldn't have to have static configuration
> for file servers, it should be much more fluid. I'm not arguing for a
> Microsoft-style registry -- but the network discovery environment on
> MacOSX is much nicer than what we have today within Plan 9. An even
> better example is the environment on the OLPC, where many of the
> applications are implicitly networked and share resources based on
> Zeroconf pub/sub interfaces.
>
> -eric
>
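
To make the idea concrete, here is a minimal sketch of the announce side. It assumes a registry file system mounted at /mnt/registry that adds an entry when a server creates a file there and drops the entry when the file is closed (i.e. when the server dies); that path and the name=/addr= record format are just made up for illustration, not adsrv's actual interface.

	#include <u.h>
	#include <libc.h>

	/*
	 * announce name addr
	 * register with the (hypothetical) registry and stay alive;
	 * the registry is assumed to remove the entry when the fd
	 * is closed, e.g. when this process exits.
	 */
	void
	main(int argc, char *argv[])
	{
		int fd;

		if(argc != 3){
			fprint(2, "usage: announce name addr\n");
			exits("usage");
		}

		/* create our entry in the registry */
		fd = create("/mnt/registry/new", OWRITE, 0666);
		if(fd < 0)
			sysfatal("can't reach registry: %r");
		if(fprint(fd, "name=%s addr=%s\n", argv[1], argv[2]) < 0)
			sysfatal("register: %r");

		/* keeping the file open keeps the entry visible */
		for(;;)
			sleep(60*1000);
	}

The load balancing Eric wants could then live entirely in the registry program: when several entries share a name it can answer lookups with whichever server it prefers, without the announcing servers changing at all.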