> >
> > given the database= option, if one could confine rapid changes to
> > smaller files, one could teach ndb to only reread changed files.
> >
> 
> Why not have a synthetic file system interface to ndb that allows it
> to update its own files?  I think this is my primary problem.
> Granular modification to static files is a PITA to manage -- we should
> be using synthetic file system interfaces to help manage and gate
> modifications.  Most of the services I have in mind may be transient
> and task specific, so there are elements of scope to consider and you
> may not want to write anything out to static storage.

i can see in principle how this could be a good idea (you'd lose the
comments in the file, though).  could you elaborate?  i have found
editing /lib/ndb/local works well at the scales i see.
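
for reference, the database= suggestion quoted at the top might look
something like this at the head of /lib/ndb/local (the extra file name
below is hypothetical; only the database= attribute itself is standard
ndb syntax):

	database=
		file=/lib/ndb/local
		file=/lib/ndb/common
		file=/lib/ndb/dynamic	# hypothetical: fast-changing entries kept here

the idea being that ndb could then reread only the small file whose
mtime changed, rather than reparsing everything.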

i also don't know what you mean by "transient, task specific services".
i can only think of things like ramfs or cdfs.  but they live in my
namespace so ndb doesn't enter into the picture.

could you give an example?

> When I publish a service, in the Plan 9 case primarily by exporting a
> synthetic file system, I shouldn't have to have static configuration
> for file servers; it should be much more fluid.  I'm not arguing for a
> microsoft style registry -- but the network discovery environment on
> MacOSX is much nicer than what we have today within Plan 9.  An even
> better example is the environment on the OLPC, where many of the
> applications are implicitly networked and share resources based on
> Zeroconf pub/sub interfaces.

sounds interesting.  but i don't understand what you're talking about
exactly.  maybe you're thinking that cpu could be rigged so that
cpu with no host specifier would be equivalent to cpu -h '$boredcpu'
where '$boredcpu' would be determined by cs via dynamic mapping?
or am i just confused?
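
for concreteness, i imagine the mapping as an ndb entry that cs keeps
up to date (everything below is a hypothetical sketch, not existing
syntax or an existing service):

	boredcpu=fast.example.com	# rewritten by a hypothetical load-reporting service

so that

	cpu -h '$boredcpu'

would have cs resolve $boredcpu to whichever machine currently
advertises itself as least loaded.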

- erik
