I think I've mentioned this before, but on a few of my synthetic file
systems here I'm using what you describe to slice a database by
specific orderings. For example, I have a (long) list of resources
which I'm managing in a particular environment; each has an owner,
type, status and a few static data containers. It's all backed by a
relational database, but the file server presents different "slices"
of that to external users, where the directory structure is rigidly
defined as:

/
 available/
 by-type/
 by-owner/
 inuse/
 ...

with all data to fill the directories being dynamically pulled from
the database.

In this particular case it saves me from having to implement a generic
SQL query mechanism, which would be unsafe, and from pushing the
complexity of knowing the underlying database structure onto the
clients. In the end, clients only know how to navigate to a particular
resource and 'reserve' or 'release' it. This scheme could potentially
be extended (at least in my case) to match your "user-defined sets" by
simply enumerating every unique column in the database as
subdirectories. A user defines a "subset" of all available nodes of a
particular type foo which have owner bar by cd-ing to
/available/by-type/foo/by-owner/bar. (I admit this is a bit hasty, so
perhaps not what you're really after...)

The reason I'm sticking with a file system is that if I have to do
this for many different resources which can't be easily stuck in the
same database, I'd have to design a protocol to avoid replicating
everything; and if I'm going to design a protocol, I may as well use
9p, something that's simple and that I'm familiar with.
