On August 13, 2012 5:52:26 AM, "Fernando Frediani (Qube)" wrote:
I am not sure how it works in Gluster, but to mitigate the problem of
listing a lot of small files, wouldn't it be suitable to keep a copy of
the directory tree on every node? I think Isilon does that, and there is
probably a lot to be learned from them, as it seems to be quite mature
technology. Another interesting thing could also be added in the future:
a local SSD to keep the file system metadata for faster access.
We could do that, and in fact I've been an advocate for it, but it must
be understood that there's no such thing as a free lunch. Once you're
caching directory structures on clients, you either have to give up a
certain amount of consistency or make the entire protocol much more
complex to perform cache invalidation and so on. Who's volunteering to
do that work? Who's even asking us to do that in the core team, once
they understand that it means taking resources away from other
priorities and permanently slowing down development because of that
complexity?
Nobody. At least with Gluster, unlike Isilon, there's the possibility
that somebody could take a stab at trading consistency for performance
themselves (as I have done myself, e.g. with negative-lookup caching and
replication bypass). There's not really all that much to be learned from
a closed-source system that's not even described in papers. In fact, I
*know* that they learn more from us than vice versa.
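To make the trade-off concrete, here is a minimal sketch (not Gluster's actual implementation; the class and method names are hypothetical) of TTL-based negative-lookup caching on a client. The client remembers that a path did not exist so that repeated lookups of missing files skip the network round trip; the TTL bounds how long the cache may return a stale "does not exist" answer after another client creates the file, which is exactly the consistency you give up to avoid a full invalidation protocol.

```python
import time

class NegativeLookupCache:
    """Hypothetical client-side negative-lookup cache.

    Remembers paths that were looked up and found missing, so repeated
    lookups can be answered locally. Entries expire after `ttl_seconds`,
    bounding (but not eliminating) the window in which a stale
    "does not exist" answer can be returned.
    """

    def __init__(self, ttl_seconds=1.0):
        self.ttl = ttl_seconds
        self._misses = {}  # path -> expiry timestamp (monotonic clock)

    def remember_miss(self, path):
        # Record that a server lookup for `path` returned ENOENT.
        self._misses[path] = time.monotonic() + self.ttl

    def is_cached_miss(self, path):
        # True if we can answer "does not exist" without asking the server.
        expiry = self._misses.get(path)
        if expiry is None:
            return False
        if time.monotonic() >= expiry:
            del self._misses[path]  # expired: fall through to the server
            return False
        return True

cache = NegativeLookupCache(ttl_seconds=0.5)
cache.remember_miss("/vol/missing.txt")
print(cache.is_cached_miss("/vol/missing.txt"))  # True within the TTL window
```

A longer TTL means fewer server round trips but a wider stale window; a real invalidation protocol would remove the staleness at the cost of extra messages and complexity, which is the point being argued above.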
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users