I ended up changing the directory structure so that instead of having everything 
in one large flat directory, it was hierarchical: one directory per day, then 
one directory per 10 minutes.  Most accesses are in the most recent directory.  
This leaves just a 4-second interval where everything is locked up.  Works for me.
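
A minimal sketch of that kind of layout (the base path, helper name and exact 
bucket format below are only illustrative, not the actual code in use):

    import os
    from datetime import datetime, timezone

    def bucket_path(base, when=None):
        """Return (and create) the day / 10-minute bucket dir for a timestamp."""
        when = when or datetime.now(timezone.utc)
        day = when.strftime("%Y-%m-%d")
        # Round the minute down to its 10-minute bucket, e.g. 14:37 -> "1430"
        bucket = "%02d%02d" % (when.hour, when.minute - when.minute % 10)
        path = os.path.join(base, day, bucket)
        os.makedirs(path, exist_ok=True)
        return path

    # New writes land in the current bucket; most reads hit the same directory.
    print(bucket_path("/tmp/gluster-demo"))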

Anne


From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Harshavardhana
Sent: Friday, October 11, 2013 3:36 AM
To: Toby Corkindale
Cc: gluster-users
Subject: Re: [Gluster-users] Can't access volume during self-healing



I've posted to the list about this issue before actually.
We had/have a similar requirement for storing a very large number of fairly 
small files, and originally had them all in just a few directories in glusterfs.

Directory layout also matters here: the number of files versus the depth of the 
directory hierarchy. It is also necessary to know how the application reaches 
these individual files (access patterns).

It turns out that GlusterFS is really badly suited to directories with large 
numbers of files in them. If you can split them up, do so, and performance will 
become tolerable again.
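
One common way to split them up, shown here purely as a hypothetical 
illustration, is to hash the file name into a couple of fixed-width path 
components so that no single directory has to hold millions of entries (the 
helper name, fan-out and paths below are arbitrary):

    import hashlib
    import os

    def sharded_path(base, filename, levels=2, width=2):
        """Spread files across fixed-fanout subdirectories keyed on a hash
        of the file name."""
        digest = hashlib.sha1(filename.encode("utf-8")).hexdigest()
        parts = [digest[i * width:(i + 1) * width] for i in range(levels)]
        path = os.path.join(base, *parts)
        os.makedirs(path, exist_ok=True)
        return os.path.join(path, filename)

    # e.g. <base>/<xx>/<yy>/sensor-000123.dat, where xx and yy come from the hash
    print(sharded_path("/tmp/gluster-demo", "sensor-000123.dat"))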

But even then it wasn't great. Self-heal can swamp the network, making access 
for clients so slow as to cause problems.

This analysis is wrong: the self-heal daemon runs in lower-priority threads and 
shouldn't be swamping the network at all. By default it never competes with 
user I/O traffic. Which version was this tested against?

For your use case (wanting distributed, replicated storage for large numbers of 
1 MB files) I suggest you check out Riak and the Riak CS add-on. It has proven 
great for that particular use case for us.
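
Since Riak CS speaks an S3-compatible API, storing and fetching objects of that 
size from an application might look roughly like this sketch using boto3 (the 
endpoint, credentials, bucket and key names are placeholders, not real values):

    import boto3

    # Riak CS exposes an S3-compatible API, so a generic S3 client can talk to it.
    s3 = boto3.client(
        "s3",
        endpoint_url="http://riak-cs.example.com:8080",
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    s3.create_bucket(Bucket="small-files")

    with open("sensor-000123.dat", "rb") as f:
        s3.put_object(Bucket="small-files", Key="2013/10/11/sensor-000123.dat", Body=f)

    obj = s3.get_object(Bucket="small-files", Key="2013/10/11/sensor-000123.dat")
    payload = obj["Body"].read()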

On top of all that, there is a fair amount of tuning that should be done at the 
kernel, network and filesystem level as well. NoSQL stores such as Riak can be 
beneficial, but again it depends on the use case.

--
Religious confuse piety with mere ritual, the virtuous confuse regulation with 
outcomes
_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users
