On 10/10/13 05:22, Pruner, Anne (Anne) wrote:
I’m evaluating gluster for use in our product, and I want to ensure that
I understand the failover behavior.  What I’m seeing isn’t great, but
from the docs I’ve read it doesn’t look like this is what everyone else
is experiencing.

Is this normal?

Setup:

-one volume, distributed, replicated (2), with two bricks on two
different servers

-35,000 files on volume, about 1MB each, all in one directory (I’m open
to changing this, if that’s the problem.  ls -l takes a /really/ long time)


I've posted to the list about this issue before, actually.
We had/have a similar requirement for storing a very large number of fairly small files, and originally had them all in just a few directories in glusterfs. It turns out that GlusterFS is badly suited to directories with large numbers of files in them. If you can split them up into subdirectories, do so; performance will become tolerable again.
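If you control how files are written, one common way to do that split is to hash each filename into a fixed two-level directory fan-out, so ~35,000 files spread across many small directories instead of one flat one. A minimal sketch (the `shard_path`/`store` helpers and the 2-level, 2-hex-character layout are just illustrative choices, not anything GlusterFS provides):

```python
import hashlib
import os

def shard_path(root, filename, levels=2, width=2):
    """Map a filename to a nested shard directory derived from its hash,
    e.g. 'foo.dat' -> root/ab/cd/foo.dat. With two levels of two hex
    characters each, files spread over up to 256*256 directories."""
    digest = hashlib.md5(filename.encode("utf-8")).hexdigest()
    parts = [digest[i * width:(i + 1) * width] for i in range(levels)]
    return os.path.join(root, *parts, filename)

def store(root, filename, data):
    """Write data under its sharded path, creating directories as needed."""
    path = shard_path(root, filename)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "wb") as f:
        f.write(data)
    return path
```

Since the mapping is deterministic, reads can recompute the same path from the filename alone, so no index of where each file landed is needed.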

But even then it wasn't great: self-heal can swamp the network, making access for clients so slow as to cause problems.

For your use case (wanting distributed, replicated storage for large numbers of 1MB files) I suggest you check out Riak and the Riak CS add-on. It's proven to be great for that particular use case for us.


-Toby
_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users
