Re: [Gluster-users] ec2 peers

2013-03-18 Thread David Ward
Was a solution to this ever sorted out? I am finding the same thing as Gerry. Trying the solution below just gives me: "Probe on host port 0 already in peer list". This is on Ubuntu 12.04.2 and gluster 3.3 (this wasn't an issue on gluster 3.1, as far as I recall). You can peer probe back from the
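The truncated suggestion above refers to probing in the reverse direction. A minimal sketch of what that looks like, assuming two hypothetical hosts named node1 and node2, with the first probe having been run from node1:

    # On node1: the initial probe records node2 in the peer list
    gluster peer probe node2
    # On node2: probe back so node1 is also recorded under its hostname
    gluster peer probe node1
    # Verify the peer list on either node
    gluster peer status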

Re: [Gluster-users] Questions about gluster/fuse, page cache, and coherence

2013-03-18 Thread nlxswig
Good questions. Why are there no replies? At 2011-08-16 04:53:50, Patrick J. LoPresti lopre...@gmail.com wrote: (FUSE developers: Although my questions are specifically about Gluster, I suspect most of the answers have more to do with FUSE, so I figure this is on-topic for your list. If I

Re: [Gluster-users] Errors during dbench run (rename failed)

2013-03-18 Thread Pranith Kumar K
On 03/17/2013 06:55 PM, Marc Seeger wrote: Hi, We just ran into dbench dying on one of our test runs. We run dbench on each of 2 machines, with the following parameters: dbench 6 -t 60 -D $DIRECTORY (host-specific; each machine writes into a separate directory). The directories are on a mountpoint
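For reference, a sketch of how such a run might be launched on the two machines; the directory names are hypothetical, the flags are as quoted above (6 clients, 60-second run, host-specific target directory):

    # Machine 1
    DIRECTORY=/mnt/gfs/host1-run; dbench 6 -t 60 -D $DIRECTORY
    # Machine 2
    DIRECTORY=/mnt/gfs/host2-run; dbench 6 -t 60 -D $DIRECTORY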

Re: [Gluster-users] Errors during dbench run (rename failed)

2013-03-18 Thread Hans Lambermont
Pranith Kumar K wrote on 20130318: On 03/17/2013 06:55 PM, Marc Seeger wrote: This is how dbench died: I, [2013-03-16T05:34:03.176890 #13121] INFO -- : [710] rename /mnt/gfs/something.example.com_1363412031/clients/client2/~dmtmp/PWRPNT/NEWPCB.PPT /mnt/gfs

[Gluster-users] Nightly rpms?

2013-03-18 Thread Nux!
Hello, On some occasions, with 3.4 for example, I seemed to hit bugs that were not only already reported but in some cases even fixed (like a recent "quota failed" issue). Is there a place where I could get nightly or at least weekly RPMs? This way at least I'll hit new or unresolved bugs

Re: [Gluster-users] Errors during dbench run (rename failed)

2013-03-18 Thread Pranith Kumar K
On 03/18/2013 02:36 PM, Hans Lambermont wrote: Pranith Kumar K wrote on 20130318: On 03/17/2013 06:55 PM, Marc Seeger wrote: This is how dbench died: I, [2013-03-16T05:34:03.176890 #13121] INFO -- : [710] rename /mnt/gfs/something.example.com_1363412031/clients/client2/~dmtmp/PWRPNT

[Gluster-users] How to evaluate the glusterfs performance with small file workload?

2013-03-18 Thread nlxswig
Hi guys, I have run into some trouble trying to evaluate GlusterFS performance with a small-file workload. 1: What kind of benchmark should I use to test small-file operations? As we all know, we can use iozone to test large-file operations, while for the sake
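One crude way to get a small-file number without a dedicated tool is to time the creation, listing, and removal of many small files directly on the mountpoint. A minimal sketch, assuming a hypothetical gluster mount at /mnt/gfs:

    cd /mnt/gfs && mkdir -p smallfile-test && cd smallfile-test
    # create 10,000 files of 4 KB each and time it
    time for i in $(seq 1 10000); do
        dd if=/dev/zero of=file.$i bs=4k count=1 2>/dev/null
    done
    # time a metadata-heavy pass (stat every file)
    time ls -l > /dev/null
    # time removal
    time rm -f file.*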

Re: [Gluster-users] How to evaluate the glusterfs performance with small file workload?

2013-03-18 Thread Torbjørn Thorsen
On Mon, Mar 18, 2013 at 11:27 AM, nlxswig nlxs...@126.com wrote: Hi guys 1: What kind of benchmark should I use to test small-file operations? I've been wondering a bit about the same thing. I was thinking it would be nice to have something that records and synthesizes IO patterns. One could
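The thread does not name a tool for this, but as one illustration of the record-and-replay idea, fio can write out an I/O log from a synthetic job and replay it later. A hypothetical sketch (fio, the directory, and the log path are all assumptions, not from the thread):

    # generate a small-file style workload and record its I/O log
    fio --name=record --directory=/mnt/gfs --rw=randwrite --bs=4k \
        --size=64m --nrfiles=100 --write_iolog=/tmp/small.iolog
    # later, replay the captured pattern against the same directory
    fio --name=replay --directory=/mnt/gfs --read_iolog=/tmp/small.iolog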

Re: [Gluster-users] different size of nodes

2013-03-18 Thread Thomas Wakefield
You can set the free disk space limit. This will force gluster to write files to another brick. gluster volume set <volume> cluster.min-free-disk XXGB (insert your volume name and the amount of free space you want to keep reserved, probably like 200-300GB). Running a rebalance would help move your files
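Putting the two suggestions together, a minimal sketch with a hypothetical volume name myvol and a 250GB reserve:

    # reserve free space so new files land on less-full bricks
    gluster volume set myvol cluster.min-free-disk 250GB
    # redistribute existing files across the bricks
    gluster volume rebalance myvol start
    # check progress
    gluster volume rebalance myvol status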

Re: [Gluster-users] different size of nodes

2013-03-18 Thread Papp Tamas
On 03/18/2013 01:43 PM, Thomas Wakefield wrote: You can set the free disk space limit. This will force gluster to write files to another brick. gluster volume set <volume> cluster.min-free-disk XXGB (insert your volume name and the amount of free space you want, probably like

Re: [Gluster-users] Possible reason for meta-data data entry missing-entry gfid self-heal failed?

2013-03-18 Thread Marc Seeger
Sadly, we keep seeing those. The logs display the same pattern:
[2013-03-18 05:22:49.174382] I [afr-self-heal-common.c:1941:afr_sh_post_nb_entrylk_conflicting_sh_cbk] 0-replicate0: Non blocking entrylks failed.
[2013-03-18 05:22:49.174382] E
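Not part of the thread, but a hypothetical starting point for inspecting such failures on a 3.3 replicate volume (the volume name myvol is assumed):

    # list entries that still need healing
    gluster volume heal myvol info
    # list entries on which self-heal has failed
    gluster volume heal myvol info heal-failed
    # list files in split-brain, which also show up as self-heal failures
    gluster volume heal myvol info split-brain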