[Gluster-users] op-version issue on 3 nodes setup

2015-02-12 Thread PEPONNET, Cyril N (Cyril)
Hi, We had an outage in our gluster setup and I had to rm -rf /var/lib/gluster/*. I rebuilt the setup from scratch, but the volumes' op-version and client-op-version are set to 2, while the op-version in glusterd.info is 30501. The main issue is that when I try to set a property it fails only for
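
A plausible recovery path (a sketch, untested here; VOLNAME is a placeholder and the target value must match the installed release):

    # op-version glusterd itself is running with
    grep operating-version /var/lib/glusterd/glusterd.info

    # op-version recorded per volume
    grep op-version /var/lib/glusterd/vols/VOLNAME/info

    # raise the cluster-wide op-version (all peers must support it;
    # the 'volume set all' form is available on newer releases)
    gluster volume set all cluster.op-version 30501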

Re: [Gluster-users] Can't create volume. Can't delete volume. Volume does not exist. Can't create volume.

2015-02-12 Thread Ernie Dunbar
On 2015-02-12 02:55, Atin Mukherjee wrote: On 02/12/2015 12:36 AM, Ernie Dunbar wrote: I nuked the entire partition with mkfs, just to be *sure*, and I still get the error message: volume create: gv0: failed: /brick1/gv0 is already part of a volume Clearly, there's some bit of data

Re: [Gluster-users] Missing 'status fd' and 'top *-perf' details

2015-02-12 Thread Rumen Telbizov
Am I the only one experiencing this? Do you guys have proper statistics? On Wed, Feb 11, 2015 at 1:29 PM, Rumen Telbizov telbi...@gmail.com wrote: Hello everyone, I have the following situation. I put some read and write load on my test GlusterFS setup as follows: # dd if=/dev/zero
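
For reference, the commands the subject refers to (VOLNAME is a placeholder):

    # open file descriptors per brick
    gluster volume status VOLNAME fd

    # sampled brick throughput; bs/count control the probe I/O size
    gluster volume top VOLNAME read-perf bs 4096 count 1024 list-cnt 10
    gluster volume top VOLNAME write-perf bs 4096 count 1024 list-cnt 10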

Re: [Gluster-users] Can't create volume. Can't delete volume. Volume does not exist. Can't create volume.

2015-02-12 Thread Ernie Dunbar
Sorry to top-post, Roundcube doesn't do quoting properly. The answer to your question is yes, both NFS1 and NFS2 had /brick1 formatted. I find some information about the brick in / Also, here are my logs from the last time I tried to create the volume on NFS1: [2015-02-11 18:59:58.394420]

Re: [Gluster-users] Can't create volume. Can't delete volume. Volume does not exist. Can't create volume.

2015-02-12 Thread Atin Mukherjee
On 02/12/2015 12:36 AM, Ernie Dunbar wrote: I nuked the entire partition with mkfs, just to be *sure*, and I still get the error message: volume create: gv0: failed: /brick1/gv0 is already part of a volume Clearly, there's some bit of data being kept somewhere else besides in
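
The usual culprit is metadata gluster leaves on the brick itself rather than in the partition contents that mkfs wipes: extended attributes on the brick directory and the .glusterfs tree. A common cleanup, sketched here, to be run only on a brick being deliberately reused:

    # remove the volume markers from the brick root
    setfattr -x trusted.glusterfs.volume-id /brick1/gv0
    setfattr -x trusted.gfid /brick1/gv0

    # drop the gfid hardlink tree left by a previous volume
    rm -rf /brick1/gv0/.glusterfs

Stale volume definitions under /var/lib/glusterd/vols/ on any peer can produce the same error, so those may need clearing (with glusterd stopped) as well.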

[Gluster-users] Warning: disk space on subvolume getting full - consider adding more nodes

2015-02-12 Thread Peter B.
Hi! I've recently added another node to a gluster volume (glusterfs v3.4.6), because it was getting full, but now I still receive the following warning in the logs (/var/log/glusterfs/VOLUME-NAME.log) every 2-3 days: [quote] [2015-02-09 09:06:55.625497] W
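
If the new brick was added with add-brick but no rebalance was run afterwards, existing directories keep their old layout and new files can keep hashing to the nearly-full subvolume, which would explain the recurring warning. The usual follow-up, sketched (VOLUME-NAME as in the log path):

    # recompute directory layouts so new files can hash to the new brick
    gluster volume rebalance VOLUME-NAME fix-layout start

    # or additionally migrate existing data onto the new brick
    gluster volume rebalance VOLUME-NAME start
    gluster volume rebalance VOLUME-NAME status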

Re: [Gluster-users] Performance loss from 3.4.2 to 3.6.2

2015-02-12 Thread Pranith Kumar Karampuri
On 02/12/2015 02:07 PM, ML mail wrote: Thanks Pranith for the details. So with that option one would be trading data consistency for performance. I am now interested to hear about Nico's new tests with this option disabled... On Thursday, February 12, 2015 9:23 AM, Pranith Kumar Karampuri

Re: [Gluster-users] Performance loss from 3.4.2 to 3.6.2

2015-02-12 Thread Pranith Kumar Karampuri
On 02/12/2015 01:17 PM, ML mail wrote: Dear Pranith, I would be interested to know what the cluster.ensure-durability off option does; could you explain or point to the documentation? By default the replication translator does fsyncs on files at certain times so that it doesn't lose data when
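
For reference, the option under discussion is toggled per volume (VOLNAME is a placeholder):

    # skip the replication translator's periodic fsyncs: faster writes,
    # but in-flight data can be lost on a server crash or power failure
    gluster volume set VOLNAME cluster.ensure-durability off

    # restore the default, crash-safe behaviour
    gluster volume set VOLNAME cluster.ensure-durability on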

[Gluster-users] Gluster performance on the small files

2015-02-12 Thread Punit Dambiwal
Hi, I have seen that gluster performance is dead slow on small files, even though I am using SSDs; the performance is very bad. I even get better performance from my SAN with normal SATA disks. I am using distributed-replicated glusterfs with replica count=2, and I have all SSD disks on the
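
Small-file workloads on gluster tend to be bound by per-file network round trips (lookup, create, xattr operations on both replicas) rather than by disk speed, which is why SSD bricks can still lose to a SAN with SATA disks. A crude way to measure it from a client mount (a sketch; /mnt/gluster is a placeholder):

    # time the creation of 10,000 4 KiB files through the mount
    mkdir -p /mnt/gluster/smallfile-test
    time bash -c 'for i in $(seq 1 10000); do
        dd if=/dev/zero of=/mnt/gluster/smallfile-test/f$i \
           bs=4k count=1 conv=fsync 2>/dev/null
    done'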

[Gluster-users] too many socket or ib related error logs..

2015-02-12 Thread 손영호
Hi everybody, I have one question: there are too many duplicate error logs on my gluster client server. From /var/log/glusterfs/gluster.log: [2015-02-12 08:42:43.476938] W [rpc-clnt-ping.c:145:rpc_clnt_ping_cbk] gluster_01-client-1: socket or ib related error [2015-02-12
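
These W-level entries come from the rpc keep-alive ping callback and usually point at flaky connectivity to the brick behind gluster_01-client-1. If the root cause is understood and only the log noise is the concern, client log verbosity can be lowered (a sketch; assumes the volume is named gluster_01, as the translator name suggests):

    # ERROR hides W-level ping warnings; the connectivity issue remains
    gluster volume set gluster_01 diagnostics.client-log-level ERROR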

Re: [Gluster-users] Can't create volume. Can't delete volume. Volume does not exist. Can't create volume.

2015-02-12 Thread Justin Clift
On 11 Feb 2015, at 19:06, Ernie Dunbar maill...@lightspeed.ca wrote: I nuked the entire partition with mkfs, just to be *sure*, and I still get the error message: volume create: gv0: failed: /brick1/gv0 is already part of a volume Clearly, there's some bit of data being kept somewhere

[Gluster-users] Lot of logs like 'STATUS: hashed_subvol volume1-replicate-0 cached_subvol null'

2015-02-12 Thread Jose Pablo Ferrero Prieto
Hello, I am testing glusterfs under Apache+Joomla to store the DocumentRoot. I'm using CentOS 7 and the gluster 3.4 client. After installing components/plugins, when I browse the site I get a lot of messages like: [2015-02-12 15:14:11.327225] I [dht-common.c:1000:dht_lookup_everywhere_done]
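
dht_lookup_everywhere_done fires when a file is not found on its hashed subvolume and dht has to search all subvolumes, which is common right after installers create and rename many files. The messages are informational (level I); if they are just noise, they can be filtered by raising the client log threshold (a sketch; assumes the volume is named volume1, as the translator name volume1-replicate-0 suggests):

    # WARNING suppresses I-level dht lookup messages
    gluster volume set volume1 diagnostics.client-log-level WARNING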

[Gluster-users] performance flush-behind dangerous?

2015-02-12 Thread ML mail
Hi, I was wondering whether turning on the performance.flush-behind option is dangerous in terms of data integrity. Reading the documentation, it seems to me that I could benefit from it, especially since I have a lot of small files, but I would like to stay on the safe side. So if anyone could tell me
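
For context: flush-behind belongs to the write-behind translator. With it enabled, the flush issued on close() returns to the application before pending writes are acknowledged by the server, so a write error after close can no longer be reported to the closing process; that is the integrity trade-off. Toggling it, sketched (VOLNAME is a placeholder):

    # return from close() without waiting for pending writes to complete
    gluster volume set VOLNAME performance.flush-behind on

    # wait on flush instead (safer for error reporting)
    gluster volume set VOLNAME performance.flush-behind off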

Re: [Gluster-users] Performance loss from 3.4.2 to 3.6.2

2015-02-12 Thread ML mail
Thanks Pranith for the details. So with that option one would be trading data consistency for performance. I am now interested to hear about Nico's new tests with this option disabled... On Thursday, February 12, 2015 9:23 AM, Pranith Kumar Karampuri pkara...@redhat.com wrote: