Re: [Gluster-users] Arbiter vs Dummy Node Details

2015-12-29 Thread Pranith Kumar Karampuri
On 12/30/2015 06:42 AM, Ravishankar N wrote: On 12/30/2015 04:20 AM, Kyle Harris wrote: Hello All, Forgive the duplicate but I forgot to give the first post a title so this corrects that. Anyway, I recently discovered the new arbiter functionality of the 3.7 branch so I decided to give it a

Re: [Gluster-users] gluster volume status -> commit failed

2015-12-29 Thread Atin Mukherjee
I think I have a workaround here: Remove the /var/lib/glusterd/vols/<volname>/node_state.info file from all the nodes and restart GlusterD. Post that, you should be able to execute gluster volume status successfully. Thanks, Atin On 12/30/2015 11:36 AM, Atin Mukherjee wrote: > I have some updates: > > Actual
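
A minimal sketch of that workaround, assuming a hypothetical volume name "myvol" and systemd-managed nodes (run on every node in the cluster):

    # drop the stale state file and restart the management daemon
    rm -f /var/lib/glusterd/vols/myvol/node_state.info
    systemctl restart glusterd
    # once all nodes are back, status should succeed again
    gluster volume status myvol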

Re: [Gluster-users] gluster volume status -> commit failed

2015-12-29 Thread Atin Mukherjee
I have some updates: Actually, the opCode we check here for aggregating status is the one for the rebalance process (volinfo->rebal.op), not the incoming operation, so I was wrong in my first analysis. Having said that, the rebalance op could be either GD_OP_REMOVE_BRICK or GD_OP_REBALANCE. If it

Re: [Gluster-users] Arbiter vs Dummy Node Details

2015-12-29 Thread Ravishankar N
On 12/30/2015 04:20 AM, Kyle Harris wrote: Hello All, Forgive the duplicate but I forgot to give the first post a title so this corrects that. Anyway, I recently discovered the new arbiter functionality of the 3.7 branch so I decided to give it a try. First off, I too am looking forward to

[Gluster-users] Arbiter vs Dummy Node Details

2015-12-29 Thread Kyle Harris
Hello All, Forgive the duplicate but I forgot to give the first post a title so this corrects that. Anyway, I recently discovered the new arbiter functionality of the 3.7 branch so I decided to give it a try. First off, I too am looking forward to the ability to add an arbiter to an already exis
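
For readers trying the same thing, an arbiter volume on the 3.7 branch is created by passing an arbiter count to volume create; a sketch, with purely illustrative host and brick names:

    # replica 3 where the third brick holds only file metadata (the arbiter)
    gluster volume create testvol replica 3 arbiter 1 \
        server1:/bricks/data server2:/bricks/data server3:/bricks/arbiter
    gluster volume start testvol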

[Gluster-users] glusters become slow when running IO intensive work for some time.

2015-12-29 Thread Zhengyu Guo
Hi all, I am working with 4 physical servers and on each of them we set up a virtual server for regular usage. The glusterfs server is set up on the physical servers and the glusterfs clients run on the virtual machines. When I run some IO-intensive jobs in a virtual machine for som
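
One hedged way to narrow down where the time goes during such a job is GlusterFS's built-in profiling; the volume name below is illustrative:

    gluster volume profile myvol start
    # ... run the IO-intensive job in the VM ...
    gluster volume profile myvol info    # per-brick FOP latencies and counts
    gluster volume profile myvol stop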

[Gluster-users] (no subject)

2015-12-29 Thread Kyle Harris
Hello All, I recently discovered the new arbiter functionality of the 3.7 branch so I decided to give it a try. First off, I too am looking forward to the ability to add an arbiter to an already existing volume as discussed in the following thread: https://www.gluster.org/pipermail/gluster-users/

Re: [Gluster-users] Healing queue rarely empty

2015-12-29 Thread Atin Mukherjee
-Atin Sent from one plus one On Dec 17, 2015 3:21 PM, "Nicolas Ecarnot" wrote: > > On 17/12/2015 10:10, Nicolas Ecarnot wrote: >> >> Hello, >> >> Our setup: 3 CentOS 7.2 nodes, with gluster 3.7.6 in replica-3, used as >> storage+compute for an oVirt 3.5.6 DC. >> >> Two days ago, we added some

Re: [Gluster-users] gluster volume status -> commit failed

2015-12-29 Thread Atin Mukherjee
I'll dive into it in detail in some time. Could you provide the cli/cmd_history/glusterd log files for further debugging? -Atin Sent from one plus one On Dec 29, 2015 9:53 PM, "Christophe TREFOIS" wrote: > Hi Atin, > > Same issue. I restarted glusterd and glusterfsd everywhere and it seems > the thi
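
The requested logs normally sit under /var/log/glusterfs/ on each node; the glusterd log file name below is the usual 3.7 default and may differ by packaging:

    tar czf gluster-logs-$(hostname).tar.gz \
        /var/log/glusterfs/cli.log \
        /var/log/glusterfs/cmd_history.log \
        /var/log/glusterfs/etc-glusterfs-glusterd.vol.log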

Re: [Gluster-users] gluster volume status -> commit failed

2015-12-29 Thread Christophe TREFOIS
Hi Atin, Same issue. I restarted glusterd and glusterfsd everywhere and it seems the thing is still in STATEDUMP. Any other pointers? Kind regards, — Christophe > On 29 Dec 2015, at 16:19, Atin Mukherjee wrote: > > > > On 12/29/2015 07:09 PM, Christophe TREFOIS wrote: >> Hi, >> >> >>
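
On CentOS 7.2 the restart mentioned above is usually done through systemd; a sketch, assuming the stock unit name shipped with the GlusterFS packages:

    systemctl restart glusterd
    # brick (glusterfsd) processes are spawned by glusterd when a volume starts
    gluster volume status    # check whether the commit failure persists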

Re: [Gluster-users] gluster volume status -> commit failed

2015-12-29 Thread Atin Mukherjee
On 12/29/2015 07:09 PM, Christophe TREFOIS wrote: > Hi, > > >> On 29 Dec 2015, at 14:27, Atin Mukherjee wrote: >> >> It seems like your opCode is STATEDUMP instead of STATUS which is weird. >> Are you running a heterogeneous cluster? > > What does that mean? In principle no. This means when a

Re: [Gluster-users] Healing queue rarely empty

2015-12-29 Thread Nicolas Ecarnot
On 17/12/2015 10:51, Nicolas Ecarnot wrote: On 17/12/2015 10:10, Nicolas Ecarnot wrote: Hello, Our setup: 3 CentOS 7.2 nodes, with gluster 3.7.6 in replica-3, used as storage+compute for an oVirt 3.5.6 DC. Two days ago, we added some nagios/centreon monitoring watching every 5 minutes t
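
For reference, the heal queue being monitored here is what the self-heal commands report; the volume name is illustrative:

    gluster volume heal datavol info                     # entries still pending heal
    gluster volume heal datavol statistics heal-count    # counters suited to periodic polling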

Re: [Gluster-users] gluster volume status -> commit failed

2015-12-29 Thread Christophe TREFOIS
Hi, > On 29 Dec 2015, at 14:27, Atin Mukherjee wrote: > > It seems like your opCode is STATEDUMP instead of STATUS which is weird. > Are you running a heterogeneous cluster? What does that mean? In principle no. > What is the last version you > were running with? I think it was 3.7.3 or 3.7.

Re: [Gluster-users] gluster volume status -> commit failed

2015-12-29 Thread Atin Mukherjee
It seems like your opCode is STATEDUMP instead of STATUS, which is weird. Are you running a heterogeneous cluster? What is the last version you were running with? What's the current cluster op-version? Thanks, Atin On 12/29/2015 06:31 PM, Christophe TREFOIS wrote: > Dear all, > > I have a 3-node
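
On the op-version question: each node records its operating version in glusterd's local store, and once every peer runs the same release the cluster op-version can be raised explicitly. A sketch (30706 is the value corresponding to 3.7.6, given here as an assumption):

    grep operating-version /var/lib/glusterd/glusterd.info
    # after all peers are on 3.7.6:
    gluster volume set all cluster.op-version 30706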

[Gluster-users] gluster volume status -> commit failed

2015-12-29 Thread Christophe TREFOIS
Dear all, I have a 3-node distribute setup with a GlusterFS controller and upgraded to 3.7.6 today and to CentOS 7.2. After the upgrade (reboot), I can start the volume fine and see the mounted volume as well on the controller. However, a gluster volume info results in an [root@stor104 gluste

[Gluster-users] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC (~in 30 minutes)

2015-12-29 Thread Manikandan Selvaganesh
Hi all, This meeting is scheduled for anyone interested in learning more about, or assisting with, the Bug Triage. Meeting details: - location: #gluster-meeting on Freenode IRC (https://webchat.freenode.net/?channels=gluster-meeting) - date: every Tuesday - time: 12:00 UTC