Re: [Gluster-devel] [Gluster-users] Unable to get lock for uuid peers

2015-06-10 Thread Atin Mukherjee


On 06/10/2015 02:58 PM, Sergio Traldi wrote:
 On 06/10/2015 10:27 AM, Krishnan Parthasarathi wrote:
 Hi all,
 I have two servers running 3.7.1 and have the same problem described in this issue:
 http://comments.gmane.org/gmane.comp.file-systems.gluster.user/20693

 My servers' packages:
 # rpm -qa | grep gluster | sort
 glusterfs-3.7.1-1.el6.x86_64
 glusterfs-api-3.7.1-1.el6.x86_64
 glusterfs-cli-3.7.1-1.el6.x86_64
 glusterfs-client-xlators-3.7.1-1.el6.x86_64
 glusterfs-fuse-3.7.1-1.el6.x86_64
 glusterfs-geo-replication-3.7.1-1.el6.x86_64
 glusterfs-libs-3.7.1-1.el6.x86_64
 glusterfs-server-3.7.1-1.el6.x86_64

 Command:
 # gluster volume status
 Another transaction is in progress. Please try again after sometime.
The problem is that although you are running 3.7.1 binaries, the cluster
op-version is still set to 30501, so glusterd goes for the cluster-wide
lock instead of a volume-wise lock for every request. The command log
history indicates glusterd is receiving multiple volume status requests
concurrently, which is why it fails to acquire the cluster lock. Could you
bump up your cluster's op-version with the following command and recheck?

gluster volume set all cluster.op-version 30701
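
If it helps, a quick way to confirm the current op-version on each node
(a minimal check, assuming a stock install that keeps glusterd state under
/var/lib/glusterd):

# grep operating-version /var/lib/glusterd/glusterd.info
operating-version=30501

The bump only needs to be run once, from any one node, as it applies
cluster-wide.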

~Atin


 In /var/log/glusterfs/etc-glusterfs-glusterd.vol.log I found:

 [2015-06-09 16:12:38.949842] E [glusterd-utils.c:164:glusterd_lock]
 0-management: Unable to get lock for uuid:
 99a41a2a-2ce5-461c-aec0-510bd5b37bf2, lock held by:
 04a7d2bb-bdd9-4e0d-b460-87ad4adbe12c
 [2015-06-09 16:12:38.949864] E
 [glusterd-syncop.c:1766:gd_sync_task_begin]
 0-management: Unable to acquire lock

 I checked the files:
   From server 1:
 # cat /var/lib/glusterd/peers/04a7d2bb-bdd9-4e0d-b460-87ad4adbe12c
 uuid=04a7d2bb-bdd9-4e0d-b460-87ad4adbe12c
 state=3
 hostname1=192.168.61.101

   From server 2:
 # cat /var/lib/glusterd/peers/99a41a2a-2ce5-461c-aec0-510bd5b37bf2
 uuid=99a41a2a-2ce5-461c-aec0-510bd5b37bf2
 state=3
 hostname1=192.168.61.100
 Could you attach the complete glusterd log file and the cmd_history.log
 file under the /var/log/glusterfs directory? Could you provide a more
 detailed listing of the things you did before hitting this issue?
 Hi Krishnan,
 thanks for the quick answer.
 Attached you can find the two logs you requested:
 cmd_history.log
 etc-glusterfs-glusterd.vol.log
 
 We use the gluster volume as the OpenStack Nova, Glance and Cinder backend.
 
 The volume is configured with 2 bricks, each mounted from an iSCSI device:
 [root@cld-stg-01 glusterfs]# gluster volume info volume-nova-prod
 Volume Name: volume-nova-prod
 Type: Distribute
 Volume ID: 4bbef4c8-0441-4e81-a2c5-559401adadc0
 Status: Started
 Number of Bricks: 2
 Transport-type: tcp
 Bricks:
 Brick1: 192.168.61.100:/brickOpenstack/nova-prod/mpathb
 Brick2: 192.168.61.101:/brickOpenstack/nova-prod/mpathb
 Options Reconfigured:
 storage.owner-gid: 162
 storage.owner-uid: 162
 
 Last week we updated OpenStack from Havana to Icehouse and renamed the
 storage hosts, but we didn't change the IPs.
 All volumes were created using IP addresses.
 
 So last week we stopped all services (OpenStack, Gluster and also iSCSI).
 We changed the DNS names of the private IPs of the 2 NICs.
 We rebooted the storage servers.
 We started the iSCSI, multipath and glusterd processes again.
 We had to stop and start the volumes, but after that everything worked
 fine.
 Now we don't observe any other problems except this one.
 
 We have a Nagios probe which checks the volume status every 5 minutes to
 ensure all the Gluster processes are working fine, and that is how we
 found the problem I'm reporting.
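
 For reference, a minimal sketch of what such a probe does (a hypothetical
 shell script for illustration, not our exact production check):

 #!/bin/sh
 # Nagios-style probe: OK if 'gluster volume status' succeeds,
 # CRITICAL otherwise.
 if gluster volume status >/dev/null 2>&1; then
     echo "OK - gluster volume status succeeded"
     exit 0
 else
     echo "CRITICAL - gluster volume status failed"
     exit 2
 fi

 Several such probes firing at once is exactly the kind of concurrent
 status traffic that contends for the lock described above.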
 
 Cheers
 Sergio
 
 
 

-- 
~Atin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] Unable to get lock for uuid peers

2015-06-10 Thread Atin Mukherjee


On 06/10/2015 05:32 PM, Tiemen Ruiten wrote:
 Hello Atin,
 
 We are running 3.7.0 on our storage nodes and suffer from the same issue.
 Is it safe to perform the same command or should we first upgrade to 3.7.1?
You should upgrade to 3.7.1.
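
A typical rolling sequence on these EL6 nodes would be something like the
sketch below (illustrative only, one node at a time; consult the 3.7
upgrade notes for the full procedure):

# service glusterd stop
# yum update 'glusterfs*'
# service glusterd start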
 
 On 10 June 2015 at 13:45, Atin Mukherjee amukh...@redhat.com wrote:

 [quoted text of the earlier message trimmed; see the first message in this thread]

-- 
~Atin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] Unable to get lock for uuid peers

2015-06-10 Thread Sergio Traldi



On 06/10/2015 01:45 PM, Atin Mukherjee wrote:

gluster volume set all cluster.op-version 30701


Hi Atin,
thanks for your answer.
I tried the command you gave me but I got an error:

# gluster volume set all cluster.op-version 30701
volume set: failed: Required op_version (30701) is not supported

Cheers
Sergio
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] Unable to get lock for uuid peers

2015-06-10 Thread Atin Mukherjee


On 06/10/2015 07:25 PM, Sergio Traldi wrote:
 
 On 06/10/2015 01:45 PM, Atin Mukherjee wrote:
 gluster volume set all cluster.op-version 30701
 
 Hi Atin,
 thanks for your answer.
 I tried the command you gave me but I got an error:
 
 # gluster volume set all cluster.op-version 30701
 volume set: failed: Required op_version (30701) is not supported
Are all your nodes upgraded to 3.7.1? Could you attach the glusterd log?
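
A quick way to verify that on each node (assuming the EL6 packaging shown
earlier in the thread):

# rpm -q glusterfs-server
glusterfs-server-3.7.1-1.el6.x86_64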

-- 
~Atin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] Unable to get lock for uuid peers

2015-06-10 Thread Sergio Traldi

Hi,

On 06/10/2015 04:17 PM, Tiemen Ruiten wrote:
If I'm not mistaken, max op-version is still 30700, since this change 
wasn't merged into 3.7.1:


http://review.gluster.org/#/c/10975/

see also: https://bugzilla.redhat.com/show_bug.cgi?id=1225999



I set cluster.op-version to 30700 without a problem.
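
For the record, the sequence that worked (a sketch; the verification step
reuses the glusterd.info check mentioned earlier in the thread):

# gluster volume set all cluster.op-version 30700
volume set: success
# grep operating-version /var/lib/glusterd/glusterd.info
operating-version=30700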
Thanks to all for the hints.
Cheers
Sergio

On 10 June 2015 at 16:09, Atin Mukherjee amukh...@redhat.com wrote:

[quoted text of the earlier message trimmed]


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] Unable to get lock for uuid peers

2015-06-10 Thread Tiemen Ruiten
If I'm not mistaken, max op-version is still 30700, since this change
wasn't merged into 3.7.1:

http://review.gluster.org/#/c/10975/

see also: https://bugzilla.redhat.com/show_bug.cgi?id=1225999
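
If I read it correctly, the ceiling comes from GD_OP_VERSION_MAX in the
glusterfs sources; one way to check it for a given release tarball
(an illustrative grep, and the exact define may differ between releases):

# grep 'GD_OP_VERSION_MAX' glusterfs-3.7.1/libglusterfs/src/globals.h
#define GD_OP_VERSION_MAX  GD_OP_VERSION_3_7_0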

On 10 June 2015 at 16:09, Atin Mukherjee amukh...@redhat.com wrote:

[quoted text of the earlier message trimmed]




-- 
Tiemen Ruiten
Systems Engineer
RD Media
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] Unable to get lock for uuid peers

2015-06-10 Thread Tiemen Ruiten
Hello Atin,

We are running 3.7.0 on our storage nodes and suffer from the same issue.
Is it safe to perform the same command or should we first upgrade to 3.7.1?

On 10 June 2015 at 13:45, Atin Mukherjee amukh...@redhat.com wrote:

[quoted text of the earlier message trimmed; see the first message in this thread]




-- 
Tiemen Ruiten
Systems Engineer
RD Media
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel