Re: [Gluster-users] Volume turns read only when one brick is missing

2015-03-06 Thread Joe Julian

You set two different types of quorum.

The server quorum will stop a server if it loses quorum. Since you have 
three servers, losing one will not interrupt your quorum and your 
servers will continue running.


You also set cluster.quorum-type: auto, which is a volume quorum. Auto 
will only allow writes if more than half of the bricks, or exactly half 
including the first brick, are present. With your two-brick volume, that 
means you could lose vmhost-2 but not vmhost-1 and still retain quorum. I 
doubt this is what you were looking for.


I suspect you only wanted the server quorum, so just run:

gluster volume reset cluster1 cluster.quorum-type
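
For reference, a minimal verification sketch (standard gluster CLI assumed):

gluster volume info cluster1   # cluster.quorum-type should no longer appear under "Options Reconfigured"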


On 03/06/2015 10:32 AM, Nico Schottelius wrote:

Hello,

when I reboot one out of the two servers in a replicated setup,
the volume turns into read only mode until the first server is back.

This is not what I expected and I wonder if I misconfigured anything.

The expected behaviour from my point of view is that the volume stays
read write and that the rebooting node will catch up again.

My setup consists of three servers, two of them hosting the bricks,
the third one only contributing to the quorum.

Thanks for any hint!

Nico




I mount the volume from fstab with this line:

vmhost2-cluster1.place4.ungleich.ch:/cluster1 /var/lib/one/datastores/100 
glusterfs 
defaults,_netdev,backupvolfile-server=vmhost1-cluster1.place4.ungleich.ch 0 0



[19:10:05] vmhost1-cluster1:~# gluster volume info
  
Volume Name: cluster1

Type: Replicate
Volume ID: b371ec1f-e01e-49f8-9573-e0d1e74bbd90
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: vmhost1-cluster1.place4.ungleich.ch:/home/gluster
Brick2: vmhost2-cluster1.place4.ungleich.ch:/home/gluster
Options Reconfigured:
nfs.disable: 1
cluster.ensure-durability: off
server.allow-insecure: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: on
cluster.eager-lock: enable
network.remote-dio: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server

[19:25:15] vmhost1-cluster1:~# gluster peer status
Number of Peers: 2

Hostname: entrance.place4.ungleich.ch
Uuid: 987e543e-fbc4-497b-9bc9-ae56086d9421
State: Peer in Cluster (Connected)

Hostname: 192.168.0.2
Uuid: 688816e1-aa51-450f-9300-979c4e83e33e
State: Peer in Cluster (Connected)
Other names:
136.243.38.8





___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [volume options] auth.allow fails to resolve hostname

2015-03-06 Thread Krishnan Parthasarathi
Jifeng,

Some of us are looking into this issue. We should have an update
on this next week. We are busy with GlusterFS 3.7[1] release's feature
freeze.

[1] - http://www.gluster.org/community/documentation/index.php/Planning37

cheers,
kp

- Original Message -
 
 
 Hi,
 
 
 
 [environment]
 
 1. gluster version: 3.5.3
 
 2. os: redhat6.5
 
 3. volume info shown below:
 
 Volume Name: gv0
 
 Type: Replicate
 
 Volume ID: 37742c27-9f6b-4a38-821f-ea68d1ec8950
 
 Status: Started
 
 Number of Bricks: 1 x 2 = 2
 
 Transport-type: tcp
 
 Bricks:
 
 Brick1: dmf-ha-1-glusterfs:/export/vdb1/brick
 
 Brick2: dmf-ha-2-glusterfs:/export/vdb1/brick
 
 Options Reconfigured:
 
 cluster.server-quorum-type: server
 
 performance.cache-refresh-timeout: 60
 
 performance.io-thread-count: 64
 
 performance.cache-size: 8GB
 
 cluster.eager-lock: ON
 
 auth.allow: dmf-ha-1-glusterfs,dmf-ha-2-glusterfs
 
 nfs.disable: ON
 
 cluster.server-quorum-ratio: 51%
 
 
 
 [issue]
 
 When the value of auth.allow is set to the hostname of the client, the mount
 succeeds on one node and fails on the other. If auth.allow is set to the IP
 address of the client, the mount succeeds on both nodes.
 
 
 
 The error log is listed below:
 
 
 
 [2015-03-04 04:52:38.580635] I
 [client-handshake.c:1677:select_server_supported_programs] 0-gv0-client-1:
 Using Program GlusterFS 3.3, Num (1298437), Version (330)
 
 [2015-03-04 04:52:38.580956] W [client-handshake.c:1371:client_setvolume_cbk]
 0-gv0-client-1: failed to set the volume (Permission denied)
 
 [2015-03-04 04:52:38.580983] W [client-handshake.c:1397:client_setvolume_cbk]
 0-gv0-client-1: failed to get 'process-uuid' from reply dict
 
 [2015-03-04 04:52:38.580993] E [client-handshake.c:1403:client_setvolume_cbk]
 0-gv0-client-1: SETVOLUME on remote-host failed: Authentication failed
 
 [2015-03-04 04:52:38.581001] I [client-handshake.c:1489:client_setvolume_cbk]
 0-gv0-client-1: sending AUTH_FAILED event
 
 [2015-03-04 04:52:38.581014] E [fuse-bridge.c:5081:notify] 0-fuse: Server
 authenication failed. Shutting down.
 
 [2015-03-04 04:52:38.581039] I [fuse-bridge.c:5514:fini] 0-fuse: Unmounting
 '/dmfcontents'.
 
 [2015-03-04 04:52:38.595944] I
 [client-handshake.c:1677:select_server_supported_programs] 0-gv0-client-0:
 Using Program GlusterFS 3.3, Num (1298437), Version (330)
 
 [2015-03-04 04:52:38.596366] W [glusterfsd.c:1095:cleanup_and_exit]
 (--/lib64/libc.so.6(clone+0x6d) [0x7f81fab6586d]
 (--/lib64/libpthread.so.0(+0x79d1) [0x7f81fb1f89d1]
 (--glusterfs(glusterfs_sigwaiter+0xd5) [0x4053e5]))) 0-: received signum
 (15), shutting down
 
 [2015-03-04 04:52:38.596667] W [client-handshake.c:1371:client_setvolume_cbk]
 0-gv0-client-0: failed to set the volume (Permission denied)
 
 
 
 I also found a similar problem described in
 https://bugzilla.redhat.com/show_bug.cgi?id=915153 . I'm confused why only
 one node fails; if it were a hostname resolution problem, I would expect
 both nodes to fail.
 
 Any tips about debugging further or getting this fixed up would be
 appreciated.
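
 For reference, a minimal debugging sketch (hostnames taken from the volume
 info above; the IP addresses are placeholders):

 # check that the client hostname resolves identically on both servers
 getent hosts dmf-ha-1-glusterfs
 getent hosts dmf-ha-2-glusterfs

 # workaround observed above: allow the clients by IP instead of by hostname
 gluster volume set gv0 auth.allow 192.0.2.11,192.0.2.12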
 
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] fstab problem

2015-03-06 Thread Nico Schottelius
Just checked - essentially removing the /dev/stderr redirection in
/sbin/mount.glusterfs and replacing it with the usual >&2 does the job:

[15:29:30] vmhost1-cluster1:~# diff -u /sbin/mount.glusterfs*
--- /sbin/mount.glusterfs   2015-03-06 14:17:13.973729836 +0100
+++ /sbin/mount.glusterfs.orig  2015-03-06 14:17:18.798642292 +0100
@@ -10,7 +10,7 @@
 
 warn ()
 {
-   echo $@ >&2
+   echo $@ > /dev/stderr
 }
 
 _init ()
[15:29:31] vmhost1-cluster1:~# 
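
For reference, this is the helper as it ends up after the patch (a sketch
restating the diff above; the /dev/stderr path cannot be reopened in the
environment systemd runs the mount helper in, while writing to file
descriptor 2 directly works):

warn ()
{
   # write to the shell's stderr file descriptor instead of opening /dev/stderr
   echo $@ >&2
}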


Nico Schottelius [Fri, Mar 06, 2015 at 01:29:38PM +0100]:
 Funny, I am running into the same problem with CentOS 7 and glusterfs-3.6.2
 right now:
 
 var-lib-one-datastores-100.mount - /var/lib/one/datastores/100
Loaded: loaded (/etc/fstab)
Active: failed (Result: exit-code) since Fri 2015-03-06 13:23:12 CET; 21s 
 ago
 Where: /var/lib/one/datastores/100
  What: vmhost2-cluster1.place4.ungleich.ch:/cluster1
   Process: 2142 ExecMount=/bin/mount 
 vmhost2-cluster1.place4.ungleich.ch:/cluster1 /var/lib/one/datastores/100 -t 
 glusterfs -o 
 defaults,_netdev,backupvolfile-server=vmhost1-cluster1.place4.ungleich.ch 
 (code=exited, status=1/FAILURE)
 
 Mar 06 13:23:12 vmhost1-cluster1 systemd[1]: Mounted 
 /var/lib/one/datastores/100.
 Mar 06 13:23:12 vmhost1-cluster1 mount[2142]: /sbin/mount.glusterfs: line 13: 
 /dev/stderr: No such device or address
 Mar 06 13:23:12 vmhost1-cluster1 systemd[1]: var-lib-one-datastores-100.mount 
 mount process exited, code=exited status=1
 Mar 06 13:23:12 vmhost1-cluster1 systemd[1]: Unit 
 var-lib-one-datastores-100.mount entered failed state.
 [13:23:33] vmhost1-cluster1:~# /bin/mount 
 vmhost2-cluster1.place4.ungleich.ch:/cluster1 /var/lib/one/datastores/100 -t 
 glusterfs -o 
 defaults,_netdev,backupvolfile-server=vmhost1-cluster1.place4.ungleich.ch
 
 I've found older problems with glusterd reporting this, but no real
 solution for the fstab entry.
 
 
 何亦军 [Fri, Mar 06, 2015 at 02:46:12AM +]:
  Hi Guys,
  
  I am having an fstab problem. The fstab entry is:
  gwgfs01:/vol01  /mnt/gluster  glusterfs  defaults,_netdev  0 0

  The mount does not take effect. I checked the following:
  
  [root@gfsclient02 ~]# systemctl status mnt-gluster.mount -l
  mnt-gluster.mount - /mnt/gluster
 Loaded: loaded (/etc/fstab)
 Active: failed (Result: exit-code) since Fri 2015-03-06 10:39:05 CST; 
  53s ago
  Where: /mnt/gluster
   What: gwgfs01:/vol01
Process: 1324 ExecMount=/bin/mount gwgfs01:/vol01 /mnt/gluster -t 
  glusterfs -o defaults,_netdev (code=exited, status=1/FAILURE)
  
  Mar 06 10:38:47 gfsclient02 systemd[1]: Mounting /mnt/gluster...
  Mar 06 10:38:47 gfsclient02 systemd[1]: Mounted /mnt/gluster.
  Mar 06 10:39:05 gfsclient02 mount[1324]: /sbin/mount.glusterfs: line 13: 
  /dev/stderr: No such device or address
  Mar 06 10:39:05 gfsclient02 systemd[1]: mnt-gluster.mount mount process 
  exited, code=exited status=1
  Mar 06 10:39:05 gfsclient02 systemd[1]: Unit mnt-gluster.mount entered 
  failed state.
  
  
  BTW, I can mount the volume manually with the command: mount -t glusterfs
  gwgfs01:/vol01 /mnt/gluster
  
  What happened?
 
  ___
  Gluster-users mailing list
  Gluster-users@gluster.org
  http://www.gluster.org/mailman/listinfo/gluster-users
 
 
 -- 
 New PGP key: 659B 0D91 E86E 7E24 FD15  69D0 C729 21A1 293F 2D24
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users

-- 
New PGP key: 659B 0D91 E86E 7E24 FD15  69D0 C729 21A1 293F 2D24
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] fstab problem

2015-03-06 Thread Nico Schottelius
Hey Niels,

Niels de Vos [Fri, Mar 06, 2015 at 09:42:57AM -0500]:
 That looks good to me. Care to file a bug and send this patch through
 our Gerrit for review?
 
 
 http://www.gluster.org/community/documentation/index.php/Simplified_dev_workflow


it is ready for merging:

http://review.gluster.org/#/c/9824/
https://bugzilla.redhat.com/show_bug.cgi?id=1199545

I've replaced various other /dev/stderr occurrences and also took
care of the non-Linux mount version.

What are the chances of getting this pushed into a release soon? I
can patch our hosts manually for the moment, but having this in the
package makes life much easier for maintenance.
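
For reference, a sketch of such a manual patch (not necessarily the exact
commands used here):

cp /sbin/mount.glusterfs /sbin/mount.glusterfs.orig
# rewrite every "> /dev/stderr" redirection to ">&2"
sed -i 's|> */dev/stderr|>\&2|g' /sbin/mount.glusterfs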

Cheers,

Nico

-- 
New PGP key: 659B 0D91 E86E 7E24 FD15  69D0 C729 21A1 293F 2D24
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] fstab problem

2015-03-06 Thread Niels de Vos
On Fri, Mar 06, 2015 at 03:30:09PM +0100, Nico Schottelius wrote:
 Just checked - essentially removing the /dev/stderr redirection in
 /sbin/mount.glusterfs and replacing it with the usual >&2 does the job:
 
 [15:29:30] vmhost1-cluster1:~# diff -u /sbin/mount.glusterfs*
 --- /sbin/mount.glusterfs   2015-03-06 14:17:13.973729836 +0100
 +++ /sbin/mount.glusterfs.orig  2015-03-06 14:17:18.798642292 +0100
 @@ -10,7 +10,7 @@
  
  warn ()
  {
 -   echo $@ >&2
 +   echo $@ > /dev/stderr
  }
  
  _init ()

That looks good to me. Care to file a bug and send this patch through
our Gerrit for review?


http://www.gluster.org/community/documentation/index.php/Simplified_dev_workflow

Let me know if that would be too much work, I can send the change for
you too.

Thanks,
Niels

 [15:29:31] vmhost1-cluster1:~# 
 
 
 Nico Schottelius [Fri, Mar 06, 2015 at 01:29:38PM +0100]:
  Funny, I am running into the same problem with CentOS 7 and glusterfs-3.6.2
  right now:
  
  var-lib-one-datastores-100.mount - /var/lib/one/datastores/100
 Loaded: loaded (/etc/fstab)
 Active: failed (Result: exit-code) since Fri 2015-03-06 13:23:12 CET; 
  21s ago
  Where: /var/lib/one/datastores/100
   What: vmhost2-cluster1.place4.ungleich.ch:/cluster1
Process: 2142 ExecMount=/bin/mount 
  vmhost2-cluster1.place4.ungleich.ch:/cluster1 /var/lib/one/datastores/100 
  -t glusterfs -o 
  defaults,_netdev,backupvolfile-server=vmhost1-cluster1.place4.ungleich.ch 
  (code=exited, status=1/FAILURE)
  
  Mar 06 13:23:12 vmhost1-cluster1 systemd[1]: Mounted 
  /var/lib/one/datastores/100.
  Mar 06 13:23:12 vmhost1-cluster1 mount[2142]: /sbin/mount.glusterfs: line 
  13: /dev/stderr: No such device or address
  Mar 06 13:23:12 vmhost1-cluster1 systemd[1]: 
  var-lib-one-datastores-100.mount mount process exited, code=exited status=1
  Mar 06 13:23:12 vmhost1-cluster1 systemd[1]: Unit 
  var-lib-one-datastores-100.mount entered failed state.
  [13:23:33] vmhost1-cluster1:~# /bin/mount 
  vmhost2-cluster1.place4.ungleich.ch:/cluster1 /var/lib/one/datastores/100 
  -t glusterfs -o 
  defaults,_netdev,backupvolfile-server=vmhost1-cluster1.place4.ungleich.ch
  
  I've found older problems with glusterd reporting this, but no real
  solution for the fstab entry.
  
  
  何亦军 [Fri, Mar 06, 2015 at 02:46:12AM +]:
   Hi Guys,
   
   I am having an fstab problem. The fstab entry is:
   gwgfs01:/vol01  /mnt/gluster  glusterfs  defaults,_netdev  0 0

   The mount does not take effect. I checked the following:
   
   [root@gfsclient02 ~]# systemctl status mnt-gluster.mount -l
   mnt-gluster.mount - /mnt/gluster
  Loaded: loaded (/etc/fstab)
  Active: failed (Result: exit-code) since Fri 2015-03-06 10:39:05 CST; 
   53s ago
   Where: /mnt/gluster
What: gwgfs01:/vol01
 Process: 1324 ExecMount=/bin/mount gwgfs01:/vol01 /mnt/gluster -t 
   glusterfs -o defaults,_netdev (code=exited, status=1/FAILURE)
   
   Mar 06 10:38:47 gfsclient02 systemd[1]: Mounting /mnt/gluster...
   Mar 06 10:38:47 gfsclient02 systemd[1]: Mounted /mnt/gluster.
   Mar 06 10:39:05 gfsclient02 mount[1324]: /sbin/mount.glusterfs: line 13: 
   /dev/stderr: No such device or address
   Mar 06 10:39:05 gfsclient02 systemd[1]: mnt-gluster.mount mount process 
   exited, code=exited status=1
   Mar 06 10:39:05 gfsclient02 systemd[1]: Unit mnt-gluster.mount entered 
   failed state.
   
   
   BTW, I can mount the volume manually with the command: mount -t glusterfs
   gwgfs01:/vol01 /mnt/gluster
   
   What happened?
  
   ___
   Gluster-users mailing list
   Gluster-users@gluster.org
   http://www.gluster.org/mailman/listinfo/gluster-users
  
  
  -- 
  New PGP key: 659B 0D91 E86E 7E24 FD15  69D0 C729 21A1 293F 2D24
  ___
  Gluster-users mailing list
  Gluster-users@gluster.org
  http://www.gluster.org/mailman/listinfo/gluster-users
 
 -- 
 New PGP key: 659B 0D91 E86E 7E24 FD15  69D0 C729 21A1 293F 2D24
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] fstab problem

2015-03-06 Thread Nico Schottelius
Hey Niels,

I'll set up a Gerrit account for myself and submit it shortly.

Cheers,

Nico

Niels de Vos [Fri, Mar 06, 2015 at 09:42:57AM -0500]:
 On Fri, Mar 06, 2015 at 03:30:09PM +0100, Nico Schottelius wrote:
  Just checked - essentially removing the /dev/stderr redirection in
  /sbin/mount.glusterfs and replacing it with the usual >&2 does the job:
  
  [15:29:30] vmhost1-cluster1:~# diff -u /sbin/mount.glusterfs*
  --- /sbin/mount.glusterfs   2015-03-06 14:17:13.973729836 +0100
  +++ /sbin/mount.glusterfs.orig  2015-03-06 14:17:18.798642292 +0100
  @@ -10,7 +10,7 @@
   
   warn ()
   {
  -   echo $@ >&2
  +   echo $@ > /dev/stderr
   }
   
   _init ()
 
 That looks good to me. Care to file a bug and send this patch through
 our Gerrit for review?
 
 
 http://www.gluster.org/community/documentation/index.php/Simplified_dev_workflow
 
 Let me know if that would be too much work, I can send the change for
 you too.
 
 Thanks,
 Niels
 
  [15:29:31] vmhost1-cluster1:~# 
  
  
  Nico Schottelius [Fri, Mar 06, 2015 at 01:29:38PM +0100]:
   Funny, I am running into the same problem with CentOS 7 and 
   glusterfs-3.6.2
   right now:
   
   var-lib-one-datastores-100.mount - /var/lib/one/datastores/100
  Loaded: loaded (/etc/fstab)
  Active: failed (Result: exit-code) since Fri 2015-03-06 13:23:12 CET; 
   21s ago
   Where: /var/lib/one/datastores/100
What: vmhost2-cluster1.place4.ungleich.ch:/cluster1
 Process: 2142 ExecMount=/bin/mount 
   vmhost2-cluster1.place4.ungleich.ch:/cluster1 /var/lib/one/datastores/100 
   -t glusterfs -o 
   defaults,_netdev,backupvolfile-server=vmhost1-cluster1.place4.ungleich.ch 
   (code=exited, status=1/FAILURE)
   
   Mar 06 13:23:12 vmhost1-cluster1 systemd[1]: Mounted 
   /var/lib/one/datastores/100.
   Mar 06 13:23:12 vmhost1-cluster1 mount[2142]: /sbin/mount.glusterfs: line 
   13: /dev/stderr: No such device or address
   Mar 06 13:23:12 vmhost1-cluster1 systemd[1]: 
   var-lib-one-datastores-100.mount mount process exited, code=exited 
   status=1
   Mar 06 13:23:12 vmhost1-cluster1 systemd[1]: Unit 
   var-lib-one-datastores-100.mount entered failed state.
   [13:23:33] vmhost1-cluster1:~# /bin/mount 
   vmhost2-cluster1.place4.ungleich.ch:/cluster1 /var/lib/one/datastores/100 
   -t glusterfs -o 
   defaults,_netdev,backupvolfile-server=vmhost1-cluster1.place4.ungleich.ch
   
   I've found older problems with glusterd reporting this, but no real
   solution for the fstab entry.
   
   
   何亦军 [Fri, Mar 06, 2015 at 02:46:12AM +]:
Hi Guys,

 I am having an fstab problem. The fstab entry is:
 gwgfs01:/vol01  /mnt/gluster  glusterfs  defaults,_netdev  0 0

 The mount does not take effect. I checked the following:

[root@gfsclient02 ~]# systemctl status mnt-gluster.mount -l
mnt-gluster.mount - /mnt/gluster
   Loaded: loaded (/etc/fstab)
   Active: failed (Result: exit-code) since Fri 2015-03-06 10:39:05 
CST; 53s ago
Where: /mnt/gluster
 What: gwgfs01:/vol01
  Process: 1324 ExecMount=/bin/mount gwgfs01:/vol01 /mnt/gluster -t 
glusterfs -o defaults,_netdev (code=exited, status=1/FAILURE)

Mar 06 10:38:47 gfsclient02 systemd[1]: Mounting /mnt/gluster...
Mar 06 10:38:47 gfsclient02 systemd[1]: Mounted /mnt/gluster.
Mar 06 10:39:05 gfsclient02 mount[1324]: /sbin/mount.glusterfs: line 
13: /dev/stderr: No such device or address
Mar 06 10:39:05 gfsclient02 systemd[1]: mnt-gluster.mount mount process 
exited, code=exited status=1
Mar 06 10:39:05 gfsclient02 systemd[1]: Unit mnt-gluster.mount entered 
failed state.


 BTW, I can mount the volume manually with the command: mount -t glusterfs
 gwgfs01:/vol01 /mnt/gluster

What happened?
   
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
   
   
   -- 
   New PGP key: 659B 0D91 E86E 7E24 FD15  69D0 C729 21A1 293F 2D24
   ___
   Gluster-users mailing list
   Gluster-users@gluster.org
   http://www.gluster.org/mailman/listinfo/gluster-users
  
  -- 
  New PGP key: 659B 0D91 E86E 7E24 FD15  69D0 C729 21A1 293F 2D24
  ___
  Gluster-users mailing list
  Gluster-users@gluster.org
  http://www.gluster.org/mailman/listinfo/gluster-users



-- 
New PGP key: 659B 0D91 E86E 7E24 FD15  69D0 C729 21A1 293F 2D24
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] fstab problem

2015-03-06 Thread Nico Schottelius
Funny, I am running into the same problem with CentOS 7 and glusterfs-3.6.2
right now:

var-lib-one-datastores-100.mount - /var/lib/one/datastores/100
   Loaded: loaded (/etc/fstab)
   Active: failed (Result: exit-code) since Fri 2015-03-06 13:23:12 CET; 21s ago
Where: /var/lib/one/datastores/100
 What: vmhost2-cluster1.place4.ungleich.ch:/cluster1
  Process: 2142 ExecMount=/bin/mount 
vmhost2-cluster1.place4.ungleich.ch:/cluster1 /var/lib/one/datastores/100 -t 
glusterfs -o 
defaults,_netdev,backupvolfile-server=vmhost1-cluster1.place4.ungleich.ch 
(code=exited, status=1/FAILURE)

Mar 06 13:23:12 vmhost1-cluster1 systemd[1]: Mounted 
/var/lib/one/datastores/100.
Mar 06 13:23:12 vmhost1-cluster1 mount[2142]: /sbin/mount.glusterfs: line 13: 
/dev/stderr: No such device or address
Mar 06 13:23:12 vmhost1-cluster1 systemd[1]: var-lib-one-datastores-100.mount 
mount process exited, code=exited status=1
Mar 06 13:23:12 vmhost1-cluster1 systemd[1]: Unit 
var-lib-one-datastores-100.mount entered failed state.
[13:23:33] vmhost1-cluster1:~# /bin/mount 
vmhost2-cluster1.place4.ungleich.ch:/cluster1 /var/lib/one/datastores/100 -t 
glusterfs -o 
defaults,_netdev,backupvolfile-server=vmhost1-cluster1.place4.ungleich.ch

I've found older problems with glusterd reporting this, but no real
solution for the fstab entry.


何亦军 [Fri, Mar 06, 2015 at 02:46:12AM +]:
 Hi Guys,
 
 I am having an fstab problem. The fstab entry is:
 gwgfs01:/vol01  /mnt/gluster  glusterfs  defaults,_netdev  0 0

 The mount does not take effect. I checked the following:
 
 [root@gfsclient02 ~]# systemctl status mnt-gluster.mount -l
 mnt-gluster.mount - /mnt/gluster
Loaded: loaded (/etc/fstab)
Active: failed (Result: exit-code) since Fri 2015-03-06 10:39:05 CST; 53s 
 ago
 Where: /mnt/gluster
  What: gwgfs01:/vol01
   Process: 1324 ExecMount=/bin/mount gwgfs01:/vol01 /mnt/gluster -t glusterfs 
 -o defaults,_netdev (code=exited, status=1/FAILURE)
 
 Mar 06 10:38:47 gfsclient02 systemd[1]: Mounting /mnt/gluster...
 Mar 06 10:38:47 gfsclient02 systemd[1]: Mounted /mnt/gluster.
 Mar 06 10:39:05 gfsclient02 mount[1324]: /sbin/mount.glusterfs: line 13: 
 /dev/stderr: No such device or address
 Mar 06 10:39:05 gfsclient02 systemd[1]: mnt-gluster.mount mount process 
 exited, code=exited status=1
 Mar 06 10:39:05 gfsclient02 systemd[1]: Unit mnt-gluster.mount entered failed 
 state.
 
 
 BTW, I can mount the volume manually with the command: mount -t glusterfs
 gwgfs01:/vol01 /mnt/gluster
 
 What happened?

 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users


-- 
New PGP key: 659B 0D91 E86E 7E24 FD15  69D0 C729 21A1 293F 2D24
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Geo replication on slave not showing files in brick

2015-03-06 Thread ML mail
Yes my single slave node is a single brick. Here would be the output of the 
volume info just in case:

Volume Name: myslavevol
Type: Distribute
Volume ID: *REMOVED*
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: gfs1geo:/data/myslavevol/brick
Options Reconfigured:
nfs.disable: on

I tried to mount this volume using FUSE but even there I don't see the files. I 
only see the .gfid directory.


Now I noticed that I did not have ACLs enabled on my underlying filesystem
(ZFS). I saw that from this error message:

[2015-03-04 22:27:06.100996] W [posix.c:5534:init] 0-myslavevol-posix: Posix 
access control list is not supported.

So now I have at least activated POSIX ACLs, but the files are still not there.
Could it be related to the ACLs, or is it unrelated to that?
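
For reference, enabling POSIX ACLs on a ZFS-on-Linux dataset usually looks
like the sketch below (the dataset name is a placeholder, not taken from
this thread):

zfs set acltype=posixacl tank/myslavevol
zfs set xattr=sa tank/myslavevol    # commonly recommended alongside posixacl for Gluster bricks on ZFS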



On Friday, March 6, 2015 2:22 PM, M S Vishwanath Bhat msvb...@gmail.com wrote:

On 6 March 2015 at 14:27, ML mail mlnos...@yahoo.com wrote:

Hello,


I just set up geo-replication from a 2-node master cluster to a 1-node slave
cluster and so far it has worked well. I just have one issue on my slave: if I
check the files on my brick I only see the following:


drwxr-xr-x  2 root root 15 Mar  5 23:13 .gfid
drw--- 20 root root 21 Mar  5 23:13 .glusterfs


there should be around 10 files on that volume but they are simply not there. If
I list the files in the .gfid directory I can see that the files are there, but
they just have horrible names made out of numbers.


I don't see any special errors or warnings on the slave except maybe this 
warning which might be related:


[2015-03-05 22:19:06.233956] W 
[glusterd-op-sm.c:3312:glusterd_op_modify_op_ctx] 0-management: Failed uuid to 
hostname conversion
[2015-03-05 22:19:06.233974] W 
[glusterd-op-sm.c:3404:glusterd_op_modify_op_ctx] 0-management: op_ctx 
modification failed



So my question: is this the expected behaviour, or should I be able to see the
files on my slave volume just like on my master volume?

You should see files in the slave volume just like in your master volume.


It looks like you checked the files in the slave bricks. Does your single-node
slave server have a single brick? If it has more bricks, the files may be present
in another brick as well.


Can you mount the slave volume and check for the files?
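
For reference, a minimal sketch using the names from the volume info above
(the mount point is a placeholder):

mount -t glusterfs gfs1geo:/myslavevol /mnt/slave-check
ls -l /mnt/slave-check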


Best Regards,

Vishwanath




Regards
ML



___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] fstab problem

2015-03-06 Thread Niels de Vos
On Fri, Mar 06, 2015 at 04:34:09PM +0100, Nico Schottelius wrote:
 Hey Niels,
 
 Niels de Vos [Fri, Mar 06, 2015 at 09:42:57AM -0500]:
  That looks good to me. Care to file a bug and send this patch through
  our Gerrit for review?
  
  
  http://www.gluster.org/community/documentation/index.php/Simplified_dev_workflow
 
 
 it is ready for merging:
 
 http://review.gluster.org/#/c/9824/
 https://bugzilla.redhat.com/show_bug.cgi?id=1199545
 
 I've replaced various other /dev/stderr occurrences and also took
 care of the non-Linux mount version.
 
 What are the chances of getting this pushed into a release soon? I
 can patch our hosts manually for the moment, but having this in the
 package makes life much easier for maintenance.

We'll definitely get this included in the upcoming 3.7 release. But that
will be a couple of weeks/months before it is stable enough to get
released. I've cloned your bug 1199545 (which will be used for the
current master branch) to new bug 1199577 (for the 3.6 release).

I think there is a 3.6.3 beta out already, and I do not know what
Raghavendra accepts as last-minute changes before the final 3.6.3
release.

Thanks for the patch!
Niels
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] QEMU gfapi segfault

2015-03-06 Thread Josh Boon
The qemu log is also the client log. The client was configured for info notices
only; I've since turned it up to debug level in case I can get more, but I don't
remember the client log being that interesting.
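
For reference, a sketch of raising the client log level for a volume (the
volume name is taken from the logs quoted below; DEBUG is very verbose, so it
should be lowered again afterwards):

gluster volume set VMARRAY diagnostics.client-log-level DEBUG
# and back once enough data has been collected, e.g.
gluster volume set VMARRAY diagnostics.client-log-level INFO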

- Original Message -

From: RAGHAVENDRA TALUR raghavendra.ta...@gmail.com 
To: Josh Boon glus...@joshboon.com 
Cc: Vijay Bellur vbel...@redhat.com, Gluster-users@gluster.org List 
gluster-users@gluster.org 
Sent: Friday, March 6, 2015 8:17:08 AM 
Subject: Re: [Gluster-users] QEMU gfapi segfault 



On Fri, Mar 6, 2015 at 4:50 AM, Josh Boon  glus...@joshboon.com  wrote: 


segfault on replica1 
Mar 3 22:40:08 HFMHVR3 kernel: [11430546.394720] qemu-system-x86[14267]: 
segfault at 128 ip 7f4812d945cc sp 7f4816da48a0 error 4 in 
qemu-system-x86_64[7f4812a08000+4b1000] 
The qemu logs only show the client shutting down on replica1 
2015-03-03 23:10:14.928+: shutting down 
The heal logs on replica1 
[2015-03-03 23:03:01.706880] I [afr-self-heald.c:1690:afr_dir_exclusive_crawl] 
0-VMARRAY-replicate-0: Another crawl is in progress for VMARRAY-client-0 
[2015-03-03 23:13:01.776026] I [afr-self-heald.c:1690:afr_dir_exclusive_crawl] 
0-VMARRAY-replicate-0: Another crawl is in progress for VMARRAY-client-0 
The heal logs on replica2 
[2015-03-03 23:02:34.480041] I [afr-self-heald.c:1690:afr_dir_exclusive_crawl] 
0-VMARRAY-replicate-0: Another crawl is in progress for VMARRAY-client-1 
[2015-03-03 23:12:34.539420] I [afr-self-heald.c:1690:afr_dir_exclusive_crawl] 
0-VMARRAY-replicate-0: Another crawl is in progress for VMARRAY-client-1 
[2015-03-03 23:18:51.042321] I 
[afr-self-heal-common.c:2868:afr_log_self_heal_completion_status] 
0-VMARRAY-replicate-0: foreground data self heal is successfully completed, 
data self heal from VMARRAY-client-0 to sinks VMARRAY-client-1, with 
53687091200 bytes on VMARRAY-client-0, 53687091200 bytes on VMARRAY-client-1, 
data - Pending matrix: [ [ 3 3 ] [ 1 1 ] ] on 
gfid:86d8d9b4-f0cd-4607-abff-4b01f81d964b 
The brick log for both look like 
[2015-03-03 23:10:13.831991] I [server.c:520:server_rpc_notify] 
0-VMARRAY-server: disconnecting connectionfrom 
HFMHVR3-51477-2015/02/26-08:07:36:95892-VMARRAY-client-0-0-0 
[2015-03-03 23:10:13.832161] W [inodelk.c:392:pl_inodelk_log_cleanup] 
0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b held 
by {client=0x7fe13076f550, pid=0 lk-owner=4c2fb000487f} 
[2015-03-03 23:10:13.832186] W [inodelk.c:392:pl_inodelk_log_cleanup] 
0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b held 
by {client=0x7fe13076f550, pid=0 lk-owner=c883ac00487f} 
[2015-03-03 23:10:13.832195] W [inodelk.c:392:pl_inodelk_log_cleanup] 
0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b held 
by {client=0x7fe13076f550, pid=0 lk-owner=44d8a800487f} 
[2015-03-03 23:10:13.832203] W [inodelk.c:392:pl_inodelk_log_cleanup] 
0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b held 
by {client=0x7fe13076f550, pid=0 lk-owner=e8cea700487f} 
[2015-03-03 23:10:13.832212] W [inodelk.c:392:pl_inodelk_log_cleanup] 
0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b held 
by {client=0x7fe13076f550, pid=0 lk-owner=0477b000487f} 
[2015-03-03 23:10:13.832219] W [inodelk.c:392:pl_inodelk_log_cleanup] 
0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b held 
by {client=0x7fe13076f550, pid=0 lk-owner=2c2ba100487f} 
[2015-03-03 23:10:13.832227] W [inodelk.c:392:pl_inodelk_log_cleanup] 
0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b held 
by {client=0x7fe13076f550, pid=0 lk-owner=4cfab100487f} 
[2015-03-03 23:10:13.832235] W [inodelk.c:392:pl_inodelk_log_cleanup] 
0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b held 
by {client=0x7fe13076f550, pid=0 lk-owner=6c83a200487f} 
[2015-03-03 23:10:13.832245] W [inodelk.c:392:pl_inodelk_log_cleanup] 
0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b held 
by {client=0x7fe13076f550, pid=0 lk-owner=0454a000487f} 
[2015-03-03 23:10:13.832255] W [inodelk.c:392:pl_inodelk_log_cleanup] 
0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b held 
by {client=0x7fe13076f550, pid=0 lk-owner=a0e1a900487f} 
[2015-03-03 23:10:13.832262] W [inodelk.c:392:pl_inodelk_log_cleanup] 
0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b held 
by {client=0x7fe13076f550, pid=0 lk-owner=2031a700487f} 
[2015-03-03 23:10:13.832270] W [inodelk.c:392:pl_inodelk_log_cleanup] 
0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b held 
by {client=0x7fe13076f550, pid=0 lk-owner=7040ae00487f} 
[2015-03-03 23:10:13.832279] W [inodelk.c:392:pl_inodelk_log_cleanup] 
0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b held 
by {client=0x7fe13076f550, pid=0 lk-owner=1832ae00487f} 
[2015-03-03 23:10:13.832287] W 

[Gluster-users] Effect of performance tuning options

2015-03-06 Thread JF Le Fillâtre

Hello all,

I am currently trying to tune the performance of a Gluster volume that I
have just created, and I am wondering what is the exact effect of some
of the tuning options.

Overview of the volume, with the options that I have modified:

==
glusterfs 3.6.2 built on Jan 22 2015 12:59:57


Volume Name: live
Type: Distribute
Volume ID: 81c3d212-e43b-4460-8b5d-b743992a01eb
Status: Started
Number of Bricks: 8
Transport-type: tcp
Bricks:
Brick1: stor104:/zfs/brick0/brick
Brick2: stor104:/zfs/brick1/brick
Brick3: stor104:/zfs/brick2/brick
Brick4: stor104:/zfs/brick3/brick
Brick5: stor106:/zfs/brick0/brick
Brick6: stor106:/zfs/brick1/brick
Brick7: stor106:/zfs/brick2/brick
Brick8: stor106:/zfs/brick3/brick
Options Reconfigured:
performance.flush-behind: on
performance.client-io-threads: on
performance.cache-refresh-timeout: 10
nfs.disable: on
nfs.addr-namelookup: off
diagnostics.client-log-level: WARNING
diagnostics.brick-log-level: WARNING
cluster.min-free-disk: 1%
cluster.data-self-heal-algorithm: full
performance.io-thread-count: 64
performance.write-behind-window-size: 4MB
performance.cache-size: 1GB
==

2 servers, 4 bricks per server. Bandwidth is 2x10Gb trunked link on the
client side, and one 10Gb link per server.

Now, the questions I still haven't found an answer for are:

1) for the thread count on the server side, is it per brick, per server
or for the whole volume? While doing some tests I saw an increase in
threading on the servers but it seems to be dynamic, and I didn't get
the max number of threads created.

2) when I activate client-io-threads, I see that the same thread count
is used for the clients. The only way to modify it for the clients only
is to edit the volume files by hand, correct?

3) as for the client cache, if I remember correctly FUSE filesystems are
not cached by the kernel VFS layer, in which case it all hangs on the
performance.cache-size option. Given that the cache is refreshed on a
regular basis, has any test been done already to see what are the
network and CPU load impacts of large caches?

4) as far as I understand, the Samba backend hooks directly into the
FUSE module. Therefore it should benefit from all optimizations done for
the TCP FUSE client, correct?

5) is there any know issue with activating both client-io-threads and
flush-behind?

6) is there any other obvious (or not) tuning knob that I have missed?

And finally, the question I shouldn't ask: is there any way that I can
dump the current values for all possible parameters? Google points me to
various threads in the past on that topic, yet nothing seems to have
changed on that front...
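
For reference, a sketch of the two commands that come closest to such a dump
(behaviour may vary between releases): "gluster volume set help" lists every
settable option with its default value and description, while "gluster volume
info" shows only the options that were explicitly reconfigured.

gluster volume set help
gluster volume info live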

Thank you in advance for your answers.
Regards,
JF


-- 

 Jean-François Le Fillâtre
 ---
 HPC Systems Administrator
 LCSB - University of Luxembourg
 ---
 PGP KeyID 0x134657C6
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] fstab problem

2015-03-06 Thread Raghavendra Bhat

On Friday 06 March 2015 10:07 PM, Niels de Vos wrote:

On Fri, Mar 06, 2015 at 04:34:09PM +0100, Nico Schottelius wrote:

Hey Niels,

Niels de Vos [Fri, Mar 06, 2015 at 09:42:57AM -0500]:

That looks good to me. Care to file a bug and send this patch through
our Gerrit for review?

 
http://www.gluster.org/community/documentation/index.php/Simplified_dev_workflow


it is ready for merging:

 http://review.gluster.org/#/c/9824/
 https://bugzilla.redhat.com/show_bug.cgi?id=1199545

I've replaced various other /dev/stderr occurrences and also took
care of the non-Linux mount version.

What are the chances of getting this pushed into a release soon? I
can patch our hosts manually for the moment, but having this in the
package makes life much easier for maintenance.

Hi,

Since this is related to mounting, I can accept the patch for 3.6.3.

Regards,
Raghavendra Bhat


We'll definitely get this included in the upcoming 3.7 release. But that
will be a couple of weeks/months before it is stable enough to get
released. I've cloned your bug 1199545 (which will be used for the
current master branch) to new bug 1199577 (for the 3.6 release).

I think there is a 3.6.3 beta out already, and I do not know what
Raghavendra accepts as last-minute changes before the final 3.6.3
release.

Thanks for the patch!
Niels


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] QEMU gfapi segfault

2015-03-06 Thread RAGHAVENDRA TALUR
On Fri, Mar 6, 2015 at 4:50 AM, Josh Boon glus...@joshboon.com wrote:

 segfault on replica1
 Mar  3 22:40:08 HFMHVR3 kernel: [11430546.394720] qemu-system-x86[14267]:
 segfault at 128 ip 7f4812d945cc sp 7f4816da48a0 error 4 in
 qemu-system-x86_64[7f4812a08000+4b1000]
 The qemu logs only show the client shutting down on replica1
 2015-03-03 23:10:14.928+: shutting down
 The heal logs on replica1
 [2015-03-03 23:03:01.706880] I
 [afr-self-heald.c:1690:afr_dir_exclusive_crawl] 0-VMARRAY-replicate-0:
 Another crawl is in progress for VMARRAY-client-0
 [2015-03-03 23:13:01.776026] I
 [afr-self-heald.c:1690:afr_dir_exclusive_crawl] 0-VMARRAY-replicate-0:
 Another crawl is in progress for VMARRAY-client-0
 The heal logs on replica2
 [2015-03-03 23:02:34.480041] I
 [afr-self-heald.c:1690:afr_dir_exclusive_crawl] 0-VMARRAY-replicate-0:
 Another crawl is in progress for VMARRAY-client-1
 [2015-03-03 23:12:34.539420] I
 [afr-self-heald.c:1690:afr_dir_exclusive_crawl] 0-VMARRAY-replicate-0:
 Another crawl is in progress for VMARRAY-client-1
 [2015-03-03 23:18:51.042321] I
 [afr-self-heal-common.c:2868:afr_log_self_heal_completion_status]
 0-VMARRAY-replicate-0:  foreground data self heal  is successfully
 completed,  data self heal from VMARRAY-client-0  to sinks
 VMARRAY-client-1, with 53687091200 bytes on VMARRAY-client-0, 53687091200
 bytes on VMARRAY-client-1,  data - Pending matrix:  [ [ 3 3 ] [ 1 1 ] ]  on
 gfid:86d8d9b4-f0cd-4607-abff-4b01f81d964b
 The brick log for both look like
 [2015-03-03 23:10:13.831991] I [server.c:520:server_rpc_notify]
 0-VMARRAY-server: disconnecting connectionfrom
 HFMHVR3-51477-2015/02/26-08:07:36:95892-VMARRAY-client-0-0-0
 [2015-03-03 23:10:13.832161] W [inodelk.c:392:pl_inodelk_log_cleanup]
 0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b
 held by {client=0x7fe13076f550, pid=0 lk-owner=4c2fb000487f}
 [2015-03-03 23:10:13.832186] W [inodelk.c:392:pl_inodelk_log_cleanup]
 0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b
 held by {client=0x7fe13076f550, pid=0 lk-owner=c883ac00487f}
 [2015-03-03 23:10:13.832195] W [inodelk.c:392:pl_inodelk_log_cleanup]
 0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b
 held by {client=0x7fe13076f550, pid=0 lk-owner=44d8a800487f}
 [2015-03-03 23:10:13.832203] W [inodelk.c:392:pl_inodelk_log_cleanup]
 0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b
 held by {client=0x7fe13076f550, pid=0 lk-owner=e8cea700487f}
 [2015-03-03 23:10:13.832212] W [inodelk.c:392:pl_inodelk_log_cleanup]
 0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b
 held by {client=0x7fe13076f550, pid=0 lk-owner=0477b000487f}
 [2015-03-03 23:10:13.832219] W [inodelk.c:392:pl_inodelk_log_cleanup]
 0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b
 held by {client=0x7fe13076f550, pid=0 lk-owner=2c2ba100487f}
 [2015-03-03 23:10:13.832227] W [inodelk.c:392:pl_inodelk_log_cleanup]
 0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b
 held by {client=0x7fe13076f550, pid=0 lk-owner=4cfab100487f}
 [2015-03-03 23:10:13.832235] W [inodelk.c:392:pl_inodelk_log_cleanup]
 0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b
 held by {client=0x7fe13076f550, pid=0 lk-owner=6c83a200487f}
 [2015-03-03 23:10:13.832245] W [inodelk.c:392:pl_inodelk_log_cleanup]
 0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b
 held by {client=0x7fe13076f550, pid=0 lk-owner=0454a000487f}
 [2015-03-03 23:10:13.832255] W [inodelk.c:392:pl_inodelk_log_cleanup]
 0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b
 held by {client=0x7fe13076f550, pid=0 lk-owner=a0e1a900487f}
 [2015-03-03 23:10:13.832262] W [inodelk.c:392:pl_inodelk_log_cleanup]
 0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b
 held by {client=0x7fe13076f550, pid=0 lk-owner=2031a700487f}
 [2015-03-03 23:10:13.832270] W [inodelk.c:392:pl_inodelk_log_cleanup]
 0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b
 held by {client=0x7fe13076f550, pid=0 lk-owner=7040ae00487f}
 [2015-03-03 23:10:13.832279] W [inodelk.c:392:pl_inodelk_log_cleanup]
 0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b
 held by {client=0x7fe13076f550, pid=0 lk-owner=1832ae00487f}
 [2015-03-03 23:10:13.832287] W [inodelk.c:392:pl_inodelk_log_cleanup]
 0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b
 held by {client=0x7fe13076f550, pid=0 lk-owner=68e0af00487f}
 [2015-03-03 23:10:13.832294] W [inodelk.c:392:pl_inodelk_log_cleanup]
 0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b
 held by {client=0x7fe13076f550, pid=0 lk-owner=6446b400487f}
 [2015-03-03 23:10:13.832302] W [inodelk.c:392:pl_inodelk_log_cleanup]
 0-VMARRAY-server: releasing lock on 86d8d9b4-f0cd-4607-abff-4b01f81d964b

[Gluster-users] Geo replication on slave not showing files in brick

2015-03-06 Thread ML mail
Hello,
I just set up geo-replication from a 2-node master cluster to a 1-node slave
cluster and so far it has worked well. I just have one issue on my slave: if I
check the files on my brick I only see the following:
drwxr-xr-x  2 root root 15 Mar  5 23:13 .gfid
drw--- 20 root root 21 Mar  5 23:13 .glusterfs
there should be around 10 files on that volume but they are simply not there. If
I list the files in the .gfid directory I can see that the files are there, but
they just have horrible names made out of numbers.
I don't see any special errors or warnings on the slave except maybe this 
warning which might be related:
[2015-03-05 22:19:06.233956] W 
[glusterd-op-sm.c:3312:glusterd_op_modify_op_ctx] 0-management: Failed uuid to 
hostname conversion
[2015-03-05 22:19:06.233974] W 
[glusterd-op-sm.c:3404:glusterd_op_modify_op_ctx] 0-management: op_ctx 
modification failed

So my question: is this the expected behaviour, or should I be able to see the
files on my slave volume just like on my master volume?

Regards
ML

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users