Re: [Gluster-users] Setup with replica 3 for imagestorage

2015-07-16 Thread Gregor Burck
One point is the difference between the log file timestamps (UTC) and
the system time (Europe/Berlin).

Could that be a hint?
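As a quick check, a UTC log timestamp can be converted to Europe/Berlin local time with GNU date (a sketch; the timestamp below is taken from a log line later in this digest):

```shell
# Convert a UTC log timestamp to Europe/Berlin local time (GNU date).
# In July, Berlin is on CEST (UTC+2), so a 2-hour offset is expected.
TZ=Europe/Berlin date -d "2015-07-16 03:31:27 UTC" +"%Y-%m-%d %H:%M:%S %Z"
# prints: 2015-07-16 05:31:27 CEST
```

If the offset between the log and the system clock is exactly the UTC/local difference, the clocks agree and the logs are simply written in UTC.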

Bye

Gregor

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] 3.7 Arbiter and/or quorum

2015-07-16 Thread Ravishankar N



On 07/16/2015 04:31 PM, Kingsley wrote:
> On Wed, 2015-07-15 at 22:39 +0530, Ravishankar N wrote:
>> On 07/15/2015 09:22 PM, Scott Harvanek wrote:
>>> I saw Ravi mention something earlier about the new arbiter volumes in 3.7
>>>
>>> - Can you run an arbiter 3.7 server volume with 3.6 clients?  Or does
>>> this not work?
>>
>> Unfortunately, no. This is because the logic for the feature is mostly
>> in the AFR translator, which is loaded in the client process (with some
>> minor logic in the arbiter translator loaded in the brick process). So
>> both clients and servers need to be 3.7.
>
> Is there any harm in running 3.6 clients with a 3.7 cluster?

No harm if you are not using arbiter volumes.
-Ravi

> We're currently running 3.6.3 and if we upgrade the cluster to 3.7 at
> some point it would be nice to know whether we can upgrade the clients
> in a separate step (given that we have a reasonable number of them).




Re: [Gluster-users] 3.7 Arbiter and/or quorum

2015-07-16 Thread Kingsley
On Wed, 2015-07-15 at 22:39 +0530, Ravishankar N wrote:
> 
> On 07/15/2015 09:22 PM, Scott Harvanek wrote:
> > I saw Ravi mention something earlier about the new arbiter volumes in 3.7
> >
> > - Can you run an arbiter 3.7 server volume with 3.6 clients?  Or does 
> > this not work?
> >
> >
> 
> Unfortunately, no. This is because the logic for the feature is mostly 
> in the AFR translator which is loaded on the client process (with some 
> minor logic in the arbiter translator loaded on the brick process). So 
> both clients and servers need to be 3.7.

Is there any harm in running 3.6 clients with a 3.7 cluster?

We're currently running 3.6.3 and if we upgrade the cluster to 3.7 at
some point it would be nice to know whether we can upgrade the clients
in a separate step (given that we have a reasonable number of them).

-- 
Cheers,
Kingsley.
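Since the question in this thread is what arbiter volumes require, here is what creating one looks like with the 3.7 CLI (a sketch; hostnames and brick paths are hypothetical):

```shell
# Create a replica 3 volume whose third brick is an arbiter: it stores
# only file names and metadata, not data, and breaks split-brain ties.
# As noted above, both servers and clients must run 3.7 to use this.
gluster volume create testvol replica 3 arbiter 1 \
    server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/arb
gluster volume start testvol
```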



Re: [Gluster-users] Error when mounting glusterfs on VM

2015-07-16 Thread Jiffin Tony Thottan



On 16/07/15 11:52, Kaamesh Kamalaaharan wrote:

Hi everyone,
I'm trying to mount my volume on my VM but I'm encountering several problems.
1) Using an NFS mount I am able to mount my gluster volume normally, but
when I access executables stored on the volume, the process hangs and I
have to cancel it. Once I cancel the process, the mount point is no
longer accessible and I'm unable to view any files on it.


Can you please provide the following details:

1.) Can you try showmount -e <server> on the client?

2.) gluster v <volname> status at the server

3.) Check the PID of the gluster-nfs process running on the brick (if there is one).

4.) Provide nfs.log and the rpcinfo output.

Thanks,
Jiffin
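The checks above can be sketched as shell commands (assuming default glusterfs log locations; <server> stands for a gluster node's hostname, and gfsvolume is the volume name from this thread):

```shell
# 1) On the client: does the server's gluster-nfs advertise the export?
showmount -e <server>

# 2) On a server node: are the bricks and the NFS server process online?
gluster volume status gfsvolume

# 3) PID of the gluster-nfs process (one glusterfs process serves NFS):
ps aux | grep glusterfs | grep nfs

# 4) Logs and RPC registrations to attach to a reply:
rpcinfo -p <server>
less /var/log/glusterfs/nfs.log
```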



2) Using gluster-client (same version as the server, 3.6.2) to mount my 
gluster volume, I get the following messages in the gluster log:


1: volume gfsvolume-client-0
  2: type protocol/client
  3: option ping-timeout 30
  4: option remote-host gfs1
  5: option remote-subvolume /export/sda/brick
  6: option transport-type socket
  7: option frame-timeout 90
  8: option send-gids true
  9: end-volume
 10:
 11: volume gfsvolume-client-1
 12: type protocol/client
 13: option ping-timeout 30
 14: option remote-host gfs2
 15: option remote-subvolume /export/sda/brick
 16: option transport-type socket
 17: option frame-timeout 90
 18: option send-gids true
 19: end-volume
 20:
 21: volume gfsvolume-replicate-0
 22: type cluster/replicate
 23: option data-self-heal-algorithm diff
 24: option quorum-type fixed
 25: option quorum-count 1
 26: subvolumes gfsvolume-client-0 gfsvolume-client-1
 27: end-volume
 28:
 29: volume gfsvolume-dht
 30: type cluster/distribute
 31: subvolumes gfsvolume-replicate-0
 32: end-volume
 33:
 34: volume gfsvolume-write-behind
 35: type performance/write-behind
 36: option cache-size 4MB
 37: subvolumes gfsvolume-dht
 38: end-volume
 39:
 40: volume gfsvolume-read-ahead
 41: type performance/read-ahead
 42: subvolumes gfsvolume-write-behind
 43: end-volume
 44:
 45: volume gfsvolume-io-cache
 46: type performance/io-cache
 47: option max-file-size 2MB
 48: option cache-timeout 60
 49: option cache-size 6442450944
 50: subvolumes gfsvolume-read-ahead
 51: end-volume
 52:
 53: volume gfsvolume-open-behind
 54: type performance/open-behind
 55: subvolumes gfsvolume-io-cache
 56: end-volume
 57:
 58: volume gfsvolume-md-cache
 59: type performance/md-cache
 60: subvolumes gfsvolume-open-behind
 61: end-volume
 62:
 63: volume gfsvolume
 64: type debug/io-stats
 65: option latency-measurement on
 66: option count-fop-hits on
 67: subvolumes gfsvolume-md-cache
 68: end-volume
 69:
 70: volume meta-autoload
 71: type meta
 72: subvolumes gfsvolume
 73: end-volume
 74:

+--+
[2015-07-16 03:31:27.231127] E [MSGID: 108006]
[afr-common.c:3591:afr_notify] 0-gfsvolume-replicate-0: All
subvolumes are down. Going offline until atleast one of them
comes back up.
[2015-07-16 03:31:27.232201] W [MSGID: 108001]
[afr-common.c:3635:afr_notify] 0-gfsvolume-replicate-0:
Client-quorum is not met
[2015-07-16 03:31:27.254509] I
[fuse-bridge.c:5080:fuse_graph_setup] 0-fuse: switched to graph 0
[2015-07-16 03:31:27.255262] I [fuse-bridge.c:4009:fuse_init]
0-glusterfs-fuse: FUSE inited with protocol versions:
glusterfs 7.22 kernel 7.17
[2015-07-16 03:31:27.256340] I
[afr-common.c:3722:afr_local_init] 0-gfsvolume-replicate-0: no
subvolumes up
[2015-07-16 03:31:27.256722] I
[afr-common.c:3722:afr_local_init] 0-gfsvolume-replicate-0: no
subvolumes up
[2015-07-16 03:31:27.256840] W
[fuse-bridge.c:779:fuse_attr_cbk] 0-glusterfs-fuse: 2:
LOOKUP() / => -1 (Transport endpoint is not connected)
[2015-07-16 03:31:27.284927] I
[fuse-bridge.c:4921:fuse_thread_proc] 0-fuse: unmounting /export
[2015-07-16 03:31:27.285919] W
[glusterfsd.c:1194:cleanup_and_exit] (--> 0-: received signum
(15), shutting down
[2015-07-16 03:31:27.286052] I [fuse-bridge.c:5599:fini]
0-fuse: Unmounting '/export'.



I have deployed gluster su

Re: [Gluster-users] Error when mounting glusterfs on VM

2015-07-16 Thread Kaamesh Kamalaaharan
Hi,
I have no idea why it says that. The volume is still up and accessible
from the physical machines. I force-started it and still no luck.

On Thu, Jul 16, 2015 at 2:27 PM, Sakshi Bansal  wrote:

> Hello,
>
> The logs have an error "All subvolumes down". Probably the volume is not
> started. To start a volume run:
> $ gluster volume start <volname>
>
> If it is started then try:
> $ gluster volume start <volname> force
>
> Try to mount the volume after that.
>
>
> Thanks and Regards
> Sakshi Bansal
>
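If the volume starts cleanly, the mount can then be retried from the VM (a sketch; gfs1 and gfsvolume are taken from the volfile earlier in the thread, /mnt/gluster is a hypothetical mount point):

```shell
# FUSE (native) mount:
mkdir -p /mnt/gluster
mount -t glusterfs gfs1:/gfsvolume /mnt/gluster

# Or Gluster-NFS (NFSv3 over TCP):
mount -t nfs -o vers=3,tcp gfs1:/gfsvolume /mnt/gluster
```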