Re: [Gluster-users] Error to move files

2016-06-16 Thread Pepe Charli
I do not know how to reproduce it.

The files were not links.

Thanks,
Pepe

2016-06-17 4:24 GMT+02:00 Vijay Bellur :
> On Wed, Jun 15, 2016 at 7:09 AM, Pepe Charli  wrote:
>> Hi,
>>
>> $ gluster vol info cfe-gv1
>>
>> Volume Name: cfe-gv1
>> Type: Distributed-Replicate
>> Volume ID: 70632183-4f26-4f03-9a48-e95f564a9e8c
>> Status: Started
>> Number of Bricks: 2 x 2 = 4
>> Transport-type: tcp
>> Bricks:
>> Brick1: srv-vln-gfsc1n1:/expgfs/cfe/brick1/brick
>> Brick2: srv-vln-gfsc1n2:/expgfs/cfe/brick1/brick
>> Brick3: srv-vln-gfsc1n3:/expgfs/cfe/brick1/brick
>> Brick4: srv-vln-gfsc1n4:/expgfs/cfe/brick1/brick
>> Options Reconfigured:
>> performance.readdir-ahead: on
>> nfs.disable: on
>> user.cifs: disable
>> user.smb: disable
>> user.cifs.disable: on
>> user.smb.disable: on
>> client.event-threads: 4
>> server.event-threads: 4
>> cluster.lookup-optimize: on
>> cluster.server-quorum-type: server
>> cluster.server-quorum-ratio: 51%
>>
>> I did not see any errors in logs.
>>
>> I could move the file through an intermediate directory, /tmp (not
>> GlusterFS):
>> $ mv /u01/2016/03/fichero.xml /tmp
>> $ mv /tmp/fichero.xml /u01/procesados/2016/03/
>>
>> I did not think to restart the volume.
>> What do you think could be the problem?
>>
>
> Would you happen to know how reproducible this problem is?
>
> Looking at the source code of coreutils, it does look like the error
> message mentioned in the earlier post is reported by an ln/link
> operation. dht uses links as part of a rename transaction, and the error
> is probably being triggered by that.
>
> Including dht maintainers Raghavendra and Shyam to take a look into this
> issue.
>
> Regards,
> Vijay
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS Mesos Isolator

2016-06-16 Thread Vijay Bellur
On Thu, Jun 16, 2016 at 9:50 AM, Savage, Rory (CORP)
 wrote:
> Hello-
>
> I am new to this list. I currently have a few Mesos clusters running Docker
> containers launched under Marathon. I am also utilizing the
> GlusterFS/Docker Volume Driver plugin for these containers, which works
> pretty well. However, as more Mesos frameworks mature, I want to drop
> Docker containers for native Mesos containers. I thought I saw a GlusterFS
> Mesos isolator project out there, but I am no longer able to find it.
> Does anyone know if there is a native Mesos GlusterFS isolator out there?
> Or perhaps know how to build one?
>

I had a quick look at the isolators available in upstream Mesos.
There is no Gluster-native isolator, but the shared filesystem
or posix isolators may work with a Gluster mount. What Gluster
features would you want in a native Gluster isolator? Quotas?
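
For reference, using a Gluster mount with the existing isolators could look
roughly like this on an agent host (a sketch only; the server, volume and
paths below are hypothetical):

$ mount -t glusterfs gluster1:/vol1 /mnt/gluster/vol1
# tasks/containers are then given a host path under /mnt/gluster/vol1,
# with the shared filesystem or posix isolators applying on top of that mount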

Thanks,
Vijay
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] Glusterfs 3.7.11 with LibGFApi in Qemu on Ubuntu Xenial does not work

2016-06-16 Thread Vijay Bellur
On Wed, Jun 15, 2016 at 8:07 AM, André Bauer  wrote:
> Hi Prasanna,
>
> Am 15.06.2016 um 12:09 schrieb Prasanna Kalever:
>
>>
>> I think you have missed enabling insecure binding, which is needed for
>> libgfapi access. Please try again after following the steps below:
>>
>> => edit /etc/glusterfs/glusterd.vol and add "option
>> rpc-auth-allow-insecure on" #(on all nodes)
>> => gluster vol set $volume server.allow-insecure on
>> => systemctl restart glusterd #(on all nodes)
>>
>
> No, that's not the case. All services are up and running correctly,
> allow-insecure is set, and the volume works fine with libgfapi access
> from my Ubuntu 14.04 KVM/Qemu servers.
>
> Just the server which was updated to Ubuntu 16.04 can't access the
> volume via libgfapi anymore (a fuse mount still works).
>
> GlusterFS logs are empty when trying to access the GlusterFS nodes, so I
> think the requests are blocked on the client side.
>
> Maybe apparmor again?
>

It might be worth checking again whether there are any errors in
glusterd's log file on the server. libvirtd seems to indicate that
fetching the volume configuration file from glusterd has failed.

If there are no errors in the glusterd or glusterfsd (brick) logs, then we
can possibly blame apparmor ;-).
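
A quick way to check both (a sketch; the glusterd log name below is the
default one referenced elsewhere on this list, and the AppArmor checks only
apply on the Ubuntu 16.04 client):

# on a GlusterFS server
$ grep " E " /var/log/glusterfs/etc-glusterfs-glusterd.vol.log | tail

# on the Ubuntu 16.04 client: look for AppArmor denials against qemu/libvirt
$ dmesg | grep -i 'apparmor="DENIED"'
$ aa-status | grep -i qemu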

Regards,
Vijay
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster volume listening on multiple IP address/networks

2016-06-16 Thread Vijay Bellur
On Wed, Jun 15, 2016 at 11:06 AM, ML mail  wrote:
> Hello
>
> In order to avoid losing performance to latency, I would like to have my
> Gluster volumes available through one IP address on each of my networks/VLANs,
> so that the gluster client and server are on the same network. My
> clients mount the volume using the native gluster protocol.
>
> So my question here is: how can I have a gluster volume listen on more
> than one network or IP address? Is this possible?
>

Gluster server processes listen on all available interfaces on the
storage server. One way of achieving this is to define the
volume with hostnames that resolve to the appropriate IP addresses in
the different VLANs.
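
For example (a sketch only; the hostnames, addresses and volume name below
are hypothetical): create the volume using hostnames, and let each client
resolve those hostnames to the server addresses on its own VLAN.

# /etc/hosts on a client in VLAN A
10.10.1.11  gluster-node1
10.10.1.12  gluster-node2

# /etc/hosts on a client in VLAN B
10.20.1.11  gluster-node1
10.20.1.12  gluster-node2

# both clients mount the same volume by hostname
$ mount -t glusterfs gluster-node1:/myvol /mnt/myvol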

HTH,
Vijay
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] gluster warning remote operation failed during recovery from backups

2016-06-16 Thread Vijay Bellur
On Thu, Jun 16, 2016 at 3:05 PM, Steve Dainard  wrote:
> I'm restoring some data to gluster from TSM backups and the client errors
> out trying to retrieve xattrs at some point during the restore, killing
> progress:
> ...
> Restoring   8,118,878
> /storage/data/climate/ANUSPLIN/ANUSPLIN300/monthly/pcp_grids/1918/pcp300_04.asc
> [Done]
> ANS1587W Unable to read extended attributes for object
> /storage/data/climate/ANUSPLIN/ANUSPLIN300/monthly/pcp_grids/1918/pcp300_08.asc
> due to errno: 34, reason: Numerical result out of range
>  ** Unsuccessful **
> ...
>
> In the gluster fuse logs for the volume I see this:
> [2016-06-16 10:07:55.622020] W [MSGID: 114031]
> [client-rpc-fops.c:1161:client3_3_getxattr_cbk] 0-storage-client-2: remote
> operation failed. Path:
> /data/climate/ANUSPLIN/ANUSPLIN300/monthly/pcp_grids/1918/pcp300_08.asc
> (0e98a94b-7b86-4a72-88a9-a99a787e059d). Key: (null) [Numerical result out of
> range]
> [2016-06-16 10:07:55.622110] W [fuse-bridge.c:3353:fuse_xattr_cbk]
> 0-glusterfs-fuse: 76197165: GETXATTR((null))
> /data/climate/ANUSPLIN/ANUSPLIN300/monthly/pcp_grids/1918/pcp300_08.asc =>
> -1 (Numerical result out of range)
>
> I'm trying to understand if gluster is bubbling up errors to the TSM client
> (gluster fault), or reporting errors the TSM client is generating (TSM
> fault).
>

Do you happen to see the same error reported by the posix translator in
any of the brick logs? That might help in figuring out where the
problem is stemming from.

As per getxattr(2), ERANGE is seen when the size of the value
buffer is too small to hold the result. Would it be possible to strace
the TSM client and see the size of the value buffer being passed?
Also, an extended attribute dump of the file on the brick
directory (either through attr or getfattr) can help in determining
the size necessary to hold all attributes.
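
Something along these lines (a sketch; <TSM-client-pid> and <brick-path> are
placeholders, the rest of the path is the one from your log):

# trace the xattr calls made by the TSM client; the last argument of
# getxattr/lgetxattr is the size of the value buffer being passed
$ strace -f -e trace=getxattr,lgetxattr,listxattr -p <TSM-client-pid>

# dump all extended attributes of the file directly on a brick
$ getfattr -d -m . -e hex \
    <brick-path>/data/climate/ANUSPLIN/ANUSPLIN300/monthly/pcp_grids/1918/pcp300_08.asc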

HTH,
Vijay
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Error to move files

2016-06-16 Thread Vijay Bellur
On Wed, Jun 15, 2016 at 7:09 AM, Pepe Charli  wrote:
> Hi,
>
> $ gluster vol info cfe-gv1
>
> Volume Name: cfe-gv1
> Type: Distributed-Replicate
> Volume ID: 70632183-4f26-4f03-9a48-e95f564a9e8c
> Status: Started
> Number of Bricks: 2 x 2 = 4
> Transport-type: tcp
> Bricks:
> Brick1: srv-vln-gfsc1n1:/expgfs/cfe/brick1/brick
> Brick2: srv-vln-gfsc1n2:/expgfs/cfe/brick1/brick
> Brick3: srv-vln-gfsc1n3:/expgfs/cfe/brick1/brick
> Brick4: srv-vln-gfsc1n4:/expgfs/cfe/brick1/brick
> Options Reconfigured:
> performance.readdir-ahead: on
> nfs.disable: on
> user.cifs: disable
> user.smb: disable
> user.cifs.disable: on
> user.smb.disable: on
> client.event-threads: 4
> server.event-threads: 4
> cluster.lookup-optimize: on
> cluster.server-quorum-type: server
> cluster.server-quorum-ratio: 51%
>
> I did not see any errors in logs.
>
> I could move the file through an intermediate directory, /tmp (not GlusterFS):
> $ mv /u01/2016/03/fichero.xml /tmp
> $ mv /tmp/fichero.xml /u01/procesados/2016/03/
>
> I did not think to restart the volume.
> What do you think could be the problem?
>

Would you happen to know how reproducible this problem is?

Looking at the source code of coreutils, it does look like the error
message mentioned in the earlier post is reported by an ln/link
operation. dht uses links as part of a rename transaction, and the error
is probably being triggered by that.
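
If this happens again, one way to check whether a dht link file is involved
is to look for the linkto xattr directly on a brick (a sketch; this assumes
/u01 is the client mount point, so the on-brick path below is a guess based
on the brick paths in your volume info):

$ getfattr -n trusted.glusterfs.dht.linkto \
    /expgfs/cfe/brick1/brick/2016/03/fichero.xml
# dht link files are zero-byte files with mode ---------T on the brick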

Including dht maintainers Raghavendra and Shyam to take a look into this issue.

Regards,
Vijay
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] stripe 2 replica 2 VS disperse 4 redundancy 2

2016-06-16 Thread Ravishankar N

On 06/17/2016 03:05 AM, Manuel Padrón Martínez wrote:

> Hi:
>
> I have a big doubt.
> I have 2 servers with 2 disks of 2 TB each. I've been thinking of creating a
> volume with stripe 2 replica 2, creating a brick from each disk and using
> server1:/b1 server2:/b1 server1:/b2 server2:/b2.

Striping is not actively developed. Sharding [1] is its successor.

> This seems to work fine: 4TB of space, and if one disk or even one server
> fails the volume is still there. But I just found disperse volumes, and I
> understand that disperse 4 redundancy 2 works in the same way.
>
> Any suggestion on which solution is better?

It depends on your workload, really.

> Which one is faster?

Replica volumes are faster than disperse because there is no erasure
code math to be done during I/O, but, as is obvious, you get less
volume space than with disperse.

> Which one would you recommend?

For high I/O rate workloads, replica could be a better choice. You
should try both and see what works best for you.

Btw, you need 6 bricks for a 4+2 disperse configuration.
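
To make the trade-off concrete (a rough sketch assuming 2TB bricks
throughout, matching your disks):

replica 2 over 4 x 2TB bricks:    usable = (4 / 2) x 2TB = 4TB,
                                  survives losing 1 brick per replica pair
disperse 4+2 over 6 x 2TB bricks: usable = 4 x 2TB = 8TB,
                                  survives losing any 2 of the 6 bricks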

-Ravi

[1] http://blog.gluster.org/2015/12/introducing-shard-translator/



Thanks from Canary Islands

Manuel Padrón Martínez
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users



___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] stripe 2 replica 2 VS disperse 4 redundancy 2

2016-06-16 Thread Manuel Padrón Martínez
Hi:

I have a big doubt.
I have 2 servers with 2 disks of 2 TB each. I've been thinking of creating a
volume with stripe 2 replica 2, creating a brick from each disk and using
server1:/b1 server2:/b1 server1:/b2 server2:/b2.
This seems to work fine: 4TB of space, and if one disk or even one server fails
the volume is still there. But I just found disperse volumes, and I understand
that disperse 4 redundancy 2 works in the same way.

Any suggestion on which solution is better? Which one is faster? Which one
would you recommend?

Thanks from Canary Islands

Manuel Padrón Martínez
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] gluster warning remote operation failed during recovery from backups

2016-06-16 Thread Steve Dainard
I'm restoring some data to gluster from TSM backups and the client errors
out trying to retrieve xattrs at some point during the restore, killing
progress:
...
Restoring   8,118,878
/storage/data/climate/ANUSPLIN/ANUSPLIN300/monthly/pcp_grids/1918/pcp300_04.asc
[Done]
ANS1587W Unable to read extended attributes for object
/storage/data/climate/ANUSPLIN/ANUSPLIN300/monthly/pcp_grids/1918/pcp300_08.asc
due to errno: 34, reason: Numerical result out of range
 ** Unsuccessful **
...

In the gluster fuse logs for the volume I see this:
[2016-06-16 10:07:55.622020] W [MSGID: 114031]
[client-rpc-fops.c:1161:client3_3_getxattr_cbk] 0-storage-client-2: remote
operation failed. Path:
/data/climate/ANUSPLIN/ANUSPLIN300/monthly/pcp_grids/1918/pcp300_08.asc
(0e98a94b-7b86-4a72-88a9-a99a787e059d). Key: (null) [Numerical result out
of range]
[2016-06-16 10:07:55.622110] W [fuse-bridge.c:3353:fuse_xattr_cbk]
0-glusterfs-fuse: 76197165: GETXATTR((null))
/data/climate/ANUSPLIN/ANUSPLIN300/monthly/pcp_grids/1918/pcp300_08.asc =>
-1 (Numerical result out of range)

I'm trying to understand if gluster is bubbling up errors to the TSM client
(gluster fault), or reporting errors the TSM client is generating (TSM
fault).

Thanks,
Steve
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] GlusterFS Mesos Isolator

2016-06-16 Thread Savage, Rory (CORP)
Hello-

I am new to this list. I currently have a few Mesos clusters running Docker
containers launched under Marathon. I am also utilizing the GlusterFS/Docker
Volume Driver plugin for these containers, which works pretty well. However,
as more Mesos frameworks mature, I want to drop Docker containers for native
Mesos containers. I thought I saw a GlusterFS Mesos isolator project out
there, but I am no longer able to find it. Does anyone know if there is a
native Mesos GlusterFS isolator out there? Or perhaps know how to build one?

Thanks,

Rory Savage

--
This message and any attachments are intended only for the use of the addressee 
and may contain information that is privileged and confidential. If the reader 
of the message is not the intended recipient or an authorized representative of 
the intended recipient, you are hereby notified that any dissemination of this 
communication is strictly prohibited. If you have received this communication 
in error, notify the sender immediately by return email and delete the message 
and any attachments from your system.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Problem with glusterd locks on gluster 3.6.1

2016-06-16 Thread Atin Mukherjee


On 06/16/2016 01:32 PM, B.K.Raghuram wrote:
> Thanks a lot Atin,
> 
> The problem is that we are using a forked version of 3.6.1 which has
> been modified to work with ZFS (for snapshots) but we do not have the
> resources to port that over to the later versions of gluster.
> 
> Would you know of anyone who would be willing to take this on?!

If you can cherry-pick the patches and apply them to your source and
rebuild it, I can point you to the patches, but you'd need to give me a
day's time as I have some other items to finish on my plate.
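
Roughly, once you have the patch list, backporting would look something like
this (a sketch; the Gerrit change ref below is a placeholder, not a real
change number):

$ cd /path/to/your/glusterfs-3.6.1-fork
$ git fetch https://review.gluster.org/glusterfs refs/changes/NN/NNNNN/N
$ git cherry-pick FETCH_HEAD    # repeat per patch, resolving conflicts
$ # then rebuild packages as you normally do for your fork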

~Atin
> 
> Regards,
> -Ram
> 
> On Thu, Jun 16, 2016 at 11:02 AM, Atin Mukherjee wrote:
> 
> 
> 
> On 06/16/2016 10:49 AM, B.K.Raghuram wrote:
> >
> >
> > On Wed, Jun 15, 2016 at 5:01 PM, Atin Mukherjee wrote:
> >
> >
> >
> > On 06/15/2016 04:24 PM, B.K.Raghuram wrote:
> > > Hi,
> > >
> > > We're using gluster 3.6.1 and we periodically find that gluster commands
> > > fail saying that it could not get the lock on one of the brick machines.
> > > The logs on that machine then say something like:
> > >
> > > [2016-06-15 08:17:03.076119] E
> > > [glusterd-op-sm.c:3058:glusterd_op_ac_lock] 0-management: Unable to
> > > acquire lock for vol2
> >
> > This is a possible case if concurrent volume operations are run. Do you
> > have any script which checks for volume status on an interval from all
> > the nodes? If so, then this is expected behavior.
> >
> >
> > Yes, I do have a couple of scripts that check on volume and quota
> > status. Given this, I do get an "Another transaction is in progress..."
> > message, which is ok. The problem is that sometimes I get the volume
> > lock held message, and it never goes away. This sometimes results in glusterd
> > consuming a lot of memory and CPU, and the problem can only be fixed with
> > a reboot. The log files are huge so I'm not sure if it's ok to attach
> > them to an email.
> 
> Ok, so this is known. We have fixed lots of stale lock issues in the 3.7
> branch, and some of them, if not all, were also backported to the 3.6 branch.
> The issue is that you are using 3.6.1, which is quite old. If you can upgrade
> to the latest version of 3.7, or at worst of 3.6, I am confident that this
> will go away.
> 
> ~Atin
> >
> > >
> > > After some time, glusterd then seems to give up and die...
> >
> > Do you mean glusterd shuts down or segfaults? If so, I am more interested
> > in analyzing this part. Could you provide us the glusterd log and
> > cmd_history log file, along with the core (in case of a SEGV), from all the
> > nodes for further analysis?
> >
> >
> > There is no segfault. glusterd just shuts down. As I said above,
> > sometimes this happens and sometimes it just continues to hog a lot of
> > memory and CPU..
> >
> >
> > >
> > > Interestingly, I also find the following line in the beginning of
> > > etc-glusterfs-glusterd.vol.log and I don't know if this has any
> > > significance to the issue:
> > >
> > > [2016-06-14 06:48:57.282290] I
> > > [glusterd-store.c:2063:glusterd_restore_op_version] 0-management:
> > > Detected new install. Setting op-version to maximum : 30600
> > >
> >
> >
> > What does this line signify?
> 
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Add hot tier brick on tiered volume

2016-06-16 Thread Mohammed Rafi K C
Hi Vincent,

We are working on supporting add-brick and remove-brick on a tiered volume.
One patch toward this is up on the master branch [1], and it is still
under review.

Until then, the workaround to scale a tiered volume is to detach the hot
tier, scale your cold tier, and then reattach the hot tier after adding
bricks to it.
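
On 3.7, that workaround would look roughly like this (a sketch only; the
volume and brick names are hypothetical, and the exact detach-tier syntax
can vary between 3.7.x releases):

$ gluster volume detach-tier vol start
$ gluster volume detach-tier vol commit     # once demotion/rebalance is done
$ gluster volume add-brick vol server5:/bricks/cold5 server6:/bricks/cold6
$ gluster volume attach-tier vol replica 2 server1:/ssd/hot1 server2:/ssd/hot2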

Let me know if you have further queries

[1] : http://review.gluster.org/13365

Regards
Rafi KC

On 06/16/2016 01:51 PM, Vincent Miszczak wrote:
>
> Hello,
>
>
> Playing with Gluster 3.7 (latest), I would like to be able to add
> bricks to a hot tier.
>
>
> Looks like it is not possible for now:
>
> volume attach-tier: failed: Volume vol is already a tier.
>
> I've seen a similar request here:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1229237
>
> It's from 3.1 and 2015.
>
>
> Is anyone working on this must-have feature (at least to me)?
>
> You know, data is growing, SSDs are cheap, and you want a bunch of them.
>
>
> Vincent
>
>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Add hot tier brick on tiered volume

2016-06-16 Thread Vincent Miszczak
Hello,


Playing with Gluster 3.7 (latest), I would like to be able to add bricks to a
hot tier.


Looks like it is not possible for now:

volume attach-tier: failed: Volume vol is already a tier.


I've seen a similar request here:

https://bugzilla.redhat.com/show_bug.cgi?id=1229237

It's from 3.1 and 2015.


Is anyone working on this must-have feature (at least to me)?

You know, data is growing, SSDs are cheap, and you want a bunch of them.


Vincent

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] GlusterFS over S3FS

2016-06-16 Thread Vincent Miszczak
Hello,


Thank you for pointing out this project, gonna try it.


The idea behind what I described is to provide SMB shares with automatic
placement based on usage patterns.

I work with large volumes, and only a fraction should have good (costly) 
performance. The rest can be archived.


Archiving "the old way" (I mean manually moving the files to a cold tier) is 
not convenient, it breaks URLs, unless someone has some tips about this.


I'm able to do the scenario described with normal Gluster nodes, some local
with costly storage, some remote with cheap storage. It's just tiering.

But I still have to manage the Linux behind the cold tier. That is not
interesting to me, as AWS, Google or whatever can provide cheap object storage
without maintenance (meaning fewer people in my organization to do the same
job).


Vincent


From: gluster-users-boun...@gluster.org  on 
behalf of Niklaas Baudet von Gersdorff 
Sent: Wednesday, June 15, 2016 1:32:58 PM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] GlusterFS over S3FS

Vincent Miszczak [2016-06-15 10:27 +] :

> I would like to combine Glusterfs with S3FS.
[...]
> I also have the idea to test this with Swift object storage. Advice is
> welcome.

Never tried this before. Maybe S3QL [1] works since it "is
a standard conforming, full featured UNIX file system that is
conceptually indistinguishable from any local file system".

1: https://bitbucket.org/nikratio/s3ql/
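
For what it's worth, S3QL usage is roughly along these lines (untested here;
the bucket name is hypothetical, and the exact storage-URL format depends on
the S3QL version, so check its documentation):

$ mkfs.s3ql s3://my-archive-bucket
$ mount.s3ql s3://my-archive-bucket /mnt/s3ql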

The entire approach sounds a bit hackish to me though. :-)

Niklaas
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Problem with glusterd locks on gluster 3.6.1

2016-06-16 Thread B.K.Raghuram
Thanks a lot Atin,

The problem is that we are using a forked version of 3.6.1 which has been
modified to work with ZFS (for snapshots) but we do not have the resources
to port that over to the later versions of gluster.

Would you know of anyone who would be willing to take this on?!

Regards,
-Ram

On Thu, Jun 16, 2016 at 11:02 AM, Atin Mukherjee 
wrote:

>
>
> On 06/16/2016 10:49 AM, B.K.Raghuram wrote:
> >
> >
> > On Wed, Jun 15, 2016 at 5:01 PM, Atin Mukherjee wrote:
> >
> >
> >
> > On 06/15/2016 04:24 PM, B.K.Raghuram wrote:
> > > Hi,
> > >
> > > We're using gluster 3.6.1 and we periodically find that gluster commands
> > > fail saying that it could not get the lock on one of the brick machines.
> > > The logs on that machine then say something like:
> > >
> > > [2016-06-15 08:17:03.076119] E
> > > [glusterd-op-sm.c:3058:glusterd_op_ac_lock] 0-management: Unable to
> > > acquire lock for vol2
> >
> > This is a possible case if concurrent volume operations are run. Do you
> > have any script which checks for volume status on an interval from all
> > the nodes? If so, then this is expected behavior.
> >
> >
> > Yes, I do have a couple of scripts that check on volume and quota
> > status. Given this, I do get an "Another transaction is in progress..."
> > message, which is ok. The problem is that sometimes I get the volume
> > lock held message, and it never goes away. This sometimes results in glusterd
> > consuming a lot of memory and CPU, and the problem can only be fixed with
> > a reboot. The log files are huge so I'm not sure if it's ok to attach
> > them to an email.
>
> Ok, so this is known. We have fixed lots of stale lock issues in the 3.7
> branch, and some of them, if not all, were also backported to the 3.6 branch.
> The issue is that you are using 3.6.1, which is quite old. If you can upgrade
> to the latest version of 3.7, or at worst of 3.6, I am confident that this
> will go away.
>
> ~Atin
> >
> > >
> > > After some time, glusterd then seems to give up and die...
> >
> > Do you mean glusterd shuts down or segfaults? If so, I am more interested
> > in analyzing this part. Could you provide us the glusterd log and
> > cmd_history log file, along with the core (in case of a SEGV), from all the
> > nodes for further analysis?
> >
> >
> > There is no segfault. glusterd just shuts down. As I said above,
> > sometimes this happens and sometimes it just continues to hog a lot of
> > memory and CPU..
> >
> >
> > >
> > > Interestingly, I also find the following line in the beginning of
> > > etc-glusterfs-glusterd.vol.log and I don't know if this has any
> > > significance to the issue:
> > >
> > > [2016-06-14 06:48:57.282290] I
> > > [glusterd-store.c:2063:glusterd_restore_op_version] 0-management:
> > > Detected new install. Setting op-version to maximum : 30600
> > >
> >
> >
> > What does this line signify?
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users