[Gluster-users] Glusterfs 3.7.13 node suddenly stops healing

2016-08-03 Thread Davy Croonen
Hi all, about a month ago we deployed a GlusterFS 3.7.13 cluster with 6 nodes (3 x 2 replication). Since this week, one node in the cluster has suddenly started reporting unsynced entries once a day. If I then run a gluster volume heal full command, the unsynced entries disappear until the next day.
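
The commands in question, sketched with a placeholder volume name (gv0 is not from the original message):

    # Check which entries are pending heal on a replicated volume
    gluster volume heal gv0 info

    # Trigger a full heal, as described in the report above
    gluster volume heal gv0 full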

Re: [Gluster-users] GlusterFS-3.7.14 released

2016-08-03 Thread Serkan Çoban
Thanks Pranith, I am waiting for the RPMs to show up; I will run the tests as soon as possible and inform you. On Wed, Aug 3, 2016 at 11:19 PM, Pranith Kumar Karampuri wrote: > > > On Thu, Aug 4, 2016 at 1:47 AM, Pranith Kumar Karampuri > wrote: >> >> >> >> On Thu, Aug 4, 2016 at 12:51 AM, Serkan Çoban

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-03 Thread Mahdi Adnan
Yeah, 5 MB/s, because the VMs are serving monitoring software which doesn't do much, but I can easily hit 250+ MB/s of write speed in benchmarks. -- Respectfully Mahdi A. Mahdi > From: gandalf.corvotempe...@gmail.com > Date: Wed, 3 Aug 2016 22:44:16 +0200 > Subject: Re: [Gluster-users] Glu

[Gluster-users] do you still need ctdb with gluster?

2016-08-03 Thread Leno Vo
Hi all, is CTDB still needed with Gluster? I just realized that DNS is already round-robin. Or does NFS go down when one node goes down? I found that CTDB also goes down when one node goes down, at least for a couple of minutes, even with mount errors=continue and other parameters I found here on the net.. tha

Re: [Gluster-users] Gluster replica over WAN...

2016-08-03 Thread Ted Miller
On 8/2/2016 8:14 AM, Gilberto Nunes wrote: Hello list... This is my first post on this list. I have here two IBM servers, with 9 TB of hard disk on each one. Between these servers, I have a WAN connecting two offices, let's say OFFICE1 and OFFICE2. This WAN connection is over fibre channel. When I

Re: [Gluster-users] 3.7.13 two node ssd solid rock

2016-08-03 Thread Leno Vo
I have to reboot each node to make it work, with a time interval of 5-8 mins; after that it got stable, but lots of shards still didn't heal, though there's no split-brain. Some VMs lost their vmx files, so I created new VMs and put them on the storage to get them working, whew!!! Sharding is still faulty, wo

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-03 Thread Gandalf Corvotempesta
2016-08-03 22:33 GMT+02:00 Mahdi Adnan : > Yeah, only 3 for now running in 3 replica. > around 5MB (900 IOps) write and 3MB (250 IOps) read and the disks are 900GB > 10K SAS. 5MB => five megabytes/s? Less than an old and ancient 4x DVD reader? Really? Are you sure? 50 VMs with five megabytes/s
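
As a quick sanity check on the reported figures (assuming the 5 MB/s and 900 IOPS are aggregate values for the same workload), the implied average request size is:

    # average request size = throughput / IOPS
    awk 'BEGIN { printf "%.1f KB per write\n", (5 * 1024) / 900 }'
    # => 5.7 KB per write, i.e. small random I/O, consistent with the
    #    monitoring workload described elsewhere in the thread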

Re: [Gluster-users] 3.7.13 two node ssd solid rock

2016-08-03 Thread Leno Vo
My mistake, the corruption happened after 6 hours; some VMs had shards that won't heal, but there's no split-brain. On Wednesday, August 3, 2016 11:13 AM, Leno Vo wrote: One of my Gluster 3.7.13 clusters is on two nodes only with Samsung SSD 1TB Pro RAID 5 x3; it already crashed two times because o

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-03 Thread Mahdi Adnan
Yeah, only 3 for now running in 3 replica. Around 5 MB/s (900 IOPS) write and 3 MB/s (250 IOPS) read, and the disks are 900GB 10K SAS. -- Respectfully Mahdi A. Mahdi > From: gandalf.corvotempe...@gmail.com > Date: Wed, 3 Aug 2016 22:09:59 +0200 > Subject: Re: [Gluster-users] Gluster 3.7.13

Re: [Gluster-users] GlusterFS-3.7.14 released

2016-08-03 Thread Pranith Kumar Karampuri
On Thu, Aug 4, 2016 at 1:47 AM, Pranith Kumar Karampuri wrote: > > > On Thu, Aug 4, 2016 at 12:51 AM, Serkan Çoban > wrote: > >> I use rpms for installation. Redhat/Centos 6.8. >> > > http://review.gluster.org/#/c/15084 is the patch. In some time the rpms > will be built actually. > In the same

Re: [Gluster-users] GlusterFS-3.7.14 released

2016-08-03 Thread Pranith Kumar Karampuri
On Thu, Aug 4, 2016 at 12:51 AM, Serkan Çoban wrote: > I use rpms for installation. Redhat/Centos 6.8. > http://review.gluster.org/#/c/15084 is the patch. The rpms will actually be built in some time. Use gluster volume set disperse.shd-max-threads While testing this I thought of ways to dec
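
The option being referenced would be set along these lines (the volume name and thread count are placeholders; the option name itself is taken from the message above):

    # Raise the number of self-heal daemon threads for a dispersed volume
    gluster volume set testvol disperse.shd-max-threads 8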

[Gluster-users] LVM thin provisioning for my geo-rep slave

2016-08-03 Thread ML mail
Hello, I am planning to use snapshots on my geo-rep slave and as such wanted first to ask if the following procedure regarding the LVM thin provisioning is correct: Create physical volume: pvcreate /dev/xvdb Create volume group: vgcreate gfs_vg /dev/xvdb Create thin pool: lvcreate -L 4T -T gf
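
For context, a typical LVM thin-provisioning sequence for a Gluster brick runs roughly as follows; the first steps are from the message, while the thin LV, filesystem, and mount steps are assumptions completing the truncated procedure:

    # From the message: physical volume and volume group
    pvcreate /dev/xvdb
    vgcreate gfs_vg /dev/xvdb

    # 4T thin pool, then a thin LV carved from it
    # (pool/LV names and the virtual size are assumptions)
    lvcreate -L 4T -T gfs_vg/gfs_pool
    lvcreate -V 3T -T gfs_vg/gfs_pool -n gfs_lv

    # XFS is the usual filesystem for Gluster bricks
    mkfs.xfs -i size=512 /dev/gfs_vg/gfs_lv
    mount /dev/gfs_vg/gfs_lv /bricks/brick1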

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-03 Thread Gandalf Corvotempesta
2016-08-03 21:40 GMT+02:00 Mahdi Adnan : > Hi, > > Currently, we have three UCS C220 M4, dual Xeon CPU (48 cores), 32GB of RAM, > 8x900GB spindles, with Intel X520 dual 10G ports. We are planning to migrate > more VMs and increase the number of servers in the cluster as soon as we > figure what's g

Re: [Gluster-users] Failed file system

2016-08-03 Thread Mahdi Adnan
Hi, I'm not an expert in Gluster, but I think it would be better to replace the downed brick with a new one. Maybe start from here: https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Managing%20Volumes/#replace-brick -- Respectfully Mahdi A. Mahdi Date: Wed, 3 Aug 2016 15:39:

Re: [Gluster-users] Failed file system

2016-08-03 Thread Atin Mukherjee
Use replace-brick commit force. @Pranith/@Anuradha - after this, will self-heal be triggered automatically, or is a manual trigger needed? On Thursday 4 August 2016, Andres E. Moya wrote: > Does anyone else have input? > > we are currently only running off 1 node and one node is offline in > repli
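
The suggested command, sketched with placeholder volume and brick paths:

    # Replace the failed brick with a fresh one (names are placeholders)
    gluster volume replace-brick myvol \
        node2:/bricks/old node2:/bricks/new commit force

    # Afterwards, check what is pending heal
    gluster volume heal myvol info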

Re: [Gluster-users] 3.7.13 two node ssd solid rock

2016-08-03 Thread Ted Miller
On 8/3/2016 11:13 AM, Leno Vo wrote: One of my Gluster 3.7.13 clusters is on two nodes only with Samsung SSD 1TB Pro RAID 5 x3; it already crashed two times because of brownouts and blackouts. It has production VMs on it, about 1.3TB. Never got split-brain, and healed quickly. Can we say 3.7.13 two nodes

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-03 Thread Serkan Çoban
I had stability problems with CentOS 7.2 and Gluster 3.7.11. Nodes were crashing without any clue. I could not find a solution and switched to CentOS 6.8. All problems were gone with 6.8. Maybe you can test with CentOS 6.8? On Wed, Aug 3, 2016 at 10:40 PM, Mahdi Adnan wrote: > Hi, > > Currently, we have

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-03 Thread Mahdi Adnan
Hi, Currently, we have three UCS C220 M4, dual Xeon CPU (48 cores), 32GB of RAM, 8x900GB spindles, with Intel X520 dual 10G ports. We are planning to migrate more VMs and increase the number of servers in the cluster as soon as we figure out what's going on with the NFS mount. -- Respectfully

Re: [Gluster-users] Failed file system

2016-08-03 Thread Andres E. Moya
Does anyone else have input? We are currently only running off one node, and the other node is offline in the replicated brick. We are not experiencing any downtime because the one node is up. I do not understand the best way to bring up a second node. Do we just re-create a file system on the no

Re: [Gluster-users] GlusterFS-3.7.14 released

2016-08-03 Thread Serkan Çoban
I use rpms for installation. Redhat/Centos 6.8. On Wed, Aug 3, 2016 at 10:16 PM, Pranith Kumar Karampuri wrote: > > > On Thu, Aug 4, 2016 at 12:45 AM, Serkan Çoban wrote: >> >> I prefer 3.7 if it is ok for you. Can you also provide build instructions? > > > 3.7 should be fine. Do you use rpms/de

Re: [Gluster-users] GlusterFS-3.7.14 released

2016-08-03 Thread Pranith Kumar Karampuri
On Thu, Aug 4, 2016 at 12:45 AM, Serkan Çoban wrote: > I prefer 3.7 if it is ok for you. Can you also provide build instructions? > 3.7 should be fine. Do you use rpms/debs/anything-else? > > On Wed, Aug 3, 2016 at 10:12 PM, Pranith Kumar Karampuri > wrote: > > > > > > On Thu, Aug 4, 2016 at

Re: [Gluster-users] GlusterFS-3.7.14 released

2016-08-03 Thread Serkan Çoban
I prefer 3.7 if it is ok for you. Can you also provide build instructions? On Wed, Aug 3, 2016 at 10:12 PM, Pranith Kumar Karampuri wrote: > > > On Thu, Aug 4, 2016 at 12:37 AM, Serkan Çoban wrote: >> >> Yes, but I can create 2+1(or 8+2) ec using two servers right? I have >> 26 disks on each ser

Re: [Gluster-users] GlusterFS-3.7.14 released

2016-08-03 Thread Pranith Kumar Karampuri
On Thu, Aug 4, 2016 at 12:37 AM, Serkan Çoban wrote: > Yes, but I can create 2+1(or 8+2) ec using two servers right? I have > 26 disks on each server. > On which release-branch do you want the patch? I am testing it on master-branch now. > > On Wed, Aug 3, 2016 at 9:59 PM, Pranith Kumar Karamp

Re: [Gluster-users] GlusterFS-3.7.14 released

2016-08-03 Thread Serkan Çoban
Yes, but I can create 2+1 (or 8+2) EC using two servers, right? I have 26 disks on each server. On Wed, Aug 3, 2016 at 9:59 PM, Pranith Kumar Karampuri wrote: > > > On Thu, Aug 4, 2016 at 12:23 AM, Serkan Çoban wrote: >> >> I have two of my storage servers free, I think I can use them for >> testi

Re: [Gluster-users] GlusterFS-3.7.14 released

2016-08-03 Thread Pranith Kumar Karampuri
On Thu, Aug 4, 2016 at 12:23 AM, Serkan Çoban wrote: > I have two of my storage servers free, I think I can use them for > testing. Is two server testing environment ok for you? > I think it would be better if you have at least 3. You can test it with a 2+1 EC configuration. > > On Wed, Aug 3, 2
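
A 2+1 dispersed test volume of the kind suggested can be created along these lines (server and brick paths are placeholders; with only two servers, two bricks share a node, hence the force):

    # 2 data + 1 redundancy erasure-coded (dispersed) volume
    gluster volume create ectest disperse 3 redundancy 1 \
        server1:/bricks/b1 server2:/bricks/b2 server1:/bricks/b3 force
    gluster volume start ectest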

Re: [Gluster-users] GlusterFS-3.7.14 released

2016-08-03 Thread Serkan Çoban
I have two of my storage servers free; I think I can use them for testing. Is a two-server testing environment OK for you? On Wed, Aug 3, 2016 at 9:44 PM, Pranith Kumar Karampuri wrote: > > > On Wed, Aug 3, 2016 at 6:01 PM, Serkan Çoban wrote: >> >> Hi, >> >> May I ask if multi-threaded self heal

Re: [Gluster-users] GlusterFS-3.7.14 released

2016-08-03 Thread Pranith Kumar Karampuri
On Wed, Aug 3, 2016 at 6:01 PM, Serkan Çoban wrote: > Hi, > > May I ask if multi-threaded self heal for distributed disperse volumes > is implemented in this release? > Serkan, At the moment I am a bit busy with different work. Is it possible for you to help test the feature if I provide a

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-03 Thread Gandalf Corvotempesta
2016-08-03 17:02 GMT+02:00 Mahdi Adnan : > the problem is, the current setup is used in a production environment, and > switching the mount point of +50 VMs from native nfs to nfs-ganesha is not > going to be smooth and without downtime, so i really appreciate your > thoughts on this matter. A li

Re: [Gluster-users] Gluster replica over WAN...

2016-08-03 Thread Gilberto Nunes
Hi, can somebody help??? Thanks 2016-08-02 9:14 GMT-03:00 Gilberto Nunes : > Hello list... > This is my first post on this list. > > I have here two IBM servers, with 9 TB of hard disk on each one. > Between these servers, I have a WAN connecting two offices, let's say OFFICE1 > and OFFICE2. > This W

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-03 Thread Mahdi Adnan
Hi, Unfortunately no, but I can set up a test bench and see if it gets the same results. -- Respectfully Mahdi A. Mahdi From: kdhan...@redhat.com Date: Wed, 3 Aug 2016 20:59:50 +0530 Subject: Re: [Gluster-users] Gluster 3.7.13 NFS Crash To: mahdi.ad...@outlook.com CC: gluster-users@gl

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-03 Thread Krutika Dhananjay
Do you have a test case that consistently recreates this problem? -Krutika On Wed, Aug 3, 2016 at 8:32 PM, Mahdi Adnan wrote: > Hi, > > So i have updated to 3.7.14 and i still have the same issue with NFS. > based on what i have provided so far from logs and dumps do you think it's > an NFS is

[Gluster-users] 3.7.13 two node ssd solid rock

2016-08-03 Thread Leno Vo
One of my Gluster 3.7.13 clusters is on two nodes only with Samsung SSD 1TB Pro RAID 5 x3; it already crashed two times because of brownouts and blackouts. It has production VMs on it, about 1.3TB. Never got split-brain, and healed quickly. Can we say 3.7.13 on two nodes with SSD is solid rock, or just lucky? M

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-03 Thread Mahdi Adnan
Hi, So I have updated to 3.7.14 and I still have the same issue with NFS. Based on what I have provided so far from logs and dumps, do you think it's an NFS issue? Should I switch to nfs-ganesha? The problem is, the current setup is used in a production environment, and switching the mount poin

Re: [Gluster-users] gluster 3.8.1 issue in compiling from source tarball

2016-08-03 Thread Kaleb S. KEITHLEY
On 08/03/2016 10:42 AM, Yannick Perret wrote: > On 03/08/2016 at 15:33, Amudhan P wrote: >> Hi, >> >> I am trying to install gluster 3.8.1 from tarball in Ubuntu 14.04. >> >> 1. When I run "./configure --disable-tiering", at the end it shows the msg >> >> configure: WARNING: cache variable ac_cv_build

Re: [Gluster-users] Gluster not saturating 10gb network

2016-08-03 Thread Leno Vo
Your 10G NIC is capable; the problem is the disk speed. Fix your disk speed first: use SSD, SSHD, or 15K SAS, in a RAID 0 or a RAID 5/6 of at least 4 disks. On Wednesday, August 3, 2016 2:40 AM, Kaamesh Kamalaaharan wrote: Hi , I have gluster 3.6.2 installed on my server network. Due to internal

Re: [Gluster-users] gluster 3.8.1 issue in compiling from source tarball

2016-08-03 Thread Yannick Perret
On 03/08/2016 at 15:33, Amudhan P wrote: Hi, I am trying to install gluster 3.8.1 from tarball in Ubuntu 14.04. 1. When I run "./configure --disable-tiering", at the end it shows the msg configure: WARNING: cache variable ac_cv_build contains a newline configure: WARNING: cache variable ac_cv_host

Re: [Gluster-users] [Gluster-devel] 3.7.13 & proxmox/qemu

2016-08-03 Thread David Gossage
On Wed, Aug 3, 2016 at 7:57 AM, Lindsay Mathieson <lindsay.mathie...@gmail.com> wrote: > On 3/08/2016 10:45 PM, Lindsay Mathieson wrote: > > On 3/08/2016 2:26 PM, Krutika Dhananjay wrote: > > Once I deleted old content from test volume it mounted to oVirt via > storage add when previously it woul

[Gluster-users] gluster 3.8.1 issue in compiling from source tarball

2016-08-03 Thread Amudhan P
Hi, I am trying to install gluster 3.8.1 from tarball in Ubuntu 14.04. 1. When I run "./configure --disable-tiering", at the end it shows the message: configure: WARNING: cache variable ac_cv_build contains a newline configure: WARNING: cache variable ac_cv_host contains a newline 2. Running "make" comma
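
For reference, the usual build sequence from a release tarball looks like this; the dependency list is a partial assumption for Ubuntu 14.04, so check the install docs:

    # Build dependencies (partial list, an assumption for Ubuntu 14.04)
    sudo apt-get install build-essential flex bison libssl-dev libxml2-dev \
        python-dev libaio-dev libreadline-dev libacl1-dev uuid-dev liblvm2-dev

    tar xzf glusterfs-3.8.1.tar.gz
    cd glusterfs-3.8.1
    ./configure --disable-tiering
    make
    sudo make install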

[Gluster-users] Meeting Update

2016-08-03 Thread Ankit Raj
We had a pretty well attended and lively meeting today, thanks to everyone who attended. The meeting minutes and logs are available at the links below. Minutes: https://meetbot.fedoraproject.org/gluster-meeting/2016-08-03/weekly_community_meeting_03-aug-2016.2016-08-03-12.01.html Minutes (text):

[Gluster-users] gluster reverting directory owndership?

2016-08-03 Thread Sergei Gerasenko
Hi, It seems that glusterfsd reverts ownership on directories on the brick. I would set a directory to be owned by root:root, and within half an hour it is back to the previous value. Audit logs show that glusterfsd performed the change, though it's not clear if something “asked” glusterfsd t

Re: [Gluster-users] [Gluster-devel] 3.7.13 & proxmox/qemu

2016-08-03 Thread Lindsay Mathieson
On 3/08/2016 10:45 PM, Lindsay Mathieson wrote: On 3/08/2016 2:26 PM, Krutika Dhananjay wrote: Once I deleted old content from test volume it mounted to oVirt via storage add when previously it would error out. I am now creating a test VM with default disk caching settings (pretty sure oVirt i

Re: [Gluster-users] [Gluster-devel] 3.7.13 & proxmox/qemu

2016-08-03 Thread Lindsay Mathieson
On 3/08/2016 2:26 PM, Krutika Dhananjay wrote: Once I deleted old content from test volume it mounted to oVirt via storage add when previously it would error out. I am now creating a test VM with default disk caching settings (pretty sure oVirt is defaulting to none rather than writeback/throu

Re: [Gluster-users] GlusterFS-3.7.14 released

2016-08-03 Thread Kaushal M
On Wed, Aug 3, 2016 at 6:01 PM, Serkan Çoban wrote: > Hi, > > May I ask if multi-threaded self heal for distributed disperse volumes > is implemented in this release? AFAIK, not yet. It's not available on the master branch yet. Pranith can give a better answer. > > Thanks, > Serkan > > On Tue,

Re: [Gluster-users] GlusterFS-3.7.14 released

2016-08-03 Thread Serkan Çoban
Hi, May I ask if multi-threaded self heal for distributed disperse volumes is implemented in this release? Thanks, Serkan On Tue, Aug 2, 2016 at 5:30 PM, David Gossage wrote: > On Tue, Aug 2, 2016 at 6:01 AM, Lindsay Mathieson > wrote: >> >> On 2/08/2016 5:07 PM, Kaushal M wrote: >>> >>> GlusterF

[Gluster-users] Weekly Gluster community meeting

2016-08-03 Thread Ankit Raj
Hi all, The weekly Gluster community meeting is about to take place in 30 min. Meeting details: - location: #gluster-meeting on Freenode IRC ( https://webchat.freenode.net/?channels=gluster-meeting ) - date: every Wednesday - time: 12:00 UTC (in your terminal, run: date -d "12:
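
The truncated command converts the meeting time to the local timezone; with GNU date it would presumably look like:

    # Show 12:00 UTC in your local timezone (GNU date)
    date -d "12:00 UTC"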