[Gluster-users] Glusterfs 3.7.13 node suddenly stops healing
Hi all, About a month ago we deployed a GlusterFS 3.7.13 cluster with 6 nodes (3 x 2 replication). Since this week, one node in the cluster has suddenly started reporting unsynced entries once a day. If I then run a gluster volume heal full command, the unsynced entries disappear until the next day. For completeness, the reported unsynced entries are always different. I checked all the logs but could not find a clue as to what's causing this. Anybody any ideas? Kind regards, Davy
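For reference, a minimal sketch of the commands involved here (the volume name is a placeholder, not taken from the original report):

# list entries that still need healing
gluster volume heal <VOLNAME> info
# trigger a full self-heal across all bricks
gluster volume heal <VOLNAME> full
# confirm nothing is in split-brain
gluster volume heal <VOLNAME> info split-brain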
Re: [Gluster-users] GlusterFS-3.7.14 released
Thanks Pranith, I am waiting for RPMs to show, I will do the tests as soon as possible and inform you. On Wed, Aug 3, 2016 at 11:19 PM, Pranith Kumar Karampuri wrote: >> http://review.gluster.org/#/c/15084 is the patch. In some time the rpms will be built. > In the same URL above it will actually post the rpms for fedora/el6/el7 at the end of the page. >> Use gluster volume set <volname> disperse.shd-max-threads <thread-count> (range: 1-64) >> While testing this I thought of ways to decrease the number of crawls as well. But they are a bit involved. Try to create the same set of data and see what time it takes to complete heals as you increase the number of parallel heals from 1 to 64.
Re: [Gluster-users] Gluster 3.7.13 NFS Crash
Yeah, 5 MB/s, because the VMs are serving monitoring software which doesn't do much, but I can easily hit 250+ MB/s of write speed in benchmarks. -- Respectfully Mahdi A. Mahdi > From: gandalf.corvotempe...@gmail.com > Date: Wed, 3 Aug 2016 22:44:16 +0200 > Subject: Re: [Gluster-users] Gluster 3.7.13 NFS Crash > To: mahdi.ad...@outlook.com > CC: gluster-users@gluster.org > > 5MB => five megabytes/s? Really? Are you sure? 50 VMs with five megabytes/s of reading speed? > One SAS 10k disk should be able to reach at least 100MB/s
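For what it's worth, a write-speed number like that is usually measured with something like fio; a minimal sketch, assuming a test file on the mounted Gluster volume (the path, size and runtime are placeholders, not from this thread):

# sequential write throughput (MB/s)
fio --name=seqwrite --filename=/mnt/gluster/fio-test.bin --size=1G --bs=1M --rw=write --ioengine=libaio --direct=1 --runtime=60 --time_based
# random write IOPS
fio --name=randwrite --filename=/mnt/gluster/fio-test.bin --size=1G --bs=4k --rw=randwrite --iodepth=32 --ioengine=libaio --direct=1 --runtime=60 --time_based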
[Gluster-users] do you still need ctdb with gluster?
Hi all, is CTDB still needed with Gluster? I just realized that DNS is already round robin. Or does NFS go down when one node goes down? I found CTDB also goes down when one node goes down, at least for a couple of minutes, even with mount errors=continue and other parameters I found here on the net. Thanks for any comment you may add.
Re: [Gluster-users] Gluster replica over WAN...
On 8/2/2016 8:14 AM, Gilberto Nunes wrote: Hello list... This is my first post on this list. I have here two IBM servers, each with 9 TB of hard disk. Between these servers, I have a WAN connecting two offices, let's say OFFICE1 and OFFICE2. This WAN connection is over fibre channel. When I set up gluster with replica with two bricks, and mount the gluster volume in another folder, like this: mount -t glusterfs localhost:/VOLUME /STORAGE and when I go to that folder and try to access the content, I get a lot of timeouts... Even a single ls takes a long time to return the list. This folder, /STORAGE, is accessed by many users through a samba share. So, when OFFICE1 accesses the files over \\server\share, there is a long delay to show the files. Sometimes it times out. My question is: is there some way to improve gluster to work faster in this scenario?? Thanks a lot. Best regards -- Gilberto Ferreira +55 (47) 9676-7530 Skype: gilberto.nunes36

I probably can't help, but I can tell you the kind of things that will help people to understand your problem better. What is the bandwidth between sites? What is the ping time between sites? If you run "ping -c1 " from OFFICE1, how many pings fail? What else can you tell us about the WAN connection that would help us understand your situation? How many files are there in the \\server\share directory? Are people at both sites actively writing files to the shared storage? If not, you may need to look at gluster geo-replication. It is one way, so all writing would have to be done at OFFICE1, with OFFICE2 having read-only access. (Some things it works wonderfully for; other things it doesn't work for at all.) I can also tell you that no one who has tried it will endorse running a production replica 2 cluster over WAN. You are asking for either downtime or split-brain or both. Replica 3 is the minimum for production use of a replicated cluster, even over LAN. Ted Miller Elkhart, IN, USA
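As a sketch of how one might gather the numbers being asked about here (the hostname is a placeholder, and iperf3 being installed on both ends is an assumption, not something stated in the thread):

# packet loss and round-trip time from OFFICE1 to the OFFICE2 server
ping -c 100 office2-server
# usable WAN bandwidth: run "iperf3 -s" on the OFFICE2 server first, then from OFFICE1:
iperf3 -c office2-server -t 30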
Re: [Gluster-users] 3.7.13 two node ssd solid rock
I had to reboot each node to get it working, with a time interval of 5-8 mins; after that it got stable, but still lots of shards didn't heal, though there's no split-brain. Some VMs lost their vmx, so I created a new VM and put it on the storage to get things working, whew! Sharding is still faulty; I won't recommend it yet. Going back without it. On Wednesday, August 3, 2016 4:34 PM, Leno Vo wrote: my mistakes, the corruption happened after 6 hours, some vm had sharding won't heal but there's no split brain
Re: [Gluster-users] Gluster 3.7.13 NFS Crash
2016-08-03 22:33 GMT+02:00 Mahdi Adnan : > Yeah, only 3 for now running in 3 replica. > around 5MB (900 IOps) write and 3MB (250 IOps) read and the disks are 900GB > 10K SAS. 5MB => five megabytes/s? Less than an old 4x DVD reader? Really? Are you sure? 50 VMs with five megabytes/s of reading speed? One SAS 10k disk should be able to reach at least 100MB/s. Currently in my test cluster with 3 servers, replica 3, 1 brick per server, all 7200 SATA disks, 1GB bonded network, I'm able to write at about 50MB/s, ten times faster than you.
Re: [Gluster-users] 3.7.13 two node ssd solid rock
My mistake, the corruption happened after 6 hours; some VMs had shards that won't heal, but there's no split-brain. On Wednesday, August 3, 2016 11:13 AM, Leno Vo wrote: One of my gluster 3713 is on two nodes only with samsung ssd 1tb pro raid 5 x3, it already crashed two times because of brown outs and black outs, it got production vms on it, about 1.3TB. Never got split-brain, and healed quickly. Can we say 3.7.13 two nodes with ssd is rock solid or just lucky? My other gluster is on 3 nodes 3713, but one node never got up (old server proliant wants to retire), ssd raid 5 with combination sshd lol laptop seagate, it never healed about 586 occurrences but there's no split-brain too. and vms are intact too, working fine and fast. ahh never turned on caching on the array, the esx might not come up right away, u need to go to setup first to make it work and restart and then you can go to array setup (hp array F8) and turn off caching. then esx finally boots up.
Re: [Gluster-users] Gluster 3.7.13 NFS Crash
Yeah, only 3 for now, running in 3 replica. Around 5MB (900 IOPS) write and 3MB (250 IOPS) read, and the disks are 900GB 10K SAS. -- Respectfully Mahdi A. Mahdi > From: gandalf.corvotempe...@gmail.com > Date: Wed, 3 Aug 2016 22:09:59 +0200 > Subject: Re: [Gluster-users] Gluster 3.7.13 NFS Crash > To: mahdi.ad...@outlook.com > CC: gluster-users@gluster.org > > Only 3 servers? How many IOPS are you getting and how much bandwidth > when reading/writing? > 900GB SAS 15k?
Re: [Gluster-users] GlusterFS-3.7.14 released
On Thu, Aug 4, 2016 at 1:47 AM, Pranith Kumar Karampuri wrote: > On Thu, Aug 4, 2016 at 12:51 AM, Serkan Çoban wrote: >> I use rpms for installation. Redhat/Centos 6.8. > http://review.gluster.org/#/c/15084 is the patch. In some time the rpms > will be built. In the same URL above it will actually post the rpms for fedora/el6/el7 at the end of the page. > Use gluster volume set <volname> disperse.shd-max-threads <thread-count> (range: 1-64) > While testing this I thought of ways to decrease the number of crawls as > well. But they are a bit involved. Try to create the same set of data and see > what time it takes to complete heals as you increase the number of parallel heals from 1 to 64.
Re: [Gluster-users] GlusterFS-3.7.14 released
On Thu, Aug 4, 2016 at 12:51 AM, Serkan Çoban wrote: > I use rpms for installation. Redhat/Centos 6.8. http://review.gluster.org/#/c/15084 is the patch. In some time the rpms will be built. Use gluster volume set <volname> disperse.shd-max-threads <thread-count> (range: 1-64). While testing this I thought of ways to decrease the number of crawls as well. But they are a bit involved. Try to create the same set of data and see what time it takes to complete heals as you increase the number of parallel heals from 1 to 64.
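As a concrete sketch of the tuning step described above, once a build containing that patch is installed (volume name and thread count are placeholders):

# raise the number of parallel self-heal threads for disperse volumes (range 1-64 per the patch)
gluster volume set <volname> disperse.shd-max-threads 8
# kick off a heal and time how long it takes to finish
gluster volume heal <volname> full
gluster volume heal <volname> info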
[Gluster-users] LVM thin provisionning for my geo-rep slave
Hello, I am planning to use snapshots on my geo-rep slave and as such wanted first to ask if the following procedure regarding the LVM thin provisioning is correct:

Create physical volume: pvcreate /dev/xvdb
Create volume group: vgcreate gfs_vg /dev/xvdb
Create thin pool: lvcreate -L 4T -T gfs_vg/project1_tp
Create thin volume: lvcreate -V 4T -T gfs_vg/project1_tp -n project1_tv
Create filesystem: mkfs.xfs /dev/gfs_vg/project1_tv

Another question: this is for one specific volume (project1) of my gluster master. Now say I have another volume (project2) and want to add it to my geo-rep slave, also with LVM thin provisioning: I just need to create a new thin pool as well as a thin volume, is that correct? I can re-use the same volume group, right? Regards, ML
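A minimal sketch of what adding the second volume could look like under that assumption (sizes and the project2 names are just illustrative, reusing the existing gfs_vg volume group):

# new thin pool and thin volume for project2 in the same volume group
lvcreate -L 4T -T gfs_vg/project2_tp
lvcreate -V 4T -T gfs_vg/project2_tp -n project2_tv
mkfs.xfs /dev/gfs_vg/project2_tv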
Re: [Gluster-users] Gluster 3.7.13 NFS Crash
2016-08-03 21:40 GMT+02:00 Mahdi Adnan : > Hi, > > Currently, we have three UCS C220 M4, dual Xeon CPU (48 cores), 32GB of RAM, > 8x900GB spindles, with Intel X520 dual 10G ports. We are planning to migrate > more VMs and increase the number of servers in the cluster as soon as we > figure what's going on with the NFS mount. Only 3 servers? How many IOPS are you getting and how much bandwidth when reading/writing? 900GB SAS 15k? ___ Gluster-users mailing list Gluster-users@gluster.org http://www.gluster.org/mailman/listinfo/gluster-users
Re: [Gluster-users] Failed file system
Hi, I'm not an expert in Gluster, but I think it would be better to replace the downed brick with a new one. Maybe start from here: https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Managing%20Volumes/#replace-brick -- Respectfully Mahdi A. Mahdi Date: Wed, 3 Aug 2016 15:39:35 -0400 From: am...@moyasolutions.com To: gluster-users@gluster.org Subject: Re: [Gluster-users] Failed file system Does anyone else have input? We are currently only running off 1 node and one node is offline in the replicate brick. We are not experiencing any downtime because the 1 node is up. I do not understand which is the best way to bring up a second node. Do we just re-create a file system on the node that is down and the mount points and allow gluster to heal (my concern with this is whether the node that is down will somehow take precedence and wipe out the data on the healthy node instead of vice versa)? Or do we fully wipe out the config on the node that is down, re-create the file system and re-add the node that is down into gluster using the add-brick command with replica 3, and then wait for it to heal, then run the remove-brick command for the failed brick? Which would be the safest and easiest to accomplish? Thanks for any input.
Re: [Gluster-users] Failed file system
Use replace-brick commit force. @Pranith/@Anuradha - after this, will self-heal be triggered automatically or is a manual trigger needed? On Thursday 4 August 2016, Andres E. Moya wrote: > Does anyone else have input? > we are currently only running off 1 node and one node is offline in > replicate brick. > we are not experiencing any downtime because the 1 node is up. > I do not understand which is the best way to bring up a second node. > Do we just re-create a file system on the node that is down and the mount > points and allow gluster to heal (my concern with this is whether the node > that is down will somehow take precedence and wipe out the data on the > healthy node instead of vice versa) > Or do we fully wipe out the config on the node that is down, re-create the > file system and re-add the node that is down into gluster using the add-brick > command replica 3, and then wait for it to heal, then run the remove-brick > command for the failed brick > which would be the safest and easiest to accomplish > thanks for any input
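For reference, a sketch of the replace-brick step being suggested (the volume name, hostnames and brick paths below are placeholders, not taken from this thread):

# replace the failed brick with a freshly prepared one on the rebuilt node
gluster volume replace-brick <VOLNAME> failed-server:/bricks/brick1 new-server:/bricks/brick1 commit force
# check whether self-heal has picked up the work
gluster volume heal <VOLNAME> info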
Re: [Gluster-users] 3.7.13 two node ssd solid rock
On 8/3/2016 11:13 AM, Leno Vo wrote: One of my gluster 3713 is on two nodes only with samsung ssd 1tb pro raid 5 x3, it already crashed two times because of brown outs and black outs, it got production vms on it, about 1.3TB. Never got split-brain, and healed quickly. Can we say 3.7.13 two nodes with ssd is rock solid or just lucky? My other gluster is on 3 nodes 3713, but one node never got up (old server proliant wants to retire), ssd raid 5 with combination sshd lol laptop seagate, it never healed about 586 occurrences but there's no split-brain too. and vms are intact too, working fine and fast. ahh never turned on caching on the array, the esx might not come up right away, u need to go to setup first to make it work and restart and then you can go to array setup (hp array F8) and turn off caching. then esx finally boots up.

I would say you are very lucky. I would not use anything less than replica 3 in production. Ted Miller Elkhart, IN, USA
Re: [Gluster-users] Gluster 3.7.13 NFS Crash
I had stability problems with CentOS 7.2 and Gluster 3.7.11. Nodes were crashing without any clue. I could not find a solution and switched to CentOS 6.8. All problems were gone with 6.8. Maybe you can test with CentOS 6.8? On Wed, Aug 3, 2016 at 10:40 PM, Mahdi Adnan wrote: > Hi, > Currently, we have three UCS C220 M4, dual Xeon CPU (48 cores), 32GB of RAM, > 8x900GB spindles, with Intel X520 dual 10G ports. We are planning to migrate > more VMs and increase the number of servers in the cluster as soon as we > figure out what's going on with the NFS mount. > -- > Respectfully > Mahdi A. Mahdi
Re: [Gluster-users] Gluster 3.7.13 NFS Crash
Hi, Currently, we have three UCS C220 M4, dual Xeon CPU (48 cores), 32GB of RAM, 8x900GB spindles, with Intel X520 dual 10G ports. We are planning to migrate more VMs and increase the number of servers in the cluster as soon as we figure out what's going on with the NFS mount. -- Respectfully Mahdi A. Mahdi > From: gandalf.corvotempe...@gmail.com > Date: Wed, 3 Aug 2016 20:25:56 +0200 > Subject: Re: [Gluster-users] Gluster 3.7.13 NFS Crash > To: mahdi.ad...@outlook.com > CC: kdhan...@redhat.com; gluster-users@gluster.org > > 2016-08-03 17:02 GMT+02:00 Mahdi Adnan : > > the problem is, the current setup is used in a production environment, and > > switching the mount point of +50 VMs from native nfs to nfs-ganesha is not > > going to be smooth and without downtime, so i really appreciate your > > thoughts on this matter. > > A little bit OT: > > 50+ VMs? Could you please share your hardware and network infrastructure? > We are thinking about a gluster cluster for hosting about 80-100 VMs and we are > looking for some production clusters to use as reference.
Re: [Gluster-users] Failed file system
Does anyone else have input?

We are currently only running off 1 node and one node is offline in the replicate brick. We are not experiencing any downtime because the 1 node is up. I do not understand which is the best way to bring up a second node.

Do we just re-create a file system on the node that is down and the mount points and allow gluster to heal (my concern with this is whether the node that is down will somehow take precedence and wipe out the data on the healthy node instead of vice versa)?

Or do we fully wipe out the config on the node that is down, re-create the file system and re-add the node that is down into gluster using the add-brick command with replica 3, and then wait for it to heal, then run the remove-brick command for the failed brick?

Which would be the safest and easiest to accomplish? Thanks for any input.

From: "Leno Vo" To: "Andres E. Moya" Cc: "gluster-users" Sent: Tuesday, August 2, 2016 6:45:27 PM Subject: Re: [Gluster-users] Failed file system

If you don't want any downtime (in the case that your node 2 really dies), you have to create a new gluster SAN (if you have the resources of course, 3 nodes as much as possible this time), and then just migrate your VMs (or files), therefore no downtime, but you have to cross your fingers that the only remaining node will not die too... also without sharding the VM migration, especially an RDP one, will be slow for users to access until it is migrated. You have to start testing sharding, it's fast and cool...

On Tuesday, August 2, 2016 2:51 PM, Andres E. Moya wrote: Couldn't we just add a new server by: gluster peer probe, gluster volume add-brick replica 3 (will this command succeed with 1 current failed brick?), let it heal, then gluster volume remove-brick?

From: "Leno Vo" To: "Andres E. Moya", "gluster-users" Sent: Tuesday, August 2, 2016 1:26:42 PM Subject: Re: [Gluster-users] Failed file system

You need to have downtime to recreate the second node; two nodes is actually not good for production and you should have put RAID 1 or RAID 5 as your gluster storage. When you recreate the second node you might try running some VMs that need to be up while the rest of the VMs are down, but stop all backups, and if you have replication, stop it too. If you have 1G NIC, 2 CPU and less than 8G RAM, then I suggest turning off all the VMs during recreation of the second node. Someone said that if you have sharding with 3.7.x, maybe some VIP VMs can be up...

If it is just a filesystem, then just turn off the backup service until you recreate the second node. Depending on your resources and how big your storage is, it might take hours to recreate it, or even days...

Here's my process on recreating the second or third node (copied and modified from the net):

#make sure partition is already added
This procedure is for replacing a failed server, IF your newly installed server has the same hostname as the failed one:
(If your new server will have a different hostname, see this article instead.)
For purposes of this example, the server that crashed will be server3 and the other servers will be server1 and server2.
On both server1 and server2, make sure hostname server3 resolves to the correct IP address of the new replacement server.

#On either server1 or server2, do
grep server3 /var/lib/glusterd/peers/*
This will return a uuid followed by ":hostname1=server3"

#On server3, make sure glusterd is stopped, then do
echo UUID={uuid from previous step}>/var/lib/glusterd/glusterd.info

#actual testing below,
[root@node1 ~]# cat /var/lib/glusterd/glusterd.info
UUID=4b9d153c-5958-4dbe-8f91-7b5002882aac
operating-version=30710
#the second line is new. maybe not needed...

On server3:
make sure that all brick directories are created/mounted
start glusterd
peer probe one of the existing servers

#restart glusterd, check that the full peer list has been populated using
gluster peer status
(if peers are missing, probe them explicitly, then restart glusterd again)

#check that the full volume configuration has been populated using
gluster volume info
if volume configuration is missing, do
#on the other node
gluster volume sync "replace-node" all

#on the node to be replaced
setfattr -n trusted.glusterfs.volume-id -v 0x$(grep volume-id /var/lib/glusterd/vols/v1/info | cut -d= -f2 | sed 's/-//g') /gfs/b1/v1
setfattr -n trusted.glusterfs.volume-id -v 0x$(grep volume-id /var/lib/glusterd/vols/v2/info | cut -d= -f2 | sed 's/-//g') /gfs/b2/v2
setfattr -n trusted.glusterfs.volume-id -v 0x$(grep volume-id /var/lib/glusterd/vols/config/info | cut -d= -f2 | sed 's/-//g') /gfs/b1/config/c1
mount -t glusterfs localhost:config /data/data1

#install ctdb if not yet installed and put it back online, use the step on creating the ctdb config but
#use your common sense not to delete or modify the current one.
gluster vol heal v1 full
gluster vol heal v2 full
gluster vol heal
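After a rebuild like this, heal progress can be checked per volume; a small sketch using the same v1/v2 volume names from the procedure above:

# see how many entries are still pending heal on each brick
gluster volume heal v1 info
gluster volume heal v2 info
# verify nothing has ended up in split-brain
gluster volume heal v1 info split-brain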
Re: [Gluster-users] GlusterFS-3.7.14 released
I use rpms for installation. Redhat/Centos 6.8. On Wed, Aug 3, 2016 at 10:16 PM, Pranith Kumar Karampuri wrote: > On Thu, Aug 4, 2016 at 12:45 AM, Serkan Çoban wrote: >> I prefer 3.7 if it is ok for you. Can you also provide build instructions? > 3.7 should be fine. Do you use rpms/debs/anything-else?
Re: [Gluster-users] GlusterFS-3.7.14 released
On Thu, Aug 4, 2016 at 12:45 AM, Serkan Çoban wrote: > I prefer 3.7 if it is ok for you. Can you also provide build instructions? 3.7 should be fine. Do you use rpms/debs/anything-else? -- Pranith
Re: [Gluster-users] GlusterFS-3.7.14 released
I prefer 3.7 if it is ok for you. Can you also provide build instructions? On Wed, Aug 3, 2016 at 10:12 PM, Pranith Kumar Karampuri wrote: > On Thu, Aug 4, 2016 at 12:37 AM, Serkan Çoban wrote: >> Yes, but I can create 2+1(or 8+2) ec using two servers right? I have 26 disks on each server. > On which release-branch do you want the patch? I am testing it on master-branch now.
Re: [Gluster-users] GlusterFS-3.7.14 released
On Thu, Aug 4, 2016 at 12:37 AM, Serkan Çoban wrote: > Yes, but I can create 2+1(or 8+2) ec using two servers right? I have 26 disks on each server. On which release-branch do you want the patch? I am testing it on master-branch now. -- Pranith
Re: [Gluster-users] GlusterFS-3.7.14 released
Yes, but I can create 2+1(or 8+2) ec using two servers right? I have 26 disks on each server. On Wed, Aug 3, 2016 at 9:59 PM, Pranith Kumar Karampuri wrote: > > > On Thu, Aug 4, 2016 at 12:23 AM, Serkan Çoban wrote: >> >> I have two of my storage servers free, I think I can use them for >> testing. Is two server testing environment ok for you? > > > I think it would be better if you have at least 3. You can test it with 2+1 > ec configuration. > >> >> >> On Wed, Aug 3, 2016 at 9:44 PM, Pranith Kumar Karampuri >> wrote: >> > >> > >> > On Wed, Aug 3, 2016 at 6:01 PM, Serkan Çoban >> > wrote: >> >> >> >> Hi, >> >> >> >> May I ask if multi-threaded self heal for distributed disperse volumes >> >> implemented in this release? >> > >> > >> > Serkan, >> > At the moment I am a bit busy with different work, Is it >> > possible >> > for you to help test the feature if I provide a patch? Actually the >> > patch >> > should be small. Testing is where lot of time will be spent on. >> > >> >> >> >> >> >> Thanks, >> >> Serkan >> >> >> >> On Tue, Aug 2, 2016 at 5:30 PM, David Gossage >> >> wrote: >> >> > On Tue, Aug 2, 2016 at 6:01 AM, Lindsay Mathieson >> >> > wrote: >> >> >> >> >> >> On 2/08/2016 5:07 PM, Kaushal M wrote: >> >> >>> >> >> >>> GlusterFS-3.7.14 has been released. This is a regular minor >> >> >>> release. >> >> >>> The release-notes are available at >> >> >>> >> >> >>> >> >> >>> >> >> >>> https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.14.md >> >> >> >> >> >> >> >> >> Thanks Kaushal, I'll check it out >> >> >> >> >> > >> >> > So far on my test box its working as expected. At least the issues >> >> > that >> >> > prevented it from running as before have disappeared. Will need to >> >> > see >> >> > how >> >> > my test VM behaves after a few days. >> >> > >> >> > >> >> > >> >> >> -- >> >> >> Lindsay Mathieson >> >> >> >> >> >> ___ >> >> >> Gluster-users mailing list >> >> >> Gluster-users@gluster.org >> >> >> http://www.gluster.org/mailman/listinfo/gluster-users >> >> > >> >> > >> >> > >> >> > ___ >> >> > Gluster-users mailing list >> >> > Gluster-users@gluster.org >> >> > http://www.gluster.org/mailman/listinfo/gluster-users >> >> ___ >> >> Gluster-users mailing list >> >> Gluster-users@gluster.org >> >> http://www.gluster.org/mailman/listinfo/gluster-users >> > >> > >> > >> > >> > -- >> > Pranith > > > > > -- > Pranith ___ Gluster-users mailing list Gluster-users@gluster.org http://www.gluster.org/mailman/listinfo/gluster-users
Re: [Gluster-users] GlusterFS-3.7.14 released
On Thu, Aug 4, 2016 at 12:23 AM, Serkan Çoban wrote: > I have two of my storage servers free, I think I can use them for > testing. Is two server testing environment ok for you? > I think it would be better if you have at least 3. You can test it with 2+1 ec configuration. > > On Wed, Aug 3, 2016 at 9:44 PM, Pranith Kumar Karampuri > wrote: > > > > > > On Wed, Aug 3, 2016 at 6:01 PM, Serkan Çoban > wrote: > >> > >> Hi, > >> > >> May I ask if multi-threaded self heal for distributed disperse volumes > >> implemented in this release? > > > > > > Serkan, > > At the moment I am a bit busy with different work, Is it possible > > for you to help test the feature if I provide a patch? Actually the patch > > should be small. Testing is where lot of time will be spent on. > > > >> > >> > >> Thanks, > >> Serkan > >> > >> On Tue, Aug 2, 2016 at 5:30 PM, David Gossage > >> wrote: > >> > On Tue, Aug 2, 2016 at 6:01 AM, Lindsay Mathieson > >> > wrote: > >> >> > >> >> On 2/08/2016 5:07 PM, Kaushal M wrote: > >> >>> > >> >>> GlusterFS-3.7.14 has been released. This is a regular minor release. > >> >>> The release-notes are available at > >> >>> > >> >>> > >> >>> > https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.14.md > >> >> > >> >> > >> >> Thanks Kaushal, I'll check it out > >> >> > >> > > >> > So far on my test box its working as expected. At least the issues > that > >> > prevented it from running as before have disappeared. Will need to > see > >> > how > >> > my test VM behaves after a few days. > >> > > >> > > >> > > >> >> -- > >> >> Lindsay Mathieson > >> >> > >> >> ___ > >> >> Gluster-users mailing list > >> >> Gluster-users@gluster.org > >> >> http://www.gluster.org/mailman/listinfo/gluster-users > >> > > >> > > >> > > >> > ___ > >> > Gluster-users mailing list > >> > Gluster-users@gluster.org > >> > http://www.gluster.org/mailman/listinfo/gluster-users > >> ___ > >> Gluster-users mailing list > >> Gluster-users@gluster.org > >> http://www.gluster.org/mailman/listinfo/gluster-users > > > > > > > > > > -- > > Pranith > -- Pranith ___ Gluster-users mailing list Gluster-users@gluster.org http://www.gluster.org/mailman/listinfo/gluster-users
Re: [Gluster-users] GlusterFS-3.7.14 released
I have two of my storage servers free, I think I can use them for testing. Is two server testing environment ok for you? On Wed, Aug 3, 2016 at 9:44 PM, Pranith Kumar Karampuri wrote: > > > On Wed, Aug 3, 2016 at 6:01 PM, Serkan Çoban wrote: >> >> Hi, >> >> May I ask if multi-threaded self heal for distributed disperse volumes >> implemented in this release? > > > Serkan, > At the moment I am a bit busy with different work, Is it possible > for you to help test the feature if I provide a patch? Actually the patch > should be small. Testing is where lot of time will be spent on. > >> >> >> Thanks, >> Serkan >> >> On Tue, Aug 2, 2016 at 5:30 PM, David Gossage >> wrote: >> > On Tue, Aug 2, 2016 at 6:01 AM, Lindsay Mathieson >> > wrote: >> >> >> >> On 2/08/2016 5:07 PM, Kaushal M wrote: >> >>> >> >>> GlusterFS-3.7.14 has been released. This is a regular minor release. >> >>> The release-notes are available at >> >>> >> >>> >> >>> https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.14.md >> >> >> >> >> >> Thanks Kaushal, I'll check it out >> >> >> > >> > So far on my test box its working as expected. At least the issues that >> > prevented it from running as before have disappeared. Will need to see >> > how >> > my test VM behaves after a few days. >> > >> > >> > >> >> -- >> >> Lindsay Mathieson >> >> >> >> ___ >> >> Gluster-users mailing list >> >> Gluster-users@gluster.org >> >> http://www.gluster.org/mailman/listinfo/gluster-users >> > >> > >> > >> > ___ >> > Gluster-users mailing list >> > Gluster-users@gluster.org >> > http://www.gluster.org/mailman/listinfo/gluster-users >> ___ >> Gluster-users mailing list >> Gluster-users@gluster.org >> http://www.gluster.org/mailman/listinfo/gluster-users > > > > > -- > Pranith ___ Gluster-users mailing list Gluster-users@gluster.org http://www.gluster.org/mailman/listinfo/gluster-users
Re: [Gluster-users] GlusterFS-3.7.14 released
On Wed, Aug 3, 2016 at 6:01 PM, Serkan Çoban wrote: > Hi, > > May I ask if multi-threaded self heal for distributed disperse volumes > implemented in this release? > Serkan, At the moment I am a bit busy with different work, Is it possible for you to help test the feature if I provide a patch? Actually the patch should be small. Testing is where lot of time will be spent on. > > Thanks, > Serkan > > On Tue, Aug 2, 2016 at 5:30 PM, David Gossage > wrote: > > On Tue, Aug 2, 2016 at 6:01 AM, Lindsay Mathieson > > wrote: > >> > >> On 2/08/2016 5:07 PM, Kaushal M wrote: > >>> > >>> GlusterFS-3.7.14 has been released. This is a regular minor release. > >>> The release-notes are available at > >>> > >>> > https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.14.md > >> > >> > >> Thanks Kaushal, I'll check it out > >> > > > > So far on my test box its working as expected. At least the issues that > > prevented it from running as before have disappeared. Will need to see > how > > my test VM behaves after a few days. > > > > > > > >> -- > >> Lindsay Mathieson > >> > >> ___ > >> Gluster-users mailing list > >> Gluster-users@gluster.org > >> http://www.gluster.org/mailman/listinfo/gluster-users > > > > > > > > ___ > > Gluster-users mailing list > > Gluster-users@gluster.org > > http://www.gluster.org/mailman/listinfo/gluster-users > ___ > Gluster-users mailing list > Gluster-users@gluster.org > http://www.gluster.org/mailman/listinfo/gluster-users > -- Pranith ___ Gluster-users mailing list Gluster-users@gluster.org http://www.gluster.org/mailman/listinfo/gluster-users
Re: [Gluster-users] Gluster 3.7.13 NFS Crash
2016-08-03 17:02 GMT+02:00 Mahdi Adnan : > the problem is, the current setup is used in a production environment, and > switching the mount point of +50 VMs from native nfs to nfs-ganesha is not > going to be smooth and without downtime, so i really appreciate your > thoughts on this matter. A little bit OT: 50+ VMs? Could you please share your hardware and network infrastructure? We are thinking about a gluster cluster for hosting about 80-100 VMs and we are looking for some production clusters to use as reference. ___ Gluster-users mailing list Gluster-users@gluster.org http://www.gluster.org/mailman/listinfo/gluster-users
Re: [Gluster-users] Gluster replica over WAN...
Hi, Can somebody help? Thanks. 2016-08-02 9:14 GMT-03:00 Gilberto Nunes : > Hello list... > This is my first post on this list. > > I have two IBM servers here, each with 9 TB of disk. > Between these servers there is a WAN connecting two offices, let's say OFFICE1 > and OFFICE2. > This WAN connection runs over fibre channel. > I set up gluster as a replica with two bricks and mount the > gluster volume in another folder, like this: > > mount -t glusterfs localhost:/VOLUME /STORAGE > > > When I go to that folder and try to access the content, I get a lot > of timeouts... Even a single ls takes a long time to return the listing. > > This folder, /STORAGE, is accessed by many users through a samba share. > > So when OFFICE1 accesses the files on the gluster server over > \\server\share, there is a long delay before the files show up. Sometimes it times > out. > > My question is: is there some way to tune gluster to work faster in > this scenario? > > Thanks a lot. > > Best regards > > -- > > Gilberto Ferreira > +55 (47) 9676-7530 > Skype: gilberto.nunes36 > > -- Gilberto Ferreira +55 (47) 9676-7530 Skype: gilberto.nunes36 ___ Gluster-users mailing list Gluster-users@gluster.org http://www.gluster.org/mailman/listinfo/gluster-users
Re: [Gluster-users] Gluster 3.7.13 NFS Crash
Hi, Unfortunately no, but i can setup a test bench and see if it gets the same results. -- Respectfully Mahdi A. Mahdi From: kdhan...@redhat.com Date: Wed, 3 Aug 2016 20:59:50 +0530 Subject: Re: [Gluster-users] Gluster 3.7.13 NFS Crash To: mahdi.ad...@outlook.com CC: gluster-users@gluster.org Do you have a test case that consistently recreates this problem? -Krutika On Wed, Aug 3, 2016 at 8:32 PM, Mahdi Adnan wrote: Hi, So i have updated to 3.7.14 and i still have the same issue with NFS.based on what i have provided so far from logs and dumps do you think it's an NFS issue ? should i switch to nfs-ganesha ? the problem is, the current setup is used in a production environment, and switching the mount point of +50 VMs from native nfs to nfs-ganesha is not going to be smooth and without downtime, so i really appreciate your thoughts on this matter. -- Respectfully Mahdi A. Mahdi From: mahdi.ad...@outlook.com To: kdhan...@redhat.com Date: Tue, 2 Aug 2016 08:44:16 +0300 CC: gluster-users@gluster.org Subject: Re: [Gluster-users] Gluster 3.7.13 NFS Crash Hi, The NFS just crashed again, latest bt; (gdb) bt#0 0x7f0b71a9f210 in pthread_spin_lock () from /lib64/libpthread.so.0#1 0x7f0b72c6fcd5 in fd_anonymous (inode=0x0) at fd.c:804#2 0x7f0b64ca5787 in shard_common_inode_write_do (frame=0x7f0b707c062c, this=0x7f0b6002ac10) at shard.c:3716#3 0x7f0b64ca5a53 in shard_common_inode_write_post_lookup_shards_handler (frame=, this=) at shard.c:3769#4 0x7f0b64c9eff5 in shard_common_lookup_shards_cbk (frame=0x7f0b707c062c, cookie=, this=0x7f0b6002ac10, op_ret=0, op_errno=, inode=, buf=0x7f0b51407640, xdata=0x7f0b72f57648, postparent=0x7f0b514076b0) at shard.c:1601#5 0x7f0b64efe141 in dht_lookup_cbk (frame=0x7f0b7075fcdc, cookie=, this=, op_ret=0, op_errno=0, inode=0x7f0b5f1d1f58, stbuf=0x7f0b51407640, xattr=0x7f0b72f57648, postparent=0x7f0b514076b0) at dht-common.c:2174#6 0x7f0b651871f3 in afr_lookup_done (frame=frame@entry=0x7f0b7079a4c8, this=this@entry=0x7f0b60023ba0) at afr-common.c:1825#7 0x7f0b65187b84 in afr_lookup_metadata_heal_check (frame=frame@entry=0x7f0b7079a4c8, this=0x7f0b60023ba0, this@entry=0xca0bd88259f5a800)at afr-common.c:2068#8 0x7f0b6518834f in afr_lookup_entry_heal (frame=frame@entry=0x7f0b7079a4c8, this=0xca0bd88259f5a800, this@entry=0x7f0b60023ba0) at afr-common.c:2157#9 0x7f0b6518867d in afr_lookup_cbk (frame=0x7f0b7079a4c8, cookie=, this=0x7f0b60023ba0, op_ret=, op_errno=, inode=, buf=0x7f0b564e9940, xdata=0x7f0b72f708c8, postparent=0x7f0b564e99b0) at afr-common.c:2205#10 0x7f0b653d6e42 in client3_3_lookup_cbk (req=, iov=, count=, myframe=0x7f0b7076354c)at client-rpc-fops.c:2981#11 0x7f0b72a00a30 in rpc_clnt_handle_reply (clnt=clnt@entry=0x7f0b603393c0, pollin=pollin@entry=0x7f0b50c1c2d0) at rpc-clnt.c:764#12 0x7f0b72a00cef in rpc_clnt_notify (trans=, mydata=0x7f0b603393f0, event=, data=0x7f0b50c1c2d0) at rpc-clnt.c:925#13 0x7f0b729fc7c3 in rpc_transport_notify (this=this@entry=0x7f0b60349040, event=event@entry=RPC_TRANSPORT_MSG_RECEIVED, data=data@entry=0x7f0b50c1c2d0) at rpc-transport.c:546#14 0x7f0b678c39a4 in socket_event_poll_in (this=this@entry=0x7f0b60349040) at socket.c:2353#15 0x7f0b678c65e4 in socket_event_handler (fd=fd@entry=29, idx=idx@entry=17, data=0x7f0b60349040, poll_in=1, poll_out=0, poll_err=0) at socket.c:2466#16 0x7f0b72ca0f7a in event_dispatch_epoll_handler (event=0x7f0b564e9e80, event_pool=0x7f0b7349bf20) at event-epoll.c:575#17 event_dispatch_epoll_worker (data=0x7f0b60152d40) at event-epoll.c:678#18 0x7f0b71a9adc5 in start_thread () from 
/lib64/libpthread.so.0#19 0x7f0b713dfced in clone () from /lib64/libc.so.6 -- Respectfully Mahdi A. Mahdi From: mahdi.ad...@outlook.com To: kdhan...@redhat.com Date: Mon, 1 Aug 2016 16:31:50 +0300 CC: gluster-users@gluster.org Subject: Re: [Gluster-users] Gluster 3.7.13 NFS Crash Many thanks, here's the results; (gdb) p cur_block$15 = 4088(gdb) p last_block$16 = 4088(gdb) p local->first_block$17 = 4087(gdb) p odirect$18 = _gf_false(gdb) p fd->flags$19 = 2(gdb) p local->call_count$20 = 2 If you need more core dumps, i have several files i can upload. -- Respectfully Mahdi A. Mahdi From: kdhan...@redhat.com Date: Mon, 1 Aug 2016 18:39:27 +0530 Subject: Re: [Gluster-users] Gluster 3.7.13 NFS Crash To: mahdi.ad...@outlook.com CC: gluster-users@gluster.org Sorry I didn't make myself clear. The reason I asked YOU to do it is because i tried it on my system and im not getting the backtrace (it's all question marks). Attach the core to gdb. At the gdb prompt, go to frame 2 by typing (gdb) f 2 There, for each of the variables i asked you to get the values of, type p followed by the variable name. For instance, to get the value of the variable 'odirect', do this: (gdb) p odirect and gdb will print its value for you in respo
Re: [Gluster-users] Gluster 3.7.13 NFS Crash
Do you have a test case that consistently recreates this problem? -Krutika On Wed, Aug 3, 2016 at 8:32 PM, Mahdi Adnan wrote: > Hi, > > So i have updated to 3.7.14 and i still have the same issue with NFS. > based on what i have provided so far from logs and dumps do you think it's > an NFS issue ? should i switch to nfs-ganesha ? > the problem is, the current setup is used in a production environment, and > switching the mount point of +50 VMs from native nfs to nfs-ganesha is not > going to be smooth and without downtime, so i really appreciate your > thoughts on this matter. > > -- > > Respectfully > *Mahdi A. Mahdi* > > > > -- > From: mahdi.ad...@outlook.com > To: kdhan...@redhat.com > Date: Tue, 2 Aug 2016 08:44:16 +0300 > > CC: gluster-users@gluster.org > Subject: Re: [Gluster-users] Gluster 3.7.13 NFS Crash > > Hi, > > The NFS just crashed again, latest bt; > > (gdb) bt > #0 0x7f0b71a9f210 in pthread_spin_lock () from /lib64/libpthread.so.0 > #1 0x7f0b72c6fcd5 in fd_anonymous (inode=0x0) at fd.c:804 > #2 0x7f0b64ca5787 in shard_common_inode_write_do > (frame=0x7f0b707c062c, this=0x7f0b6002ac10) at shard.c:3716 > #3 0x7f0b64ca5a53 in > shard_common_inode_write_post_lookup_shards_handler (frame=, > this=) at shard.c:3769 > #4 0x7f0b64c9eff5 in shard_common_lookup_shards_cbk > (frame=0x7f0b707c062c, cookie=, this=0x7f0b6002ac10, > op_ret=0, > op_errno=, inode=, buf=0x7f0b51407640, > xdata=0x7f0b72f57648, postparent=0x7f0b514076b0) at shard.c:1601 > #5 0x7f0b64efe141 in dht_lookup_cbk (frame=0x7f0b7075fcdc, > cookie=, this=, op_ret=0, op_errno=0, > inode=0x7f0b5f1d1f58, > stbuf=0x7f0b51407640, xattr=0x7f0b72f57648, postparent=0x7f0b514076b0) > at dht-common.c:2174 > #6 0x7f0b651871f3 in afr_lookup_done (frame=frame@entry=0x7f0b7079a4c8, > this=this@entry=0x7f0b60023ba0) at afr-common.c:1825 > #7 0x7f0b65187b84 in afr_lookup_metadata_heal_check > (frame=frame@entry=0x7f0b7079a4c8, > this=0x7f0b60023ba0, this@entry=0xca0bd88259f5a800) > at afr-common.c:2068 > #8 0x7f0b6518834f in afr_lookup_entry_heal > (frame=frame@entry=0x7f0b7079a4c8, > this=0xca0bd88259f5a800, this@entry=0x7f0b60023ba0) at afr-common.c:2157 > #9 0x7f0b6518867d in afr_lookup_cbk (frame=0x7f0b7079a4c8, > cookie=, this=0x7f0b60023ba0, op_ret=, > op_errno=, inode=, buf=0x7f0b564e9940, > xdata=0x7f0b72f708c8, postparent=0x7f0b564e99b0) at afr-common.c:2205 > #10 0x7f0b653d6e42 in client3_3_lookup_cbk (req=, > iov=, count=, myframe=0x7f0b7076354c) > at client-rpc-fops.c:2981 > #11 0x7f0b72a00a30 in rpc_clnt_handle_reply > (clnt=clnt@entry=0x7f0b603393c0, > pollin=pollin@entry=0x7f0b50c1c2d0) at rpc-clnt.c:764 > #12 0x7f0b72a00cef in rpc_clnt_notify (trans=, > mydata=0x7f0b603393f0, event=, data=0x7f0b50c1c2d0) at > rpc-clnt.c:925 > #13 0x7f0b729fc7c3 in rpc_transport_notify > (this=this@entry=0x7f0b60349040, > event=event@entry=RPC_TRANSPORT_MSG_RECEIVED, data=data@entry > =0x7f0b50c1c2d0) > at rpc-transport.c:546 > #14 0x7f0b678c39a4 in socket_event_poll_in > (this=this@entry=0x7f0b60349040) > at socket.c:2353 > #15 0x7f0b678c65e4 in socket_event_handler (fd=fd@entry=29, > idx=idx@entry=17, data=0x7f0b60349040, poll_in=1, poll_out=0, poll_err=0) > at socket.c:2466 > #16 0x7f0b72ca0f7a in event_dispatch_epoll_handler > (event=0x7f0b564e9e80, event_pool=0x7f0b7349bf20) at event-epoll.c:575 > #17 event_dispatch_epoll_worker (data=0x7f0b60152d40) at event-epoll.c:678 > #18 0x7f0b71a9adc5 in start_thread () from /lib64/libpthread.so.0 > #19 0x7f0b713dfced in clone () from /lib64/libc.so.6 > > > -- > > Respectfully > *Mahdi A. 
Mahdi* > > -- > From: mahdi.ad...@outlook.com > To: kdhan...@redhat.com > Date: Mon, 1 Aug 2016 16:31:50 +0300 > CC: gluster-users@gluster.org > Subject: Re: [Gluster-users] Gluster 3.7.13 NFS Crash > > Many thanks, > > here's the results; > > > (gdb) p cur_block > $15 = 4088 > (gdb) p last_block > $16 = 4088 > (gdb) p local->first_block > $17 = 4087 > (gdb) p odirect > $18 = _gf_false > (gdb) p fd->flags > $19 = 2 > (gdb) p local->call_count > $20 = 2 > > > If you need more core dumps, i have several files i can upload. > > -- > > Respectfully > *Mahdi A. Mahdi* > > > > -- > From: kdhan...@redhat.com > Date: Mon, 1 Aug 2016 18:39:27 +0530 > Subject: Re: [Gluster-users] Gluster 3.7.13 NFS Crash > To: mahdi.ad...@outlook.com > CC: gluster-users@gluster.org > > Sorry I didn't make myself clear. The reason I asked YOU to do it is > because i tried it on my system and im not getting the backtrace (it's all > question marks). > > Attach the core to gdb. > At the gdb prompt, go to frame 2 by typing > (gdb) f 2 > > There, for each of the variables i asked you to get the values of, type p > followed by the variable name. > For instance, to get the value of the variable 'odirect', do this: > > (gdb) p odirect > > and gdb will pri
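Putting the steps from this thread in one place, a minimal gdb session for pulling those values out of a core would look something like this (the binary and core paths below are examples only, not the paths from the affected system):

# load the core against the gluster NFS server binary (example paths)
gdb /usr/sbin/glusterfs /path/to/core.12345

(gdb) bt                    # backtrace of the crashed thread
(gdb) f 2                   # move to frame 2, shard_common_inode_write_do
(gdb) p cur_block
(gdb) p last_block
(gdb) p local->first_block
(gdb) p odirect
(gdb) p fd->flags
(gdb) p local->call_count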
[Gluster-users] 3.7.13 two node ssd solid rock
One of my gluster 3.7.13 clusters runs on two nodes only, with three Samsung 1TB SSD Pros in RAID 5 on each. It has already been through two crashes caused by brownouts and blackouts, and it holds production VMs, about 1.3TB. It never got split-brain, and it healed quickly. Can we say 3.7.13 on two nodes with SSDs is rock solid, or were we just lucky? My other gluster cluster is on 3 nodes with 3.7.13, but one node never came up (an old ProLiant server that wants to retire). That one uses RAID 5 with a mix of SSHDs and Seagate laptop drives; it never healed about 586 entries, but there is no split-brain there either, and the VMs are intact as well, working fine and fast. Ah, and I never turned on caching on the array. ESX might not come up right away: you need to go into setup first to make it work, restart, then go into the array setup (HP array, F8) and turn off caching, and then ESX finally boots up. ___ Gluster-users mailing list Gluster-users@gluster.org http://www.gluster.org/mailman/listinfo/gluster-users
Re: [Gluster-users] Gluster 3.7.13 NFS Crash
Hi, So i have updated to 3.7.14 and i still have the same issue with NFS.based on what i have provided so far from logs and dumps do you think it's an NFS issue ? should i switch to nfs-ganesha ? the problem is, the current setup is used in a production environment, and switching the mount point of +50 VMs from native nfs to nfs-ganesha is not going to be smooth and without downtime, so i really appreciate your thoughts on this matter. -- Respectfully Mahdi A. Mahdi From: mahdi.ad...@outlook.com To: kdhan...@redhat.com Date: Tue, 2 Aug 2016 08:44:16 +0300 CC: gluster-users@gluster.org Subject: Re: [Gluster-users] Gluster 3.7.13 NFS Crash Hi, The NFS just crashed again, latest bt; (gdb) bt#0 0x7f0b71a9f210 in pthread_spin_lock () from /lib64/libpthread.so.0#1 0x7f0b72c6fcd5 in fd_anonymous (inode=0x0) at fd.c:804#2 0x7f0b64ca5787 in shard_common_inode_write_do (frame=0x7f0b707c062c, this=0x7f0b6002ac10) at shard.c:3716#3 0x7f0b64ca5a53 in shard_common_inode_write_post_lookup_shards_handler (frame=, this=) at shard.c:3769#4 0x7f0b64c9eff5 in shard_common_lookup_shards_cbk (frame=0x7f0b707c062c, cookie=, this=0x7f0b6002ac10, op_ret=0, op_errno=, inode=, buf=0x7f0b51407640, xdata=0x7f0b72f57648, postparent=0x7f0b514076b0) at shard.c:1601#5 0x7f0b64efe141 in dht_lookup_cbk (frame=0x7f0b7075fcdc, cookie=, this=, op_ret=0, op_errno=0, inode=0x7f0b5f1d1f58, stbuf=0x7f0b51407640, xattr=0x7f0b72f57648, postparent=0x7f0b514076b0) at dht-common.c:2174#6 0x7f0b651871f3 in afr_lookup_done (frame=frame@entry=0x7f0b7079a4c8, this=this@entry=0x7f0b60023ba0) at afr-common.c:1825#7 0x7f0b65187b84 in afr_lookup_metadata_heal_check (frame=frame@entry=0x7f0b7079a4c8, this=0x7f0b60023ba0, this@entry=0xca0bd88259f5a800)at afr-common.c:2068#8 0x7f0b6518834f in afr_lookup_entry_heal (frame=frame@entry=0x7f0b7079a4c8, this=0xca0bd88259f5a800, this@entry=0x7f0b60023ba0) at afr-common.c:2157#9 0x7f0b6518867d in afr_lookup_cbk (frame=0x7f0b7079a4c8, cookie=, this=0x7f0b60023ba0, op_ret=, op_errno=, inode=, buf=0x7f0b564e9940, xdata=0x7f0b72f708c8, postparent=0x7f0b564e99b0) at afr-common.c:2205#10 0x7f0b653d6e42 in client3_3_lookup_cbk (req=, iov=, count=, myframe=0x7f0b7076354c)at client-rpc-fops.c:2981#11 0x7f0b72a00a30 in rpc_clnt_handle_reply (clnt=clnt@entry=0x7f0b603393c0, pollin=pollin@entry=0x7f0b50c1c2d0) at rpc-clnt.c:764#12 0x7f0b72a00cef in rpc_clnt_notify (trans=, mydata=0x7f0b603393f0, event=, data=0x7f0b50c1c2d0) at rpc-clnt.c:925#13 0x7f0b729fc7c3 in rpc_transport_notify (this=this@entry=0x7f0b60349040, event=event@entry=RPC_TRANSPORT_MSG_RECEIVED, data=data@entry=0x7f0b50c1c2d0) at rpc-transport.c:546#14 0x7f0b678c39a4 in socket_event_poll_in (this=this@entry=0x7f0b60349040) at socket.c:2353#15 0x7f0b678c65e4 in socket_event_handler (fd=fd@entry=29, idx=idx@entry=17, data=0x7f0b60349040, poll_in=1, poll_out=0, poll_err=0) at socket.c:2466#16 0x7f0b72ca0f7a in event_dispatch_epoll_handler (event=0x7f0b564e9e80, event_pool=0x7f0b7349bf20) at event-epoll.c:575#17 event_dispatch_epoll_worker (data=0x7f0b60152d40) at event-epoll.c:678#18 0x7f0b71a9adc5 in start_thread () from /lib64/libpthread.so.0#19 0x7f0b713dfced in clone () from /lib64/libc.so.6 -- Respectfully Mahdi A. 
Mahdi From: mahdi.ad...@outlook.com To: kdhan...@redhat.com Date: Mon, 1 Aug 2016 16:31:50 +0300 CC: gluster-users@gluster.org Subject: Re: [Gluster-users] Gluster 3.7.13 NFS Crash Many thanks, here's the results; (gdb) p cur_block$15 = 4088(gdb) p last_block$16 = 4088(gdb) p local->first_block$17 = 4087(gdb) p odirect$18 = _gf_false(gdb) p fd->flags$19 = 2(gdb) p local->call_count$20 = 2 If you need more core dumps, i have several files i can upload. -- Respectfully Mahdi A. Mahdi From: kdhan...@redhat.com Date: Mon, 1 Aug 2016 18:39:27 +0530 Subject: Re: [Gluster-users] Gluster 3.7.13 NFS Crash To: mahdi.ad...@outlook.com CC: gluster-users@gluster.org Sorry I didn't make myself clear. The reason I asked YOU to do it is because i tried it on my system and im not getting the backtrace (it's all question marks). Attach the core to gdb. At the gdb prompt, go to frame 2 by typing (gdb) f 2 There, for each of the variables i asked you to get the values of, type p followed by the variable name. For instance, to get the value of the variable 'odirect', do this: (gdb) p odirect and gdb will print its value for you in response. -Krutika On Mon, Aug 1, 2016 at 4:55 PM, Mahdi Adnan wrote: Hi, How to get the results of the below variables ? i cant get the results from gdb. -- Respectfully Mahdi A. Mahdi From: kdhan...@redhat.com Date: Mon, 1 Aug 2016 15:51:38 +0530 Subject: Re: [Gluster-users] Gluster 3.7.13 NFS Crash To: mahdi.ad...@outlook.com CC: gluster-users@gluster.org Could you also print and share the values of the following varia
Re: [Gluster-users] gluster 3.8.1 issue in compiling from source tarball
On 08/03/2016 10:42 AM, Yannick Perret wrote: > Le 03/08/2016 à 15:33, Amudhan P a écrit : >> Hi, >> >> I am trying to install gluster 3.8.1 from tarball in Ubuntu 14.04. >> >> 1. when i run "./configure --disable-tiering" at the end showing msg >> >> configure: WARNING: cache variable ac_cv_build contains a newline >> configure: WARNING: cache variable ac_cv_host contains a newline >> >> 2. running "make" command throws below msg and stops >> >> Makefile:90: *** missing separator. Stop. >> > Got the same problem when trying to compile it on Debian 8.2. > try ./autogen.sh && ./configure --disable-tiering (works for me on my debian 8 box) -- Kaleb ___ Gluster-users mailing list Gluster-users@gluster.org http://www.gluster.org/mailman/listinfo/gluster-users
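In case it helps anyone else hitting this, the full build from the release tarball with that extra step would look roughly like the following (a sketch only; the tarball name and the install step are assumptions, adjust for your distribution):

tar xzf glusterfs-3.8.1.tar.gz
cd glusterfs-3.8.1
./autogen.sh                     # regenerate the build system before running configure
./configure --disable-tiering
make
sudo make install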
Re: [Gluster-users] Gluster not saturating 10gb network
Your 10G NIC is capable; the problem is the disk speed. Fix your disk speed first: use SSD, SSHD, or 15k SAS drives, in RAID 0 or a RAID 5/6 of at least 4 drives.

On Wednesday, August 3, 2016 2:40 AM, Kaamesh Kamalaaharan wrote:

Hi, I have gluster 3.6.2 installed on my server network. Due to internal issues we are not allowed to upgrade the gluster version. All the clients are on the same version of gluster. When transferring files to/from the clients or between my nodes over the 10gb network, the transfer rate is capped at 450Mb/s. Is there any way to increase the transfer speeds for gluster mounts?

Our server setup is as follows:
2 gluster servers - gfs1 and gfs2
volume name: gfsvolume
3 clients - hpc1, hpc2, hpc3
gluster volume mounted on /export/gfsmount/

The following are the average results of what I have done so far:
1) test bandwidth with iperf between all machines - 9.4 GiB/s
2) test write speed with dd: dd if=/dev/zero of=/export/gfsmount/testfile bs=1G count=1 - result=399Mb/s
3) test read speed with dd: dd if=/export/gfsmount/testfile of=/dev/zero bs=1G count=1 - result=284MB/s

My gluster volume configuration:
Volume Name: gfsvolume
Type: Replicate
Volume ID: a29bd2fb-b1ef-4481-be10-c2f4faf4059b
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gfs1:/export/sda/brick
Brick2: gfs2:/export/sda/brick
Options Reconfigured:
performance.quick-read: off
network.ping-timeout: 30
network.frame-timeout: 90
performance.cache-max-file-size: 2MB
cluster.server-quorum-type: none
nfs.addr-namelookup: off
nfs.trusted-write: off
performance.write-behind-window-size: 4MB
cluster.data-self-heal-algorithm: diff
performance.cache-refresh-timeout: 60
performance.cache-size: 1GB
cluster.quorum-type: fixed
auth.allow: 172.*
cluster.quorum-count: 1
diagnostics.latency-measurement: on
diagnostics.count-fop-hits: on
cluster.server-quorum-ratio: 50%

Any help would be appreciated. Thanks, Kaamesh ___ Gluster-users mailing list Gluster-users@gluster.org http://www.gluster.org/mailman/listinfo/gluster-users ___ Gluster-users mailing list Gluster-users@gluster.org http://www.gluster.org/mailman/listinfo/gluster-users
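Since the reply above points at disk speed, a quick way to see where the bottleneck sits is to benchmark the brick disk directly and then repeat the dd tests with caching taken out of the picture. A rough sketch, assuming the paths from the volume info above (run the disk test on gfs1/gfs2 themselves, on the brick disk but outside the brick directory):

# raw write speed of the disk that holds the brick (on the servers, not through gluster)
dd if=/dev/zero of=/export/sda/ddtest bs=1M count=4096 conv=fdatasync

# write through the gluster mount, forcing data to disk before dd reports a rate
dd if=/dev/zero of=/export/gfsmount/testfile bs=1M count=4096 conv=fdatasync

# read back after dropping caches, so the data really travels over the network
echo 3 > /proc/sys/vm/drop_caches
dd if=/export/gfsmount/testfile of=/dev/null bs=1M

If the raw disk number is close to the number through the mount, the disks are the limit; if the raw disk is much faster, the network and volume options are worth a closer look.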
Re: [Gluster-users] gluster 3.8.1 issue in compiling from source tarball
On 03/08/2016 at 15:33, Amudhan P wrote: Hi, I am trying to install gluster 3.8.1 from tarball in Ubuntu 14.04. 1. when i run "./configure --disable-tiering" at the end showing msg configure: WARNING: cache variable ac_cv_build contains a newline configure: WARNING: cache variable ac_cv_host contains a newline 2. running "make" command throws below msg and stops Makefile:90: *** missing separator. Stop. Got the same problem when trying to compile it on Debian 8.2. -- Y. any help on this. Thanks, Amudhan ___ Gluster-users mailing list Gluster-users@gluster.org http://www.gluster.org/mailman/listinfo/gluster-users ___ Gluster-users mailing list Gluster-users@gluster.org http://www.gluster.org/mailman/listinfo/gluster-users
Re: [Gluster-users] [Gluster-devel] 3.7.13 & proxmox/qemu
On Wed, Aug 3, 2016 at 7:57 AM, Lindsay Mathieson < lindsay.mathie...@gmail.com> wrote: > On 3/08/2016 10:45 PM, Lindsay Mathieson wrote: > > On 3/08/2016 2:26 PM, Krutika Dhananjay wrote: > > Once I deleted old content from test volume it mounted to oVirt via > storage add when previously it would error out. I am now creating a test > VM with default disk caching settings (pretty sure oVirt is defaulting to > none rather than writeback/through). So far all shards are being created > properly. > > > I can confirm that it works with ProxMox VM's in direct (no cache mode) as > well. > > > Also Gluster 3.8.1 is good to > ugh almost done updating to 3.7.14 and already feeling the urge to start testing and updating to 3.8 branch. > > -- > Lindsay Mathieson > > ___ Gluster-users mailing list Gluster-users@gluster.org http://www.gluster.org/mailman/listinfo/gluster-users
[Gluster-users] gluster 3.8.1 issue in compiling from source tarball
Hi, I am trying to install gluster 3.8.1 from the tarball on Ubuntu 14.04. 1. When I run "./configure --disable-tiering", at the end it shows: configure: WARNING: cache variable ac_cv_build contains a newline configure: WARNING: cache variable ac_cv_host contains a newline 2. Running the "make" command throws the message below and stops: Makefile:90: *** missing separator. Stop. Any help on this? Thanks, Amudhan ___ Gluster-users mailing list Gluster-users@gluster.org http://www.gluster.org/mailman/listinfo/gluster-users
[Gluster-users] Meeting Update
We had a pretty well attended and lively meeting today, thanks to everyone who attended. The meeting minutes and logs are available at the links below. Minutes: https://meetbot.fedoraproject.org/gluster-meeting/2016-08-03/weekly_community_meeting_03-aug-2016.2016-08-03-12.01.html Minutes (text): https://meetbot.fedoraproject.org/gluster-meeting/2016-08-03/weekly_community_meeting_03-aug-2016.2016-08-03-12.01.txt Log: https://meetbot.fedoraproject.org/gluster-meeting/2016-08-03/weekly_community_meeting_03-aug-2016.2016-08-03-12.01.log.html Next weeks meeting will be held at the same time. Thanks all. ~Ankit Raj ___ Gluster-users mailing list Gluster-users@gluster.org http://www.gluster.org/mailman/listinfo/gluster-users
[Gluster-users] gluster reverting directory owndership?
Hi, It seems that glusterfsd reverts ownership on directories on the brick. I set a directory to be owned by root:root, and within half an hour it is back to the previous value. Audit logs show that glusterfsd performed the change, though it's not clear if something "asked" glusterfsd to do it:

type=PATH msg=audit(08/03/2016 00:08:02.401:93734) : item=0 name=/data/ftp_gluster_brick/admin inode=7301352 dev=fd:02 mode=dir,755 ouid=root ogid=root rdev=00:00 objtype=NORMAL
type=CWD msg=audit(08/03/2016 00:08:02.401:93734) : cwd=/
type=SYSCALL msg=audit(08/03/2016 00:08:02.401:93734) : arch=x86_64 syscall=lchown success=yes exit=0 a0=0x7fd608314730 a1=0x a2=0x30 a3=0x0 items=1 ppid=1 pid=8774 auid=unset uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=unset comm=glusterfsd exe=/usr/sbin/glusterfsd key=monitor-brick

Is this a known issue? Can I do something about it? Thanks, Sergei ___ Gluster-users mailing list Gluster-users@gluster.org http://www.gluster.org/mailman/listinfo/gluster-users
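For context, records tagged with key=monitor-brick like the ones above are what an audit watch rule on the brick directory produces. A minimal sketch of such a rule, and of how to review its hits (the exact rule used on this system is an assumption here), would be:

# watch the directory for attribute changes (chown/chmod) and tag matches as monitor-brick
auditctl -w /data/ftp_gluster_brick/admin -p wa -k monitor-brick

# list matching records with uids and syscalls translated into readable form
ausearch -k monitor-brick -i

The comm=glusterfsd / exe=/usr/sbin/glusterfsd fields show the brick process issued the lchown, but as noted above that alone does not tell whether a client request or gluster's own internal handling triggered it.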
Re: [Gluster-users] [Gluster-devel] 3.7.13 & proxmox/qemu
On 3/08/2016 10:45 PM, Lindsay Mathieson wrote: On 3/08/2016 2:26 PM, Krutika Dhananjay wrote: Once I deleted old content from test volume it mounted to oVirt via storage add when previously it would error out. I am now creating a test VM with default disk caching settings (pretty sure oVirt is defaulting to none rather than writeback/through). So far all shards are being created properly. I can confirm that it works with ProxMox VM's in direct (no cache mode) as well. Also Gluster 3.8.1 is good to -- Lindsay Mathieson ___ Gluster-users mailing list Gluster-users@gluster.org http://www.gluster.org/mailman/listinfo/gluster-users
Re: [Gluster-users] [Gluster-devel] 3.7.13 & proxmox/qemu
On 3/08/2016 2:26 PM, Krutika Dhananjay wrote: Once I deleted old content from test volume it mounted to oVirt via storage add when previously it would error out. I am now creating a test VM with default disk caching settings (pretty sure oVirt is defaulting to none rather than writeback/through). So far all shards are being created properly. I can confirm that it works with ProxMox VM's in direct (no cache mode) as well. Load is sky rocketing but I have all 3 gluster bricks running off 1 hard drive on test box so I would expect horrible io/load issues with that. Ha! Same config for my test Host :) -- Lindsay Mathieson ___ Gluster-users mailing list Gluster-users@gluster.org http://www.gluster.org/mailman/listinfo/gluster-users
Re: [Gluster-users] GlusterFS-3.7.14 released
On Wed, Aug 3, 2016 at 6:01 PM, Serkan Çoban wrote: > Hi, > > May I ask if multi-threaded self heal for distributed disperse volumes > implemented in this release? AFAIK, not yet. It's not yet available on the master branch yet. Pranith can give a better answer. > > Thanks, > Serkan > > On Tue, Aug 2, 2016 at 5:30 PM, David Gossage > wrote: >> On Tue, Aug 2, 2016 at 6:01 AM, Lindsay Mathieson >> wrote: >>> >>> On 2/08/2016 5:07 PM, Kaushal M wrote: GlusterFS-3.7.14 has been released. This is a regular minor release. The release-notes are available at https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.14.md >>> >>> >>> Thanks Kaushal, I'll check it out >>> >> >> So far on my test box its working as expected. At least the issues that >> prevented it from running as before have disappeared. Will need to see how >> my test VM behaves after a few days. >> >> >> >>> -- >>> Lindsay Mathieson >>> >>> ___ >>> Gluster-users mailing list >>> Gluster-users@gluster.org >>> http://www.gluster.org/mailman/listinfo/gluster-users >> >> >> >> ___ >> Gluster-users mailing list >> Gluster-users@gluster.org >> http://www.gluster.org/mailman/listinfo/gluster-users > ___ > Gluster-users mailing list > Gluster-users@gluster.org > http://www.gluster.org/mailman/listinfo/gluster-users ___ Gluster-users mailing list Gluster-users@gluster.org http://www.gluster.org/mailman/listinfo/gluster-users
Re: [Gluster-users] GlusterFS-3.7.14 released
Hi, May I ask if multi-threaded self heal for distributed disperse volumes implemented in this release? Thanks, Serkan On Tue, Aug 2, 2016 at 5:30 PM, David Gossage wrote: > On Tue, Aug 2, 2016 at 6:01 AM, Lindsay Mathieson > wrote: >> >> On 2/08/2016 5:07 PM, Kaushal M wrote: >>> >>> GlusterFS-3.7.14 has been released. This is a regular minor release. >>> The release-notes are available at >>> >>> https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.14.md >> >> >> Thanks Kaushal, I'll check it out >> > > So far on my test box its working as expected. At least the issues that > prevented it from running as before have disappeared. Will need to see how > my test VM behaves after a few days. > > > >> -- >> Lindsay Mathieson >> >> ___ >> Gluster-users mailing list >> Gluster-users@gluster.org >> http://www.gluster.org/mailman/listinfo/gluster-users > > > > ___ > Gluster-users mailing list > Gluster-users@gluster.org > http://www.gluster.org/mailman/listinfo/gluster-users ___ Gluster-users mailing list Gluster-users@gluster.org http://www.gluster.org/mailman/listinfo/gluster-users
[Gluster-users] Weekly Gluster community meeeting
Hi all, The weekly Gluster community meeting is about to take place in 30 min Meeting details: - location: #gluster-meeting on Freenode IRC ( https://webchat.freenode.net/?channels=gluster-meeting ) - date: every Wednesday - time: 12:00 UTC (in your terminal, run: date -d "12:00 UTC") Currently the following items are listed: * Roll Call * Meeting starts Topic details ( https://public.pad.fsfe.org/p/gluster-community-meetings ) Appreciate your participation Regards, Ankit Raj ___ Gluster-users mailing list Gluster-users@gluster.org http://www.gluster.org/mailman/listinfo/gluster-users