Re: [Gluster-users] [Gluster-devel] Regression caused to gfapi applications with enabling client-io-threads by default

2016-10-05 Thread Surabhi Bhalothia
On 10/06/2016 11:36 AM, Soumya Koduri wrote: On 10/05/2016 07:32 PM, Pranith Kumar Karampuri wrote: On Wed, Oct 5, 2016 at 2:00 PM, Soumya Koduri <skod...@redhat.com> wrote: Hi, With http://review.gluster.org/#/c/15051/ , performance

Re: [Gluster-users] Regression caused to gfapi applications with enabling client-io-threads by default

2016-10-05 Thread Soumya Koduri
On 10/05/2016 07:32 PM, Pranith Kumar Karampuri wrote: On Wed, Oct 5, 2016 at 2:00 PM, Soumya Koduri <skod...@redhat.com> wrote: Hi, With http://review.gluster.org/#/c/15051/ , performance/client-io-threads is enabled by default. B

[Gluster-users] gluster and LIO, fairly basic setup, having major issues

2016-10-05 Thread Michael Ciccarelli
So I have a fairly basic setup using glusterfs between 2 nodes. The nodes have 10 gig connections and the bricks reside on SSD LVM LUNs: Brick1: media1-be:/gluster/brick1/gluster_volume_0 Brick2: media2-be:/gluster/brick1/gluster_volume_0 On this volume I have a LIO iscsi target with 1 fileio ba
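For reference, a replica 2 volume matching those bricks could be created along these lines (the volume name and the start step are guesses, not taken from the post):

  gluster volume create gluster_volume_0 replica 2 \
      media1-be:/gluster/brick1/gluster_volume_0 \
      media2-be:/gluster/brick1/gluster_volume_0
  gluster volume start gluster_volume_0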

Re: [Gluster-users] Production cluster planning

2016-10-05 Thread Lindsay Mathieson
On 6/10/2016 6:37 AM, Gandalf Corvotempesta wrote: > Only 8GB? Why? It's enough. I also run 10 Windows VMs per node. My servers typically run at 4-6% max ioload. They idle under 1% -- Lindsay Mathieson

Re: [Gluster-users] Remove a brick, rebuild it, put it back in

2016-10-05 Thread Joe Julian
What I always do is just shut it down, repair (or replace) the brick, then start it up again with "... start $volname force". On October 5, 2016 11:27:36 PM GMT+02:00, Sergei Gerasenko wrote: >Hi, sorry if this has been asked before but the documentation is a bit >conflicting in various sourc
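A minimal sketch of Joe's procedure in gluster CLI terms; the volume name is a placeholder, and the final heal is an optional extra he does not mention:

  # after repairing or replacing the brick filesystem:
  gluster volume start $volname force   # respawns the missing brick process
  gluster volume heal $volname full     # optionally kick off a full self-heal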

Re: [Gluster-users] Remove a brick, rebuild it, put it back in

2016-10-05 Thread Sergei Gerasenko
Is this still current: https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Managing%20Volumes/#replace-brick ? On Wed, Oct 5, 2016 at 4:32 PM, Sergei Gerasenko wrote: > Oh, I'm running 3.7.12 > > On Wed, Oct 5, 2016 at 4:27 PM, Sergei Gerasenko > wrote: > >> Hi, sorry if this has bee
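For 3.7.x, the linked guide boils down to one command of this shape (hostnames and brick paths here are placeholders):

  gluster volume replace-brick $volname \
      old-host:/gluster/brick new-host:/gluster/brick \
      commit force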

Re: [Gluster-users] 3.6 branch upgrade plan

2016-10-05 Thread Roman
Hello, Anyone please? 2016-10-03 15:42 GMT+03:00 Roman : > Hello, dear community! > It has been quite a while since I last wrote here, but that only means everything has been running just fine with our gluster storage for KVM VMs. > We are running 3.6.5 on Debian wheezy servers. As wheezy is a part of

Re: [Gluster-users] Remove a brick, rebuild it, put it back in

2016-10-05 Thread Sergei Gerasenko
Oh, I'm running 3.7.12 On Wed, Oct 5, 2016 at 4:27 PM, Sergei Gerasenko wrote: > Hi, sorry if this has been asked before but the documentation is a bit > conflicting in various sources on what to do exactly. > > I have a 6-node distributed replicated cluster with a replica factor of 2. So it

[Gluster-users] Remove a brick, rebuild it, put it back in

2016-10-05 Thread Sergei Gerasenko
Hi, sorry if this has been asked before but the documentation is a bit conflicting in various sources on what to do exactly. I have a 6-node distributed replicated cluster with a replica factor of 2. So it's 3 pairs of servers. I need to remove a server from one of those replica sets, rebuild it

Re: [Gluster-users] Production cluster planning

2016-10-05 Thread Gandalf Corvotempesta
2016-10-05 22:35 GMT+02:00 Lindsay Mathieson : > 64GB RAM in each server, 8GB reserved for ZFS. Only 8GB? Why?
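For context: on ZFS on Linux the ARC can be capped via a module parameter, so "reserving" 8GB looks roughly like this (8 GiB = 8589934592 bytes):

  # /etc/modprobe.d/zfs.conf
  options zfs zfs_arc_max=8589934592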

Re: [Gluster-users] Production cluster planning

2016-10-05 Thread Lindsay Mathieson
On 6/10/2016 6:20 AM, Gandalf Corvotempesta wrote: > L2ARC depends on your workload. For me it's not useful - VM hosting on a sharded volume, never got better than 6% cache hits. The vast majority of hits were via ARC (memory). ZFS seems to be really good at that :) > How much RAM do you hav
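Hit ratios like the 6% quoted here can be sampled with arcstat, assuming a ZFS-on-Linux install where the tool and these field names are available:

  arcstat -f time,read,hit%,l2read,l2hit% 5   # ARC/L2ARC hit rates every 5s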

Re: [Gluster-users] Production cluster planning

2016-10-05 Thread Gandalf Corvotempesta
On 05 Oct 2016 10:12 PM, "Lindsay Mathieson" wrote: > > L2ARC depends on your workload. For me it's not useful - VM hosting on a sharded volume, never got better than 6% cache hits. The vast majority of hits were via ARC (memory). ZFS seems to be really good at that :) > How much RAM do you h

Re: [Gluster-users] Production cluster planning

2016-10-05 Thread Lindsay Mathieson
On 6/10/2016 4:29 AM, Gandalf Corvotempesta wrote: > I was thinking about creating one or more raidz2 to use as bricks, with 2 SSDs. One small partition on these SSDs would be used as a mirrored SLOG and the other 2 would be used as standalone ARC cache. Will this be worth the use of SSDs, or would

Re: [Gluster-users] Production cluster planning

2016-10-05 Thread Joe Julian
On September 30, 2016 1:46:31 PM GMT+02:00, Gandalf Corvotempesta wrote: > As a suggestion for gluster developers: if ZFS is considered stable it could be used as the default (replacing XFS), and many features that ZFS already has could be removed from gluster (like bitrot), keeping gluster small

Re: [Gluster-users] Production cluster planning

2016-10-05 Thread David Gossage
On Wed, Oct 5, 2016 at 2:14 PM, Gandalf Corvotempesta <gandalf.corvotempe...@gmail.com> wrote: > 2016-10-05 20:50 GMT+02:00 David Gossage : > > The mirrored slog will be useful. Depending on what you put on the pool l2arc may not get used much. I removed mine as it got such a low hit rate

Re: [Gluster-users] Production cluster planning

2016-10-05 Thread Gandalf Corvotempesta
2016-10-05 20:50 GMT+02:00 David Gossage : > The mirrored slog will be useful. Depending on what you put on the pool l2arc may not get used much. I removed mine as it got such a low hit rate serving VMs. I'll use shards. Isn't the most accessed shard cached in L2ARC? I'll also use other pools

Re: [Gluster-users] Production cluster planning

2016-10-05 Thread David Gossage
On Wed, Oct 5, 2016 at 1:29 PM, Gandalf Corvotempesta <gandalf.corvotempe...@gmail.com> wrote: > On 30 Sep 2016 1:46 PM, "Gandalf Corvotempesta" <gandalf.corvotempe...@gmail.com> wrote: > > I was thinking about creating one or more raidz2 to use as bricks, with 2 SSDs. One small partit

Re: [Gluster-users] Production cluster planning

2016-10-05 Thread Gandalf Corvotempesta
On 30 Sep 2016 1:46 PM, "Gandalf Corvotempesta" <gandalf.corvotempe...@gmail.com> wrote: > I was thinking about creating one or more raidz2 to use as bricks, with 2 SSDs. One small partition on these SSDs would be used as a mirrored SLOG and the other 2 would be used as standalone ARC cache. W
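A sketch of the layout being proposed, with entirely hypothetical device names: one raidz2 vdev, a mirrored SLOG on small SSD partitions, and the remaining partitions as L2ARC (cache devices are always standalone):

  zpool create tank raidz2 sda sdb sdc sdd sde sdf \
      log mirror sdg1 sdh1 \
      cache sdg2 sdh2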

Re: [Gluster-users] Regression caused to gfapi applications with enabling client-io-threads by default

2016-10-05 Thread Pranith Kumar Karampuri
On Wed, Oct 5, 2016 at 2:00 PM, Soumya Koduri wrote: > Hi, > With http://review.gluster.org/#/c/15051/, performance/client-io-threads is enabled by default. But with that we see a regression caused to the nfs-ganesha application when trying to un/re-export any glusterfs volume. This shall be the same

[Gluster-users] [Gluster-devel] Reminder: Weekly Gluster Community Meeting

2016-10-05 Thread Ankit Raj
Hi all, The weekly Gluster community meeting is about to take place in ~30 minutes. Meeting details: - Location: #gluster-meeting on Freenode IRC (https://webchat.freenode.net/?channels=gluster-meeting) - Date: Every Wednesday - Time: 12:00 UTC (on your terminal, run: date -d
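With GNU date, the local equivalent of the meeting time can be printed with, for example:

  date -d '12:00 UTC'   # show 12:00 UTC in your local timezone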

Re: [Gluster-users] [ovirt-users] 4.0 - 2nd node fails on deploy

2016-10-05 Thread Joe Julian
"no route to host" is a network problem. Looks like quorum loss is appropriate. On October 5, 2016 12:31:18 PM GMT+02:00, Sahina Bose wrote: >On Wed, Oct 5, 2016 at 1:56 PM, Jason Jeffrey wrote: > >> HI, >> >> >> >> Logs attached >> > >Have you probed 2 interfaces for same host, that is - dcasr

Re: [Gluster-users] [ovirt-users] 4.0 - 2nd node fails on deploy

2016-10-05 Thread Sahina Bose
On Wed, Oct 5, 2016 at 1:56 PM, Jason Jeffrey wrote: > Hi, > Logs attached Have you probed 2 interfaces for the same host, that is - dcasrv02 and dcastor02? Does "gluster peer status" recognize both names as belonging to the same host? From glusterd logs and the mount logs - the connection between the
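One way to check this is from any node in the pool; the "Other names:" section appears only when glusterd knows extra addresses for a peer:

  gluster peer status   # each peer entry lists its hostname, UUID, state,
                        # and any additional addresses under "Other names:"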

[Gluster-users] Regression caused to gfapi applications with enabling client-io-threads by default

2016-10-05 Thread Soumya Koduri
Hi, With http://review.gluster.org/#/c/15051/, performance/client-io-threads is enabled by default. But with that we see a regression caused to the nfs-ganesha application when trying to un/re-export any glusterfs volume. This shall be the same case with any gfapi application using glfs_fini(). More det
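Until the regression is resolved, the new default can be reverted per volume; a sketch with a placeholder volume name:

  gluster volume set $volname performance.client-io-threads off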

Re: [Gluster-users] [ovirt-users] 4.0 - 2nd node fails on deploy

2016-10-05 Thread Sahina Bose
[Adding gluster-users ML] The brick logs are filled with errors: [2016-10-05 19:30:28.659061] E [MSGID: 113077] [posix-handle.c:309:posix_handle_pump] 0-engine-posix: malformed internal link /var/run/vdsm/storage/0a021563-91b5-4f49-9c6b-fff45e85a025/d84f0551-0f2b-457c-808c-6369c6708d43/1b5a5e34-81
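One way to inspect the link the brick is complaining about (the full path is truncated above, so the one below is only a placeholder):

  readlink /var/run/vdsm/storage/<sd-uuid>/<img-uuid>/<vol-uuid>   # print the raw symlink target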