Re: [Gluster-users] Advice for setup: SW RAID 6 vs JBOD

2019-06-06 Thread Hu Bert
If I remember correctly, in the video they suggested not making a RAID 10 too big (i.e. not too many (big) disks), because a RAID resync could then take a long time. They didn't mention a limit; on my 3 servers with 2 RAID 10 arrays (1x4 disks, 1x6 disks), no disk has failed so far, but there were automatic
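For illustration, resync progress and the md driver's speed throttle on Linux software RAID can be checked like this (a generic sketch, no specific array assumed):

  # cat /proc/mdstat
  # sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max

Raising dev.raid.speed_limit_max can shorten a resync, at the cost of foreground I/O.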

Re: [Gluster-users] Memory leak in glusterfs

2019-06-06 Thread Nithya Balachandran
Hi Abhishek, Please use statedumps taken at intervals to determine where the memory is increasing. See [1] for details. Regards, Nithya [1] https://docs.gluster.org/en/latest/Troubleshooting/statedump/ On Fri, 7 Jun 2019 at 08:13, ABHISHEK PALIWAL wrote: > Hi Nithya, > > We are having the
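A minimal sketch of the statedump workflow from [1], assuming a volume named myvol (the name is hypothetical); dumps land in /var/run/gluster by default, and comparing two dumps taken some minutes apart shows which allocations grow:

  # gluster volume statedump myvol
  # ls /var/run/gluster/*.dump.*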

Re: [Gluster-users] Memory leak in glusterfs

2019-06-06 Thread ABHISHEK PALIWAL
Hi Nithya, We have a setup where we copy a file to, and then delete it from, the gluster mount point in order to update to the latest file. We noticed that this causes some memory increase in the glusterfsd process. To find the memory leak we used valgrind, but it didn't help. That's why we contacted
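For reference, one way to run a gluster client under valgrind is to keep it in the foreground with -N (the server, volume name, and paths here are assumptions for illustration):

  # valgrind --leak-check=full --log-file=/tmp/glusterfs-valgrind.log \
      glusterfs --volfile-server=server1 --volfile-id=myvol -N /mnt/gluster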

[Gluster-users] healing of disperse volume

2019-06-06 Thread fusillator
: on
transport.address-family: inet
# gluster volume heal elastic-volume info
Brick dev01:/data/gfs/lv_elastic/brick1/brick
/data/logs/20190606/ns-coreiol-iol-app-listini.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-fns.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-news.2019060615.log
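For a condensed view of the same data, or to trigger a full heal, the following can be used (same volume name as above):

  # gluster volume heal elastic-volume info summary
  # gluster volume heal elastic-volume full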

Re: [Gluster-users] Advice for setup: SW RAID 6 vs JBOD

2019-06-06 Thread Jim Kinney
I have about 200TB in a gluster replicate-only 3-node setup. We stopped using hardware RAID 6 after a third drive failed in one array at the same time we were replacing the other two, before recovery could complete. 200TB is a mess to resync. So now each hard drive is a single entity. We add 1 drive
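As a sketch of the one-drive-per-brick approach, adding one new disk per node extends a replica 3 volume with a fresh replica set (hostnames and brick paths are hypothetical):

  # gluster volume add-brick myvol replica 3 \
      host1:/bricks/disk9/brick host2:/bricks/disk9/brick host3:/bricks/disk9/brick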

[Gluster-users] geo-replication session information

2019-06-06 Thread Jim Shelton
I need help cleaning up a faulty geo-replication session. I tried deleting all related directories and files, but I am currently in a state such that when I try to recreate the session via gluster> volume geo-replication icp_kube-system_nfs-pvc_69753a58-819f-11e9-b3a0-005056b694b5
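For reference, a faulty session is normally torn down before being recreated like this (master volume, user, and slave names are hypothetical); reset-sync-time makes a later re-create start syncing from scratch:

  # gluster volume geo-replication mastervol geoaccount@slavehost::slavevol stop force
  # gluster volume geo-replication mastervol geoaccount@slavehost::slavevol delete reset-sync-time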

Re: [Gluster-users] Advice for setup: SW RAID 6 vs JBOD

2019-06-06 Thread Michael Metz-Martini
Hi On 06.06.19 at 18:48, Eduardo Mayoral wrote: > Your comment actually helps me more than you think, one of the main > doubts I have is whether I go for JBOD with replica 3 or SW RAID 6 with > replica 2 + arbiter. Before reading your email I was leaning more > towards JBOD, as reconstruction of

Re: [Gluster-users] Advice for setup: SW RAID 6 vs JBOD

2019-06-06 Thread Eduardo Mayoral
Yes to the 10 GbE NICs (they are already in the servers). Nice idea with the SSDs, but I do not have a HW RAID card on these servers, nor the possibility to get / install one. What I do have is an extra SSD disk per server which I plan to use as LVM cache for the bricks (maybe just 1 disk, maybe 2
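A minimal LVM cache sketch, assuming the brick LV is vg_bricks/brick1 and the SSD is /dev/sdf (all names hypothetical):

  # pvcreate /dev/sdf
  # vgextend vg_bricks /dev/sdf
  # lvcreate --type cache-pool -l 100%FREE -n cpool vg_bricks /dev/sdf
  # lvconvert --type cache --cachepool vg_bricks/cpool vg_bricks/brick1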

Re: [Gluster-users] Advice for setup: SW RAID 6 vs JBOD

2019-06-06 Thread Vincent Royer
What if you have two fast 2TB SSDs per server in hardware RAID 1, and 3 hosts in replica 3, with dual 10 GbE enterprise NICs? This would end up being a single 2TB volume, correct? It seems like that would offer great speed and pretty decent survivability. On Wed, Jun 5, 2019 at 11:54 PM Hu Bert wrote:
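As a sketch, such a volume would be created roughly like this (hostnames and brick paths hypothetical); with replica 3 the usable capacity equals one brick, i.e. 2TB here:

  # gluster volume create gv0 replica 3 \
      host1:/bricks/ssd/brick host2:/bricks/ssd/brick host3:/bricks/ssd/brick
  # gluster volume start gv0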

Re: [Gluster-users] Advice for setup: SW RAID 6 vs JBOD

2019-06-06 Thread Eduardo Mayoral
Your comment actually helps me more than you think; one of the main doubts I have is whether to go for JBOD with replica 3 or SW RAID 6 with replica 2 + arbiter. Before reading your email I was leaning more towards JBOD, as reconstruction of a moderately big RAID 6 with mdadm can be painful too.
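For comparison, the replica 2 + arbiter layout is created as 'replica 3 arbiter 1' (names hypothetical); the third brick holds only metadata, so it needs little space:

  # gluster volume create gv0 replica 3 arbiter 1 \
      host1:/bricks/raid6/brick host2:/bricks/raid6/brick host3:/bricks/arbiter/brick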

Re: [Gluster-users] Geo Replication stops replicating

2019-06-06 Thread Sunny Kumar
You should not have used this one: > > gluster-mountbroker remove --volume code-misc --user sas -- That one removes a volume/user from the mountbroker. Please try setting up the mountbroker once again. -Sunny On Thu, Jun 6, 2019 at 5:28 PM deepu srinivasan wrote: > > Hi Sunny > Please find the
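A minimal sketch of redoing the mountbroker setup for the volume and user from this thread (the mount root and group are assumptions):

  # gluster-mountbroker setup /var/mountbroker-root sas
  # gluster-mountbroker add code-misc sas
  # systemctl restart glusterd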

Re: [Gluster-users] Geo Replication stops replicating

2019-06-06 Thread Sunny Kumar
What is the current traceback? Please share. -Sunny On Thu, Jun 6, 2019 at 4:53 PM deepu srinivasan wrote: > > Hi Sunny > I have changed the file in /usr/libexec/glusterfs/peer_mountbroker.py as > mentioned in the patch. > Now the "gluster-mountbroker status" command is working fine. But the >

Re: [Gluster-users] Geo Replication stops replicating

2019-06-06 Thread Sunny Kumar
The above error is tracked here: https://bugzilla.redhat.com/show_bug.cgi?id=1709248 and the patch is at: https://review.gluster.org/#/c/glusterfs/+/22716/ You can apply the patch and test it; however, it is still waiting on regression to pass before it can be merged. -Sunny On Thu, Jun 6, 2019 at 4:00 PM deepu srinivasan

Re: [Gluster-users] Memory leak in glusterfs

2019-06-06 Thread Nithya Balachandran
Hi Abhishek, I am still not clear as to the purpose of the tests. Can you clarify why you are using valgrind and why you think there is a memory leak? Regards, Nithya On Thu, 6 Jun 2019 at 12:09, ABHISHEK PALIWAL wrote: > Hi Nithya, > > Here is the Setup details and test which we are doing as

Re: [Gluster-users] write request hung in write-behind

2019-06-06 Thread Pranith Kumar Karampuri
On Tue, Jun 4, 2019 at 7:36 AM Xie Changlong wrote: > In our case, all 'df' commands on specific (not all) NFS clients hung forever. > The temporary workaround is to disable performance.nfs.write-behind and > cluster.eager-lock. > > I'll try to get more info back if I encounter this problem again. > If you
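The workaround described above corresponds to these settings (volume name hypothetical):

  # gluster volume set myvol performance.nfs.write-behind off
  # gluster volume set myvol cluster.eager-lock off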

Re: [Gluster-users] Advice for setup: SW RAID 6 vs JBOD

2019-06-06 Thread Hu Bert
Good morning, my comment won't help you directly, but I thought I'd send it anyway... Our first glusterfs setup had 3 servers with 4 disks (= bricks; 10TB, JBOD) each. It was running fine in the beginning, but then 1 disk failed. The following heal took ~1 month, with bad performance (quite high
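For reference, the progress of such a long heal can be watched with (volume name hypothetical):

  # gluster volume heal myvol statistics heal-count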