Re: [Gluster-users] any one uses all flash GlusterFS setup?

2021-03-27 Thread Mahdi Adnan
> healing process a lot of VMs got I/O failure and timeouts. We never got this with the single ZFS+NFS box. Now we are planning 3 boxes, each with 2 HHHL NVMe 3.8TB Samsungs (special dev) + 8x 1.9TB storage as a ZFS box, so they will be glued together as a 2+1 GlusterFS.

Re: [Gluster-users] any one uses all flash GlusterFS setup?

2021-03-26 Thread Mahdi Adnan
Hello Arman, We have several volumes running all-flash bricks hosting VMs for RHV/oVirt. As far as I know, there's no profile specifically for SSDs; we just use the usual virt group for the volume, which has the essential options for a volume used for VMs. I have no experience with Gluster
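
For reference, applying the virt option group to a volume is a one-liner; the volume name "vmstore" below is a placeholder:

    # apply the predefined virt group (option list lives in /var/lib/glusterd/groups/virt)
    gluster volume set vmstore group virt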

Re: [Gluster-users] qemu raw image file - qemu and grub2 can't find boot content from VM

2021-02-01 Thread Mahdi Adnan
> ext boot fails again (as does launching it on the other two). Based on feedback, I will not change the shard size at this time and will leave that for later. Some people suggest larger sizes, but it isn't a universal suggestion. I

Re: [Gluster-users] qemu raw image file - qemu and grub2 can't find boot content from VM

2021-01-27 Thread Mahdi Adnan
I would leave it at 64M on volumes with spindle disks, but on SSD volumes I would increase it to 128M or even 256M; it varies from one workload to another. On Wed, Jan 27, 2021 at 10:02 PM Erik Jacobson wrote: > Also, I would like to point out that I have VMs with large disks 1TB and >
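
A minimal sketch of checking and raising the shard block size (the volume name "vmstore" is a placeholder); note the new size applies only to files created afterward:

    gluster volume get vmstore features.shard-block-size   # show current value
    gluster volume set vmstore features.shard-block-size 128MB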

Re: [Gluster-users] qemu raw image file - qemu and grub2 can't find boot content from VM

2021-01-27 Thread Mahdi Adnan
I think the following messages are not harmful: [2021-01-26 19:28:40.652898] W [MSGID: 101159] [inode.c:1212:__inode_unlink] 0-inode: be318638-e8a0-4c6d-977d-7a937aa84806/48bb5288-e27e-46c9-9f7c-944a804df361.1: dentry not found in 48bb5288-e27e-46c9-9f7c-944a804df361 [2021-01-26 19:28:40.652975]

Re: [Gluster-users] qemu raw image file - qemu and grub2 can't find boot content from VM

2021-01-25 Thread Mahdi Adnan
Hello Erik, Anything in the logs of the FUSE mount? Can you stat the file from the mount? Also, the report that an image is only 64M makes me think of sharding, since the default shard size is 64M. Do you have any clue when this issue started to happen? Was there any operation done to
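
Following that suggestion, a quick check could look like this; the mount point and image path are placeholders:

    stat /mnt/vmstore/images/vm01.img      # does the file resolve and show its full size?
    # the FUSE client log is named after the mount point (slashes become dashes):
    tail -n 100 /var/log/glusterfs/mnt-vmstore.log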

Re: [Gluster-users] Recommended version for use with RHEV

2020-11-17 Thread Mahdi Adnan
Hello David, Always keep both clients and servers running the same major version. For RHV 4.2 I think the client runs Gluster 3.8, and in RHV 4.2.x it was upgraded to 3.12; just check the client version inside one of the RHV hosts with "gluster --version" so you know which version to use

Re: [Gluster-users] Docs on gluster parameters

2020-11-13 Thread Mahdi Adnan
Hi, Definitely, the Gluster docs are missing quite a bit regarding the available options that can be used on volumes. Not only that, some options that can corrupt data lack proper documentation; for example, disabling sharding will lead to data corruption, and I think it

Re: [Gluster-users] missing files on FUSE mount

2020-11-04 Thread Mahdi Adnan
Hello Martín, Try disabling "performance.readdir-ahead"; we had a similar issue, and disabling "performance.readdir-ahead" solved it: gluster volume set tapeless performance.readdir-ahead off On Tue, Oct 27, 2020 at 8:23 PM Martín Lorenzo wrote: > Hi Strahil, today we have the same

[Gluster-users] Gluster monitoring

2020-10-26 Thread Mahdi Adnan
Hello, how do you keep track of the health status of your Gluster volumes: a brick going down (crash, failure, shutdown), node failures, peering issues, ongoing healing? Gluster Tendrl is complex and sometimes broken, the Prometheus exporter is still lacking, and gstatus is basic. Currently, to
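
For a baseline, the plain CLI can already answer most of these questions; "vmstore" is a placeholder volume name:

    gluster peer status                  # peering state
    gluster volume status vmstore        # brick and process status
    gluster volume heal vmstore info     # pending heal entries per brick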

Re: [Gluster-users] State of Gluster project

2020-06-23 Thread Mahdi Adnan
> ience? Thanks. [1] https://access.redhat.com/solutions/22231 (account required) [2] https://bugzilla.redhat.com/show_bug.cgi?id=489889 (old, but I cannot find anything newer) -- Danti Gionatan, Supporto Tecnico, Assyoma S.r.l. - www.assyoma.it

Re: [Gluster-users] State of Gluster project

2020-06-22 Thread Mahdi Adnan
22, 2020 at 10:55 AM Gionatan Danti wrote: > On 2020-06-21 20:41, Mahdi Adnan wrote: >> Hello Gionatan, using a Gluster brick in a RAID configuration might be safer and require less work from Gluster admins, but it is a waste of disk space.

Re: [Gluster-users] Upgrade from 5.13 to 7.5 full of weird messages

2020-06-21 Thread Mahdi Adnan
I think if it's reproducible then someone can look into it; can you list the steps to reproduce it? On Sun, Jun 21, 2020 at 9:12 PM Artem Russakovskii wrote: > There's been 0 progress or attention to this issue in a month on github or otherwise. Sincerely, Artem -- Founder,

Re: [Gluster-users] State of Gluster project

2020-06-21 Thread Mahdi Adnan
Hello Gionatan, Using a Gluster brick in a RAID configuration might be safer and require less work from Gluster admins, but it is a waste of disk space. Gluster bricks are replicated (assuming you're creating a distributed-replicate volume), so when a brick goes down it should be easy to recover it

Re: [Gluster-users] State of Gluster project

2020-06-19 Thread Mahdi Adnan
My concern with the Glusterd2 deprecation is that it tried to implement and fix several things we need in Gluster, and the promised features (better logging, journal-based replication) were not carried into Gluster afterward. We're running both Ceph and Gluster; while both solutions are great in

[Gluster-users] State of Gluster project

2020-06-16 Thread Mahdi Adnan
Hello, I'm wondering what the current and future plans for the Gluster project are overall. The project does not seem as busy as it was before, at least from what I'm seeing: fewer blog posts about the roadmap or future plans of the project, the deprecation of Glusterd2, and even Red

Re: [Gluster-users] Gluster client 4.1.5 with Gluster server 6.7

2020-01-30 Thread Mahdi Adnan
Hello, We had a similar issue when we upgraded one of our clusters to 6.5 while clients were running 4.1.5 and 4.1.9; both crashed after a few seconds of mounting. We did not dig into the issue; instead, we upgraded the clients to 6.5 and it worked fine. On Tue, Jan 28, 2020 at 1:35 AM Laurent

Re: [Gluster-users] Gluster endless heal

2018-01-22 Thread Mahdi Adnan
-boun...@gluster.org <gluster-users-boun...@gluster.org> on behalf of Mahdi Adnan <mahdi.ad...@outlook.com> Sent: Wednesday, January 17, 2018 9:50 PM To: gluster-users@gluster.org Subject: [Gluster-users] Gluster endless heal Hi, I have an issue with Gluster 3.8.14. The cluste

[Gluster-users] Gluster endless heal

2018-01-18 Thread Mahdi Adnan
Hi, I have an issue with Gluster 3.8.14. The cluster is 4 nodes with replica count 2. One of the nodes went offline for around 15 minutes; when it came back online, self-heal triggered and just did not stop afterward. It's been running for 3 days now, maxing out the bricks' utilization without
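
To judge whether a heal like this is actually making progress, watching the pending-entry counts over time helps; the volume name is a placeholder:

    gluster volume heal vmstore info                    # list pending entries per brick
    gluster volume heal vmstore statistics heal-count   # counts only, cheaper to poll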

Re: [Gluster-users] Gluster CLI Feedback

2017-10-17 Thread Mahdi Adnan
Hi, 1. How often do you use the Gluster CLI? Is it a preferred method to manage Gluster? It's the only way we manage our volumes. 2. What operations do you commonly perform using the CLI? Create, replace, set, and healing info. 3. How intuitive/easy to use do you find the CLI? It's

Re: [Gluster-users] data corruption - any update?

2017-10-12 Thread Mahdi Adnan
Hi, We had data corruption issues before but, with GlusterFS 3.8.12, we tested expanding a sharded volume and it worked fine, without issues. Try expanding a test volume and see the results yourself; for me, it was 100% reproducible. -- Respectfully Mahdi A. Mahdi

Re: [Gluster-users] Gluster 3.8.13 data corruption

2017-10-09 Thread Mahdi Adnan
created from the template, is this correct? https://paste.fedoraproject.org/paste/qzHmK8t-eJHM3hcZBVs5Yw -- Respectfully Mahdi A. Mahdi From: Krutika Dhananjay <kdhan...@redhat.com> Sent: Monday, October 9, 2017 1:59 PM To: Mahdi Adnan Cc: Lindsay Mat

Re: [Gluster-users] Gluster 3.8.13 data corruption

2017-10-06 Thread Mahdi Adnan
ober 6, 2017 7:39 AM To: Lindsay Mathieson Cc: Mahdi Adnan; gluster-users@gluster.org Subject: Re: [Gluster-users] Gluster 3.8.13 data corruption Could you disable stat-prefetch on the volume and create another vm off that template and see if it works? -Krutika On Fri, Oct 6, 2017 at 8:28 AM, L
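
The suggested change maps to a single volume option; the volume name "vmstore" is a placeholder:

    gluster volume set vmstore performance.stat-prefetch off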

[Gluster-users] Gluster 3.8.13 data corruption

2017-10-05 Thread Mahdi Adnan
Hi, We're running Gluster 3.8.13 replica 2 (SSDs), used as a storage domain for oVirt. Today we found an issue with one of the VM templates: after deploying a VM from this template, it will not boot; it gets stuck mounting the root partition. We've been using this template for months now and we

Re: [Gluster-users] Upgrading Gluster revision (3.8.12 to 3.8.13) caused underlying VM fs corruption

2017-07-13 Thread Mahdi Adnan
Hi, Doing an online upgrade with replica 2 should be fine; I think there might be something else causing the corruption. -- Respectfully Mahdi A. Mahdi From: gluster-users-boun...@gluster.org on behalf of Pavel Szalbot

Re: [Gluster-users] Upgrading Gluster revision (3.8.12 to 3.8.13) caused underlying VM fs corruption

2017-07-12 Thread Mahdi Adnan
Noted, many thanks -- Respectfully Mahdi A. Mahdi From: Pranith Kumar Karampuri <pkara...@redhat.com> Sent: Tuesday, July 11, 2017 6:41:28 AM To: Mahdi Adnan Cc: Pavel Szalbot; gluster-users Subject: Re: [Gluster-users] Upgrading Gluster revision (

Re: [Gluster-users] Upgrading Gluster revision (3.8.12 to 3.8.13) caused underlying VM fs corruption

2017-07-10 Thread Mahdi Adnan
I upgraded from 3.8.12 to 3.8.13 without issues: two replicated volumes with an online update, clients upgraded first, followed by the servers ("stop glusterd, pkill gluster*, update gluster*, start glusterd, monitor the healing process and logs; after completion, proceed to the other node")
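
As a rough per-node sketch of the procedure quoted above, assuming RPM-based hosts and a volume named "vmstore":

    systemctl stop glusterd
    pkill gluster                      # stop remaining brick/self-heal processes
    yum update 'glusterfs*'
    systemctl start glusterd
    gluster volume heal vmstore info   # wait for pending entries to drain before the next node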

Re: [Gluster-users] Gluster install using Ganesha for NFS

2017-07-07 Thread Mahdi Adnan
Hi, Why change to storhaug? And what's going to happen to the current setup if I want to update Gluster to 3.11 or beyond? -- Respectfully Mahdi A. Mahdi From: gluster-users-boun...@gluster.org on behalf of Kaleb S.

Re: [Gluster-users] Teaming vs Bond?

2017-06-19 Thread Mahdi Adnan
Hi, Speaking generally, not specifically about Gluster: we used teaming for some time and switched back to bonding because we had issues with teaming's load balancing. With a teaming config of "LACP, eth, ipv4, ipv6", the result was one interface utilized more than the other, and in some cases one
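
For comparison, an 802.3ad (LACP) bond on RHEL/CentOS is configured roughly as below; xmit_hash_policy is the knob that decides how flows spread across slaves, and layer3+4 here is an assumption to test, not something recommended in this thread:

    # /etc/sysconfig/network-scripts/ifcfg-bond0 (fragment)
    BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"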

Re: [Gluster-users] Rebalance + VM corruption - current status and request for feedback

2017-06-06 Thread Mahdi Adnan
June 6, 2017 9:17:40 AM To: Mahdi Adnan Cc: gluster-user; Gandalf Corvotempesta; Lindsay Mathieson; Kevin Lemonnier Subject: Re: Rebalance + VM corruption - current status and request for feedback Hi Mahdi, Did you get a chance to verify this fix again? If this fix works for you, is it OK if w

Re: [Gluster-users] Rebalance + VM corruption - current status and request for feedback

2017-05-29 Thread Mahdi Adnan
or directory] Although the process went smoothly, I will run another extensive test tomorrow just to be sure. -- Respectfully Mahdi A. Mahdi From: Krutika Dhananjay <kdhan...@redhat.com> Sent: Monday, May 29, 2017 9:20:29 AM To: Mahdi Adnan Cc: gluster-user; G

Re: [Gluster-users] Rebalance + VM corruption - current status and request for feedback

2017-05-26 Thread Mahdi Adnan
rning message" VMs started to fail after rebalancing. -- Respectfully Mahdi A. Mahdi From: Krutika Dhananjay <kdhan...@redhat.com> Sent: Wednesday, May 17, 2017 6:59:20 AM To: gluster-user Cc: Gandalf Corvotempesta; Lindsay Mathieson; Kevin Lemonnier; Mah

Re: [Gluster-users] Rebalance + VM corruption - current status and request for feedback

2017-05-24 Thread Mahdi Adnan
Hi, Still no RPMs in the SIG repository. -- Respectfully Mahdi A. Mahdi From: Niels de Vos <nde...@redhat.com> Sent: Monday, May 22, 2017 3:26:02 PM To: Atin Mukherjee Cc: Mahdi Adnan; Vijay Bellur; gluster-user Subject: Re: [Gluster-users] Rebalanc

Re: [Gluster-users] Distributed re-balance issue

2017-05-24 Thread Mahdi Adnan
A. Mahdi From: Nithya Balachandran <nbala...@redhat.com> Sent: Wednesday, May 24, 2017 8:16:53 PM To: Mahdi Adnan Cc: Mohammed Rafi K C; gluster-users@gluster.org Subject: Re: [Gluster-users] Distributed re-balance issue On 24 May 2017 at 22:45, Nithya Balach

[Gluster-users] Distributed re-balance issue

2017-05-24 Thread Mahdi Adnan
Hi, I have a distributed volume with 6 bricks, each with 5TB, hosting large qcow2 VM disks (I know it's not reliable, but it's not important data). I started with 5 bricks, then added another one and started the rebalance process. Everything went well, but now I'm looking at the bricks' free

Re: [Gluster-users] Rebalance + VM corruption - current status and request for feedback

2017-05-20 Thread Mahdi Adnan
Good morning, The SIG repository does not have the latest glusterfs 3.10.2. Do you have any idea when it's going to be updated? Is there any other recommended place to get the latest RPMs? -- Respectfully Mahdi A. Mahdi From: Mahdi Adnan <mahdi

Re: [Gluster-users] Rebalance + VM corruption - current status and request for feedback

2017-05-20 Thread Mahdi Adnan
at.com> Sent: Saturday, May 20, 2017 6:46:51 PM To: Krutika Dhananjay Cc: Mahdi Adnan; raghavendra talur; gluster-user Subject: Re: [Gluster-users] Rebalance + VM corruption - current status and request for feedback On Sat, May 20, 2017 at 6:38 AM, Krutika Dhananjay <kdhan...@redhat.com&

Re: [Gluster-users] Rebalance + VM corruption - current status and request for feedback

2017-05-19 Thread Mahdi Adnan
indsay Mathieson; Kevin Lemonnier; Mahdi Adnan Subject: Rebalance + VM corruption - current status and request for feedback Hi, In the past couple of weeks, we've sent the following fixes concerning VM corruption upon doing rebalance - https://review.gluster.org/#/q/status:merged+project:glu

Re: [Gluster-users] Gluster long healing process

2017-05-06 Thread Mahdi Adnan
Okay so it's fixed by killing Gluster and rebooting the node again. -- Respectfully Mahdi A. Mahdi From: gluster-users-boun...@gluster.org <gluster-users-boun...@gluster.org> on behalf of Mahdi Adnan <mahdi.ad...@outlook.com> Sent: Wednesday, May 3

Re: [Gluster-users] Gluster and NFS-Ganesha - cluster is down after reboot

2017-05-04 Thread Mahdi Adnan
Hi, Same here: when I reboot the node I have to manually execute "pcs cluster start gluster01", even though pcsd is already enabled and started. Gluster 3.8.11, CentOS 7.3 (latest), installed using the CentOS Storage SIG repository -- Respectfully Mahdi A. Mahdi From:

[Gluster-users] Gluster long healing process

2017-05-03 Thread Mahdi Adnan
Hi, I have a 4-node Gluster setup, each node with 24 SSD bricks, running Gluster 3.8.10 (two volumes). I updated one of the nodes to 3.8.11 and rebooted it; after it came back online, the healing process started and it never ended. It has been 24 hours and the healing is still going, gluster

Re: [Gluster-users] Don't allow data loss via add-brick (was Re: Add single server)

2017-05-01 Thread Mahdi Adnan
I first encountered this bug about a year ago and lost more than 100 VMs. Sharding is essential for VM datastores, and I think Gluster isn't that useful for VMs without this feature. I appreciate all the hard work the developers are putting into this bug, but I think a warning in the CLI or something

Re: [Gluster-users] Quorum replica 2 and arbiter

2017-04-23 Thread Mahdi Adnan
Thank you guys. I'll be testing this and will let you know if I have any issues. -- Respectfully Mahdi A. Mahdi From: Pranith Kumar Karampuri <pkara...@redhat.com> Sent: Saturday, April 22, 2017 3:06:20 PM To: Ravishankar N Cc: Mahdi Adnan; gluster

Re: [Gluster-users] Replica 2 Quorum and arbiter

2017-04-22 Thread Mahdi Adnan
Thank you very much. -- Respectfully Mahdi A. Mahdi From: Karthik Subrahmanya <ksubr...@redhat.com> Sent: Wednesday, April 19, 2017 4:30:30 PM To: Mahdi Adnan Cc: gluster-users@gluster.org Subject: Re: [Gluster-users] Replica 2 Quorum and arbiter Hi, Co

[Gluster-users] Replica 2 Quorum and arbiter

2017-04-18 Thread Mahdi Adnan
Hi, We have a replica 2 volume and an issue with setting proper quorum. The volumes are used as datastores for VMware/oVirt; the current quorum settings are: cluster.quorum-type: auto cluster.server-quorum-type: server cluster.server-quorum-ratio: 51% Losing the first node, which

[Gluster-users] Quorum replica 2 and arbiter

2017-04-18 Thread Mahdi Adnan
Hi, We have a replica 2 volume and an issue with setting proper quorum. The volumes are used as datastores for VMware/oVirt; the current quorum settings are: cluster.quorum-type: auto cluster.server-quorum-type: server cluster.server-quorum-ratio: 51% Losing the first node, which
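
If the eventual advice is to add an arbiter, converting a replica 2 volume looks roughly like this; the host and brick path are placeholders:

    gluster volume add-brick vmstore replica 3 arbiter 1 node3:/bricks/arbiter/vmstore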

Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption

2017-04-03 Thread Mahdi Adnan
Good to hear. Eagerly waiting for the patch. Thank you guys. From: Krutika Dhananjay <kdhan...@redhat.com> Sent: Monday, April 3, 2017 11:22:40 AM To: Pranith Kumar Karampuri Cc: Mahdi Adnan; g

Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption

2017-03-28 Thread Mahdi Adnan
Hi, Do you guys have any update regarding this issue? -- Respectfully Mahdi A. Mahdi From: Krutika Dhananjay <kdhan...@redhat.com> Sent: Tuesday, March 21, 2017 3:02:55 PM To: Mahdi Adnan Cc: Nithya Balachandran; Gowdappa, Raghavendra; Susant Palai; g

Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption

2017-03-21 Thread Mahdi Adnan
luster-users] Gluster 3.8.10 rebalance VMs corruption To: Krutika Dhananjay Cc: Mahdi Adnan, Gowdappa, Raghavendra, Susant Palai, gluster-users@gluster.org List Hi, Do you know the GFIDs of the VM images which were corrupted? Regards, Nithya On 20 March 2017 at 20:37, Krutika Dhananjay

Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption

2017-03-19 Thread Mahdi Adnan
han...@redhat.com> Sent: Sunday, March 19, 2017 2:01:49 PM To: Mahdi Adnan Cc: gluster-users@gluster.org Subject: Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption While I'm still going through the logs, just wanted to point out a couple of things: 1. It is recommended that you use

Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption

2017-03-19 Thread Mahdi Adnan
2017 8:02:19 AM To: Mahdi Adnan Cc: gluster-users@gluster.org Subject: Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption On Sat, Mar 18, 2017 at 10:36 PM, Mahdi Adnan <mahdi.ad...@outlook.com<mailto:mahdi.ad...@outlook.com>> wrote: Kindly, check the attached new log

Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption

2017-03-18 Thread Mahdi Adnan
Although I tested the patch before it got released, apparently it wasn't a thorough test. In Gluster 3.7.x I lost around 100 VMs; now in 3.8.x I just lost a few test VMs. I hope there will be a fix soon. -- Respectfully Mahdi A. Mahdi From:

[Gluster-users] Gluster 3.8.10 rebalance VMs corruption

2017-03-17 Thread Mahdi Adnan
Hi, I upgraded to Gluster 3.8.10 today and ran the add-brick procedure on a volume containing a few VMs. After the rebalance completed, I rebooted the VMs; some ran just fine, and others just crashed. Windows boots to recovery mode, and Linux throws XFS errors and does not boot. I

[Gluster-users] Gluster 3.8.10 rebalance VMs corruption

2017-03-17 Thread Mahdi Adnan
Hi, I upgraded to Gluster 3.8.10 today and ran the add-brick procedure on a volume containing a few VMs. After the rebalance completed, I rebooted the VMs; some ran just fine, and others just crashed. Windows boots to recovery mode, and Linux throws XFS errors and does not boot. I ran

Re: [Gluster-users] nfs-ganesha logs

2017-03-02 Thread Mahdi Adnan
ull, where can I check the current queue status? -- Respectfully Mahdi A. Mahdi From: Nithya Balachandran <nbala...@redhat.com> Sent: Thursday, March 2, 2017 8:32:52 AM To: Soumya Koduri Cc: Mahdi Adnan; gluster-users@gluster.org; Krutika Dhananjay; Frank

[Gluster-users] nfs-ganesha logs

2017-02-28 Thread Mahdi Adnan
Hi, We have a Gluster volume hosting VMs for ESXi, exported via Ganesha. I'm getting the following messages in ganesha-gfapi.log and ganesha.log: [2017-02-28 07:44:55.194621] E [MSGID: 109040] [dht-helper.c:1198:dht_migration_complete_check_task] 0-vmware2-dht: : failed to lookup the

Re: [Gluster-users] Volume rebalance issue

2017-02-27 Thread Mahdi Adnan
From: Krutika Dhananjay <kdhan...@redhat.com> Sent: Monday, February 27, 2017 8:11:31 AM To: Mahdi Adnan Cc: Gandalf Corvotempesta; gluster-users@gluster.org Subject: Re: [Gluster-users] Volume rebalance issue I've attached the src tarball with the patches that fix this issue, applied o

Re: [Gluster-users] Volume rebalance issue

2017-02-26 Thread Mahdi Adnan
; Sent: Sunday, February 26, 2017 11:07:04 AM To: Mahdi Adnan Cc: gluster-users@gluster.org Subject: Re: [Gluster-users] Volume rebalance issue How did you replicate the issue? Next week I'll spin up a gluster storage and I would like to try the same to see the corruption and to test any patche

Re: [Gluster-users] Volume rebalance issue

2017-02-26 Thread Mahdi Adnan
Hi, Yes, I would love to try it out. Steps to apply the patch would be highly appreciated. -- Respectfully Mahdi A. Mahdi From: Krutika Dhananjay <kdhan...@redhat.com> Sent: Sunday, February 26, 2017 5:37:11 PM To: Mahdi Adnan Cc: gluster-users@glust

[Gluster-users] Volume rebalance issue

2017-02-25 Thread Mahdi Adnan
Hi, We have a volume of 4 servers with 8x2 bricks (Distributed-Replicate) hosting VMs for ESXi. I tried expanding the volume with 8 more bricks, and after rebalancing the volume, the VMs got corrupted. The Gluster version is 3.8.9 and the volume uses the default parameters of the "virt" group plus

[Gluster-users] Gluster Disks configuration

2017-02-18 Thread Mahdi Adnan
Hi, I have a question regarding disk preparation. I have 4 nodes, each with 24 SSDs, and I would like to know the best practice for setting up the disks. The pool will be used as a VMware datastore. I'm planning on using each disk as a brick without LVM; the pool will be distributed replicas with
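
A minimal per-disk preparation sketch, assuming XFS directly on each SSD with no LVM as described; the device and paths are placeholders:

    mkfs.xfs -f -i size=512 /dev/sdb             # 512-byte inodes leave room for Gluster xattrs
    mkdir -p /bricks/ssd01
    echo '/dev/sdb /bricks/ssd01 xfs defaults,noatime 0 0' >> /etc/fstab
    mount /bricks/ssd01
    mkdir /bricks/ssd01/brick                    # use a subdirectory as the brick path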

Re: [Gluster-users] NFS-Ganesha lo traffic

2016-08-10 Thread Mahdi Adnan
A. Mahdi > Subject: Re: [Gluster-users] NFS-Ganesha lo traffic > To: mahdi.ad...@outlook.com > CC: gluster-users@gluster.org; nfs-ganesha-de...@lists.sourceforge.net > From: skod...@redhat.com > Date: Wed, 10 Aug 2016 11:05:50 +0530 > > > > On 08/09/2016 09:06 PM

Re: [Gluster-users] Nfs-ganesha...

2016-08-09 Thread Mahdi Adnan
t;server ip address"; volume = "home"; } CLIENT {Clients = *;Access_Type = RW;Squash = None; } } On Tue, Aug 9, 2016 at 11:44 AM, Mahdi Adnan <mahdi.ad...@outlook.com> wrote: Hi, Please post ganesha configuration file. -- Respectfully

Re: [Gluster-users] Nfs-ganesha...

2016-08-09 Thread Mahdi Adnan
Hi, Please post the ganesha configuration file. -- Respectfully Mahdi A. Mahdi From: corey.kov...@gmail.com Date: Tue, 9 Aug 2016 11:24:58 -0600 To: gluster-users@gluster.org Subject: [Gluster-users] Nfs-ganesha... If not an appropriate place to ask, my apologies. I have been trying

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-09 Thread Mahdi Adnan
the steps to recreate the issue, along with the relevant information about volume configuration, logs, core, version etc, then it would be good to track this issue through a bug report. -Krutika On Mon, Aug 8, 2016 at 8:56 PM, Mahdi Adnan <mahdi.ad...@outlook.com> wrote: Thank you ver

[Gluster-users] NFS-Ganesha lo traffic

2016-08-09 Thread Mahdi Adnan
Hi, I'm using NFS-Ganesha to access my volume. It's working fine for now, but I'm seeing lots of traffic on the loopback interface; in fact, it's the same amount of traffic as on the bonding interface. Can anyone please explain why this is happening? Also, I got the following error in the

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-08 Thread Mahdi Adnan
haven't had the chance to look into this issue last week. Do you mind raising a bug in upstream with all the relevant information and I'll take a look sometime this week? -Krutika On Fri, Aug 5, 2016 at 11:58 AM, Mahdi Adnan <mahdi.ad...@outlook.com> wrote: Hi, Yes, i got some me

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-05 Thread Mahdi Adnan
2016 at 1:14 PM, Mahdi Adnan <mahdi.ad...@outlook.com> wrote: Hi, Kindly check the following link for all 7 bricks logs; https://db.tt/YP5qTGXk -- Respectfully Mahdi A. Mahdi From: kdhan...@redhat.com Date: Thu, 4 Aug 2016 13:00:43 +0530 Subject: Re: [Gluster-users] Glu

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-04 Thread Mahdi Adnan
@gluster.org Could you also attach the brick logs please? -Krutika On Thu, Aug 4, 2016 at 12:48 PM, Mahdi Adnan <mahdi.ad...@outlook.com> wrote: appreciate your help, (gdb) frame 2#2 0x7f195deb1787 in shard_common_inode_write_do (frame=0x7f19699f1164, this=0x7f195802ac10) at shard.c:37

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-04 Thread Mahdi Adnan
lso print the values of the following variables from the original core: i. i ii. local->inode_list[0] iii. local->inode_list[1] -Krutika On Wed, Aug 3, 2016 at 9:01 PM, Mahdi Adnan <mahdi.ad...@outlook.com> wrote: Hi, Unfortunately no, but i can setup a test bench and see if it g

Re: [Gluster-users] Glusterfs 3.7.13 node suddenly stops healing

2016-08-04 Thread Mahdi Adnan
Hi, Please attach the logs and "gluster volume info $VOLUMENAME" output here; -- Respectfully Mahdi A. Mahdi > From: davy.croo...@smartbit.be > To: gluster-users@gluster.org > Date: Wed, 3 Aug 2016 13:01:36 + > Subject: [Gluster-users] Glusterfs 3.7.13 node suddenly stops

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-03 Thread Mahdi Adnan
ter-users] Gluster 3.7.13 NFS Crash > To: mahdi.ad...@outlook.com > CC: gluster-users@gluster.org > > 2016-08-03 22:33 GMT+02:00 Mahdi Adnan <mahdi.ad...@outlook.com>: > > Yeah, only 3 for now running in 3 replica. > > around 5MB (900 IOps) write and 3MB (250 IOps) r

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-03 Thread Mahdi Adnan
] Gluster 3.7.13 NFS Crash > To: mahdi.ad...@outlook.com > CC: gluster-users@gluster.org > > 2016-08-03 21:40 GMT+02:00 Mahdi Adnan <mahdi.ad...@outlook.com>: > > Hi, > > > > Currently, we have three UCS C220 M4, dual Xeon CPU (48 cores), 32GB of RAM, > > 8

Re: [Gluster-users] Failed file system

2016-08-03 Thread Mahdi Adnan
Hi, I'm no expert in Gluster, but I think it would be better to replace the downed brick with a new one. Maybe start from here: https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Managing%20Volumes/#replace-brick -- Respectfully Mahdi A. Mahdi Date: Wed, 3 Aug 2016

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-03 Thread Mahdi Adnan
. -- Respectfully Mahdi A. Mahdi > From: gandalf.corvotempe...@gmail.com > Date: Wed, 3 Aug 2016 20:25:56 +0200 > Subject: Re: [Gluster-users] Gluster 3.7.13 NFS Crash > To: mahdi.ad...@outlook.com > CC: kdhan...@redhat.com; gluster-users@gluster.org > > 2016-08-03 17:02 GMT+02:00

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-03 Thread Mahdi Adnan
@gluster.org Do you have a test case that consistently recreates this problem? -Krutika On Wed, Aug 3, 2016 at 8:32 PM, Mahdi Adnan <mahdi.ad...@outlook.com> wrote: Hi, So I have updated to 3.7.14 and I still have the same issue with NFS. Based on what I have provided so far from logs and

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-03 Thread Mahdi Adnan
rect', do this: (gdb) p odirect and gdb will print its value for you in response. -Krutika On Mon, Aug 1, 2016 at 4:55 PM, Mahdi Adnan <mahdi.ad...@outlook.com> wrote: Hi, How to get the results of the below variables ? i cant get the results from gdb. -- Respectfully Mahdi A. Mahdi

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-01 Thread Mahdi Adnan
variable 'odirect', do this: (gdb) p odirect and gdb will print its value for you in response. -Krutika On Mon, Aug 1, 2016 at 4:55 PM, Mahdi Adnan <mahdi.ad...@outlook.com> wrote: Hi, How to get the results of the below variables ? i cant get the results from gdb. -- Respectf

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-01 Thread Mahdi Adnan
@gluster.org Could you also print and share the values of the following variables from the backtrace please: i. cur_block ii. last_block iii. local->first_block iv. odirect v. fd->flags vi. local->call_count -Krutika On Sat, Jul 30, 2016 at 5:04 PM, Mahdi Adnan <mahdi.ad...@outlook.com&g

[Gluster-users] Gluster 3.7.13 NFS Crash

2016-07-30 Thread Mahdi Adnan
Hi, I would really appreciate it if someone could help me fix my NFS crash; it's happening a lot and causing lots of issues for my VMs. The problem is that every few hours the native NFS crashes and the volume becomes unavailable from the affected node unless I restart glusterd. The volume is used by VMware ESXi

[Gluster-users] Gluster nfs crash

2016-07-23 Thread Mahdi Adnan
Hi, I'm having issues with Gluster NFS; it keeps crashing after a few hours under medium load. OS: CentOS 7.2 Gluster version 3.7.13 Gluster info; Volume Name: vlm01 Type: Distributed-Replicate Volume ID: eacd8248-dca3-4530-9aed-7714a5a114f2 Status: Started Number of Bricks: 7 x 3 = 21

Re: [Gluster-users] Replicated striped data lose

2016-03-15 Thread Mahdi Adnan
0 4312 offset: 0x0 requested: 0x200 read: 0x95 Respectfully, Mahdi A. Mahdi, Skype: mahdi.ad...@outlook.com On 03/15/2016 03:06 PM, Mahdi Adnan wrote: [2016-03-15 14:12:01.421615] I [MSGID: 109036] [dht-common.c:8043:dht_log_new_layout_for_dir_se

Re: [Gluster-users] Replicated striped data lose

2016-03-15 Thread Mahdi Adnan
a On Tue, Mar 15, 2016 at 1:45 PM, Mahdi Adnan <mahdi.ad...@earthlinktele.com> wrote: Okay, here's what I did: Volume Name: v Type: Distributed-Replicate Volume ID: b348fd8e-b117-469d-bcc0-56a56bdfc930 Status: Started Nu

Re: [Gluster-users] Replicated striped data lose

2016-03-15 Thread Mahdi Adnan
te: OK but what if you use it with replication? Do you still see the error? I think not. Could you give it a try and tell me what you find? -Krutika On Tue, Mar 15, 2016 at 1:23 PM, Mahdi Adnan <mahdi.ad...@earthlinktele.com> wrote: Hi,

Re: [Gluster-users] Replicated striped data lose

2016-03-15 Thread Mahdi Adnan
on it, and enable sharding on it, set the shard-block-size that you feel appropriate and then just start off with VM image creation etc. If you run into any issues even after you do this, let us know and we'll help you out. -Krutika On Tue, Mar 15, 2016 at 1:07 PM, Mahdi Adnan <mahdi

Re: [Gluster-users] Replicated striped data lose

2016-03-15 Thread Mahdi Adnan
On Mon, Mar 14, 2016 at 3:17 PM, Mahdi Adnan <mahdi.ad...@earthlinktele.com> wrote: Sorry for the serial posting, but I got new logs that might help. The message appears during the migration; /var/log/glusterfs/nfs.log [2016

Re: [Gluster-users] Replicated striped data lose

2016-03-14 Thread Mahdi Adnan
45:05.079657] E [MSGID: 112069] [nfs3.c:3649:nfs3_rmdir_resume] 0-nfs-nfsv3: No such file or directory: (192.168.221.52:826) testv : ----0001 Respectfully, Mahdi A. Mahdi On 03/14/2016 11:14 AM, Mahdi Adnan wrote: So I have deployed a new server "Cisco UCS C2

Re: [Gluster-users] Replicated striped data lose

2016-03-14 Thread Mahdi Adnan
ahead: off performance.quick-read: off performance.readdir-ahead: off Same error. Can anyone share with me the info of a working striped volume? On 03/14/2016 09:02 AM, Mahdi Adnan wrote: I have a pool of two bricks in the same server; Volume Name: k Type: Stripe Volume ID: 1e9281ce-2a8b-44e8-a0c6-e3

Re: [Gluster-users] Replicated striped data lose

2016-03-14 Thread Mahdi Adnan
o cp them to a temp name within the volume, and then rename them back to the original file name. HTH, Krutika On Sun, Mar 13, 2016 at 11:49 PM, Mahdi Adnan <mahdi.ad...@earthlinktele.com> wrote: I couldn't find anything related to cache in the HBAs. What logs are useful in my case? I see only bricks l

Re: [Gluster-users] Replicated striped data lose

2016-03-13 Thread Mahdi Adnan
My setup is 2 servers with a floating IP controlled by CTDB, and my ESXi server mounts the NFS via the floating IP. On 03/13/2016 08:40 PM, pkoelle wrote: On 13.03.2016 at 18:22, David Gossage wrote: On Sun, Mar 13, 2016 at 11:07 AM, Mahdi Adnan <mahdi.ad...@earthlinktele.com w

Re: [Gluster-users] Replicated striped data lose

2016-03-13 Thread Mahdi Adnan
e, neither sharding nor striping works for me. I did follow up with some threads in the mailing list and tried some of the fixes that worked for others; none worked for me. :( On 03/13/2016 06:54 PM, David Gossage wrote: On Sun, Mar 13, 2016 at 8:16 AM, Mahdi Adnan

Re: [Gluster-users] Replicated striped data lose

2016-03-13 Thread Mahdi Adnan
size: 16MB features.shard: on performance.readdir-ahead: off On 03/12/2016 08:11 PM, David Gossage wrote: On Sat, Mar 12, 2016 at 10:21 AM, Mahdi Adnan <mahdi.ad...@earthlinktele.com> wrote: Both servers have HBA no RAIDs and I

Re: [Gluster-users] Replicated striped data lose

2016-03-12 Thread Mahdi Adnan
a replicated striped) and again the same thing: data corruption. On 03/12/2016 07:02 PM, David Gossage wrote: On Sat, Mar 12, 2016 at 9:51 AM, Mahdi Adnan <mahdi.ad...@earthlinktele.com> wrote: Thanks David, My settings are all defaults,

Re: [Gluster-users] Replicated striped data lose

2016-03-12 Thread Mahdi Adnan
performance.quick-read: off performance.readdir-ahead: on On 03/12/2016 03:25 PM, David Gossage wrote: On Sat, Mar 12, 2016 at 1:55 AM, Mahdi Adnan <mahdi.ad...@earthlinktele.com> wrote: Dears, I have created a replicated striped vol

[Gluster-users] Replicated striped data lose

2016-03-12 Thread Mahdi Adnan
Appreciate your help. Respectfully, Mahdi Adnan, System Admin