Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-09 Thread Mahdi Adnan
again. -- Respectfully, Mahdi A. Mahdi
From: kdhan...@redhat.com Date: Tue, 9 Aug 2016 11:02:44 +0530 Subject: Re: [Gluster-users] Gluster 3.7.13 NFS Crash To: mahdi.ad...@outlook.com CC: gluster-users@gluster.org
Well, I'm not entirely sure it is a setup-related issue. If you have

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-08 Thread Krutika Dhananjay
Date: Mon, 8 Aug 2016 16:33:19 +0530 Subject: Re: [Gluster-users] Gluster 3.7.13 NFS Crash To: mahdi.ad...@outlook.com CC: gluster-users@gluster.org
Hi, sorry I haven't had the chance to look into this issue last week. Do you mind raising a bug in upstream

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-08 Thread Mahdi Adnan
file a bug report? Or maybe it's an issue with my setup only? -- Respectfully, Mahdi A. Mahdi
From: kdhan...@redhat.com Date: Mon, 8 Aug 2016 16:33:19 +0530 Subject: Re: [Gluster-users] Gluster 3.7.13 NFS Crash To: mahdi.ad...@outlook.com CC: gluster-users@gluster.org
Hi, Sorry I

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-08 Thread Krutika Dhananjay
3716  anon_fd = fd_anonymous (local->inode_list[i]);
(gdb) p local->fop
$1 = GF_FOP_WRITE
(gdb)
-- Respectfully, *Mahdi A. Mahdi*
From: kdhan...@redhat.com Date: F

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-05 Thread Mahdi Adnan
anon_fd = fd_anonymous (local->inode_list[i]);
(gdb) p local->fop
$1 = GF_FOP_WRITE
(gdb)
-- Respectfully, Mahdi A. Mahdi
From: kdhan...@redhat.com Date: Fri, 5 Aug 2016 10:48:36 +0530 Subject: Re: [Gluster-users] Gluster 3.7.13 NFS Crash To: mahdi.ad...@outlook.com CC: gluster

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-04 Thread Krutika Dhananjay
i*
From: kdhan...@redhat.com Date: Thu, 4 Aug 2016 13:00:43 +0530 Subject: Re: [Gluster-users] Gluster 3.7.13 NFS Crash To: mahdi.ad...@outlook.com CC: gluster-user

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-04 Thread Krutika Dhananjay
https://db.tt/YP5qTGXk
-- Respectfully, *Mahdi A. Mahdi*
From: kdhan...@redhat.com Date: Thu, 4 Aug 2016 13:00:43 +0530 Subject: Re: [Gluster-users] Gluster 3.7.13 NFS Crash To: mahdi.ad...@out

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-04 Thread Mahdi Adnan
Hi, kindly check the following link for logs from all 7 bricks: https://db.tt/YP5qTGXk -- Respectfully, Mahdi A. Mahdi
From: kdhan...@redhat.com Date: Thu, 4 Aug 2016 13:00:43 +0530 Subject: Re: [Gluster-users] Gluster 3.7.13 NFS Crash To: mahdi.ad...@outlook.com CC: gluster-users

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-04 Thread Mahdi Adnan
(gdb) p local->inode_list[1]
$5 = (inode_t *) 0x0
(gdb)
-- Respectfully, Mahdi A. Mahdi
From: kdhan...@redhat.com Date: Thu, 4 Aug 2016 12:43:10 +0530 Subject: Re: [Gluster-users] Gluster 3.7.13 NFS Crash To: mahdi.ad...@outlook.com CC: gluster-users@gluster.org
OK. Could you a

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-04 Thread Krutika Dhananjay
a test bench and see if it gets the same results.
-- Respectfully, *Mahdi A. Mahdi*
From: kdhan...@redhat.com Date: Wed, 3 Aug 2016 20:59:50 +0530 Subject: Re: [Gluster-users] Gluster 3.7.13 NFS Cras

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-03 Thread Mahdi Adnan
ter-users] Gluster 3.7.13 NFS Crash To: mahdi.ad...@outlook.com CC: gluster-users@gluster.org
2016-08-03 22:33 GMT+02:00 Mahdi Adnan <mahdi.ad...@outlook.com>: Yeah, only 3 for now, running in 3-way replica; around 5 MB/s (900 IOPS) write and 3 MB/s (250 IOPS) r

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-03 Thread Gandalf Corvotempesta
2016-08-03 22:33 GMT+02:00 Mahdi Adnan: > Yeah, only 3 for now, running in 3-way replica; around 5 MB/s (900 IOPS) write and 3 MB/s (250 IOPS) read, and the disks are 900GB 10K SAS. 5 MB => five megabytes/s? Less than an old 4x DVD reader? Really? Are you sure?

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-03 Thread Mahdi Adnan
] Gluster 3.7.13 NFS Crash To: mahdi.ad...@outlook.com CC: gluster-users@gluster.org
2016-08-03 21:40 GMT+02:00 Mahdi Adnan <mahdi.ad...@outlook.com>: Hi, currently we have three UCS C220 M4, dual Xeon CPUs (48 cores), 32GB of RAM, 8

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-03 Thread Gandalf Corvotempesta
2016-08-03 21:40 GMT+02:00 Mahdi Adnan: > Hi, currently we have three UCS C220 M4, dual Xeon CPUs (48 cores), 32GB of RAM, 8x900GB spindles, with Intel X520 dual 10G ports. We are planning to migrate more VMs and increase the number of servers in the cluster as

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-03 Thread Serkan Çoban
ng on with the NFS mount.
-- Respectfully, Mahdi A. Mahdi
From: gandalf.corvotempe...@gmail.com Date: Wed, 3 Aug 2016 20:25:56 +0200 Subject: Re: [Gluster-users] Gluster 3.7.13 NFS Crash To: mahdi.ad...@outlook.com CC: kdhan...@redhat.

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-03 Thread Mahdi Adnan
. -- Respectfully, Mahdi A. Mahdi
From: gandalf.corvotempe...@gmail.com Date: Wed, 3 Aug 2016 20:25:56 +0200 Subject: Re: [Gluster-users] Gluster 3.7.13 NFS Crash To: mahdi.ad...@outlook.com CC: kdhan...@redhat.com; gluster-users@gluster.org
2016-08-03 17:02 GMT+02:00

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-03 Thread Gandalf Corvotempesta
2016-08-03 17:02 GMT+02:00 Mahdi Adnan : > the problem is, the current setup is used in a production environment, and > switching the mount point of +50 VMs from native nfs to nfs-ganesha is not > going to be smooth and without downtime, so i really appreciate your >

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-03 Thread Mahdi Adnan
Hi, unfortunately no, but I can set up a test bench and see if it gets the same results. -- Respectfully, Mahdi A. Mahdi
From: kdhan...@redhat.com Date: Wed, 3 Aug 2016 20:59:50 +0530 Subject: Re: [Gluster-users] Gluster 3.7.13 NFS Crash To: mahdi.ad...@outlook.com CC: gluster-users

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-03 Thread Krutika Dhananjay
e, so I really appreciate your thoughts on this matter.
-- Respectfully, *Mahdi A. Mahdi*
From: mahdi.ad...@outlook.com To: kdhan...@redhat.com Date: Tue, 2 Aug 2016 08:44:16 +0300 CC: gluster-users

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-03 Thread Mahdi Adnan
-users@gluster.org Subject: Re: [Gluster-users] Gluster 3.7.13 NFS Crash
Hi, The NFS just crashed again; latest bt:
(gdb) bt
#0  0x7f0b71a9f210 in pthread_spin_lock () from /lib64/libpthread.so.0
#1  0x7f0b72c6fcd5 in fd_anonymous (inode=0x0) at fd.c:804
#2  0x7f0b64ca5787

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-01 Thread Mahdi Adnan
an upload. -- Respectfully, Mahdi A. Mahdi
From: kdhan...@redhat.com Date: Mon, 1 Aug 2016 18:39:27 +0530 Subject: Re: [Gluster-users] Gluster 3.7.13 NFS Crash To: mahdi.ad...@outlook.com CC: gluster-users@gluster.org
Sorry I didn't make myself clear. The reason I asked YOU to do it is because I tried

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-01 Thread Krutika Dhananjay
> How do I get the values of the variables below? I can't get them from gdb.
-- Respectfully, *Mahdi A. Mahdi*
From: kdhan...@redhat.com Date: Mon, 1 Aug 2016 15:51:38 +0530 Subject: Re

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-01 Thread Mahdi Adnan
Hi, how do I get the values of the variables below? I can't get them from gdb. -- Respectfully, Mahdi A. Mahdi
From: kdhan...@redhat.com Date: Mon, 1 Aug 2016 15:51:38 +0530 Subject: Re: [Gluster-users] Gluster 3.7.13 NFS Crash To: mahdi.ad...@outlook.com CC: gluster-users

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-08-01 Thread Krutika Dhananjay
Could you also print and share the values of the following variables from the backtrace, please:
i. cur_block
ii. last_block
iii. local->first_block
iv. odirect
v. fd->flags
vi. local->call_count
-Krutika
On Sat, Jul 30, 2016 at 5:04 PM, Mahdi Adnan wrote: > Hi, > > I
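Those values can typically be pulled from the core file with gdb; the following session is a sketch, with a hypothetical core path, and `frame 2` assuming the shard_common_inode_write_do frame from the backtraces in this thread:

```gdb
# Load the core against the gluster NFS server binary (paths are examples):
#   gdb /usr/sbin/glusterfs /core.glusterfs.12345
(gdb) bt                        # confirm the crashing thread's stack
(gdb) frame 2                   # select the shard_common_inode_write_do frame
(gdb) print cur_block
(gdb) print last_block
(gdb) print local->first_block
(gdb) print odirect
(gdb) print fd->flags
(gdb) print local->call_count
```

If gdb reports the variables as optimized out, installing the matching glusterfs-debuginfo package for the exact build usually makes them visible.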

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-07-30 Thread Soumya Koduri
Inode stored in the shard xlator local is NULL. CCing Krutika to comment. Thanks, Soumya
(gdb) bt
#0  0x7f196acab210 in pthread_spin_lock () from /lib64/libpthread.so.0
#1  0x7f196be7bcd5 in fd_anonymous (inode=0x0) at fd.c:804
#2  0x7f195deb1787 in shard_common_inode_write_do

[Gluster-users] Gluster 3.7.13 NFS Crash

2016-07-30 Thread Mahdi Adnan
Hi, I would really appreciate it if someone could help me fix my NFS crash; it is happening a lot and causing many issues for my VMs. The problem is that every few hours the native NFS server crashes and the volume becomes unavailable from the affected node unless I restart glusterd. The volume is used by VMware ESXi