From: kdhan...@redhat.com
Date: Tue, 9 Aug 2016 11:02:44 +0530
Subject: Re: [Gluster-users] Gluster 3.7.13 NFS Crash
To: mahdi.ad...@outlook.com
CC: gluster-users@gluster.org
Well, I'm not entirely sure it is a setup-related issue. If you have …

… file a bug report? Or maybe it's an issue with my setup only?
--
Respectfully
Mahdi A. Mahdi
From: kdhan...@redhat.com
Date: Mon, 8 Aug 2016 16:33:19 +0530
Subject: Re: [Gluster-users] Gluster 3.7.13 NFS Crash
To: mahdi.ad...@outlook.com
CC: gluster-users@gluster.org
Hi,

Sorry I haven't had the chance to look into this issue last week. Do you mind
raising a bug in upstream …
3716            anon_fd = fd_anonymous (local->inode_list[i]);
(gdb) p local->fop
$1 = GF_FOP_WRITE
(gdb)
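
For anyone poking at the same core: a few more prints in that frame would show
which shard slot handed fd_anonymous() the bad inode. A sketch using only
standard gdb commands and the names visible in the listing above, not output
from this core:

(gdb) p i
(gdb) p local->first_block
(gdb) p local->last_block
(gdb) p local->inode_list[i]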
--
Respectfully
Mahdi A. Mahdi
From: kdhan...@redhat.com
Date: Fri, 5 Aug 2016 10:48:36 +0530
Subject: Re: [Gluster-users] Gluster 3.7.13 NFS Crash
To: mahdi.ad...@outlook.com
CC: gluster-users@gluster.org

…
Hi,

Kindly check the following link for all 7 bricks' logs:
https://db.tt/YP5qTGXk
--
Respectfully
Mahdi A. Mahdi
From: kdhan...@redhat.com
Date: Thu, 4 Aug 2016 13:00:43 +0530
Subject: Re: [Gluster-users] Gluster 3.7.13 NFS Crash
To: mahdi.ad...@outlook.com
CC: gluster-users@gluster.org
(gdb) p local->inode_list[1]
$5 = (inode_t *) 0x0
(gdb)
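
Rather than probing one index at a time, gdb's artificial-array syntax can
dump every slot at once; a sketch, assuming local->call_count holds the number
of entries (the same variable Krutika asks about elsewhere in this thread):

(gdb) p local->call_count
(gdb) p *local->inode_list@local->call_count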
--
Respectfully
Mahdi A. Mahdi
From: kdhan...@redhat.com
Date: Thu, 4 Aug 2016 12:43:10 +0530
Subject: Re: [Gluster-users] Gluster 3.7.13 NFS Crash
To: mahdi.ad...@outlook.com
CC: gluster-users@gluster.org
OK.

Could you …
2016-08-03 22:33 GMT+02:00 Mahdi Adnan <mahdi.ad...@outlook.com>:
> Yeah, only 3 for now, running in 3 replica.
> Around 5MB (900 IOps) write and 3MB (250 IOps) read, and the disks are 900GB
> 10K SAS.

5MB => five megabytes/s? Less than an old 4x DVD reader? Really? Are you sure?
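
For what it's worth, the arithmetic suggests the comparison is misleading
rather than the numbers being wrong (my own back-of-the-envelope figures, not
from the thread): 5 MB/s over 900 IOps is roughly 5.7 KB per write, and 3 MB/s
over 250 IOps is about 12 KB per read, i.e. small random I/O from many VMs. A
4x DVD's ~5.5 MB/s is purely sequential, so low MB/s at high IOps is normal
for this kind of workload even on 10K SAS disks.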
2016-08-03 21:40 GMT+02:00 Mahdi Adnan <mahdi.ad...@outlook.com>:
> Hi,
>
> Currently, we have three UCS C220 M4, dual Xeon CPU (48 cores), 32GB of RAM,
> 8x900GB spindles, with Intel X520 dual 10G ports. We are planning to migrate
> more VMs and increase the number of servers in the cluster as …
… going on with the NFS mount.
>
>
> --
>
> Respectfully
> Mahdi A. Mahdi
>
--
Respectfully
Mahdi A. Mahdi
> From: gandalf.corvotempe...@gmail.com
> Date: Wed, 3 Aug 2016 20:25:56 +0200
> Subject: Re: [Gluster-users] Gluster 3.7.13 NFS Crash
> To: mahdi.ad...@outlook.com
> CC: kdhan...@redhat.com; gluster-users@gluster.org
2016-08-03 17:02 GMT+02:00 Mahdi Adnan <mahdi.ad...@outlook.com>:
> the problem is, the current setup is used in a production environment, and
> switching the mount point of +50 VMs from native NFS to nfs-ganesha is not
> going to be smooth and without downtime, so I really appreciate your
> thoughts on this matter.
Hi,

Unfortunately no, but I can set up a test bench and see if it gets the same
results.

--

Respectfully
Mahdi A. Mahdi

From: kdhan...@redhat.com
Date: Wed, 3 Aug 2016 20:59:50 +0530
Subject: Re: [Gluster-users] Gluster 3.7.13 NFS Crash
To: mahdi.ad...@outlook.com
CC: gluster-users@gluster.org
From: mahdi.ad...@outlook.com
To: kdhan...@redhat.com
Date: Tue, 2 Aug 2016 08:44:16 +0300
CC: gluster-users@gluster.org
Subject: Re: [Gluster-users] Gluster 3.7.13 NFS Crash
Hi,

The NFS just crashed again; latest bt:

(gdb) bt
#0  0x7f0b71a9f210 in pthread_spin_lock () from /lib64/libpthread.so.0
#1  0x7f0b72c6fcd5 in fd_anonymous (inode=0x0) at fd.c:804
#2  0x7f0b64ca5787 in …
--
Respectfully
Mahdi A. Mahdi
From: kdhan...@redhat.com
Date: Mon, 1 Aug 2016 18:39:27 +0530
Subject: Re: [Gluster-users] Gluster 3.7.13 NFS Crash
To: mahdi.ad...@outlook.com
CC: gluster-users@gluster.org
Sorry I didn't make myself clear. The reason I asked YOU to do it is because I
tried …
Hi,

How do I get the results of the below variables? I can't get the results from
gdb.

--

Respectfully
Mahdi A. Mahdi

From: kdhan...@redhat.com
Date: Mon, 1 Aug 2016 15:51:38 +0530
Subject: Re: [Gluster-users] Gluster 3.7.13 NFS Crash
To: mahdi.ad...@outlook.com
CC: gluster-users@gluster.org
Could you also print and share the values of the following variables from
the backtrace please:
i. cur_block
ii. last_block
iii. local->first_block
iv. odirect
v. fd->flags
vi. local->call_count
-Krutika
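
For anyone else who lands here: these values live in the
shard_common_inode_write_do frame of the backtrace, so a session along the
following lines should print them. A sketch only; frame 2 matches the
backtraces posted in this thread, but the number may differ for another core:

(gdb) frame 2
(gdb) p cur_block
(gdb) p last_block
(gdb) p local->first_block
(gdb) p odirect
(gdb) p fd->flags
(gdb) p local->call_count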
On Sat, Jul 30, 2016 at 5:04 PM, Mahdi Adnan <mahdi.ad...@outlook.com> wrote:
> Hi,
>
> I …
The inode stored in the shard xlator's local is NULL. CCing Krutika to comment.

Thanks,
Soumya

(gdb) bt
#0  0x7f196acab210 in pthread_spin_lock () from /lib64/libpthread.so.0
#1  0x7f196be7bcd5 in fd_anonymous (inode=0x0) at fd.c:804
#2  0x7f195deb1787 in shard_common_inode_write_do …
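
That reading matches the crash signature: fd_anonymous() takes the inode's
spinlock straight away (fd.c:804 in this build), so a NULL inode dies inside
pthread_spin_lock() before any validation can run. Below is a minimal,
self-contained sketch of that failure shape; the types are stand-ins, not the
real GlusterFS structures:

#include <pthread.h>
#include <stdlib.h>

/* Stand-ins, just enough to mirror the backtrace above. */
typedef struct inode { pthread_spinlock_t lock; } inode_t;
typedef struct fd    { inode_t *inode; } fd_t;

/* Sketch of fd_anonymous(): it locks inode->lock before looking up or
 * creating the anonymous fd, so when inode == NULL the very first
 * pthread_spin_lock() dereferences a null pointer, which is frame #0
 * of the posted backtrace. */
static fd_t *
fd_anonymous_sketch (inode_t *inode)
{
        fd_t *fd;

        pthread_spin_lock (&inode->lock);   /* SIGSEGV when inode == NULL */
        fd = calloc (1, sizeof (*fd));
        fd->inode = inode;
        pthread_spin_unlock (&inode->lock);
        return fd;
}

int
main (void)
{
        /* One slot left NULL, as gdb showed for local->inode_list[1]. */
        inode_t *inode_list[2] = { NULL, NULL };

        fd_anonymous_sketch (inode_list[1]); /* crashes like the NFS server */
        return 0;
}

If that is right, the fix belongs in the shard xlator (making sure every
inode_list[] entry is resolved before the write is wound), not in fd.c.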
Hi,

I would really appreciate it if someone could help me fix my NFS crash; it is
happening a lot and causing lots of issues for my VMs. The problem is that
every few hours the native NFS server crashes and the volume becomes
unavailable from the affected node unless I restart glusterd. The volume is
used by VMware ESXi …
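
In the meantime, capturing a backtrace automatically from each crash would
make the reports easier to file. Something along these lines, assuming core
dumps are enabled; both the binary path and the core location are guesses for
this particular setup:

gdb -batch -ex "bt full" /usr/sbin/glusterfs /path/to/core.<pid>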