- Original Message -
> From: "Raghavendra Gowdappa"
> To: "Joe Julian"
> Cc: "Raghavendra G" , "Pranith Kumar Karampuri"
> , "Soumya Koduri"
> , rta...@redhat.com, gluster-us...@gluster.org, "Gluster
> Devel" ,
> "Oleksandr Natalenko"
> Sent: Friday, October 14, 2016 8:29:18 AM
> Sub
I have a patch [1] which I think fixes the problem in write-behind. If possible,
could one of you please let me know whether it fixes the issue (with
client.io-threads turned on)?
[1] http://review.gluster.org/15579
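For anyone willing to test: a minimal sketch of toggling the option in question around the reproducer, assuming a hypothetical volume name "testvol" (only the volume name is made up; the option is the one discussed in this thread).
  # enable client-side io-threads so the suspected race can be hit
  gluster volume set testvol performance.client-io-threads on
  # ...run the rsync / flush workload that used to hang...
  # disable it again to compare behaviour
  gluster volume set testvol performance.client-io-threads off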
- Original Message -
> From: "Joe Julian"
> To: "Raghavendra G" , "Pran
Interesting. I just encountered a hanging flush problem, too. Probably
unrelated, but if you want to give this a try: a temporary workaround I found
was to drop caches, "echo 3 > /proc/sys/vm/drop_caches", on all the servers
prior to the flush operation.
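A sketch of applying that workaround across the servers, assuming ssh access and placeholder host names:
  # sync first so dirty pages are written out, then drop page cache,
  # dentries and inodes on every server before the flush operation
  for h in server1 server2 server3; do
      ssh "$h" 'sync; echo 3 > /proc/sys/vm/drop_caches'
  done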
On February 4, 2016 10:06:45 PM PST, Raghavend
+soumyak, +rtalur.
On Fri, Jan 29, 2016 at 2:34 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
>
>
> On 01/28/2016 05:05 PM, Pranith Kumar Karampuri wrote:
>
>> With baul jianguo's help I am able to see that FLUSH fops are hanging for
>> some reason.
>>
>> pk1@localhost - ~/Downloads
>
On 01/28/2016 05:05 PM, Pranith Kumar Karampuri wrote:
With baul jianguo's help I am able to see that FLUSH fops are hanging
for some reason.
pk1@localhost - ~/Downloads
17:02:13 :) ⚡ grep "unique=" client-dump1.txt
unique=3160758373
unique=2073075682
unique=1455047665
unique=0
pk1@localhost - ~/Downloads
17:02:21 :) ⚡ grep "unique=" client-dump-0.
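For context, a sketch of the kind of comparison being done with those unique= ids, using hypothetical dump file names: a fop whose id still shows up in a second dump taken a few minutes later has most likely never completed.
  # hypothetical names for two statedumps taken a few minutes apart
  grep "unique=" dump-A.txt | sort -u > a.ids
  grep "unique=" dump-B.txt | sort -u > b.ids
  # ids present in both dumps correspond to requests that are stuck
  comm -12 a.ids b.ids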
Here is the client glusterfs gdb info; the main thread id is 70800.
In the top output, thread 70800 shows 1263:30 of CPU time and thread 70810
shows 1321:10; the other threads' times are much smaller.
(gdb) thread apply all bt
Thread 9 (Thread 0x7fc21acaf700 (LWP 70801)):
#0 0x7fc21cc0c535 in sigwait () from /lib64/libpthread.so.0
#
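A sketch of how those per-thread CPU times can be collected, assuming the pid 70800 reported above; the LWP numbers shown by top match the LWP numbers gdb prints for each thread.
  # one batch snapshot of per-thread CPU time (TIME+ column) for the client
  top -b -n 1 -H -p 70800
  # or, equivalently, via ps (TIME column per thread)
  ps -L -o lwp,pcpu,time,comm -p 70800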
On 01/28/2016 02:59 PM, baul jianguo wrote:
http://pastebin.centos.org/38941/
Client statedump: only pids 27419, 168030, and 208655 hang; you can search
for these pids in the statedump file.
Could you take one more statedump please?
Pranith
On Wed, Jan 27, 2016 at 4:35 PM, Pranith Kumar Karampuri
wrote:
http://pastebin.centos.org/38941/
Client statedump: only pids 27419, 168030, and 208655 hang; you can search
for these pids in the statedump file.
On Wed, Jan 27, 2016 at 4:35 PM, Pranith Kumar Karampuri
wrote:
> Hi,
> If the hang appears on enabling client side io-threads then it could
> be because o
Hi,
If the hang appears on enabling client-side io-threads, then it
could be because of some race that is seen when io-threads is enabled on
the client side. Two things will help us debug this issue:
1) thread apply all bt inside gdb (with debuginfo rpms/debs installed)
2) Complete statedump
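A sketch of collecting both items on the client, assuming an RPM-based distro for the debuginfo step and a single glusterfs mount process; package names and dump locations can differ per version.
  # 1) backtraces of all threads, with symbols
  debuginfo-install -y glusterfs glusterfs-fuse
  gdb -p "$(pidof glusterfs)" -batch -ex "thread apply all bt" > client-bt.txt
  # 2) complete statedump of the fuse client (usually written under /var/run/gluster/)
  kill -USR1 "$(pidof glusterfs)"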
The client statedump is at http://pastebin.centos.org/38671/
On Mon, Jan 25, 2016 at 3:33 PM, baul jianguo wrote:
> 3.5.7 also hangs; only the flush op hung. Yes, with
> performance.client-io-threads off, there is no hang.
>
> The hang does not depend on the client kernel version.
>
> One client statedump about
3.5.7 also hangs; only the flush op hung. Yes, with
performance.client-io-threads off, there is no hang.
The hang does not depend on the client kernel version.
Here is one client statedump of a flush op; is anything abnormal?
[global.callpool.stack.12]
uid=0
gid=0
pid=14432
unique=16336007098
lk-owner=77cb199aa36f3641
op
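The pid= field in such a callpool stack should correspond to the application process that issued the fop on the client; a sketch of checking whether that process is stuck in uninterruptible sleep, using the pid 14432 from the snippet above.
  # 'D' in the state column means uninterruptible sleep (blocked on I/O)
  ps -o pid,state,wchan:32,cmd -p 14432
  # kernel-side stack of the blocked task (run as root)
  cat /proc/14432/stack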
With "performance.client-io-threads" set to "off" no hangs occurred in 3
rsync/rm rounds. Could that be some fuse-bridge lock race? Will bring that
option to "on" back again and try to get full statedump.
On Thursday, January 21, 2016, 14:54:47 EET Raghavendra G wrote:
> On Thu, Jan 21, 2016 at 10:
On Thu, Jan 21, 2016 at 10:49 AM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
>
>
> On 01/18/2016 02:28 PM, Oleksandr Natalenko wrote:
>
>> XFS. Server side works OK, I'm able to mount volume again. Brick is 30%
>> full.
>>
>
> Oleksandr,
> Will it be possible to get the statedump
On 01/18/2016 02:28 PM, Oleksandr Natalenko wrote:
XFS. Server side works OK, I'm able to mount volume again. Brick is 30% full.
Oleksandr,
Will it be possible to get the statedump output of the client and the bricks
the next time it happens?
https://github.com/gluster/glusterfs/blob/master/doc/
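A sketch of the brick-side dump, with a placeholder volume name; the linked document describes where the files end up.
  # brick statedumps, generated server-side for every brick of the volume
  gluster volume statedump myvol
  # (the client-side dump is taken with SIGUSR1 as shown earlier in the thread)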
XFS. Server side works OK, I'm able to mount volume again. Brick is 30% full.
On Monday, January 18, 2016, 15:07:18 EET baul jianguo wrote:
> What is your brick file system? And what is the status of the glusterfsd
> process and all its threads?
> I met the same issue when a client app such as rsync stays in D state, and
What is your brick file system? And what is the status of the glusterfsd
process and all its threads?
I met the same issue when a client app such as rsync stays in D state;
the brick process and related threads are also in D state,
and the brick device's disk utilization is 100%.
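A sketch of confirming those two observations on a brick server with plain procps/sysstat tools (nothing gluster-specific):
  # tasks stuck in uninterruptible sleep (state D), e.g. glusterfsd threads
  ps -eLo state,pid,lwp,wchan:32,comm | awk '$1 == "D"'
  # per-device utilisation; %util pinned near 100% matches the report above
  iostat -x 1 5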
On Sun, Jan 17, 2016 at 6:13 AM, Oleksandr Natalenko wrote:
Wrong assumption, rsync hung again.
On Saturday, January 16, 2016, 22:53:04 EET Oleksandr Natalenko wrote:
> One possible reason:
>
> cluster.lookup-optimize: on
> cluster.readdir-optimize: on
>
> I've disabled both optimizations, and at least as of now rsync still does
> its job with no issues. I
One possible reason:
cluster.lookup-optimize: on
cluster.readdir-optimize: on
I've disabled both optimizations, and at least as of now rsync still does its
job with no issues. I would like to find out which option causes such
behavior, and why. Will test more.
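For reference, a sketch of flipping those two options back off, with a placeholder volume name:
  # disable both optimizations mentioned above
  gluster volume set myvol cluster.lookup-optimize off
  gluster volume set myvol cluster.readdir-optimize off
  # reconfigured options show up in the volume info output
  gluster volume info myvol | grep -E "lookup-optimize|readdir-optimize"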
On Friday, January 15, 2016, 16:
Another observation: if rsyncing is resumed after a hang, rsync itself
hangs a lot faster because it stats the already-copied files. So the
reason may not be the writing itself, but also the massive stat load on
the GlusterFS volume.
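One way to check that hypothesis would be to count the syscalls a resumed rsync issues against the mount; the strace options below are standard, the paths are placeholders. A summary dominated by lstat/getdents would support the stat-heavy theory.
  # summarise syscall counts while rsync re-walks already-copied files
  strace -f -c -e trace=lstat,stat,fstat,getdents \
      rsync -a /source/dir/ /mnt/glusterfs/dir/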
On 15.01.2016 09:40, Oleksandr Natalenko wrote:
While doing rsync over m
Here is similar issue described on serverfault.com:
https://serverfault.com/questions/716410/rsync-crashes-machine-while-performing-sync-on-glusterfs-mounted-share
I've checked GlusterFS logs with no luck — as if nothing happened.
P.S. GlusterFS v3.7.6.
On 15.01.2016 09:40, Oleksandr Natalenko wrote: