Here's the result of running strace -f -c -p <gluster PID>:
Process 1993 detached
Process 1994 detached
Process 1996 detached
Process 2004 detached
Process 2006 detached
Process 2013 detached
Process 3469 detached
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 38.39   23.261809       11899      1955        86 lgetxattr
 28.31   17.155779       35667       481        52 futex
 16.58   10.046922       70258       143           epoll_wait
 15.33    9.292713      929271        10           nanosleep
  1.36    0.826873      826873         1           restart_syscall
  0.01    0.007801          51       154           getdents
  0.00    0.002299           4       572           readv
  0.00    0.002176           1      1787           lstat
  0.00    0.001078          90        12           read
  0.00    0.000896           6       141           writev
  0.00    0.000472           3       142           lseek
  0.00    0.000207           2       126           clock_gettime
  0.00    0.000194          49         4           munmap
  0.00    0.000079           4        22           fcntl
  0.00    0.000000           0        26           open
  0.00    0.000000           0        25           close
  0.00    0.000000           0        20           stat
  0.00    0.000000           0         4           fstat
  0.00    0.000000           0         4           mmap
  0.00    0.000000           0         4           statfs
------ ----------- ----------- --------- --------- ----------------
100.00   60.599298                  5633       138 total
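For anyone triaging a summary like this, a small sketch to pull out the dominant syscalls (the 20% threshold and the strace_summary variable are my own; the sample lines just mirror the heaviest rows of the table above):

```shell
# Filter an `strace -c` summary for syscalls consuming more than 20% of time.
# Columns: %time, seconds, usecs/call, calls, [errors], syscall (last field).
strace_summary='
 38.39   23.261809       11899      1955        86 lgetxattr
 28.31   17.155779       35667       481        52 futex
 16.58   10.046922       70258       143           epoll_wait
'
echo "$strace_summary" | awk '$1+0 > 20 {print $NF}'
# prints: lgetxattr and futex, the two calls dominating this trace
```

Here lgetxattr (extended-attribute reads, which AFR uses for its changelog checks) stands out, which is consistent with a self-heal-heavy workload.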
-----Original Message-----
From: Pavan T C [mailto:t...@gluster.com]
Sent: Wednesday, July 27, 2011 6:18 PM
To: Yongjoon Kong (공용준) / Cloud Computing Technology Division / SKCC
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] gluster's cpu load is too high on a specific brick daemon
On Wednesday 27 July 2011 01:45 PM, Yongjoon Kong (공용준) / Cloud Computing Technology Division / SKCC wrote:
> Hello,
>
> I'm running Gluster in distributed-replicated mode (4 brick servers).
>
> And 10 client servers mount the Gluster volume from the brick1 server
> (mount -t glusterfs brick1:/volume /mnt).
>
> And there's a very strange thing.
>
> brick1's CPU load is very high. In 'top', it's over 400%,
> but the other bricks' load is very low.
It is possible that an AFR self heal is getting triggered.
On the brick, run the following command:
strace -f -c -p <pid>
and provide the output.
Pavan
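For anyone following along, the attach step above might look like this in practice (a sketch only: the pgrep pattern is an assumption, and you should pick the PID of whichever brick daemon shows the high CPU; this needs a live Gluster brick, so adjust before running):

```shell
# Find the busy brick daemon's PID, then attach with per-syscall counting.
BRICK_PID=$(pgrep -f glusterfsd | head -n 1)   # assumed process name; verify with top/ps
strace -f -c -p "$BRICK_PID"
# Let it sample for a while, then press Ctrl-C to detach and print the summary table.
```

The -f flag follows all threads of the daemon, and -c aggregates counts instead of printing every call, which keeps the tracing overhead manageable on a loaded server.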
>
> Is there any reason for this? Or is there any way to track down this issue?
>
> Thanks.
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users