Re: [Gluster-users] [ovirt-users] Raid-5 like gluster method?

2014-09-28 Thread Mingfan Lu
Use the 3.6 disperse feature. It is at beta2 now, but you could use it once it goes GA.
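
Roughly, a RAID-5-like layout would be a 2+1 dispersed volume. A minimal sketch for 3.6
(the volume name and brick paths are placeholders):

  # one brick per node; any single brick can be lost without losing data
  gluster volume create dispvol disperse 3 redundancy 1 \
      node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/b1
  gluster volume start dispvol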

On Wed, Sep 24, 2014 at 2:55 PM, Sahina Bose sab...@redhat.com wrote:

  [+gluster-users]

 On 09/24/2014 11:59 AM, Demeter Tibor wrote:

  Hi,

  Is there any method in GlusterFS that is like RAID-5?

  I have three nodes; each node has 5 TB of disk. I would like to utilize all
 of the space with redundancy, like RAID-5.
 If that is not possible, can I get RAID-6-like redundancy within three nodes
 (two bricks per node)?
 Thanks in advance,

  Tibor



 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users



 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] gluster 3.4 or 3.5 compatibility with 3.3.x

2014-05-06 Thread Mingfan Lu
Is your 3.4 cluster newly deployed, or upgraded from 3.3?
If it is newly deployed, you cannot use a 3.3 client to mount it, because the
op-version is set to 2.
If it is an upgraded one, you can use a 3.3 client to mount it, because the
op-version is set to 1.
The op-version was newly introduced in 3.4.
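
You can check which case you are in on any server node; this assumes the default
glusterd working directory:

  # operating-version=2 means 3.3 clients will be rejected;
  # operating-version=1 means 3.3 clients can still fetch the volfile
  grep operating-version /var/lib/glusterd/glusterd.info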


On Tue, May 6, 2014 at 3:11 PM, Cristiano Corsani ccors...@gmail.comwrote:

 Hi all. I have many 3.3.1 clients (I can't upgrade because of an old
 distro version), and I would like to mount a 3.4 or 3.5 volume.

 From the documentation it seems that 3.3.x is compatible with 3.4, but it
 does not work. The error is "unable to get the volume file from
 server".
 My systems (server and client) are all x86.

 Thank you

 --
 Cristiano Corsani, PhD
 -
 http://www.cryx.it
 i...@cryx.it
 ccors...@gmail.com
 --
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] server.allow-insecure doesn't work in 3.4.2?

2014-04-27 Thread Mingfan Lu
Vijay,
  Is there a Bugzilla item for this issue? Then I could track whether there is
a bugfix for it and backport it to my deployment :)


On Wed, Apr 23, 2014 at 6:34 AM, Vijay Bellur vbel...@redhat.com wrote:

 On 04/22/2014 01:53 PM, Vijay Bellur wrote:

 On 04/21/2014 11:42 PM, Mingfan Lu wrote:

 Yes. After I restarted the volume, it works. But that is only a
 workaround, because sometimes it is impossible to restart the volume in a
 production environment.


 Right, ideally it should not require a restart. Can you please provide
 output of gluster volume info for this volume?


 Please ignore this. I observed your volume configuration in the first post
 on this thread.

 -Vijay


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] server.allow-insecure doesn't work in 3.4.2?

2014-04-22 Thread Mingfan Lu
Yes. After I restarted the volume, it works. But that is only a workaround,
because sometimes it is impossible to restart the volume in a production
environment.
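
For anyone hitting the same thing, this is roughly the full sequence as I understand it
from the release notes and this thread; test_auth is my test volume, and glusterd.vol
may live elsewhere on other distributions:

  gluster volume set test_auth server.allow-insecure on
  # add "option rpc-auth-allow-insecure on" to the "volume management"
  # section of /etc/glusterfs/glusterd.vol on every server, then:
  service glusterd restart
  # in my case the bricks only honoured the option after a volume restart:
  gluster volume stop test_auth && gluster volume start test_auth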


On Tue, Apr 22, 2014 at 2:09 PM, Humble Devassy Chirammal 
humble.deva...@gmail.com wrote:

 Hi Mingfan,

 Can you please try to restart the subjected volume [1]  and find the
 result?

 http://review.gluster.org/#/c/7412/7/doc/release-notes/3.5.0.md

 --Humble


 On Tue, Apr 22, 2014 at 10:46 AM, Mingfan Lu mingfan...@gmail.com wrote:

 I saw something in
 https://forge.gluster.org/gluster-docs-project/pages/GlusterFS_34_Release_Notes
  I wonder whether I should restart the glusterd?
 Known Issues:

-

The following configuration changes are necessary for qemu and samba
integration with libgfapi to work seamlessly:

 1) gluster volume set volname server.allow-insecure on

 2) Edit /etc/glusterfs/glusterd.vol to contain this line:
option rpc-auth-allow-insecure on

Post 2), restarting glusterd would be necessary.





 On Tue, Apr 22, 2014 at 11:55 AM, Mingfan Lu mingfan...@gmail.comwrote:

 I have created a volume named test_auth and set server.allow-insecure on

 Volume Name: test_auth
 Type: Distribute
 Volume ID: d9bdc43e-15ce-4072-8d89-a34063e82427
 Status: Started
 Number of Bricks: 3
 Transport-type: tcp
 Bricks:
 Brick1: server1:/mnt/xfsd/test_auth
 Brick2: server2:/mnt/xfsd/test_auth
 Brick3: server3:/mnt/xfsd/test_auth
 Options Reconfigured:
 server.allow-insecure: on

 and then, I tried to mount the volume using client-bind-insecure option,
 but failed to mount.

 /usr/sbin/glusterfs --volfile-id=test_auth --volfile-server=server1
 /mnt/test_auth_bind_insecure --client-bind-insecure

 I got the error message in servers' logs:
 server1 : [2014-04-22 03:44:52.817165] E [addr.c:143:gf_auth]
 0-auth/addr: client is bound to port 37756 which is not privileged
 server2: [2014-04-22 03:44:52.810565] E [addr.c:143:gf_auth]
 0-auth/addr: client is bound to port 16852 which is not privileged
 server3: [2014-04-22 03:44:52.811844] E [addr.c:143:gf_auth]
 0-auth/addr: client is bound to port 17733 which is not privileged

 I got the error messages like:

 [2014-04-22 03:43:59.757024] W
 [client-handshake.c:1365:client_setvolume_cbk] 0-test_auth-client-1: failed
 to set the volume (Permission denied)
 [2014-04-22 03:43:59.757024] W
 [client-handshake.c:1391:client_setvolume_cbk] 0-test_auth-client-1: failed
 to get 'process-uuid' from reply dict
 [2014-04-22 03:43:59.757102] E
 [client-handshake.c:1397:client_setvolume_cbk] 0-test_auth-client-1:
 SETVOLUME on remote-host failed: Authentication failed
 [2014-04-22 03:43:59.757109] I
 [client-handshake.c:1483:client_setvolume_cbk] 0-test_auth-client-1:
 sending AUTH_FAILED event
 [2014-04-22 03:43:59.757116] E [fuse-bridge.c:4834:notify] 0-fuse:
 Server authenication failed. Shutting down.


 Could anyone give some comments on this issue?









 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users



___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] server.allow-insecure doesn't work in 3.4.2?

2014-04-21 Thread Mingfan Lu
I saw something in
https://forge.gluster.org/gluster-docs-project/pages/GlusterFS_34_Release_Notes
I wonder whether I should restart glusterd?
Known Issues:

   -

   The following configuration changes are necessary for qemu and samba
   integration with libgfapi to work seamlessly:

1) gluster volume set volname server.allow-insecure on

2) Edit /etc/glusterfs/glusterd.vol to contain this line:
   option rpc-auth-allow-insecure on

   Post 2), restarting glusterd would be necessary.





On Tue, Apr 22, 2014 at 11:55 AM, Mingfan Lu mingfan...@gmail.com wrote:

 I have created a volume named test_auth and set server.allow-insecure on

 Volume Name: test_auth
 Type: Distribute
 Volume ID: d9bdc43e-15ce-4072-8d89-a34063e82427
 Status: Started
 Number of Bricks: 3
 Transport-type: tcp
 Bricks:
 Brick1: server1:/mnt/xfsd/test_auth
 Brick2: server2:/mnt/xfsd/test_auth
 Brick3: server3:/mnt/xfsd/test_auth
 Options Reconfigured:
 server.allow-insecure: on

 Then I tried to mount the volume using the client-bind-insecure option,
 but the mount failed.

 /usr/sbin/glusterfs --volfile-id=test_auth --volfile-server=server1
 /mnt/test_auth_bind_insecure --client-bind-insecure

 I got the error message in servers' logs:
 server1 : [2014-04-22 03:44:52.817165] E [addr.c:143:gf_auth] 0-auth/addr:
 client is bound to port 37756 which is not privileged
 server2: [2014-04-22 03:44:52.810565] E [addr.c:143:gf_auth] 0-auth/addr:
 client is bound to port 16852 which is not privileged
 server3: [2014-04-22 03:44:52.811844] E [addr.c:143:gf_auth] 0-auth/addr:
 client is bound to port 17733 which is not privileged

 I got the error messages like:

 [2014-04-22 03:43:59.757024] W
 [client-handshake.c:1365:client_setvolume_cbk] 0-test_auth-client-1: failed
 to set the volume (Permission denied)
 [2014-04-22 03:43:59.757024] W
 [client-handshake.c:1391:client_setvolume_cbk] 0-test_auth-client-1: failed
 to get 'process-uuid' from reply dict
 [2014-04-22 03:43:59.757102] E
 [client-handshake.c:1397:client_setvolume_cbk] 0-test_auth-client-1:
 SETVOLUME on remote-host failed: Authentication failed
 [2014-04-22 03:43:59.757109] I
 [client-handshake.c:1483:client_setvolume_cbk] 0-test_auth-client-1:
 sending AUTH_FAILED event
 [2014-04-22 03:43:59.757116] E [fuse-bridge.c:4834:notify] 0-fuse: Server
 authenication failed. Shutting down.


 Could anyone give some comments on this issue?








___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] gfid different on subvolume

2014-03-31 Thread Mingfan Lu
I have seen some errors like gfid different on subvolume in my deployment.
e.g.
[2014-03-26 07:56:17.224262] W
[afr-common.c:1196:afr_detect_self_heal_by_iatt]
0-sh_ugc4_mams-replicate-1: /operation_1/video/2014/03/26/24/19: gfid
different on subvolume

my clients (3.3) have already backported the patches mentioned in
https://bugzilla.redhat.com/show_bug.cgi?id=907072

CHANGE: http://review.gluster.org/4459 (cluster/dht: ignore EEXIST error in
mkdir to avoid GFID mismatch) merged in master by Anand Avati

CHANGE: http://review.gluster.org/5849
http://review.gluster.org/5849 (cluster/dht: assign layout onto
missing directories too)


But I still saw such errors.

I thought these changes were relevant only to clients. Am I right? Do I need to
update my servers with the patched release as well?

Or am I missing something else?
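
In case it helps with debugging, this is how I am comparing the gfid on each replica
brick to confirm a mismatch (the brick path is just an example):

  # run on each brick that holds the affected path
  getfattr -n trusted.gfid -e hex \
      /mnt/xfsd/brick1/operation_1/video/2014/03/26/24/19
  # differing trusted.gfid values between bricks confirm the mismatch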
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] failed to mount for op-version

2014-03-28 Thread Mingfan Lu
When I use a GlusterFS 3.3 client to mount a GlusterFS 3.4.2 volume, the mount
fails with: 0-glusterd: Client x.x.x.x:709 (1 - 1) doesn't
support required op-version (2). Rejecting volfile request


Any comments?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] failed to mount for op-version

2014-03-28 Thread Mingfan Lu
No, we provide a service to our customers, and it is not convenient to ask them
all (200+) to update their clients.
Could we downgrade the op-version to 1 for a newly installed 3.4 cluster
that was not upgraded from 3.3?


On Fri, Mar 28, 2014 at 7:12 PM, Carlos Capriotti 
capriotti.car...@gmail.com wrote:

 Any chance you can install/update your client ?


 On Fri, Mar 28, 2014 at 12:06 PM, Mingfan Lu mingfan...@gmail.com wrote:

 when I using client of gluster3.3 to mount a gluster3.4.2 volume, I got a
 mount failed error for 0-glusterd: Client x.x.x.x:709 (1 - 1) doesn't
 support required op-version (2). Rejecting volfile request


 Any comments?

 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users



___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] failed to mount for op-version

2014-03-28 Thread Mingfan Lu
Since my cluster is newly installed, could I update the glusterd.info file
on each node to set operating-version=1?
It seems to work; the op-version of newly created volumes is 1 now.
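
A sketch of that edit, assuming the stock /var/lib/glusterd location; note that pinning
the op-version to 1 presumably also blocks features that require op-version 2:

  # on every server node
  service glusterd stop
  sed -i 's/^operating-version=.*/operating-version=1/' \
      /var/lib/glusterd/glusterd.info
  service glusterd start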


On Fri, Mar 28, 2014 at 7:12 PM, Carlos Capriotti 
capriotti.car...@gmail.com wrote:

 Any chance you can install/update your client ?


 On Fri, Mar 28, 2014 at 12:06 PM, Mingfan Lu mingfan...@gmail.com wrote:

 when I using client of gluster3.3 to mount a gluster3.4.2 volume, I got a
 mount failed error for 0-glusterd: Client x.x.x.x:709 (1 - 1) doesn't
 support required op-version (2). Rejecting volfile request


 Any comments?

 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users



___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] gluster3.3 cFuse client dying after gfid different on subvolume ?

2014-03-02 Thread Mingfan Lu
Hi
   I saw one of our clients die after we hit "gfid different on
subvolume". Here is the log from the client:

[2014-03-03 08:30:54.286225] W
[afr-common.c:1196:afr_detect_self_heal_by_iatt] 0-bj-mams-replicate-6:
/operation/video/2014/03/03/e7/35/71/81f0a6656c077a16cad663e540543a78.pfvmeta:
gfid different on subvolume
[2014-03-03 08:30:54.287017] I
[afr-self-heal-common.c:1970:afr_sh_post_nb_entrylk_gfid_sh_cbk]
0-bj-mams-replicate-6: Non blocking entrylks failed.
[2014-03-03 08:30:54.287910] W [inode.c:914:inode_lookup]
(--/usr/lib64/glusterfs/3.3.0.5rhs_iqiyi_7/xlator/debug/io-stats.so(io_stats_lookup_cbk+0xff)
[0x7fb6fd630adf]
(--/usr/lib64/glusterfs/3.3.0.5rhs_iqiyi_7/xlator/mount/fuse.so(+0xf3f8)
[0x7fb7019da3f8]
(--/usr/lib64/glusterfs/3.3.0.5rhs_iqiyi_7/xlator/mount/fuse.so(+0xf25b)
[0x7fb7019da25b]))) 0-fuse: inode not found

I saw a similar discussion on the mailing list, but I don't see a solution to
this issue:
http://www.gluster.org/pipermail/gluster-users/2013-June/036190.html

After unmounting and remounting, the client is alive again, but what I want to know is
why this happened. Is there any bugfix for this?
Thanks
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] glusterd dead but subsys locked

2014-02-25 Thread Mingfan Lu
In 3.3, when we run "service glusterd stop", it stops glusterd and all
glusterfsd processes (the server processes for the volumes), and it does not stop the
glusterfs processes (the client processes).
But in my installation of 3.4.2 it only stops glusterd: all
glusterfsd processes (server processes) are still alive (this is different
from 3.3; is that intended?), all glusterfs processes (client processes)
are alive too (which is reasonable), and a following "service glusterd status"
reports that the lock file still exists.
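
So to shut everything down on this 3.4.2 packaging I currently have to do something
like the following sketch (the glusterfsd init script may not exist on every
distribution):

  service glusterd stop     # stops only the management daemon here
  service glusterfsd stop   # stops the brick (server) processes, if the script exists
  # client mounts (glusterfs processes) are left alone on purpose;
  # unmount them separately if they also need to go away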




On Wed, Feb 26, 2014 at 7:31 AM, Joe Julian j...@julianfamily.org wrote:

 Why is that a problem? Having the ability to restart management daemon
 without interrupting clients is a common and useful thing.

 On February 25, 2014 3:23:31 PM PST, Viktor Villafuerte 
 viktor.villafue...@optusnet.com.au wrote:

 Hi,

 I've got the same problem here. I did a completely new installation (no
 upgrades) and when I do 'service glusterd stop' and after 'status' it
 gives the same message. In the meantime there are other about 5

 processes
 glusterfsd + 4 x glusterfs
 that are still running. I can issue 'service glusterfsd stop' which
 stops the 'glusterfsd' process but the others stay running. In the logs
 there are 'I' messages about bricks/hosts not being available.

 It seems that I'm unable to stop gluster unless I start manually killing
 processes :(

 v3.4.2-1 from Gluster/latest/RHEL6/6.5


 Also there other problems I can see, but I won't confuse this post with

 them..

 v


 On Tue 25 Feb 2014 11:20:09, Khoi Mai wrote:

  When you tried
 gluster3.4.2-1. did you mean you upgraded it in place while
  glusterd was running?  Are you missing glusterfs-libs, meaning it didn't
  upgrade with all your other glusterfs packages?  Lastly, did you reboot?



  Khoi Mai
  Union Pacific Railroad
  Distributed Engineering  Architecture
  Project Engineer



  **

  This email and any attachments may contain information that is 
 confidential and/or privileged for the sole use of the intended recipient.  
 Any use, review, disclosure, copying, distribution or reliance by others, 
 and any forwarding of this email or its contents, without the express 
 permission of the sender is strictly prohibited by law.  If you are not the 
 intended recipient, please contact the sender immediately, delete the 
 e-mail and destroy all copies.

  **


 --

  Gluster-users mailing list
  Gluster-users@gluster.org

  http://supercolony.gluster.org/mailman/listinfo/gluster-users



 --
 Sent from my Android device with K-9 Mail. Please excuse my brevity.

 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] glusterd dead but subsys locked

2014-02-25 Thread Mingfan Lu
I have tried both a clean installation and an upgrade from 3.3, and I see
the same problem.
Of course, I rebooted.




On Wed, Feb 26, 2014 at 1:20 AM, Khoi Mai khoi...@up.com wrote:

 When you tried gluster3.4.2-1. did you mean you upgraded it in place while
 glusterd was running?  Are you missing glusterfs-libs, meaning it didn't
 upgrade with all your other glusterfs packages?  Lastly, did you reboot?


 Khoi Mai
 Union Pacific Railroad
 Distributed Engineering  Architecture
 Project Engineer


 **

 This email and any attachments may contain information that is
 confidential and/or privileged for the sole use of the intended recipient.
 Any use, review, disclosure, copying, distribution or reliance by others,
 and any forwarding of this email or its contents, without the express
 permission of the sender is strictly prohibited by law. If you are not the
 intended recipient, please contact the sender immediately, delete the
 e-mail and destroy all copies.
 **

 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] glusterd dead but subsys locked

2014-02-24 Thread Mingfan Lu
I have tried the latest GlusterFS 3.4.2.
I found that I could start the service with *service glusterd start*,
and all volumes come up.
When I then ran *service glusterd stop*,
it reported that glusterd was stopped,
but when I called *service glusterd status*, I got
glusterd dead but subsys locked
I found that /var/lock/subsys/glusterd still existed while all brick processes
were still alive.

I don't think this is the bug
https://bugzilla.redhat.com/show_bug.cgi?id=960476
because I do see the lock file.

Any comments?
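
For what it's worth, this is how I am checking whether it is just a stale subsys lock
left behind by the init script; treat it as a rough sketch:

  service glusterd status           # "glusterd dead but subsys locked"
  pgrep -x glusterd                 # is the management daemon really gone?
  ls -l /var/lock/subsys/glusterd   # the lock file the init script checks
  # only if glusterd is genuinely not running, removing the stale lock
  # clears the misleading status message
  rm -f /var/lock/subsys/glusterd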
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] self-heal is not trigger and data incosistency?

2014-02-17 Thread Mingfan Lu
Hi,
  We have seen the following issue.
   One client (FUSE mount) updated a file; then another client (also a
FUSE mount) copied the same file, and the reader found that the copied
file was out of date.
   If the reader first ran ls to list the entries of the directory that
the target file is in, then it copied the latest version.
   The two clients' version:
  glusterfs-3.3.0-1
   The servers' version is glusterfs 3.3.0.5rhs.

   I remember that 3.3 supports automatic self-heal on the first
lookup, so calling cp should trigger the self-heal and fetch the
latest file. Why doesn't it?

   Any comments? I can provide whatever additional information you need.
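
In case it is useful, this is how I am checking the pending-heal state of such a file
directly on the replica bricks (the brick path is an example):

  # non-zero trusted.afr.<volname>-client-N values mean the file still has
  # pending changes that self-heal has not yet applied
  getfattr -d -m trusted.afr -e hex /brick/path/to/file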
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] client and server's version

2014-02-17 Thread Mingfan Lu
If my servers are upgraded to 3.4 while many clients still use 3.3,
is there any problem, or should I update all the clients?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] self-heal is not trigger and data incosistency?

2014-02-17 Thread Mingfan Lu
/done/2014-02-18.done
[2014-02-18 10:43:52.959485] I [afr-common.c:1340:afr_launch_self_heal]
0-search-prod-replicate-4: background  meta-data data self-heal triggered.
path:
/index_pipeline_searchengine/second_leaf/data/2014-02-18/shard0/restriction_info,
reason: lookup detected pending operations
[2014-02-18 10:43:53.177817] I [afr-self-heal-data.c:712:afr_sh_data_fix]
0-search-prod-replicate-4: no active sinks for performing self-heal on file
/index_pipeline_searchengine/second_leaf/data/2014-02-18/shard0/restriction_info
[2014-02-18 10:43:53.240084] I
[afr-self-heal-common.c:2159:afr_self_heal_completion_cbk]
0-search-prod-replicate-4: background  meta-data data self-heal completed
on
/index_pipeline_searchengine/second_leaf/data/2014-02-18/shard0/restriction_info
[2014-02-18 10:58:07.853097] I [afr-common.c:1340:afr_launch_self_heal]
0-search-prod-replicate-4: background  meta-data self-heal triggered. path:
/index_pipeline_searchengine/second_leaf/data/2014-02-18/shard0, reason:
lookup detected pending operations
[2014-02-18 10:58:07.948421] I
[afr-self-heal-common.c:2159:afr_self_heal_completion_cbk]
0-search-prod-replicate-4: background  meta-data self-heal completed on
/index_pipeline_searchengine/second_leaf/data/2014-02-18/shard0
[2014-02-18 10:58:09.136128] I [afr-common.c:1340:afr_launch_self_heal]
0-search-prod-replicate-4: background  meta-data self-heal triggered. path:
/index_pipeline_searchengine/second_leaf/data/2014-02-18/shard0, reason:
lookup detected pending operations
[2014-02-18 10:58:09.232103] I
[afr-self-heal-common.c:2159:afr_self_heal_completion_cbk]
0-search-prod-replicate-4: background  meta-data self-heal completed on
/index_pipeline_searchengine/second_leaf/data/2014-02-18/shard0



On Tue, Feb 18, 2014 at 2:10 PM, Mingfan Lu mingfan...@gmail.com wrote:

 Hi,
   We saw such a issue.
One client (fuse mount) updated one file, then the other client (also
 fuse mount) copied the same file while the reader found that the copied
 file was out-of-dated.
If the reader ran ls command to list the entries of the directory where
 the target file in,then it could copy the latest one.
Two clients's version:
   glusterfs-3.3.0-1
The server's version is glusterfs 3.3.0.5rhs

I remember that 3.3 could suport automatic self-heal in the first
 lookup, when calling cp, it should trigger the self-heal to get the
 lastest file, but why not?

Any comments? I could try provide enough information what you need.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] client and server's version

2014-02-17 Thread Mingfan Lu
thanks. I will try.


On Tue, Feb 18, 2014 at 3:04 PM, Vijay Bellur vbel...@redhat.com wrote:

 On 02/18/2014 11:40 AM, Mingfan Lu wrote:

 If my server is upgraded to 3.4 while many clients still use 3.3,
 is there any problem? or I should update all clients.


 3.4 and 3.3 are protocol compatible. We have not observed anything that
 would cause problems running 3.3 clients with 3.4 servers or vice-versa.

 Regards,
 Vijay


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] thread model of glusterfs brick server?

2014-02-10 Thread Mingfan Lu
I found that the pstack tool is what I need. Thanks.
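
For anyone searching the archive later, the idea is simply to match the pstree TIDs
against the per-thread stacks from pstack; the function names below are only examples
of what to grep for and may differ between releases:

  pstack 6226 > /tmp/brick-threads.txt
  # threads whose stack contains iot_worker() are the io-threads workers;
  # stacks with afr_* self-heal frames belong to self-heal activity
  grep -E 'Thread|iot_worker|afr_' /tmp/brick-threads.txt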


On Sat, Feb 8, 2014 at 5:13 PM, Mingfan Lu mingfan...@gmail.com wrote:

 Using pstree to list the threads of a brick server process,
 I got something like the output below. Can we tell which threads are the io-threads
 and which threads run self-heal? What about the others?
 (Or can we only go by the TID and the order in which they were created?)

 [root@10.121.56.105 ~]# pstree -p 6226
 glusterfsd(6226)─┬─{glusterfsd}(6227)
                  ├─{glusterfsd}(6228)
                  ├─{glusterfsd}(6229)
                  ├─{glusterfsd}(6230)
                  ├─{glusterfsd}(6243)
                  ├─{glusterfsd}(6244)
                  ├─{glusterfsd}(6247)
                  ├─{glusterfsd}(6262)
                  ├─{glusterfsd}(6314)
                  ├─{glusterfsd}(6315)
                  ├─{glusterfsd}(6406)
                  ├─{glusterfsd}(6490)
                  ├─{glusterfsd}(6491)
                  ├─{glusterfsd}(6493)
                  ├─{glusterfsd}(6494)
                  ├─{glusterfsd}(6506)
                  ├─{glusterfsd}(6531)
                  ├─{glusterfsd}(6532)
                  ├─{glusterfsd}(6536)
                  ├─{glusterfsd}(6539)
                  ├─{glusterfsd}(6540)
                  ├─{glusterfsd}(9127)
                  ├─{glusterfsd}(22470)
                  ├─{glusterfsd}(22471)
                  ├─{glusterfsd}(22472)
                  ├─{glusterfsd}(22473)
                  ├─{glusterfsd}(22474)
                  ├─{glusterfsd}(22475)
                  ├─{glusterfsd}(22476)
                  ├─{glusterfsd}(23217)
                  ├─{glusterfsd}(23218)
                  ├─{glusterfsd}(23219)
                  ├─{glusterfsd}(23220)
                  ├─{glusterfsd}(23221)
                  ├─{glusterfsd}(23222)
                  ├─{glusterfsd}(23223)
                  ├─{glusterfsd}(23328)
                  └─{glusterfsd}(23329)

 my volume is:

 Volume Name: prodvol
 Type: Distributed-Replicate
 Volume ID: f3fc24b3-23c7-430d-8ab1-81a646b1ce06
 Status: Started
 Number of Bricks: 17 x 3 = 51
 Transport-type: tcp
 Bricks:
 ...
 Options Reconfigured:
 performance.io-thread-count: 32
 auth.allow: *,10.121.48.244,10.121.48.82
 features.limit-usage: /:400TB
 features.quota: on
 server.allow-insecure: on
 features.quota-timeout: 5



___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] thread model of glusterfs brick server?

2014-02-08 Thread Mingfan Lu
Using pstree to list the threads of a brick server process,
I got something like the output below. Can we tell which threads are the io-threads
and which threads run self-heal? What about the others?
(Or can we only go by the TID and the order in which they were created?)

[root@10.121.56.105 ~]# pstree -p 6226
glusterfsd(6226)─┬─{glusterfsd}(6227)
                 ├─{glusterfsd}(6228)
                 ├─{glusterfsd}(6229)
                 ├─{glusterfsd}(6230)
                 ├─{glusterfsd}(6243)
                 ├─{glusterfsd}(6244)
                 ├─{glusterfsd}(6247)
                 ├─{glusterfsd}(6262)
                 ├─{glusterfsd}(6314)
                 ├─{glusterfsd}(6315)
                 ├─{glusterfsd}(6406)
                 ├─{glusterfsd}(6490)
                 ├─{glusterfsd}(6491)
                 ├─{glusterfsd}(6493)
                 ├─{glusterfsd}(6494)
                 ├─{glusterfsd}(6506)
                 ├─{glusterfsd}(6531)
                 ├─{glusterfsd}(6532)
                 ├─{glusterfsd}(6536)
                 ├─{glusterfsd}(6539)
                 ├─{glusterfsd}(6540)
                 ├─{glusterfsd}(9127)
                 ├─{glusterfsd}(22470)
                 ├─{glusterfsd}(22471)
                 ├─{glusterfsd}(22472)
                 ├─{glusterfsd}(22473)
                 ├─{glusterfsd}(22474)
                 ├─{glusterfsd}(22475)
                 ├─{glusterfsd}(22476)
                 ├─{glusterfsd}(23217)
                 ├─{glusterfsd}(23218)
                 ├─{glusterfsd}(23219)
                 ├─{glusterfsd}(23220)
                 ├─{glusterfsd}(23221)
                 ├─{glusterfsd}(23222)
                 ├─{glusterfsd}(23223)
                 ├─{glusterfsd}(23328)
                 └─{glusterfsd}(23329)

my volume is:

Volume Name: prodvol
Type: Distributed-Replicate
Volume ID: f3fc24b3-23c7-430d-8ab1-81a646b1ce06
Status: Started
Number of Bricks: 17 x 3 = 51
Transport-type: tcp
Bricks:
...
Options Reconfigured:
performance.io-thread-count: 32
auth.allow: *,10.121.48.244,10.121.48.82
features.limit-usage: /:400TB
features.quota: on
server.allow-insecure: on
features.quota-timeout: 5
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] very High CPU load of brick servers while write performance is very slow

2014-02-07 Thread Mingfan Lu
The CPU load on some of the brick servers is very high and write performance is
very slow.
When I dd a single file to the volume, the result is only 10+ KB/sec.

Any comments?
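
For reference, the test was a plain dd onto the FUSE mount, roughly like this (mount
point, file name and size are only examples):

  dd if=/dev/zero of=/mnt/prodvolume/ddtest.bin bs=1M count=512 conv=fsync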

More information:
Volume Name: prodvolume
Type: Distributed-Replicate
Volume ID: f3fc24b3-23c7-430d-8ab1-81a646b1ce06
Status: Started
Number of Bricks: 17 x 3 = 51  (I have 51 servers)
Transport-type: tcp
Bricks:
 
Options Reconfigured:
performance.io-thread-count: 32
auth.allow: *,10.121.48.244,10.121.48.82
features.limit-usage: /:400TB
features.quota: on
server.allow-insecure: on
features.quota-timeout: 5


Most of the CPU utilization comes from system/kernel mode:

top - 14:47:13 up 219 days, 23:36,  2 users,  load average: 17.76, 20.98,
24.74
Tasks: 493 total,   1 running, 491 sleeping,   0 stopped,   1 zombie
Cpu(s):  8.2%us, 49.0%sy,  0.0%ni, 42.2%id,  0.1%wa,  0.0%hi,  0.4%si,
0.0%st
Mem:  132112276k total, 131170760k used,   941516k free,71224k buffers
Swap:  4194296k total,   867216k used,  3327080k free, 110888216k cached

  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
* 6226 root  20   0 2677m 496m 2268 S 1183.4  0.4  89252:09 glusterfsd*
27994 root  20   0 1691m  77m 2000 S 111.6  0.1 324333:47 glusterfsd
14169 root  20   0 14.9g  23m 1984 S 51.3  0.0   3700:30 glusterfsd
20582 root  20   0 2129m 1.4g 1708 S 12.6  1.1 198:03.53 glusterfs
24528 root  20   0 000 S  6.3  0.0  14:18.60 flush-8:16
17717 root  20   0 21416  11m 8268 S  5.0  0.0  14:51.18 oprofiled


Using perf top -p 6226, most of the cycles are caused by _spin_lock:

Events: 49K cycles
 72.51%  [kernel]  [k] _spin_lock
  4.00%  libpthread-2.12.so[.] pthread_mutex_lock
  2.63%  [kernel]  [k] _spin_unlock_irqrestore
  1.61%  libpthread-2.12.so[.] pthread_mutex_unlock
  1.59%  [unknown] [.] 0xff600157
  1.57%  [xfs] [k] xfs_inobt_get_rec
  1.41%  [xfs] [k] xfs_btree_increment
  1.27%  [xfs] [k] xfs_btree_get_rec
  1.17%  libpthread-2.12.so[.] __lll_lock_wait
  0.96%  [xfs] [k] _xfs_buf_find
  0.95%  [xfs] [k] xfs_btree_get_block
  0.88%  [kernel]  [k] copy_user_generic_string
  0.50%  [xfs] [k] xfs_dialloc
  0.48%  [xfs] [k] xfs_btree_rec_offset
  0.47%  [xfs] [k] xfs_btree_readahead
  0.41%  [kernel]  [k] futex_wait_setup
  0.41%  [kernel]  [k] futex_wake
  0.35%  [kernel]  [k] system_call_after_swapgs
  0.33%  [xfs] [k] xfs_btree_rec_addr
  0.30%  [kernel]  [k] __link_path_walk
  0.29%  io-threads.so.0.0.0   [.] __iot_dequeue
  0.29%  io-threads.so.0.0.0   [.] iot_worker
  0.25%  [kernel]  [k] __d_lookup
  0.21%  libpthread-2.12.so[.] __lll_unlock_wake
  0.20%  [kernel]  [k] get_futex_key
  0.18%  [kernel]  [k] hash_futex
  0.17%  [kernel]  [k] do_futex
  0.15%  [kernel]  [k] thread_return
  0.15%  libpthread-2.12.so[.] pthread_spin_lock
  0.14%  libc-2.12.so  [.] _int_malloc
  0.14%  [kernel]  [k] sys_futex
  0.14%  [kernel]  [k] wake_futex
  0.14%  [kernel]  [k] _atomic_dec_and_lock
  0.12%  [kernel]  [k] kmem_cache_free
  0.12%  [xfs] [k] xfs_trans_buf_item_match
  0.12%  [xfs] [k] xfs_btree_check_sblock
  0.11%  libc-2.12.so  [.] vfprintf
  0.11%  [kernel]  [k] futex_wait
  0.11%  [kernel]  [k] kmem_cache_alloc
  0.09%  [kernel]  [k] acl_permission_check

Using oprofile, I found that the CPU time mostly breaks down into:

CPU: Intel Sandy Bridge microarchitecture, speed 2000.02 MHz (estimated)
Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit
mask of 0x00 (No unit mask) count 10
samples  %linenr info image name   app
name symbol name
---
288683303 41.2321  clocksource.c:828   vmlinux
vmlinux * sysfs_show_available_clocksources*
  288683303 100.000  clocksource.c:828   vmlinux
vmlinux  sysfs_show_available_clocksources [self]
---
203797076 29.1079  clocksource.c:236   vmlinux
vmlinux  *clocksource_mark_unstable*
  203797076 100.000  clocksource.c:236   vmlinux
vmlinux  clocksource_mark_unstable [self]

Re: [Gluster-users] help, all latency comes from FINODELK

2014-02-06 Thread Mingfan Lu
Today, when I ran statedump, node22 crashed again, and I got these pending
frames:

[2014-02-07 13:32:52.356386] E
[marker-quota-helper.c:230:mq_dict_set_contribution]
(--/usr/lib64/glusterfs/3.3.0.5rhs_iqiyi_7/xlator/debug/io-stats.so(io_stats_lookup+0x13e)
[0x7f81ca048a3e]
(--/usr/lib64/glusterfs/3.3.0.5rhs_iqiyi_7/xlator/features/marker.so(marker_lookup+0x300)
[0x7f81ca25e170]
(--/usr/lib64/glusterfs/3.3.0.5rhs_iqiyi_7/xlator/features/marker.so(mq_req_xattr+0x3c)
[0x7f81ca2676ec]))) 0-marker: invalid argument: loc-parent
pending frames:

patchset: git://git.gluster.com/glusterfs.git
signal received: 11
time of crash: 2014-02-07 13:32:52
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.3.0.5rhs_iqiyi_7
/lib64/libc.so.6[0x35ef232920]
/usr/lib64/glusterfs/3.3.0.5rhs_iqiyi_7/xlator/protocol/server.so(ltable_dump+0x54)[0x7f81c9e130b4]
/usr/lib64/glusterfs/3.3.0.5rhs_iqiyi_7/xlator/protocol/server.so(server_inode+0xc7)[0x7f81c9e13597]
/usr/lib64/libglusterfs.so.0(gf_proc_dump_xlator_info+0x112)[0x30630443f2]
/usr/lib64/libglusterfs.so.0(gf_proc_dump_info+0x4b2)[0x3063044bc2]
/usr/sbin/glusterfsd(glusterfs_sigwaiter+0xd2)[0x405cc2]
/lib64/libpthread.so.0[0x35ef607851]
/lib64/libc.so.6(clone+0x6d)[0x35ef2e767d]



On Thu, Jan 30, 2014 at 3:14 PM, Pranith Kumar Karampuri 
pkara...@redhat.com wrote:

 Could you give us the back trace

 Pranith
 - Original Message -
  From: Mingfan Lu mingfan...@gmail.com
  To: Pranith Kumar Karampuri pkara...@redhat.com
  Cc: haiwei.xie-soulinfo haiwei@soulinfo.com, 
 Gluster-users@gluster.org List gluster-users@gluster.org
  Sent: Thursday, January 30, 2014 12:43:44 PM
  Subject: Re: [Gluster-users] help, all latency comes from FINODELK
 
  When I triedto execute the statedump, the brick server of the BAD node
  crashed.
 
 
 
  On Wed, Jan 29, 2014 at 8:13 PM, Pranith Kumar Karampuri 
  pkara...@redhat.com wrote:
 
   Could you take statedump of bricks and get that information please.
   You can use
  
 https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/sect-User_Guide-Monitor_Workload-Performing_Statedump.html
   for taking statedumps.
  
   Is this same as https://bugzilla.redhat.com/show_bug.cgi?id=1056276that
   you raised for? If yes could you attach the files to that bug so that
 I get
   the notifications immediately.
  
   Pranith
   - Original Message -
From: Mingfan Lu mingfan...@gmail.com
To: haiwei.xie-soulinfo haiwei@soulinfo.com
Cc: Gluster-users@gluster.org List gluster-users@gluster.org
Sent: Tuesday, January 28, 2014 7:44:14 AM
Subject: Re: [Gluster-users] help, all latency comes from FINODELK
   
About 200+ clients
   
How to print print lock info in bricks, print lock requst info in
 afr 
   dht?
thanks.
   
   
   
On Tue, Jan 28, 2014 at 9:51 AM, haiwei.xie-soulinfo 
haiwei@soulinfo.com  wrote:
   
   
   
hi,
looks FINODELK deak lock. how many clients, and nfs or fuse?
Maybe the best way is to print lock info in bricks, print lock requst
   info in
afr  dht.
   
 all my clients hang when they creating dir


 On Mon, Jan 27, 2014 at 11:33 PM, Mingfan Lu 
 mingfan...@gmail.com 
 wrote:

  I ran glusterfs volume profile my_volume info, I got some thing
   likes:
 
  0.00 0.00 us 0.00 us 0.00 us 30 FORGET
  0.00 0.00 us 0.00 us 0.00 us 185
  RELEASE
  0.00 0.00 us 0.00 us 0.00 us 11
  RELEASEDIR
  0.00 66.50 us 54.00 us 79.00 us 2
  SETATTR
  0.00 44.83 us 25.00 us 94.00 us 6
  READDIR
  0.00 57.55 us 28.00 us 95.00 us 11
  OPENDIR
  0.00 325.00 us 279.00 us 371.00 us 2
  RENAME
  0.00 147.80 us 84.00 us 206.00 us 5
  LINK
  0.00 389.50 us 55.00 us 724.00 us 2
  READDIRP
  0.00 164.25 us 69.00 us 287.00 us 8
  UNLINK
  0.00 37.46 us 18.00 us 87.00 us 50
  FSTAT
  0.00 70.32 us 29.00 us 210.00 us 37
  GETXATTR
  0.00 77.75 us 42.00 us 216.00 us 55
  SETXATTR
  0.00 36.39 us 11.00 us 147.00 us 119
  FLUSH
  0.00 51.22 us 24.00 us 139.00 us 275
  OPEN
  0.00 180.14 us 84.00 us 457.00 us 96
  XATTROP
  0.00 3847.20 us 231.00 us 18218.00 us 5
  MKNOD
  0.00 70.08 us 15.00 us 6539.00 us 342
  ENTRYLK
  0.00 10338.86 us 184.00 us 34813.00 us 7
  CREATE
  0.00 896.65 us 12.00 us 83103.00 us 235
  INODELK
  0.00 187.86 us 50.00 us 668.00 us 1526
  WRITE
  0.00 40.66 us 13.00 us 400.00 us 10400
  STATFS
  0.00 313.13 us 66.00 us 2142.00 us 2049
  FXATTROP
  0.00 2794.97 us 26.00 us 78048.00 us 295
  READ
  0.00 24469.82 us 206.00 us 176157.00 us 34
  MKDIR
  0.00 40.49 us 13.00 us 507.00 us 21420
  STAT
  0.00 190.90 us 40.00 us 330032.00 us 45820
  LOOKUP
  100.00 72004815.62

Re: [Gluster-users] help, all latency comes from FINODELK

2014-02-05 Thread Mingfan Lu
From: Mingfan Lu mingfan...@gmail.com
  To: Pranith Kumar Karampuri pkara...@redhat.com
  Cc: haiwei.xie-soulinfo haiwei@soulinfo.com, 
 Gluster-users@gluster.org List gluster-users@gluster.org
  Sent: Thursday, January 30, 2014 12:43:44 PM
  Subject: Re: [Gluster-users] help, all latency comes from FINODELK
 
  When I triedto execute the statedump, the brick server of the BAD node
  crashed.
 
 
 
  On Wed, Jan 29, 2014 at 8:13 PM, Pranith Kumar Karampuri 
  pkara...@redhat.com wrote:
 
   Could you take statedump of bricks and get that information please.
   You can use
  
 https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/sect-User_Guide-Monitor_Workload-Performing_Statedump.html
   for taking statedumps.
  
   Is this same as https://bugzilla.redhat.com/show_bug.cgi?id=1056276that
   you raised for? If yes could you attach the files to that bug so that
 I get
   the notifications immediately.
  
   Pranith
   - Original Message -
From: Mingfan Lu mingfan...@gmail.com
To: haiwei.xie-soulinfo haiwei@soulinfo.com
Cc: Gluster-users@gluster.org List gluster-users@gluster.org
Sent: Tuesday, January 28, 2014 7:44:14 AM
Subject: Re: [Gluster-users] help, all latency comes from FINODELK
   
About 200+ clients
   
How to print print lock info in bricks, print lock requst info in
 afr 
   dht?
thanks.
   
   
   
On Tue, Jan 28, 2014 at 9:51 AM, haiwei.xie-soulinfo 
haiwei@soulinfo.com  wrote:
   
   
   
hi,
looks FINODELK deak lock. how many clients, and nfs or fuse?
Maybe the best way is to print lock info in bricks, print lock requst
   info in
afr  dht.
   
 all my clients hang when they creating dir


 On Mon, Jan 27, 2014 at 11:33 PM, Mingfan Lu 
 mingfan...@gmail.com 
 wrote:

  I ran glusterfs volume profile my_volume info, I got some thing
   likes:
 
  0.00 0.00 us 0.00 us 0.00 us 30 FORGET
  0.00 0.00 us 0.00 us 0.00 us 185
  RELEASE
  0.00 0.00 us 0.00 us 0.00 us 11
  RELEASEDIR
  0.00 66.50 us 54.00 us 79.00 us 2
  SETATTR
  0.00 44.83 us 25.00 us 94.00 us 6
  READDIR
  0.00 57.55 us 28.00 us 95.00 us 11
  OPENDIR
  0.00 325.00 us 279.00 us 371.00 us 2
  RENAME
  0.00 147.80 us 84.00 us 206.00 us 5
  LINK
  0.00 389.50 us 55.00 us 724.00 us 2
  READDIRP
  0.00 164.25 us 69.00 us 287.00 us 8
  UNLINK
  0.00 37.46 us 18.00 us 87.00 us 50
  FSTAT
  0.00 70.32 us 29.00 us 210.00 us 37
  GETXATTR
  0.00 77.75 us 42.00 us 216.00 us 55
  SETXATTR
  0.00 36.39 us 11.00 us 147.00 us 119
  FLUSH
  0.00 51.22 us 24.00 us 139.00 us 275
  OPEN
  0.00 180.14 us 84.00 us 457.00 us 96
  XATTROP
  0.00 3847.20 us 231.00 us 18218.00 us 5
  MKNOD
  0.00 70.08 us 15.00 us 6539.00 us 342
  ENTRYLK
  0.00 10338.86 us 184.00 us 34813.00 us 7
  CREATE
  0.00 896.65 us 12.00 us 83103.00 us 235
  INODELK
  0.00 187.86 us 50.00 us 668.00 us 1526
  WRITE
  0.00 40.66 us 13.00 us 400.00 us 10400
  STATFS
  0.00 313.13 us 66.00 us 2142.00 us 2049
  FXATTROP
  0.00 2794.97 us 26.00 us 78048.00 us 295
  READ
  0.00 24469.82 us 206.00 us 176157.00 us 34
  MKDIR
  0.00 40.49 us 13.00 us 507.00 us 21420
  STAT
  0.00 190.90 us 40.00 us 330032.00 us 45820
  LOOKUP
  100.00 72004815.62 us 8.00 us 5783044563.00 us 3994
  FINODELK
 
  what happend?
 
   
   
   
   
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users
  
 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] help, all latency comes from FINODELK

2014-01-29 Thread Mingfan Lu
When I tried to execute the statedump, the brick server on the BAD node
crashed.



On Wed, Jan 29, 2014 at 8:13 PM, Pranith Kumar Karampuri 
pkara...@redhat.com wrote:

 Could you take statedump of bricks and get that information please.
 You can use
 https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/sect-User_Guide-Monitor_Workload-Performing_Statedump.html
 for taking statedumps.

 Is this same as https://bugzilla.redhat.com/show_bug.cgi?id=1056276 that
 you raised for? If yes could you attach the files to that bug so that I get
 the notifications immediately.

 Pranith
 - Original Message -
  From: Mingfan Lu mingfan...@gmail.com
  To: haiwei.xie-soulinfo haiwei@soulinfo.com
  Cc: Gluster-users@gluster.org List gluster-users@gluster.org
  Sent: Tuesday, January 28, 2014 7:44:14 AM
  Subject: Re: [Gluster-users] help, all latency comes from FINODELK
 
  About 200+ clients
 
  How to print print lock info in bricks, print lock requst info in afr 
 dht?
  thanks.
 
 
 
  On Tue, Jan 28, 2014 at 9:51 AM, haiwei.xie-soulinfo 
  haiwei@soulinfo.com  wrote:
 
 
 
  hi,
  looks FINODELK deak lock. how many clients, and nfs or fuse?
  Maybe the best way is to print lock info in bricks, print lock requst
 info in
  afr  dht.
 
   all my clients hang when they creating dir
  
  
   On Mon, Jan 27, 2014 at 11:33 PM, Mingfan Lu  mingfan...@gmail.com 
   wrote:
  
I ran glusterfs volume profile my_volume info, I got some thing
 likes:
   
0.00 0.00 us 0.00 us 0.00 us 30 FORGET
0.00 0.00 us 0.00 us 0.00 us 185
RELEASE
0.00 0.00 us 0.00 us 0.00 us 11
RELEASEDIR
0.00 66.50 us 54.00 us 79.00 us 2
SETATTR
0.00 44.83 us 25.00 us 94.00 us 6
READDIR
0.00 57.55 us 28.00 us 95.00 us 11
OPENDIR
0.00 325.00 us 279.00 us 371.00 us 2
RENAME
0.00 147.80 us 84.00 us 206.00 us 5
LINK
0.00 389.50 us 55.00 us 724.00 us 2
READDIRP
0.00 164.25 us 69.00 us 287.00 us 8
UNLINK
0.00 37.46 us 18.00 us 87.00 us 50
FSTAT
0.00 70.32 us 29.00 us 210.00 us 37
GETXATTR
0.00 77.75 us 42.00 us 216.00 us 55
SETXATTR
0.00 36.39 us 11.00 us 147.00 us 119
FLUSH
0.00 51.22 us 24.00 us 139.00 us 275
OPEN
0.00 180.14 us 84.00 us 457.00 us 96
XATTROP
0.00 3847.20 us 231.00 us 18218.00 us 5
MKNOD
0.00 70.08 us 15.00 us 6539.00 us 342
ENTRYLK
0.00 10338.86 us 184.00 us 34813.00 us 7
CREATE
0.00 896.65 us 12.00 us 83103.00 us 235
INODELK
0.00 187.86 us 50.00 us 668.00 us 1526
WRITE
0.00 40.66 us 13.00 us 400.00 us 10400
STATFS
0.00 313.13 us 66.00 us 2142.00 us 2049
FXATTROP
0.00 2794.97 us 26.00 us 78048.00 us 295
READ
0.00 24469.82 us 206.00 us 176157.00 us 34
MKDIR
0.00 40.49 us 13.00 us 507.00 us 21420
STAT
0.00 190.90 us 40.00 us 330032.00 us 45820
LOOKUP
100.00 72004815.62 us 8.00 us 5783044563.00 us 3994
FINODELK
   
what happend?
   
 
 
 
 
  ___
  Gluster-users mailing list
  Gluster-users@gluster.org
  http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] write performance is not good

2014-01-28 Thread Mingfan Lu
Hi,
  I have a distributed, replica=3 volume (no stripe) in a
cluster. I used dd to write 120 files as a test. I found that the write performance
of some files is much lower than that of others; all of these BAD files are stored
on the same three brick servers of one replica set (call them node1, node2, node3).

E.g. the bad write performance can be around 10 MB/s while good performance can
be 150 MB/s or more.

There are no problems with the RAID arrays or the network.
If I stop node1 and node2, the write performance of the BAD files becomes
similar to (or even better than) the GOOD ones.

One thing I must mention is that the RAID arrays of node1 and node2 were reformatted
for some reason, so there are many self-heal activities restoring files onto node1
and node2.
Is the BAD write performance caused by aggressive self-heal?
How can I slow down the self-heal?
Any advise?
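
These are the AFR options I am considering to make client-side self-heal less
aggressive; the option names are as I understand them for 3.3, so please correct me
if they are wrong (check 'gluster volume set help' first). The volume name is a
placeholder:

  # limit how many files a client heals in the background at a time
  gluster volume set myvol cluster.background-self-heal-count 1
  # heal with a smaller window / diff algorithm to reduce I/O pressure
  gluster volume set myvol cluster.self-heal-window-size 1
  gluster volume set myvol cluster.data-self-heal-algorithm diff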
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] write performance is not good

2014-01-28 Thread Mingfan Lu
P.S.
  We use a single 1Mbps NIC. I found that the network bandwidth is not an
issue.


On Tue, Jan 28, 2014 at 4:49 PM, Mingfan Lu mingfan...@gmail.com wrote:


 Hi,
   I have a  distributed and replica=3 volume (not to use stripe ) in a
 cluster. I used dd to write 120 files to test. I foundthe write performane
 of some files are much lower than others. all these BAD files are stored
 in the same three brick servers for replication (I called node1 node2 node3)

 e.g the bad write performance could be 10MBps while good performance could
 be 150Mbps+

 there are no problems about raid and networks.
 If i stopped node1  node2, the write performance of BAD files are the
 similar to (even better) GOOD ones.

 One thing I must metion is  the raids of node1 and node2 are reformated
 for some reason, there are many self-heal activities to restore files in
 node1 and node2.
 Is the BAD write performance caused by aggresive self-heal?
 How could I slow down the self-heal?
 Any advise?



___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] write performance is not good

2014-01-28 Thread Mingfan Lu
In the client's log, I found:

[2014-01-28 17:54:36.839220] I [afr-self-heal-data.c:712:afr_sh_data_fix]
0-sh-ugc1-mams-replicate-7: no active sinks for performing self-heal on
file /fytest/46
[2014-01-28 17:55:05.251490] I [afr-self-heal-data.c:712:afr_sh_data_fix]
0-sh-ugc1-mams-replicate-7: no active sinks for performing self-heal on
file /fytest/49

/fytest/46 and /fytest/49 are BAD files.

What does "no active sinks for performing self-heal" mean?
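
In case it is useful, this is how I am checking the pending-heal state from a server
node; the volume name is guessed from the log prefix above, so it may not be exact:

  gluster volume heal sh-ugc1-mams info
  gluster volume heal sh-ugc1-mams info heal-failed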




On Tue, Jan 28, 2014 at 4:56 PM, Dan Mons dm...@cuttingedge.com.au wrote:

 Is your write single or multi-threaded?

 If it's single threaded, try writing your files across as many threads
 as possible, and see what the performance improvement is like.

 -Dan
 
 Dan Mons
 Skunk Works
 Cutting Edge
 http://cuttingedge.com.au


 On 28 January 2014 18:49, Mingfan Lu mingfan...@gmail.com wrote:
 
  Hi,
I have a  distributed and replica=3 volume (not to use stripe ) in a
  cluster. I used dd to write 120 files to test. I foundthe write
 performane
  of some files are much lower than others. all these BAD files are
 stored
  in the same three brick servers for replication (I called node1 node2
 node3)
 
  e.g the bad write performance could be 10MBps while good performance
 could
  be 150Mbps+
 
  there are no problems about raid and networks.
  If i stopped node1  node2, the write performance of BAD files are the
  similar to (even better) GOOD ones.
 
  One thing I must metion is  the raids of node1 and node2 are reformated
 for
  some reason, there are many self-heal activities to restore files in
 node1
  and node2.
  Is the BAD write performance caused by aggresive self-heal?
  How could I slow down the self-heal?
  Any advise?
 
 
 
  ___
  Gluster-users mailing list
  Gluster-users@gluster.org
  http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] write performance is not good

2014-01-28 Thread Mingfan Lu
I unmounted and remounted, and now there seem to be no BAD results.
Interesting.


On Tue, Jan 28, 2014 at 6:00 PM, Mingfan Lu mingfan...@gmail.com wrote:

 In the client's log, I found:

 [2014-01-28 17:54:36.839220] I [afr-self-heal-data.c:712:afr_sh_data_fix]
 0-sh-ugc1-mams-replicate-7: no active sinks for performing self-heal on
 file /fytest/46
 [2014-01-28 17:55:05.251490] I [afr-self-heal-data.c:712:afr_sh_data_fix]
 0-sh-ugc1-mams-replicate-7: no active sinks for performing self-heal on
 file /fytest/49

 the /fytest/46  /fytest/49 are BAD files.

 What does no active sinks for performing means?




 On Tue, Jan 28, 2014 at 4:56 PM, Dan Mons dm...@cuttingedge.com.auwrote:

 Is your write single or multi-threaded?

 If it's single threaded, try writing your files across as many threads
 as possible, and see what the performance improvement is like.

 -Dan
 
 Dan Mons
 Skunk Works
 Cutting Edge
 http://cuttingedge.com.au


 On 28 January 2014 18:49, Mingfan Lu mingfan...@gmail.com wrote:
 
  Hi,
I have a  distributed and replica=3 volume (not to use stripe ) in a
  cluster. I used dd to write 120 files to test. I foundthe write
 performane
  of some files are much lower than others. all these BAD files are
 stored
  in the same three brick servers for replication (I called node1 node2
 node3)
 
  e.g the bad write performance could be 10MBps while good performance
 could
  be 150Mbps+
 
  there are no problems about raid and networks.
  If i stopped node1  node2, the write performance of BAD files are the
  similar to (even better) GOOD ones.
 
  One thing I must metion is  the raids of node1 and node2 are reformated
 for
  some reason, there are many self-heal activities to restore files in
 node1
  and node2.
  Is the BAD write performance caused by aggresive self-heal?
  How could I slow down the self-heal?
  Any advise?
 
 
 
  ___
  Gluster-users mailing list
  Gluster-users@gluster.org
  http://supercolony.gluster.org/mailman/listinfo/gluster-users



___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] gluster client crash

2014-01-27 Thread Mingfan Lu
One of our clients (3.3.0.5) crashed while writing data; the log is:

pending frames:
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(LOOKUP)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)

patchset: git://git.gluster.com/glusterfs.git
signal received: 6
time of crash: 2014-01-27 15:36:32
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.3.0.5rhs
/lib64/libc.so.6[0x32c5a32920]
/lib64/libc.so.6(gsignal+0x35)[0x32c5a328a5]
/lib64/libc.so.6(abort+0x175)[0x32c5a34085]
/lib64/libc.so.6[0x32c5a707b7]
/lib64/libc.so.6[0x32c5a760e6]
/usr/lib64/glusterfs/3.3.0.5rhs/xlator/performance/write-behind.so(+0x42be)[0x7f79a63012be]
/usr/lib64/glusterfs/3.3.0.5rhs/xlator/performance/write-behind.so(wb_sync_cbk+0xa0)[0x7f79a6307ab0]
/usr/lib64/glusterfs/3.3.0.5rhs/xlator/features/quota.so(quota_writev_cbk+0xed)[0x7f79a651864d]
/usr/lib64/glusterfs/3.3.0.5rhs/xlator/cluster/distribute.so(dht_writev_cbk+0x14f)[0x7f79a6753aaf]
/usr/lib64/glusterfs/3.3.0.5rhs/xlator/protocol/client.so(client3_1_writev_cbk+0x600)[0x7f79a6995340]
/usr/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0xa5)[0x31b020f4f5]
/usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x120)[0x31b020fdb0]
/usr/lib64/libgfrpc.so.0(rpc_transport_notify+0x28)[0x31b020aeb8]
/usr/lib64/glusterfs/3.3.0.5rhs/rpc-transport/socket.so(socket_event_poll_in+0x34)[0x7f79a79d4784]
/usr/lib64/glusterfs/3.3.0.5rhs/rpc-transport/socket.so(socket_event_handler+0xc7)[0x7f79a79d4867]
/usr/lib64/libglusterfs.so.0[0x31afe3e4e4]
/usr/sbin/glusterfs(main+0x590)[0x407420]
/lib64/libc.so.6(__libc_start_main+0xfd)[0x32c5a1ecdd]
/usr/sbin/glusterfs[0x404289]
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] gluster client crash

2014-01-27 Thread Mingfan Lu
The volume is distributed (replica count = 1).


On Mon, Jan 27, 2014 at 4:01 PM, Mingfan Lu mingfan...@gmail.com wrote:

 One of our client (3.3.0.5) crashed when writing data, the log is:

 pending frames:
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(READ)
 frame : type(1) op(READ)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(LOOKUP)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(READ)
 frame : type(1) op(READ)
 frame : type(1) op(READ)
 frame : type(1) op(READ)
 frame : type(1) op(READ)
 frame : type(1) op(READ)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(READ)
 frame : type(1) op(READ)
 frame : type(1) op(READ)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(READ)
 frame : type(1) op(READ)
 frame : type(1) op(READ)
 frame : type(1) op(READ)
 frame : type(1) op(READ)
 frame : type(1) op(READ)
 frame : type(1) op(READ)
 frame : type(1) op(READ)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)

 patchset: git://git.gluster.com/glusterfs.git
 signal received: 6
 time of crash: 2014-01-27 15:36:32
 configuration details:
 argp 1
 backtrace 1
 dlfcn 1
 fdatasync 1
 libpthread 1
 llistxattr 1
 setfsid 1
 spinlock 1
 epoll.h 1
 xattr.h 1
 st_atim.tv_nsec 1
 package-string: glusterfs 3.3.0.5rhs
 /lib64/libc.so.6[0x32c5a32920]
 /lib64/libc.so.6(gsignal+0x35)[0x32c5a328a5]
 /lib64/libc.so.6(abort+0x175)[0x32c5a34085]
 /lib64/libc.so.6[0x32c5a707b7]
 /lib64/libc.so.6[0x32c5a760e6]

 /usr/lib64/glusterfs/3.3.0.5rhs/xlator/performance/write-behind.so(+0x42be)[0x7f79a63012be]

 /usr/lib64/glusterfs/3.3.0.5rhs/xlator/performance/write-behind.so(wb_sync_cbk+0xa0)[0x7f79a6307ab0]

 /usr/lib64/glusterfs/3.3.0.5rhs/xlator/features/quota.so(quota_writev_cbk+0xed)[0x7f79a651864d]

 /usr/lib64/glusterfs/3.3.0.5rhs/xlator/cluster/distribute.so(dht_writev_cbk+0x14f)[0x7f79a6753aaf]

 /usr/lib64/glusterfs/3.3.0.5rhs/xlator/protocol/client.so(client3_1_writev_cbk+0x600)[0x7f79a6995340]
 /usr/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0xa5)[0x31b020f4f5]
 /usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x120)[0x31b020fdb0]
 /usr/lib64/libgfrpc.so.0(rpc_transport_notify+0x28)[0x31b020aeb8]

 /usr/lib64/glusterfs/3.3.0.5rhs/rpc-transport/socket.so(socket_event_poll_in+0x34)[0x7f79a79d4784]

 /usr/lib64/glusterfs/3.3.0.5rhs/rpc-transport/socket.so(socket_event_handler+0xc7)[0x7f79a79d4867]
 /usr/lib64/libglusterfs.so.0[0x31afe3e4e4]
 /usr/sbin/glusterfs(main+0x590)[0x407420]
 /lib64/libc.so.6(__libc_start_main+0xfd)[0x32c5a1ecdd]
 /usr/sbin/glusterfs[0x404289]

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] help, all latency comes from FINODELK

2014-01-27 Thread Mingfan Lu
I ran 'gluster volume profile my_volume info' and got something like this:

  0.00   0.00 us   0.00 us   0.00 us 30  FORGET
  0.00   0.00 us   0.00 us   0.00 us185
RELEASE
  0.00   0.00 us   0.00 us   0.00 us 11
RELEASEDIR
  0.00  66.50 us  54.00 us  79.00 us  2
SETATTR
  0.00  44.83 us  25.00 us  94.00 us  6
READDIR
  0.00  57.55 us  28.00 us  95.00 us 11
OPENDIR
  0.00 325.00 us 279.00 us 371.00 us  2
RENAME
  0.00 147.80 us  84.00 us 206.00 us  5
LINK
  0.00 389.50 us  55.00 us 724.00 us  2
READDIRP
  0.00 164.25 us  69.00 us 287.00 us  8
UNLINK
  0.00  37.46 us  18.00 us  87.00 us 50
FSTAT
  0.00  70.32 us  29.00 us 210.00 us 37
GETXATTR
  0.00  77.75 us  42.00 us 216.00 us 55
SETXATTR
  0.00  36.39 us  11.00 us 147.00 us119
FLUSH
  0.00  51.22 us  24.00 us 139.00 us275
OPEN
  0.00 180.14 us  84.00 us 457.00 us 96
XATTROP
  0.003847.20 us 231.00 us   18218.00 us  5
MKNOD
  0.00  70.08 us  15.00 us6539.00 us342
ENTRYLK
  0.00   10338.86 us 184.00 us   34813.00 us  7
CREATE
  0.00 896.65 us  12.00 us   83103.00 us235
INODELK
  0.00 187.86 us  50.00 us 668.00 us   1526
WRITE
  0.00  40.66 us  13.00 us 400.00 us  10400
STATFS
  0.00 313.13 us  66.00 us2142.00 us   2049
FXATTROP
  0.002794.97 us  26.00 us   78048.00 us295
READ
  0.00   24469.82 us 206.00 us  176157.00 us 34
MKDIR
  0.00  40.49 us  13.00 us 507.00 us  21420
STAT
  0.00 190.90 us  40.00 us  330032.00 us  45820
LOOKUP
100.00 72004815.62 us   8.00 us 5783044563.00 us   3994
FINODELK

What happened here?
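
For reference, the table above is produced by the volume profiler; a minimal
sketch of how such a capture is taken (same volume name as above):

  # enable per-brick FOP counters
  gluster volume profile my_volume start
  # let the workload run for a while, then dump cumulative and interval stats
  gluster volume profile my_volume info
  # turn the counters off again once done, they add a little overhead
  gluster volume profile my_volume stop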
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] help, all latency comes from FINODELK

2014-01-27 Thread Mingfan Lu
All my clients hang when creating directories.


On Mon, Jan 27, 2014 at 11:33 PM, Mingfan Lu mingfan...@gmail.com wrote:

 I ran glusterfs volume profile my_volume info, I got some thing likes:

   0.00   0.00 us   0.00 us   0.00 us 30  FORGET
   0.00   0.00 us   0.00 us   0.00 us185
 RELEASE
   0.00   0.00 us   0.00 us   0.00 us 11
 RELEASEDIR
   0.00  66.50 us  54.00 us  79.00 us  2
 SETATTR
   0.00  44.83 us  25.00 us  94.00 us  6
 READDIR
   0.00  57.55 us  28.00 us  95.00 us 11
 OPENDIR
   0.00 325.00 us 279.00 us 371.00 us  2
 RENAME
   0.00 147.80 us  84.00 us 206.00 us  5
 LINK
   0.00 389.50 us  55.00 us 724.00 us  2
 READDIRP
   0.00 164.25 us  69.00 us 287.00 us  8
 UNLINK
   0.00  37.46 us  18.00 us  87.00 us 50
 FSTAT
   0.00  70.32 us  29.00 us 210.00 us 37
 GETXATTR
   0.00  77.75 us  42.00 us 216.00 us 55
 SETXATTR
   0.00  36.39 us  11.00 us 147.00 us119
 FLUSH
   0.00  51.22 us  24.00 us 139.00 us275
 OPEN
   0.00 180.14 us  84.00 us 457.00 us 96
 XATTROP
   0.003847.20 us 231.00 us   18218.00 us  5
 MKNOD
   0.00  70.08 us  15.00 us6539.00 us342
 ENTRYLK
   0.00   10338.86 us 184.00 us   34813.00 us  7
 CREATE
   0.00 896.65 us  12.00 us   83103.00 us235
 INODELK
   0.00 187.86 us  50.00 us 668.00 us   1526
 WRITE
   0.00  40.66 us  13.00 us 400.00 us  10400
 STATFS
   0.00 313.13 us  66.00 us2142.00 us   2049
 FXATTROP
   0.002794.97 us  26.00 us   78048.00 us295
 READ
   0.00   24469.82 us 206.00 us  176157.00 us 34
 MKDIR
   0.00  40.49 us  13.00 us 507.00 us  21420
 STAT
   0.00 190.90 us  40.00 us  330032.00 us  45820
 LOOKUP
 100.00 72004815.62 us   8.00 us 5783044563.00 us   3994
 FINODELK

 what happend?

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] help, all latency comes from FINODELK

2014-01-27 Thread Mingfan Lu
About 200+ clients

How do I print lock info in the bricks, and lock request info in AFR & DHT?
Thanks.
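
One way to see the lock state without touching any code is a statedump; a
minimal sketch, assuming the default dump directory of this build:

  # ask every brick of the volume to dump its internal state, including
  # granted and blocked inodelk/entrylk requests per inode
  gluster volume statedump my_volume
  # the dump files normally land under /var/run/gluster (older builds used
  # /tmp); the inodelk sections are the interesting ones for a stuck FINODELK
  grep -n -A 3 inodelk /var/run/gluster/*dump*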



On Tue, Jan 28, 2014 at 9:51 AM, haiwei.xie-soulinfo 
haiwei@soulinfo.com wrote:


 hi,
looks like a FINODELK deadlock. How many clients, and NFS or FUSE?
 Maybe the best way is to print lock info in the bricks, and lock request info
 in AFR & DHT.

  all my clients hang when they creating dir
 
 
  On Mon, Jan 27, 2014 at 11:33 PM, Mingfan Lu mingfan...@gmail.com
 wrote:
 
   I ran glusterfs volume profile my_volume info, I got some thing likes:
  
 0.00   0.00 us   0.00 us   0.00 us 30
  FORGET
 0.00   0.00 us   0.00 us   0.00 us185
   RELEASE
 0.00   0.00 us   0.00 us   0.00 us 11
   RELEASEDIR
 0.00  66.50 us  54.00 us  79.00 us  2
   SETATTR
 0.00  44.83 us  25.00 us  94.00 us  6
   READDIR
 0.00  57.55 us  28.00 us  95.00 us 11
   OPENDIR
 0.00 325.00 us 279.00 us 371.00 us  2
   RENAME
 0.00 147.80 us  84.00 us 206.00 us  5
   LINK
 0.00 389.50 us  55.00 us 724.00 us  2
   READDIRP
 0.00 164.25 us  69.00 us 287.00 us  8
   UNLINK
 0.00  37.46 us  18.00 us  87.00 us 50
   FSTAT
 0.00  70.32 us  29.00 us 210.00 us 37
   GETXATTR
 0.00  77.75 us  42.00 us 216.00 us 55
   SETXATTR
 0.00  36.39 us  11.00 us 147.00 us119
   FLUSH
 0.00  51.22 us  24.00 us 139.00 us275
   OPEN
 0.00 180.14 us  84.00 us 457.00 us 96
   XATTROP
 0.003847.20 us 231.00 us   18218.00 us  5
   MKNOD
 0.00  70.08 us  15.00 us6539.00 us342
   ENTRYLK
 0.00   10338.86 us 184.00 us   34813.00 us  7
   CREATE
 0.00 896.65 us  12.00 us   83103.00 us235
   INODELK
 0.00 187.86 us  50.00 us 668.00 us   1526
   WRITE
 0.00  40.66 us  13.00 us 400.00 us  10400
   STATFS
 0.00 313.13 us  66.00 us2142.00 us   2049
   FXATTROP
 0.002794.97 us  26.00 us   78048.00 us295
   READ
 0.00   24469.82 us 206.00 us  176157.00 us 34
   MKDIR
 0.00  40.49 us  13.00 us 507.00 us  21420
   STAT
 0.00 190.90 us  40.00 us  330032.00 us  45820
   LOOKUP
   100.00 72004815.62 us   8.00 us 5783044563.00 us   3994
   FINODELK
  
   what happend?
  



___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] could I delete stale files in .glusterfs/indices/xattrop

2014-01-26 Thread Mingfan Lu
I found that on the brick servers, .glusterfs/indices/xattrop of one volume
contains many stale files (about 260,000, most of them created two months ago).
Could I delete them directly?

Another question: how did these stale files get left behind? I thought that once
a file is created or self-healed, its index entry should be removed.
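
Before removing anything by hand, a minimal sketch of what could be
cross-checked (volume name and brick path below are placeholders):

  # entries under .glusterfs/indices/xattrop are gfid markers that the
  # self-heal daemon walks; see what it still considers pending first
  gluster volume heal my_volume info
  # a given index entry is named after a gfid, and the matching file on the
  # brick can be looked up through the .glusterfs gfid path, e.g.:
  ls -l /path/to/brick/.glusterfs/aa/bb/aabbcccc-dead-beef-0000-000000000000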
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] could I disable updatedb in brick server?

2014-01-26 Thread Mingfan Lu
We have a huge number of files on our gluster brick servers, and every day we
generate many more, so the file count grows very quickly. Could I disable
updatedb on the brick servers? If I do, will the glusterfs servers be impacted?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] could I disable updatedb in brick server?

2014-01-26 Thread Mingfan Lu
Thanks, I will try this.
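
A minimal sketch of the change, assuming the stock mlocate setup on RHEL/CentOS
(the brick path below is a placeholder):

  # /etc/updatedb.conf: add the brick mount point(s) to PRUNEPATHS so the
  # nightly updatedb cron job never crawls them; only 'locate' loses
  # coverage, the glusterfsd brick processes are not affected
  PRUNEPATHS = "/tmp /var/spool /media /mnt/xfsd"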


On Sun, Jan 26, 2014 at 7:23 PM, James purplei...@gmail.com wrote:

 On Sun, Jan 26, 2014 at 6:13 AM, Mingfan Lu mingfan...@gmail.com wrote:
  we have lots of (really) files in our gluser brick servers and every
 day, we
  will generate lots, the number of files increase very quickly. could I
  disable updatedb in brick servers? if that, glusterfs servers will be
  impacted?
 Yes, read man 8 updatedb. This only stops 'locate' from being useful.
 It won't impact gluster.
 
  ___
  Gluster-users mailing list
  Gluster-users@gluster.org
  http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] interesting issue of replication and self-heal

2014-01-25 Thread Mingfan Lu
more hints:

I found that node23 & node24 have many files in
.glusterfs/indices/xattrop.

There must be some problem; could anyone give suggestions on how to resolve it?
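
If those entries correspond to files that still need healing, a minimal sketch
of draining the backlog without crawling from a client (volume name assumed):

  # heal everything the xattrop indices point at
  gluster volume heal test-volume
  # or force a full sweep of the bricks, which also catches entries that
  # have no index
  gluster volume heal test-volume full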


On Thu, Jan 23, 2014 at 5:04 PM, Mingfan Lu mingfan...@gmail.com wrote:

 I profiled node22 and found that most of its latency comes from SETATTR, while
 on node23 & node24 most of it comes from LOOKUP and locks. Could anyone help?

  %-latency   Avg-latency   Min-Latency   Max-Latency   No. of
 calls Fop
  -   ---   ---   ---   
 
   0.00   0.00 us   0.00 us   0.00 us2437540
 FORGET
   0.00   0.00 us   0.00 us   0.00 us 252684
 RELEASE
   0.00   0.00 us   0.00 us   0.00 us2226292
 RELEASEDIR
   0.00  38.00 us  37.00 us  40.00 us  4
 FGETXATTR
   0.00  66.16 us  15.00 us   13139.00 us596
 GETXATTR
   0.00 239.14 us  58.00 us  126477.00 us   1967
 LINK
   0.00  51.85 us  14.00 us8298.00 us  19045
 STAT
   0.00 165.50 us   9.00 us  212057.00 us  20544
 READDIR
   0.001827.92 us 184.00 us  150298.00 us   2084
 RENAME
   0.00  49.14 us  12.00 us5908.00 us 189019
 STATFS
   0.00  84.63 us  14.00 us   96016.00 us 163405
 READ
   0.00   29968.76 us 156.00 us 1073902.00 us   3115
 CREATE
   0.001340.25 us   6.00 us 7415357.00 us 248141
 FLUSH
   0.001616.76 us  32.00 us 13865122.00 us 229190
 FTRUNCATE
   0.011807.58 us  19.00 us 55480776.00 us
 249569OPEN
   0.011875.11 us  10.00 us 8842171.00 us 465197
 FSTAT
   0.05  393296.28 us  52.00 us 56856581.00 us   9057
 UNLINK
   0.07   32291.01 us 192.00 us 9638107.00 us 156081
 RMDIR
   0.08   18339.18 us 140.00 us 5313885.00 us 337862
 MKNOD
   0.092904.39 us  18.00 us 51724741.00 us2226290
 OPENDIR
   0.154708.15 us  27.00 us 55115760.00 us2334864
 SETXATTR
   0.188965.91 us  68.00 us 26465968.00 us1513280
 FXATTROP
   0.213465.29 us  74.00 us 58580783.00 us4506602
 XATTROP
   0.284801.16 us  44.00 us 49643138.00 us4436847
 READDIRP
   0.375935.92 us   7.00 us 56449083.00 us4611760
 ENTRYLK
   1.024226.58 us  33.00 us 63494729.00 us   18092335
 WRITE
   1.502734.50 us   6.00 us 185109908.00 us   40971541
 INODELK
   4.75  348602.30 us   5.00 us 2185602946.00 us1019332
 FINODELK


 * 14.98   33957.49 us  14.00 us 59261447.00 us   32998211
 LOOKUP  26.30  807063.74 us 150.00 us 68086266.00 us
 2438422   MKDIR 49.95  457402.30 us  20.00 us 67894186.00
 67894186.00 us8171751 SETATTR*

 Duration: 353678 seconds
Data Read: 21110920120 bytes
 Data Written: 2338403381483 bytes

 here is  node23
  %-latency   Avg-latency   Min-Latency   Max-Latency   No. of
 calls Fop
  -   ---   ---   ---   
 
   0.00   0.00 us   0.00 us   0.00 us   22125898
 FORGET
   0.00   0.00 us   0.00 us   0.00 us   89286732
 RELEASE
   0.00   0.00 us   0.00 us   0.00 us   32865496
 RELEASEDIR
   0.00  35.50 us  23.00 us  48.00 us  2
 FGETXATTR
   0.00 164.04 us  29.00 us  749181.00 us  39320
 FTRUNCATE
   0.00 483.71 us   8.00 us 2688755.00 us
 39288  LK
   0.00 419.61 us  48.00 us 2183971.00 us 274939
 LINK
   0.00 970.55 us 145.00 us 2471745.00 us 293435
 RENAME
   0.001346.63 us  35.00 us 4462970.00 us 243238
 SETATTR
   0.01 285.51 us  25.00 us 2588685.00 us3459436
 SETXATTR
   0.03 323.11 us   5.00 us 2074581.00 us6977304
 READDIR
   0.05   12200.60 us  84.00 us 3943421.00 us 287979
 RMDIR
   0.07 592.75 us   7.00 us 3592073.00 us8129847
 STAT
   0.076938.50 us  49.00 us 3268036.00 us 705818
 UNLINK
   0.08   19468.78 us 149.00 us 3664022.00 us 276310
 MKNOD
   0.09 763.31 us   8.00 us 3396903.00 us8731725
 STATFS
   0.091715.79 us   4.00 us 5626912.00 us3902746
 FLUSH
   0.104614.74 us   9.00 us 5835691.00 us1574923
 FSTAT
   0.101189.55 us  13.00 us 6043163.00 us6129885
 OPENDIR
   0.10   19729.66 us 131.00 us 4112832.00 us 376286
 CREATE
   0.13 328.26 us  24.00 us 2410049.00 us   29091424
 WRITE
   0.202107.64 us  10.00 us 5765196.00 us6675496
 GETXATTR
   0.285317.38 us  14.00 us 7549301.00 us

Re: [Gluster-users] interesting issue of replication and self-heal

2014-01-23 Thread Mingfan Lu
   25765615
XATTROP
 11.74   12896.99 us   4.00 us 2141920969.00 us   64590600
FINODELK
 15.43   11171.78 us   5.00 us 909115697.00 us   98040443
ENTRYLK
 25.46   12945.21 us   5.00 us 110968164.00 us  139545956
INODELK
 39.919656.48 us  10.00 us 8137517.00 us  293268060
LOOKUP

here is node24

 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls
Fop
 -   ---   ---   ---   

  0.00   0.00 us   0.00 us   0.00 us   22124594
FORGET
  0.00   0.00 us   0.00 us   0.00 us   89290582
RELEASE
  0.00   0.00 us   0.00 us   0.00 us   26657287
RELEASEDIR
  0.00  47.00 us  47.00 us  47.00 us  1
FGETXATTR
  0.00 308.67 us   8.00 us 1405672.00 us
39285  LK
  0.00 745.82 us  32.00 us 1690066.00 us  86586
FTRUNCATE
  0.00 388.58 us  49.00 us 1348668.00 us 274927
LINK
  0.001008.11 us 158.00 us 2443763.00 us 293423
RENAME
  0.011094.49 us  31.00 us 2857159.00 us 290615
SETATTR
  0.02 304.24 us  24.00 us 2878581.00 us3506688
SETXATTR
  0.03 279.83 us   5.00 us 3716543.00 us6977266
READDIR
  0.05   10919.43 us  83.00 us 5075633.00 us 287979
RMDIR
  0.05 692.45 us  12.00 us 3951452.00 us4692109
OPENDIR
  0.06 465.87 us   6.00 us 3726826.00 us8238785
STAT
  0.071187.15 us  14.00 us 5361516.00 us3626802
GETXATTR
  0.076308.14 us  50.00 us 4281153.00 us 705476
UNLINK
  0.07   16729.47 us 148.00 us 3238674.00 us 276299
MKNOD
  0.08 553.69 us   8.00 us 2721668.00 us8744855
STATFS
  0.091462.59 us   4.00 us 5488045.00 us3903587
FLUSH
  0.10   16979.85 us 130.00 us 3471136.00 us 376279
CREATE
  0.124818.36 us   9.00 us 6101767.00 us1577172
FSTAT
  0.15 315.32 us  24.00 us 3801518.00 us   29090837
WRITE
  0.192539.98 us  48.00 us 4657386.00 us4586952
READDIRP
  0.233794.04 us  15.00 us 6487700.00 us3798788
OPEN
  0.37 393.76 us  10.00 us 3284611.00 us   58491958
READ
  0.881524.40 us  60.00 us 7456834.00 us   36097324
FXATTROP
  1.634429.64 us  72.00 us 7194041.00 us   22984938
XATTROP
  1.74   31485.11 us 143.00 us 4705647.00 us3458000
MKDIR
  2.082010.98 us   4.00 us 7669056.00 us   64626004
FINODELK
 18.35   11708.39 us   4.00 us 7193745.00 us   98037767
ENTRYLK
 31.62   14170.24 us   5.00 us 7194060.00 us  139544869
INODELK
 41.949273.78 us  10.00 us 7193886.00 us  282853490
LOOKUP



On Wed, Jan 22, 2014 at 12:05 PM, Mingfan Lu mingfan...@gmail.com wrote:

 I have a volume (distribute-replica (*3)), today i found an interesting
 problem

 node22 node23 and node24 are the replica-7 from client A
 but the annoying thing is when I create dir or write file from client to
 replica-7,

  date;dd if=/dev/zero of=49 bs=1MB count=120
 Wed Jan 22 11:51:41 CST 2014
 120+0 records in
 120+0 records out
 12000 bytes (120 MB) copied, 1.96257 s, 61.1 MB/s

 but I could only find node23  node24 have the find
 ---
 node23,node24
 ---
 /mnt/xfsd/test-volume/test/49

 in clientA, I use find command

 I use another machine as client B, and mount the test volume (newly
 mounted)
 to run* find /mnt/xfsd/test-volume/test/49*

 from Client A, the  three nodes have the file now.

 ---
 node22,node23.node24
 ---
 /mnt/xfsd/test-volume/test/49

 but in Client A, I delete the file /mnt/xfsd/test-volume/test/49, node22
 still have the file in brick.

 ---
 node22
 ---
 /mnt/xfsd/test-volume/test/49

 but if i delete the new created files from Client B )
 my question is why node22 have no newly created/write dirs/files? I have
 to use find to trigger the self-heal to fix that?

 from ClientA's log, I find something like:

  I [afr-self-heal-data.c:712:afr_sh_data_fix] 0-test-volume-replicate-7:
 no active sinks for performing self-heal on file /test/49

 It is harmless for it is information level?

 I also see something like:
 [2014-01-19 10:23:48.422757] E
 [afr-self-heal-entry.c:2376:afr_sh_post_nonblocking_entry_cbk]
 0-test-volume-replicate-7: Non Blocking entrylks failed for
 /test/video/2014/01.
 [2014-01-19 10:23:48.423042] E
 [afr-self-heal-common.c:2160:afr_self_heal_completion_cbk]
 0-test-volume-replicate-7: background  entry self-heal failed on
 /test/video/2014/01





___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] failed to create directories

2014-01-22 Thread Mingfan Lu
Thanks for Justin's reply, but my client is not 32-bit.
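
For completeness, a minimal sketch of how that can be double-checked on the
client side (standard tools only, binary path assumed):

  # a 64-bit userland reports x86_64 here, and the client binary should match
  uname -m
  file /usr/sbin/glusterfs
  # the active mount options would also show enable-ino32 if it were in use
  grep glusterfs /proc/mounts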



On Thu, Jan 23, 2014 at 2:45 AM, Justin Dossey j...@podomatic.com wrote:

 This is a long shot, but the inode not found message makes me wonder
 whether your client is 32-bit while your server is 64-bit.  If the client
 is using 32-bit inode numbers but the server is using 64-bit inode numbers,
 you could have something like this happen, I think.

 If your client is 32-bit but the GlusterFS server is 64-bit, could you
 confirm that the volume is mounted with the enable-ino32 option?


 On Tue, Jan 21, 2014 at 7:37 PM, Mingfan Lu mingfan...@gmail.com wrote:

 Hi,

 I failed to create directories (using python os.makedirs) occationally,
 the following is an example.

 When 00:01:56, my application try to create the directories
 /mnt/upload/sh_ugc/production/video/2014/01/22/a38/466/351/ ,but finally,
 my application failed write the file to the directories. I use ls to find
 that the directories doesn't exist, that means the application failed to
 create the directories.

 From the glusterfs client's log, I found those pasted below, I could not
 understand why? any could help to figure out the root cause? thanks

 *=== Application's log ===*
 2014-01-22 00:01:56 - FileUtil.createDir.83 - INFO : create an download
 dir named /mnt/upload/sh_ugc/production/video/2014/01/22/a38/466/351/
 。。。
 2014-01-22 00:02:04 - CmdUtil.executeCmd.34 - ERROR : The command:dd
 if=/data/tmp/480/7073760/f73624801c0887a9db20ce600cce312c.mkv
 *of=/mnt/upload/sh_ugc/production/video/2014/01/22/a38/466/351/*5ebc87b8-82b5-11e3-8262-b8ca3a608648
 bs=1M execute faily. Error info is dd: opening
 `/mnt/upload/sh_ugc/production/video/2014/01/22/a38/466/351/5ebc87b8-82b5-11e3-8262-b8ca3a608648':*
 No such file or directory*
 . Return status code: 1

 *=== Glusterfs client 's log ===*

 [2014-01-22 00:01:58.331699] W [client3_1-fops.c:327:client3_1_mkdir_cbk]
 0-sh-ugc1-mams-client-23: remote operation failed: File exists. Path:
 /production/video/2014/01/22/a38 (----)
 [2014-01-22 00:01:58.331897] W [client3_1-fops.c:327:client3_1_mkdir_cbk]
 0-sh-ugc1-mams-client-22: remote operation failed: File exists. Path:
 /production/video/2014/01/22/a38 (----)
 [2014-01-22 00:01:58.678114] W [fuse-bridge.c:255:fuse_entry_cbk]
 0-glusterfs-fuse: 89189949: MKDIR() /production/video/2014/01/22/a38
 returning inode 0
 [2014-01-22 00:01:58.701931] W [inode.c:914:inode_lookup]
 (--/usr/lib64/glusterfs/3.3.0.5rhs_iqiyi_7/xlator/debug/io-stats.so(io_stats_mkdir_cbk+0x1b0)
 [0x7f26c539e4f0]
 (--/usr/lib64/glusterfs/3.3.0.5rhs_iqiyi_7/xlator/mount/fuse.so(+0xf39c)
 [0x7f26c974739c]
 (--/usr/lib64/glusterfs/3.3.0.5rhs_iqiyi_7/xlator/mount/fuse.so(+0xf25b)
 [0x7f26c974725b]))) 0-fuse: inode not found


 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users




 --
 Justin Dossey
 CTO, PodOmatic


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] failed to create directories

2014-01-21 Thread Mingfan Lu
Hi,

I occasionally fail to create directories (using Python os.makedirs); the
following is an example.

At 00:01:56 my application tried to create the directory
/mnt/upload/sh_ugc/production/video/2014/01/22/a38/466/351/, but it then failed
to write the file into that directory. Using ls I found that the directory does
not exist, which means the application failed to create it.

From the glusterfs client's log I found the entries pasted below, but I cannot
understand why. Could anyone help figure out the root cause? Thanks.

*=== Application's log ===*
2014-01-22 00:01:56 - FileUtil.createDir.83 - INFO : create an download dir
named /mnt/upload/sh_ugc/production/video/2014/01/22/a38/466/351/
。。。
2014-01-22 00:02:04 - CmdUtil.executeCmd.34 - ERROR : The command:dd
if=/data/tmp/480/7073760/f73624801c0887a9db20ce600cce312c.mkv
*of=/mnt/upload/sh_ugc/production/video/2014/01/22/a38/466/351/*5ebc87b8-82b5-11e3-8262-b8ca3a608648
bs=1M execute faily. Error info is dd: opening
`/mnt/upload/sh_ugc/production/video/2014/01/22/a38/466/351/5ebc87b8-82b5-11e3-8262-b8ca3a608648':*
No such file or directory*
. Return status code: 1

*=== Glusterfs client 's log ===*

[2014-01-22 00:01:58.331699] W [client3_1-fops.c:327:client3_1_mkdir_cbk]
0-sh-ugc1-mams-client-23: remote operation failed: File exists. Path:
/production/video/2014/01/22/a38 (----)
[2014-01-22 00:01:58.331897] W [client3_1-fops.c:327:client3_1_mkdir_cbk]
0-sh-ugc1-mams-client-22: remote operation failed: File exists. Path:
/production/video/2014/01/22/a38 (----)
[2014-01-22 00:01:58.678114] W [fuse-bridge.c:255:fuse_entry_cbk]
0-glusterfs-fuse: 89189949: MKDIR() /production/video/2014/01/22/a38
returning inode 0
[2014-01-22 00:01:58.701931] W [inode.c:914:inode_lookup]
(--/usr/lib64/glusterfs/3.3.0.5rhs_iqiyi_7/xlator/debug/io-stats.so(io_stats_mkdir_cbk+0x1b0)
[0x7f26c539e4f0]
(--/usr/lib64/glusterfs/3.3.0.5rhs_iqiyi_7/xlator/mount/fuse.so(+0xf39c)
[0x7f26c974739c]
(--/usr/lib64/glusterfs/3.3.0.5rhs_iqiyi_7/xlator/mount/fuse.so(+0xf25b)
[0x7f26c974725b]))) 0-fuse: inode not found
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] interesting issue of replication and self-heal

2014-01-21 Thread Mingfan Lu
I have a volume (distributed-replicated, replica 3); today I found an
interesting problem.

node22, node23 and node24 form the subvolume replicate-7 as seen from client A,
and the annoying thing is that when I create a directory or write a file from
the client to replicate-7:

 date;dd if=/dev/zero of=49 bs=1MB count=120
Wed Jan 22 11:51:41 CST 2014
120+0 records in
120+0 records out
12000 bytes (120 MB) copied, 1.96257 s, 61.1 MB/s

but I could only find the file on node23 & node24:
---
node23,node24
---
/mnt/xfsd/test-volume/test/49

(on client A, I used the find command)

I then used another machine as client B with a fresh mount of the test volume
to run: find /mnt/xfsd/test-volume/test/49

From client A, all three nodes now have the file:

---
node22,node23.node24
---
/mnt/xfsd/test-volume/test/49

But when I delete the file /mnt/xfsd/test-volume/test/49 from client A, node22
still has the file on its brick:

---
node22
---
/mnt/xfsd/test-volume/test/49

(but if I delete the newly created files from client B ...)
My question is: why does node22 not get the newly created/written dirs and
files? Do I have to use find to trigger self-heal to fix that?

from ClientA's log, I find something like:

 I [afr-self-heal-data.c:712:afr_sh_data_fix] 0-test-volume-replicate-7: no
active sinks for performing self-heal on file /test/49

Is it harmless, given that it is only an informational-level message?

I also saw something like:
[2014-01-19 10:23:48.422757] E
[afr-self-heal-entry.c:2376:afr_sh_post_nonblocking_entry_cbk]
0-test-volume-replicate-7: Non Blocking entrylks failed for
/test/video/2014/01.
[2014-01-19 10:23:48.423042] E
[afr-self-heal-common.c:2160:afr_self_heal_completion_cbk]
0-test-volume-replicate-7: background  entry self-heal failed on
/test/video/2014/01

Is it related to this issue?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] lots of repetitive errors in logs

2013-12-23 Thread Mingfan Lu
I am seeing lots of log entries like the ones below; any thoughts?

[2013-12-24 11:35:19.659143] E
[marker-quota-helper.c:230:mq_dict_set_contribution]
(--/usr/lib64/glusterfs/3.3.0.5rhs/xlator/debug/io-stats.so(io_stats_lookup+0x13e)
[0x7f9941ec7a3e]
(--/usr/lib64/glusterfs/3.3.0.5rhs/xlator/features/marker.so(marker_lookup+0x300)
[0x7f99420dd170]
(--/usr/lib64/glusterfs/3.3.0.5rhs/xlator/features/marker.so(mq_req_xattr+0x3c)
[0x7f99420e66ec]))) 0-marker: invalid argument: loc-parent
[2013-12-24 11:35:19.659754] W
[marker-quota.c:2047:mq_inspect_directory_xattr] 0-sh-ugc1-mams-marker:
cannot add a new contribution node
[2013-12-24 11:35:19.664271] E
[marker-quota-helper.c:230:mq_dict_set_contribution]
(--/usr/lib64/glusterfs/3.3.0.5rhs/xlator/debug/io-stats.so(io_stats_lookup+0x13e)
[0x7f9941ec7a3e]
(--/usr/lib64/glusterfs/3.3.0.5rhs/xlator/features/marker.so(marker_lookup+0x300)
[0x7f99420dd170]
(--/usr/lib64/glusterfs/3.3.0.5rhs/xlator/features/marker.so(mq_req_xattr+0x3c)
[0x7f99420e66ec]))) 0-marker: invalid argument: loc-parent
[2013-12-24 11:35:19.664869] W
[marker-quota.c:2047:mq_inspect_directory_xattr] 0-sh-ugc1-mams-marker:
cannot add a new contribution node
[2013-12-24 11:35:19.677499] E
[marker-quota-helper.c:230:mq_dict_set_contribution]
(--/usr/lib64/glusterfs/3.3.0.5rhs/xlator/debug/io-stats.so(io_stats_lookup+0x13e)
[0x7f9941ec7a3e]
(--/usr/lib64/glusterfs/3.3.0.5rhs/xlator/features/marker.so(marker_lookup+0x300)
[0x7f99420dd170]
(--/usr/lib64/glusterfs/3.3.0.5rhs/xlator/features/marker.so(mq_req_xattr+0x3c)
[0x7f99420e66ec]))) 0-marker: invalid argument: loc-parent
[2013-12-24 11:35:19.678113] E
[marker-quota-helper.c:230:mq_dict_set_contribution]
(--/usr/lib64/glusterfs/3.3.0.5rhs/xlator/debug/io-stats.so(io_stats_lookup+0x13e)
[0x7f9941ec7a3e]
(--/usr/lib64/glusterfs/3.3.0.5rhs/xlator/features/marker.so(marker_lookup+0x300)
[0x7f99420dd170]
(--/usr/lib64/glusterfs/3.3.0.5rhs/xlator/features/marker.so(mq_req_xattr+0x3c)
[0x7f99420e66ec]))) 0-marker: invalid argument: loc-parent
[2013-12-24 11:35:19.678607] W
[marker-quota.c:2047:mq_inspect_directory_xattr] 0-sh-ugc1-mams-marker:
cannot add a new contribution node
[2013-12-24 11:35:19.693958] E
[marker-quota-helper.c:230:mq_dict_set_contribution]
(--/usr/lib64/glusterfs/3.3.0.5rhs/xlator/debug/io-stats.so(io_stats_lookup+0x13e)
[0x7f9941ec7a3e]
(--/usr/lib64/glusterfs/3.3.0.5rhs/xlator/features/marker.so(marker_lookup+0x300)
[0x7f99420dd170]
(--/usr/lib64/glusterfs/3.3.0.5rhs/xlator/features/marker.so(mq_req_xattr+0x3c)
[0x7f99420e66ec]))) 0-marker: invalid argument: loc-parent
[2013-12-24 11:35:19.694629] W
[marker-quota.c:2047:mq_inspect_directory_xattr] 0-sh-ugc1-mams-marker:
cannot add a new contribution node
[2013-12-24 11:35:19.695582] E
[marker-quota-helper.c:230:mq_dict_set_contribution]
(--/usr/lib64/glusterfs/3.3.0.5rhs/xlator/debug/io-stats.so(io_stats_lookup+0x13e)
[0x7f9941ec7a3e]
(--/usr/lib64/glusterfs/3.3.0.5rhs/xlator/features/marker.so(marker_lookup+0x300)
[0x7f99420dd170]
(--/usr/lib64/glusterfs/3.3.0.5rhs/xlator/features/marker.so(mq_req_xattr+0x3c)
[0x7f99420e66ec]))) 0-marker: invalid argument: loc-parent
[2013-12-24 11:35:19.706590] E
[marker-quota-helper.c:230:mq_dict_set_contribution]
(--/usr/lib64/glusterfs/3.3.0.5rhs/xlator/debug/io-stats.so(io_stats_lookup+0x13e)
[0x7f9941ec7a3e]
(--/usr/lib64/glusterfs/3.3.0.5rhs/xlator/features/marker.so(marker_lookup+0x300)
[0x7f99420dd170]
(--/usr/lib64/glusterfs/3.3.0.5rhs/xlator/features/marker.so(mq_req_xattr+0x3c)
[0x7f99420e66ec]))) 0-marker: invalid argument: loc-parent
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] How to set auth.allow using hostname?

2013-12-03 Thread Mingfan Lu
I tried to set auth.allow using hostnames (not IPs) of clients
such as
gluster volume set VOLUME auth.allow hostname1,hostname2,hostname3

But the clients could not mount the volume.
If I use IPs, it definitely works.
However, my clients use DHCP, so I don't think using IPs is a good idea: the
addresses can change, while the hostnames will not.

Any comments?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] How to set auth.allow using hostname?

2013-12-03 Thread Mingfan Lu
On the gluster nodes I can ping the clients by hostname, so DNS should not be
the root cause.
The clients can also resolve the hostnames (ping works).



On Wed, Dec 4, 2013 at 11:52 AM, Sharuzzaman Ahmat Raslan 
sharuzza...@gmail.com wrote:

 How is your DNS setting?

 Could your client resolve the hostname?


 On Wed, Dec 4, 2013 at 11:49 AM, Mingfan Lu mingfan...@gmail.com wrote:

 I tried to set auth.allow using hostnames (not IPs) of clients
 such as
 gluster volume set VOLUME auth.allow hostname1,hostname2,hostname3

 But the clients could not mount the volume
 If I use IPs, it defintely works.
 But for my clients use DHCP, so I don't think using IPs is a good idea
 for they could be changed but hostnames of them wouldn't.

 Any comments?


 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users




 --
 Sharuzzaman Ahmat Raslan

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] How to set auth.allow using hostname?

2013-12-03 Thread Mingfan Lu
Running host a.b.c.d, I got:
Host a.b.c.d.in-addr.arpa. not found: 3(NXDOMAIN)

a.b.c.d stands for the ip address of my client.


On Wed, Dec 4, 2013 at 1:18 PM, Cool cool...@hotmail.com wrote:

  Wild guess -

 How about DNS revert delegation? Try host IP to see if it can be
 mapped back to hostname1 then there should be something wrong, otherwise
 you have DNS problem.

 -C.B.

 P.S. I guessed because I believe whenever a connection comes in, the
 server does not know anything other than IP and port.

 On 12/3/2013 8:38 PM, Mingfan Lu wrote:

  In the gluster nodes, I could ping the clients using the hostnames. So,
 DNS wouldn't be the root cause.
  the clients could also resolve the hostnames (ping is ok)



 On Wed, Dec 4, 2013 at 11:52 AM, Sharuzzaman Ahmat Raslan 
 sharuzza...@gmail.com wrote:

  How is your DNS setting?

  Could your client resolve the hostname?


 On Wed, Dec 4, 2013 at 11:49 AM, Mingfan Lu mingfan...@gmail.com wrote:

   I tried to set auth.allow using hostnames (not IPs) of clients
 such as
  gluster volume set VOLUME auth.allow hostname1,hostname2,hostname3

 But the clients could not mount the volume
  If I use IPs, it defintely works.
  But for my clients use DHCP, so I don't think using IPs is a good idea
 for they could be changed but hostnames of them wouldn't.

  Any comments?


 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users




 --
 Sharuzzaman Ahmat Raslan




 ___
 Gluster-users mailing 
 listGluster-users@gluster.orghttp://supercolony.gluster.org/mailman/listinfo/gluster-users






___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] How to set auth.allow using hostname?

2013-12-03 Thread Mingfan Lu
The change for that bug seems more reasonable: it resolves the configured
hostnames to IPs on the server side and then compares those with the incoming
client's IP, so no reverse DNS lookup of the client's IP is needed.
But is the fix well tested and ready to backport?
https://bugzilla.redhat.com/show_bug.cgi?id=915153
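
Until a fixed build is available, a minimal sketch of a server-side workaround,
assuming the access check resolves names through NSS (names and addresses below
are placeholders):

  # give every brick server a way to reverse-resolve the client addresses,
  # either with proper PTR records in DNS or, failing that, /etc/hosts
  echo "10.0.0.21   client21.example.com client21" >> /etc/hosts
  # verify the lookup the way NSS does it (host/dig query DNS directly and
  # ignore /etc/hosts)
  getent hosts 10.0.0.21
  # then the hostname form should be accepted
  gluster volume set VOLUME auth.allow client21.example.com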


On Wed, Dec 4, 2013 at 2:07 PM, shwetha spand...@redhat.com wrote:

  Refer to bug : https://bugzilla.redhat.com/show_bug.cgi?id=915153

 On 12/04/2013 11:11 AM, Cool wrote:

 Means you'd better use IP instead of host name, or you have to ask your
 DNS administrator to setup reverse DNS (PTR record) for those IPs, which
 may involve your upstream ISP, or even more complicated than that ...

 http://en.wikipedia.org/wiki/Reverse_DNS_lookup

 - C.B.
 On 12/3/2013 9:24 PM, Mingfan Lu wrote:


 host(a.b.c.d) I got,
 Host a.b.c.d.in-addr.arpa. not found: 3(NXDOMAIN)

  a.b.c.d stands for the ip address of my client.


  On Wed, Dec 4, 2013 at 1:18 PM, Cool cool...@hotmail.com wrote:

  Wild guess -

 How about DNS revert delegation? Try host IP to see if it can be
 mapped back to hostname1 then there should be something wrong, otherwise
 you have DNS problem.

 -C.B.

 P.S. I guessed because I believe whenever a connection comes in, the
 server does not know anything other than IP and port.

 On 12/3/2013 8:38 PM, Mingfan Lu wrote:

  In the gluster nodes, I could ping the clients using the hostnames. So,
 DNS wouldn't be the root cause.
  the clients could also resolve the hostnames (ping is ok)



 On Wed, Dec 4, 2013 at 11:52 AM, Sharuzzaman Ahmat Raslan 
 sharuzza...@gmail.com wrote:

  How is your DNS setting?

  Could your client resolve the hostname?


 On Wed, Dec 4, 2013 at 11:49 AM, Mingfan Lu mingfan...@gmail.comwrote:

   I tried to set auth.allow using hostnames (not IPs) of clients
 such as
  gluster volume set VOLUME auth.allow hostname1,hostname2,hostname3

 But the clients could not mount the volume
  If I use IPs, it defintely works.
  But for my clients use DHCP, so I don't think using IPs is a good idea
 for they could be changed but hostnames of them wouldn't.

  Any comments?


 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users




 --
 Sharuzzaman Ahmat Raslan




 ___
 Gluster-users mailing 
 listGluster-users@gluster.orghttp://supercolony.gluster.org/mailman/listinfo/gluster-users







 ___
 Gluster-users mailing 
 listGluster-users@gluster.orghttp://supercolony.gluster.org/mailman/listinfo/gluster-users




___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] strange file size is 0 with 777 permission.

2013-11-21 Thread Mingfan Lu
I have a volume named upload.
I upload some kind of file to the volume
and then use mv to rename the file
after that, I found the file size is 0 with 777 permission.

 volume information ==
 gluster volume info upload

Volume Name: upload
Type: Distributed-Replicate
Volume ID: 6220fd5f-635c-44fb-a627-55dc796d5d1f
Status: Started
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: 10.10.135.23:/mnt/xfsd/upload
Brick2: 10.10.135.24:/mnt/xfsd/upload
Brick3: 10.10.135.25:/mnt/xfsd/upload
Brick4: 10.10.135.26:/mnt/xfsd/upload
Brick5: 10.10.135.27:/mnt/xfsd/upload
Brick6: 10.10.135.28:/mnt/xfsd/upload
Options Reconfigured:
performance.quick-read: off
performance.read-ahead: off
server.allow-insecure: on
features.limit-usage: /:22TB
features.quota: enable
nfs.enable-ino32: on
features.quota-timeout: 5

 example =

-rwxrwxrwx   1 root root0 Nov 21 17:23
b9003ee83d28e6b8057e7d57813986fb.mp4

10.10.135.23
/mnt/xfsd/upload/bj/operation/video/2013/11/21/f7/b9003ee83d28e6b8057e7d57813986fb.mp4
does not exist!

10.10.135.24
trusted.glusterfs.dht.linkto=upload-replicate-2
trusted.glusterfs.gfid=764531a81e3040098a423bc226c8c2fc
-T 2 root root 0 Nov 21 17:23
/mnt/xfsd/upload/bj/operation/video/2013/11/21/f7/b9003ee83d28e6b8057e7d57813986fb.mp4
-T 2 root root 0 Nov 21 17:23
/mnt/xfsd/upload/.glusterfs/76/45/764531a8-1e30-4009-8a42-3bc226c8c2fc

10.10.135.25
/mnt/xfsd/upload/bj/operation/video/2013/11/21/f7/b9003ee83d28e6b8057e7d57813986fb.mp4
does not exist!

10.10.135.26
/mnt/xfsd/upload/bj/operation/video/2013/11/21/f7/b9003ee83d28e6b8057e7d57813986fb.mp4
does not exist!

10.10.135.27
trusted.glusterfs.gfid=764531a81e3040098a423bc226c8c2fc
-rw-r--r-- 2 root root 1685321 Nov 21 17:23
/mnt/xfsd/upload/bj/operation/video/2013/11/21/f7/b9003ee83d28e6b8057e7d57813986fb.mp4
-rw-r--r-- 2 root root 1685321 Nov 21 17:23
/mnt/xfsd/upload/.glusterfs/76/45/764531a8-1e30-4009-8a42-3bc226c8c2fc

10.10.135.28
trusted.glusterfs.gfid=764531a81e3040098a423bc226c8c2fc
-rw-r--r-- 2 root root 1685321 Nov 21 17:23
/mnt/xfsd/upload/bj/operation/video/2013/11/21/f7/b9003ee83d28e6b8057e7d57813986fb.mp4
-rw-r--r-- 2 root root 1685321 Nov 21 17:23
/mnt/xfsd/upload/.glusterfs/76/45/764531a8-1e30-4009-8a42-3bc226c8c2fc

Even when I take 10.10.135.23 offline, I still get this issue.

Could anyone help?
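
For reference, a minimal sketch of how the per-brick attributes above can be
collected and how DHT link files can be spotted (paths are the ones from this
volume):

  # dump the trusted.* attributes of the file as stored on a brick
  getfattr -m . -d -e hex \
    /mnt/xfsd/upload/bj/operation/video/2013/11/21/f7/b9003ee83d28e6b8057e7d57813986fb.mp4
  # DHT link files are the zero-byte, sticky-bit (---------T) entries that
  # carry trusted.glusterfs.dht.linkto; they only point at the subvolume
  # holding the real data and can appear after a rename whose new name
  # hashes to a different subvolume
  find /mnt/xfsd/upload -type f -perm -1000 -size 0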
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users