Re: [Gluster-users] XFS and MD RAID
On Mon, Sep 10, 2012 at 06:43:44PM +0100, Brian Candler wrote:
> It has been running fine for the last 7 hours or so. I have purposely sent
> some dd reads to the two failed drives in the server - I see the errors on
> those drives in dmesg, but activity on the remaining drives has not been
> affected. (The remaining drives are in an md raid0 array loaded by running
> 4 instances of bonnie++.) Tomorrow, when I have physical access to the
> machine, I'll try hot-plugging the two failed drives as well.

Good news:

- hot-plugging the two bad drives while I/O access is going on to the other
  drives is fine. The other drives continue to work without a hitch.
- even removing the two bad drives while dd'ing from them was fine. I/O to
  the other drives was also unaffected.

So this patch looks good to me. I hope it can find its way into the production Ubuntu kernel soon.

Regards, Brian.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
Re: [Gluster-users] XFS and MD RAID
On Tue, Sep 11, 2012 at 09:51:28AM +0100, Brian Candler wrote:
> So this patch looks good to me. I hope it can find its way into the
> production Ubuntu kernel soon.

FYI, I have opened a bug report at https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1049013
Re: [Gluster-users] libgfapi directory operation supported?
On 09/10/2012 03:01 PM, 符永涛 wrote:
> Dear gluster experts, I'm trying to use glusterfs libgfapi to write a
> glusterfs application, but I fail to find any directory-related functions
> in libgfapi. Can anyone give me a clue?

Yes, that's because libgfapi is a new introduction to the codebase; as of now it contains only the few 'fops' that are absolutely required for the KVM integration. The directory operations and a few more file-based operations are still pending. (ref: api/src/glfs.c)

Regards, Amar
[Gluster-users] Lock during volume profile
Hello,

I am facing a problem with volume profile: it works fine until it gets locked for an unknown reason. It is locked for only one peer; I can still execute the profile command from the other peers. This is what I have found in the logs of the affected peer:

[2012-09-11 13:48:19.154215] I [glusterd-handler.c:1838:glusterd_handle_cli_profile_volume] 0-management: Received volume profile req for volume Lookups
[2012-09-11 13:48:19.154288] E [glusterd-utils.c:277:glusterd_lock] 0-glusterd: Unable to get lock for uuid: 66de8a84-c801-45e5-a93d-afdb08535d4e, lock held by: 66de8a84-c801-45e5-a93d-afdb08535d4e
[2012-09-11 13:48:19.154318] E [glusterd-handler.c:453:glusterd_op_txn_begin] 0-management: Unable to acquire local lock, ret: -1

The mentioned uuid is the ID of the peer itself (note the lock is held by the same uuid). The only way to fix this is to restart glusterd. Is anyone else facing the same issue, or does anyone know what causes it and how to fix it?

Filip
Re: [Gluster-users] Lock during volume profile
One more thing I have learned - it isn't only locked; the whole peer is disconnected from the gluster cluster! :-( I can run gluster commands from it, but the other peers don't see the affected one, or see it as disconnected:

Hostname: 10.17.35.249
Uuid: 66de8a84-c801-45e5-a93d-afdb08535d4e
State: Peer in Cluster (Disconnected)

2012/9/11 Filip Pytloun filip.pytl...@gooddata.com:
> Hello, I am facing a problem with volume profile: it works fine until it
> gets locked for an unknown reason. It is locked for only one peer; I can
> still execute the profile command from the other peers. [quoted log lines
> trimmed] The only way to fix this is to restart glusterd. Is anyone else
> facing the same issue, or does anyone know what causes it and how to fix
> it?
> Filip
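The restart workaround described in this thread can be sketched as a short command sequence. This is a hedged sketch only: the init-system invocation varies by distribution, and the volume name "Lookups" is taken from the log lines above.

```shell
# On the affected peer: restart glusterd to release the stale local lock.
# (SysV-style service invocation assumed; adjust for your init system.)
service glusterd restart

# From another peer: verify the affected peer rejoins, i.e. its state
# changes back to "Peer in Cluster (Connected)".
gluster peer status

# The previously failing command should now acquire the lock and run.
gluster volume profile Lookups info
```

This only clears the symptom; it does not explain why the lock is never released in the first place.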
Re: [Gluster-users] A problem with gluster 3.3.0 and Sun Grid Engine
I meant it the other way around: I meant to say that 3.2.0 must also have the issue (since you mentioned 3.3.0 in the subject).

Avati

On Tue, Sep 11, 2012 at 6:27 AM, Manhong Dai da...@umich.edu wrote:
> Hi Avati,
> Thanks a lot for your help! It is good to know that 3.2.x doesn't have this
> problem. So the worst scenario for me is to re-install it with the latest
> 3.2.*. I hope my life won't be that miserable.
> Best, Manhong

On Mon, 2012-09-10 at 20:53 -0700, Anand Avati wrote:
> Also, I find it very suspect that 3.2.x did not have the same behavior!
> Avati

On Mon, Sep 10, 2012 at 8:53 PM, Anand Avati anand.av...@gmail.com wrote:
> This is a limitation of the 'handle' nature of FUSE filesystems. You will
> have to set a lower entry-timeout (mount option) to fix this problem.
> Avati

On Mon, Sep 10, 2012 at 5:13 PM, Dai, Manhong da...@umich.edu wrote:
> Hi Avati,
> Thanks a lot! In my case, the application that tries to create a new file
> is not inside the folder. I wrote a simple bash script to demo this
> problem.
>
> #!/bin/bash
> FOLDER=/home/mengf_lab/daimh/temp/testdir
> for ((i=0; i<100; i++))
> do
>     echo ###$i###
>     ssh mengf-n1 rm -r $FOLDER; mkdir $FOLDER
>     seq 10 | split -l 1 - $FOLDER/a.
> done
>
> And its output is:
>
> ###0###
> ###1###
> split: /home/mengf_lab/daimh/temp/testdir/a.aa: No such file or directory
> ###2###
> split: /home/mengf_lab/daimh/temp/testdir/a.aa: No such file or directory
> ###3###
> ###4###
>
> Best, Manhong
>
> __
> From: Anand Avati [anand.av...@gmail.com]
> Sent: Monday, September 10, 2012 5:25 PM
> To: Dai, Manhong
> Cc: gluster-users@gluster.org
> Subject: Re: [Gluster-users] A problem with gluster 3.3.0 and Sun Grid Engine
>
> On Mon, Sep 10, 2012 at 8:30 AM, Manhong Dai da...@umich.edu wrote:
>> Hi, We got a huge problem on our Sun Grid Engine cluster with glusterfs
>> 3.3.0. Could somebody help me? Based on my understanding, if a folder is
>> removed and recreated on another client node, a program that tries to
>> create a new file under the folder fails very often.
Is the directory deleted and recreated by another client/mount while the application which attempts to create the file stays cd'ed inside the directory? Can you try to confirm if this is the pattern?

Avati
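Avati's suggested fix above is a client-side mount option. A minimal sketch of what that remount could look like; the server name, volume name, and mount point are assumptions (the real ones for this cluster are not given in the thread):

```shell
# Remount the gluster client with a lower (here: zero) entry timeout so
# cached directory entries are revalidated on every lookup instead of
# being served from a stale handle after the directory is recreated on
# another client node.
umount /home/mengf_lab
mount -t glusterfs -o entry-timeout=0 mengf-server:/labvol /home/mengf_lab
```

The trade-off is more lookup traffic to the bricks, since a lower entry-timeout reduces how long the FUSE client may reuse cached directory entries.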
Re: [Gluster-users] 3.3 Quota Problems
I found this only happens when I mount the volume with the acl option. Otherwise I don't get the "no limit-set option provided" message, and things work fine - without even needing to remount the volume to see changes to quota.

... ling

On 09/10/2012 07:15 PM, Ling Ho wrote:
> I found these in the client log:
>
> [2012-09-10 18:56:07.728206] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
> [2012-09-10 18:56:08.751162] D [glusterfsd-mgmt.c:1441:is_graph_topology_equal] 0-glusterfsd-mgmt: graphs are equal
> [2012-09-10 18:56:08.751175] D [glusterfsd-mgmt.c:1495:glusterfs_volfile_reconfigure] 0-glusterfsd-mgmt: Only options have changed in the new graph
> [2012-09-10 18:56:08.751192] D [options.c:925:xlator_reconfigure_rec] 0-ana03-dht: reconfigured
> [2012-09-10 18:56:08.751200] I [quota.c:3085:quota_parse_limits] 0-ana03-quota: no limit-set option provided
>
> ... ling

On 09/10/2012 03:01 PM, Ling Ho wrote:
> I am trying to use directory quota in our environment and am facing two
> problems:
>
> 1. When a new quota is set on a directory, it doesn't take effect until
> the volume is remounted on the client. This is a major inconvenience.
>
> 2. If I add a new quota, quota stops working on the client.
>
> This is how to reproduce problem #2. I have these directories under my
> volume ana03:
>
> ling
> ling/testdir
>
> # gluster volume quota ana03 limit-usage /ling 20GB
>
> I can write into the directory ling until it is over 20GB, at which point
> I get a "Disk quota exceeded" error. However, if I then set a quota for
> ling/testdir, without remounting the volume on the client:
>
> # gluster volume quota ana03 limit-usage /ling/testdir 2GB
>
> not only can I write more than 2GB under ling/testdir, I can now write
> more than 20GB under ling. Remounting the volume on the client fixes
> everything.
>
> I am using glusterfs-3.3.0-1 on both the client and the server. The server
> is running RHEL6.3, and the client RHEL5.8. Any ideas about these
> problems, and is there a fix? Thanks.
>
> ...
ling
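Until this is fixed, the observations above suggest an interim workaround: mount without the acl option if you can, and otherwise remount after every quota change. A hedged sketch, using the volume name from the report but a hypothetical server name and client mount point:

```shell
# Server side: set or change a directory quota.
gluster volume quota ana03 limit-usage /ling/testdir 2GB

# On each client: remount so the new limit-set value reaches the quota
# translator. Mounting without "-o acl" avoided the problem entirely in
# the report above, so only add acl if you actually need it.
umount /mnt/ana03
mount -t glusterfs server1:/ana03 /mnt/ana03
```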
Re: [Gluster-users] libgfapi directory operation supported?
Hi Amar,

Thank you for sharing the info. We're really looking forward to it.

2012/9/11 Amar Tumballi ama...@redhat.com:
> On 09/10/2012 03:01 PM, 符永涛 wrote:
>> Dear gluster experts, I'm trying to use glusterfs libgfapi to write a
>> glusterfs application, but I fail to find any directory-related functions
>> in libgfapi. Can anyone give me a clue?
>
> Yes, that's because libgfapi is a new introduction to the codebase; as of
> now it contains only the few 'fops' that are absolutely required for the
> KVM integration. The directory operations and a few more file-based
> operations are still pending. (ref: api/src/glfs.c)
>
> Regards, Amar

--
符永涛