Re: [Gluster-devel] Two annoying bugs in 3.2.5

2011-12-16 Thread Pranith Kumar K
On 12/16/2011 07:57 PM, Emmanuel Dreyfus wrote: Hi, I have two annoying bugs in 3.2.5. I wonder if there is a patch to test, or a change to backport from master, in order to address them: 1) Adding a brick or rebalancing a volume changes inodes, and therefore wrecks processes that were working on f

Re: [Gluster-devel] Two annoying bugs in 3.2.5

2011-12-20 Thread Pranith Kumar K
On 12/17/2011 12:40 AM, Emmanuel Dreyfus wrote: Pranith Kumar K wrote: 2) When using AFR, if a peer goes down, processes that have I/O pending will see an error. Just retrying the same operation is fine, but that is a bit frustrating. Emmanuel, Could you give the test case for the afr

Re: [Gluster-devel] replicate background threads

2012-03-13 Thread Pranith Kumar K
On 03/13/2012 07:52 PM, Ian Latter wrote: Hello, Well we've been privy to our first true error in Gluster now, and we're not sure of the cause. The SaturnI machine with 1Gbyte of RAM was exhausting its memory and crashing and we saw core dumps on SaturnM and MMC. Replacing the SaturnI h

Re: [Gluster-devel] replicate background threads

2012-03-13 Thread Pranith Kumar K
number. I didn't know this. Is there a queue length for what is yet to be handled by the background self heal count? If so, can it also be adjusted? - Original Message - From: "Pranith Kumar K" To: "Ian Latter" Subject: Re: [Gluster-devel] replicate background

Re: [Gluster-devel] glusterfs3.2.7 split brain on a server, while it's normal on another server

2013-01-09 Thread Pranith Kumar K
On 01/09/2013 11:03 AM, Song wrote: Hi, We have a glusterfs cluster, version is 3.2.7. The volume info is as below: Volume Name: gfs1 Type: Distributed-Replicate Status: Started Number of Bricks: 94 x 3 = 282 Transport-type: tcp We natively mount the volume on all cluster servers. When w

Re: [Gluster-devel] Need review for client-reopen changes

2013-01-09 Thread Pranith Kumar K
On 01/07/2013 04:46 PM, Raghavendra Gowdappa wrote: Pranith, This comment is on the second patch. While the implementation looks fine, I've some concerns related to the idea itself. Consider the following situation with a replicate volume of two subvolumes: 1. process 1 (p1) acquires a mandatory

[Gluster-devel] Afr handling dir fop failures on all nodes gracefully.

2013-01-09 Thread Pranith Kumar K
hi, Attaching the steps to re-create the issue. As part of an entry transaction, before performing create/mknod/mkdir/rmdir/unlink/link/symlink/rename fops, afr takes the appropriate entry locks and then performs the pre-op. If the fop fails on all nodes then the changelog leaves the directory in
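For context, the pending changelog that AFR records around such an entry transaction lives in extended attributes on the backend bricks, so the state left behind by a failed fop can be inspected directly there. A minimal sketch, assuming a hypothetical brick path and volume name "testvol":

    # Dump the AFR changelog xattrs of the parent directory on each brick:
    getfattr -d -m 'trusted.afr.' -e hex /bricks/brick1/testdir

    # The value is three 4-byte counters (data/metadata/entry); a non-zero
    # entry counter against the other replica means a pending entry self-heal:
    # trusted.afr.testvol-client-1=0x000000000000000000000001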

Re: [Gluster-devel] ownership of link file changed to root:root after reboot brick service

2013-01-16 Thread Pranith Kumar K
On 01/16/2013 12:23 PM, huangql wrote: Dear all, I have encountered a strange problem: the ownership of the linkto file was changed to root:root after we rebooted the server. And I reproduced the problem many times. For example: ff_2 First, the newly created file: ff_2 [root@test02 ~]# ll /test
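For anyone trying to reproduce this, the link file in question is the zero-byte, sticky-bit entry that DHT leaves on the non-hashed brick; its ownership can be compared against the data file on the other brick. A rough sketch, with hypothetical brick paths:

    # The DHT link file shows up on the brick as a zero-byte file with mode ---------T:
    ls -l /bricks/brick2/ff_2

    # It carries an xattr naming the subvolume that holds the real file:
    getfattr -n trusted.glusterfs.dht.linkto -e text /bricks/brick2/ff_2

    # Compare owner/group with the data file on the brick it points to:
    ls -l /bricks/brick1/ff_2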

Re: [Gluster-devel] Glusterfs distributed volume stops a long time with bailing out frame type(GlusterFS 3.2.7) op(INODELK(29)

2013-01-22 Thread Pranith Kumar K
On 01/23/2013 08:33 AM, Song wrote: Hi, When an application accesses a directory of the glusterfs volume, the glusterfs client spends about 1 hour, from 19:44:05 to 20:44:25. The gluster volume is DHT + AFR, native mount to /xmail/gfs1 Access(/xmail/gfs1/xmail_dedup/gfs1_000/011/204/) The following mess

Re: [Gluster-devel] Need review for client-reopen changes

2013-01-22 Thread Pranith Kumar K
On 01/09/2013 03:51 PM, Pranith Kumar K wrote: On 01/07/2013 04:46 PM, Raghavendra Gowdappa wrote: Pranith, This comment is on the second patch. While the implementation looks fine, I've some concerns related to the idea itself. Consider following situation with a replicate volume o

[Gluster-devel] Regarding marking of data/mdata changelog in entry self-heal

2013-01-30 Thread Pranith Kumar K
hi Avati, In entry self-heal of a directory, for every new file that is created, data/metadata changelogs are marked on the source file to indicate a pending data/metadata self-heal from the source file to the new file created as part of this self-heal. But this data/metadata changelog
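As background, whether such data/metadata changelogs are still pending on the newly created file can be watched from the heal info output once entry self-heal has run; a quick sketch, assuming a 3.3-or-later self-heal daemon and a hypothetical volume name "testvol":

    # List entries that still have pending self-heals recorded against them:
    gluster volume heal testvol info

    # Trigger a full crawl so the marked changelogs get consumed:
    gluster volume heal testvol full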

Re: [Gluster-devel] glusterfs(3.2.7) hang when making the same dir at the same time

2013-01-31 Thread Pranith Kumar K
On 02/01/2013 11:44 AM, Song wrote: I have compared the source code of features/locks between 3.2.7 and 3.3.1. I find that a mutex lock has been added in various functions, such as the get_domain function. Have these changes been applied to the 3.2.7 branch? From: Anand Avati [mailto:anand.av...@gmail.com] Sent:

Re: [Gluster-devel] Regarding marking of data/mdata changelog in entry self-heal

2013-02-03 Thread Pranith Kumar K
On 01/31/2013 10:14 AM, Pranith Kumar K wrote: hi Avati, In entry self-heal of a directory, for every new file that is created, data/metadata changelogs are marked on the source file to indicate pending data/metadata self-heal from source file to the new file that is created as part of

[Gluster-devel] Dynamic disabling of eager-locking based on number of fds

2013-02-11 Thread Pranith Kumar K
hi, Problem: When there are multiple fds writing to the same file with eager-lock enabled, the fd which acquires the eager-lock waits for post-op-delay secs before doing the unlock. Because of this, all other fds opened on the file face extra delay when performing writes. Eager-locking, post-o
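For readers who want to work around this until a dynamic behaviour is in place, both knobs mentioned above are exposed as volume options; a minimal sketch, assuming a hypothetical volume name "testvol" and a release that already ships the post-op-delay option:

    # Eager locking is a per-volume option of the replicate translator:
    gluster volume set testvol cluster.eager-lock on

    # Shortening the post-op delay reduces how long the other fds on the
    # same file wait behind the fd holding the eager lock:
    gluster volume set testvol cluster.post-op-delay-secs 1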

Re: [Gluster-devel] Dynamic disabling of eager-locking based on number of fds

2013-02-11 Thread Pranith Kumar K
On 02/12/2013 08:43 AM, Anand Avati wrote: On Mon, Feb 11, 2013 at 7:02 PM, Pranith Kumar K <pkara...@redhat.com> wrote: hi, Problem: When there are multiple fds writing to the same file with eager-lock enabled, the fd which acquires the eager-lock

[Gluster-devel] Eager-lock and nfs graph generation

2013-02-11 Thread Pranith Kumar K
hi, Please note that this is a theoretical case and I did not run into such a situation, but I feel it is important to address this. A configuration with "eager-lock on" and "write-behind off" should not be allowed, as it leads to lock synchronization problems which lead to data inconsistency among r
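Until the graph generation enforces this, the combination can at least be checked and corrected by hand; a sketch of the guard in CLI terms, with a hypothetical volume name "testvol":

    # Show whether either option has been reconfigured on the volume:
    gluster volume info testvol | grep -i -e eager-lock -e write-behind

    # If write-behind is turned off, eager-lock should be turned off as well,
    # since eager-lock currently relies on write-behind handling overlapping writes:
    gluster volume set testvol performance.write-behind off
    gluster volume set testvol cluster.eager-lock off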

Re: [Gluster-devel] Eager-lock and nfs graph generation

2013-02-19 Thread Pranith Kumar K
eager-lock code itself rather than depend on write-behind :| Avati On Mon, Feb 11, 2013 at 10:07 PM, Anand Avati <anand.av...@gmail.com> wrote: On Mon, Feb 11, 2013 at 9:32 PM, Pranith Kumar K <pkara...@redhat.com> wrote: hi, Please note that

Re: [Gluster-devel] Eager-lock and nfs graph generation

2013-02-19 Thread Pranith Kumar K
On 02/20/2013 07:03 AM, Anand Avati wrote: On Tue, Feb 19, 2013 at 5:12 PM, Anand Avati <anand.av...@gmail.com> wrote: On Tue, Feb 19, 2013 at 3:59 AM, Pranith Kumar K <pkara...@redhat.com> wrote: On 02/19/2013 11:26 AM, Anand Avati wrote:

[Gluster-devel] regarding volume type change

2013-02-22 Thread Pranith Kumar K
hi, Could you let us know where this feature is used, i.e. the use cases it addresses? Pranith.

Re: [Gluster-devel] Eager-lock and nfs graph generation

2013-02-25 Thread Pranith Kumar K
fs xlators. Is it ok if we use xdata instead of flags to convey that write-behind took care of overlaps? Pranith On Tue, Feb 19, 2013 at 7:20 PM, Anand Avati <anand.av...@gmail.com> wrote: On Tue, Feb 19, 2013 at 6:11 PM, Pranith Kumar K <pkara...@redhat.com>

Re: [Gluster-devel] regarding volume type change

2013-02-25 Thread Pranith Kumar K
On 02/23/2013 01:40 AM, Anand Avati wrote: On Fri, Feb 22, 2013 at 5:39 AM, Pranith Kumar K <pkara...@redhat.com> wrote: hi, Could you let us know where this feature is used, i.e. the use cases it addresses? Can you be more specific about the question? N

[Gluster-devel] Debugging enhancements for inode/entry lks

2013-02-26 Thread Pranith Kumar K
hi, From discussions with Avati, we are thinking of addressing the following issues in the locks xlator. At the moment, to find which mount performed inode/entry lks, gdb has to be attached to the process and the connection address de-referenced to figure out the actual
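For comparison, part of what currently has to be dug out through gdb is already available from a statedump of the brick process, which lists granted and blocked inodelk/entrylk requests per inode; a rough sketch, assuming a 3.3-or-later CLI, a hypothetical volume name "testvol" and the default dump directory:

    # Ask the brick processes to dump their state, including posix locks,
    # inodelks and entrylks:
    gluster volume statedump testvol

    # The dumps land in the statedump directory on each server
    # (commonly /var/run/gluster); look for the lock sections:
    grep -A4 -e inodelk -e entrylk /var/run/gluster/*.dump.*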

[Gluster-devel] Duplicate entries with rebalance

2013-02-28 Thread Pranith Kumar K
EXPECT "" get_dup_files $M0/dir6 EXPECT "" get_dup_files $M0/dir7 EXPECT "" get_dup_files $M0/dir8 EXPECT "" get_dup_files $M0/dir9 EXPECT "" get_dup_files $M0/dir10 Pranith. >From 39b180fa54a2adbdf55d78454514b9a403a2

Re: [Gluster-devel] unlock failed on 1 unlock

2013-03-14 Thread Pranith Kumar K
On 03/14/2013 10:57 AM, Emmanuel Dreyfus wrote: With 3.4.0alpha2, a process remains stuck on a FUSE operation on the client. The glusterfs client log says: [2013-03-14 00:40:42.269307] W [client-rpc-fops.c:1588:client3_3_finodelk_cbk] 0-gfs33-client-3: remote operation failed: Invalid argu
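When a client hangs like this on finodelk, the brick side can be checked with a statedump to see whether the lock is still granted to a client that has gone away, and on 3.3-and-later releases a stale granted lock can be released with clear-locks; a hedged sketch, assuming a hypothetical volume name "testvol" and file path:

    # See which client currently holds the inode lock on the file:
    gluster volume statedump testvol

    # If the holder is gone, forcibly release the granted inode locks on the
    # path (use with care):
    gluster volume clear-locks testvol /path/to/file kind granted inode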