Re: [Gluster-devel] Fw: Re: Corvid gluster testing

2014-08-06 Thread Pranith Kumar Karampuri
/homegfs Brick2: gfsib01b.corvidtec.com:/data/brick01b/homegfs Brick3: gfsib01a.corvidtec.com:/data/brick02a/homegfs Brick4: gfsib01b.corvidtec.com:/data/brick02b/homegfs/ David -- Forwarded Message -- From: Pranith Kumar Karampuri pkara...@redhat.com

Re: [Gluster-devel] Fw: Re: Corvid gluster testing

2014-08-06 Thread Pranith Kumar Karampuri
On 08/07/2014 06:48 AM, Anand Avati wrote: On Wed, Aug 6, 2014 at 6:05 PM, Pranith Kumar Karampuri pkara...@redhat.com wrote: We checked this performance with plain distribute as well and on nfs it gave 25 minutes whereas on nfs it gave around 90

Re: [Gluster-devel] Fw: Re: Corvid gluster testing

2014-08-06 Thread Pranith Kumar Karampuri
PM, Pranith Kumar Karampuri pkara...@redhat.com wrote: On 08/07/2014 06:48 AM, Anand Avati wrote: On Wed, Aug 6, 2014 at 6:05 PM, Pranith Kumar Karampuri pkara...@redhat.com wrote: We checked this performance

[Gluster-devel] regarding fuse mount crash on graph-switch

2014-08-06 Thread Pranith Kumar Karampuri
hi, Could you guys review http://review.gluster.com/#/c/8402. This fixes crash reported by JoeJulian. We are yet to find why fd-migration failed. Pranith ___ Gluster-devel mailing list Gluster-devel@gluster.org

Re: [Gluster-devel] regarding fuse mount crash on graph-switch

2014-08-06 Thread Pranith Kumar Karampuri
lookup. Anyway this is a different change compared to the one I sent in this patch. Pranith. Thanks! On Wed, Aug 6, 2014 at 9:16 PM, Pranith Kumar Karampuri pkara...@redhat.com wrote: hi, Could you guys review http://review.gluster.com/#/c/8402

Re: [Gluster-devel] Automated split-brain resolution

2014-08-07 Thread Pranith Kumar Karampuri
On 08/07/2014 02:05 PM, Ravishankar N wrote: Manual resolution of split-brains [1] has been a tedious task involving understanding and modifying AFR's changelog extended attributes. To simplify and to an extent automate this task, we are proposing a new CLI command with which the user can

Re: [Gluster-devel] Automated split-brain resolution

2014-08-12 Thread Pranith Kumar Karampuri
On 08/12/2014 11:29 AM, Harshavardhana wrote: This is a standard problem where there are split-brains in distributed systems. For example even in git there are cases where it gives up asking users to fix the file i.e. merge conflicts. If the user doesn't want split-brains they should move to

Re: [Gluster-devel] [PATCH] fix possible array out of bound

2014-08-12 Thread Pranith Kumar Karampuri
hi, Welcome to glusterfs :-). Could you please follow the instructions here to send the patch. http://www.gluster.org/community/documentation/index.php/Simplified_dev_workflow Pranith. On 08/12/2014 08:52 AM, Ruoyu wrote: fix dangerous usage of volname because strncpy does not always

Re: [Gluster-devel] [Gluster-users] Our plan to get bugs fixed quicker, and features implemented sooner

2014-08-21 Thread Pranith Kumar Karampuri
CC new gluster-devel@gluster.org mailing list. Pranith On 08/21/2014 12:42 PM, Lalatendu Mohanty wrote: I hope the subject line has increased your curiosity to go through the email :). As a community, we are looking for contributors for GlusterFS bug triage and hopefully this mail will give

Re: [Gluster-devel] glfs_creat this method hang up

2014-08-27 Thread Pranith Kumar Karampuri
Guys who work with glfs_*, could you guys reply to this question. Pranith On 08/27/2014 03:16 PM, ABC-new wrote: hi~: while i run the glusterfs example via libgfapi, gcc -c glusterfs_example -o glfs -luuid the method glfs_creat hang up. I want to generate the uuid for

Re: [Gluster-devel] [Gluster-users] Proposal for GlusterD-2.0

2014-09-06 Thread Pranith Kumar Karampuri
On 09/05/2014 03:51 PM, Kaushal M wrote: GlusterD performs the following functions as the management daemon for GlusterFS: - Peer membership management - Maintains consistency of configuration data across nodes (distributed configuration store) - Distributed command execution (orchestration)

Re: [Gluster-devel] How to fix wrong telldir/seekdir usage

2014-09-13 Thread Pranith Kumar Karampuri
On 09/14/2014 12:32 AM, Emmanuel Dreyfus wrote: In 1lrx1si.n8tms1igmi5pm%m...@netbsd.org I explained why NetBSD currently fails self-heald.t, but since the subject is buried deep in a thread, it might be worth starting a new one to talk about how to fix it. In 3 places within glusterfs code

Re: [Gluster-devel] How to fix wrong telldir/seekdir usage

2014-09-14 Thread Pranith Kumar Karampuri
On 09/14/2014 10:39 AM, Emmanuel Dreyfus wrote: Pranith Kumar Karampuri pkara...@redhat.com wrote: Just to make sure I understand the problem, the issue is happening because self-heal-daemon uses anonymous fds to perform readdirs? i.e. there is no explicit opendir on the directory. Everytime

Re: [Gluster-devel] How to fix wrong telldir/seekdir usage

2014-09-14 Thread Pranith Kumar Karampuri
On 09/14/2014 10:41 PM, Emmanuel Dreyfus wrote: 'Pranith Kumar Karampuri pkara...@redhat.com wrote: I can do that. That will teach me about that anonymous fd. Reading the code it seems afr-self-heald.c code does opendir and use the fd for readdir syncop, which suggest underlying xlator

[Gluster-devel] how do you debug ref leaks?

2014-09-17 Thread Pranith Kumar Karampuri
hi, Till now the only method I used to find ref leaks effectively is to find what operation is causing ref leaks and read the code to find if there is a ref-leak somewhere. Valgrind doesn't solve this problem because it is reachable memory from inode-table etc. I am just wondering if

Re: [Gluster-devel] how do you debug ref leaks?

2014-09-17 Thread Pranith Kumar Karampuri
be helpful is allocator info for generic objects like dict, inode, fd etc. That way we wouldn't have to sift through large amounts of code. Could you elaborate on the idea, please. Pranith - Original Message - From: Pranith Kumar Karampuri pkara...@redhat.com To: Gluster Devel gluster-devel

Re: [Gluster-devel] how do you debug ref leaks?

2014-09-18 Thread Pranith Kumar Karampuri
On 09/18/2014 03:13 PM, Niels de Vos wrote: On Thu, Sep 18, 2014 at 07:43:00AM +0530, Pranith Kumar Karampuri wrote: hi, Till now the only method I used to find ref leaks effectively is to find what operation is causing ref leaks and read the code to find if there is a ref-leak somewhere

Re: [Gluster-devel] how do you debug ref leaks?

2014-09-18 Thread Pranith Kumar Karampuri
On 09/18/2014 07:48 PM, Shyam wrote: On 09/17/2014 10:13 PM, Pranith Kumar Karampuri wrote: hi, Till now the only method I used to find ref leaks effectively is to find what operation is causing ref leaks and read the code to find if there is a ref-leak somewhere. Valgrind doesn't solve

Re: [Gluster-devel] how do you debug ref leaks?

2014-09-18 Thread Pranith Kumar Karampuri
processname) - Original Message - From: Pranith Kumar Karampuri pkara...@redhat.com To: Shyam srang...@redhat.com, gluster-devel@gluster.org Sent: Thursday, September 18, 2014 11:34:28 AM Subject: Re: [Gluster-devel] how do you debug ref leaks? On 09/18/2014 07:48 PM, Shyam wrote: On 09/17

Re: [Gluster-devel] Invalid DIR * usage in quota xlator

2014-10-12 Thread Pranith Kumar Karampuri
On 10/13/2014 09:45 AM, Emmanuel Dreyfus wrote: Emmanuel Dreyfus m...@netbsd.org wrote: Erratum: it happens because it attempts to seekdir to the offset obtained for last record. But I still have to find what code is sending the request. This is a find(1) probably forked by quota-crawld

Re: [Gluster-devel] Invalid DIR * usage in quota xlator

2014-10-13 Thread Pranith Kumar Karampuri
On 10/13/2014 01:14 PM, Emmanuel Dreyfus wrote: On Mon, Oct 13, 2014 at 12:39:33PM +0530, Pranith Kumar Karampuri wrote: Op_errno is valid only if 'op_ret < 0', so that doesn't say much. After the last readdir call with op_ret > 0, there will be one more readdir call for which op_ret will come

Re: [Gluster-devel] Invalid DIR * usage in quota xlator

2014-10-13 Thread Pranith Kumar Karampuri
On 10/13/2014 02:27 PM, Emmanuel Dreyfus wrote: On Mon, Oct 13, 2014 at 01:42:38PM +0530, Pranith Kumar Karampuri wrote: No bug here, just suboptimal behavior, both in glusterfs and NetBSD FUSE. oh!, but shouldn't it get op_ret = 0 instead of op_ret = -1, op_errno EINVAL? It happens because

Re: [Gluster-devel] Invalid DIR * usage in quota xlator

2014-10-13 Thread Pranith Kumar Karampuri
On 10/13/2014 02:37 PM, Pranith Kumar Karampuri wrote: On 10/13/2014 02:27 PM, Emmanuel Dreyfus wrote: On Mon, Oct 13, 2014 at 01:42:38PM +0530, Pranith Kumar Karampuri wrote: No bug here, just suboptimal behavior, both in glusterfs and NetBSD FUSE. oh!, but shouldn't it get op_ret = 0

Re: [Gluster-devel] Invalid DIR * usage in quota xlator

2014-10-13 Thread Pranith Kumar Karampuri
On 10/13/2014 02:45 PM, Emmanuel Dreyfus wrote: On Mon, Oct 13, 2014 at 02:37:12PM +0530, Pranith Kumar Karampuri wrote: I am not aware of backend filesystems that much, may be someone with that knowledge can comment here, what happens when new entries are created in the directory after

Re: [Gluster-devel] if/else coding style :-)

2014-10-13 Thread Pranith Kumar Karampuri
On 10/13/2014 07:27 PM, Shyam wrote: On 10/13/2014 08:01 AM, Pranith Kumar Karampuri wrote: hi, Why are we moving away from this coding style?: if (x) { /*code*/ } else { /* code */ } This patch (in master) introduces the same and explains why, commit

Re: [Gluster-devel] if/else coding style :-)

2014-10-13 Thread Pranith Kumar Karampuri
On 10/13/2014 07:43 PM, Shyam wrote: On 10/13/2014 10:08 AM, Pranith Kumar Karampuri wrote: On 10/13/2014 07:27 PM, Shyam wrote: On 10/13/2014 08:01 AM, Pranith Kumar Karampuri wrote: hi, Why are we moving away from this coding style?: if (x) { /*code*/ } else { /* code

Re: [Gluster-devel] glusterfs replica volume self heal lots of small file very very slow!!why?how to improve?

2014-10-15 Thread Pranith Kumar Karampuri
On 10/08/2014 02:15 PM, justgluste...@gmail.com wrote: Hi all: I do the following test: I create a glusterfs replica volume (replica count is 2) with two server nodes (server A and server B), use XFS as the underlying filesystem, then mount the volume on a client node, then, I shut

Re: [Gluster-devel] glusterfs replica volume self heal lots of small file very very slow!!why?how to improve?

2014-10-15 Thread Pranith Kumar Karampuri
On 10/08/2014 07:50 PM, Joe Julian wrote: To the author: You're cross posting user questions in the devel mailing list. You're not asking development questions. Please don't do that. To Pranith et al: On 10/8/2014 1:45 AM, justgluste...@gmail.com wrote: then I config:

Re: [Gluster-devel] valgrind logs for glusterfs-3.4 memory leak

2014-10-17 Thread Pranith Kumar Karampuri
hi Kaleb, I went through the logs. I don't see anything significant. What is the test case that recreates the mem-leak? Maybe I can try it on my setup and get back to you? Pranith On 10/15/2014 08:57 PM, Kaleb S. KEITHLEY wrote: As mentioned in the Gluster Community Meeting on irc

Re: [Gluster-devel] memory leaks

2014-11-04 Thread Pranith Kumar Karampuri
On 11/04/2014 03:30 PM, Anders Blomdell wrote: On 2014-11-04 10:38, Emmanuel Dreyfus wrote: Hi FWIW, there are still memory leaks in glusterfs 3.6.0. My favourite test is building NetBSD on a replicated volume, and it fails because the machine runs out of swap. After building for 14 hours

[Gluster-devel] new spurious regressions

2014-11-08 Thread Pranith Kumar Karampuri
hi, The following tests keep failing spuriously nowadays. I CCed glusterd folks and the original author (Kritika) and last change author (Emmanuel). You can check http://build.gluster.org/job/rackspace-regression-2GB-triggered/2497/consoleFull for full logs. volume create: patchy: failed:

Re: [Gluster-devel] new spurious regressions

2014-11-09 Thread Pranith Kumar Karampuri
On 11/10/2014 01:04 AM, Emmanuel Dreyfus wrote: Justin Clift jus...@gluster.org wrote: I've just used that page to disconnect slave25, so you're fine to investigate there (same login credentials as before). Please reconnect it when you're done. :) Since I could spot nothing from, I

Re: [Gluster-devel] new spurious regressions

2014-11-09 Thread Pranith Kumar Karampuri
On 11/10/2014 10:58 AM, Emmanuel Dreyfus wrote: Pranith Kumar Karampuri pkara...@redhat.com wrote: Since I could spot nothing from, I reconnected it. I will try by submitting a change with set -x for that script. It was consistently happening with my change just on regression machine. So I

Re: [Gluster-devel] [Gluster-users] info heal-failed shown as gfid

2014-11-10 Thread Pranith Kumar Karampuri
On 11/10/2014 11:21 AM, Vijay Bellur wrote: On 11/08/2014 08:19 AM, Peter Auyeung wrote: I have a node down while gfs still open for writing. Got tons of heal-failed on a replicated volume showing as gfid. Tried gfid-resolver and got the following: # ./gfid-resolver.sh /brick02/gfs/

Re: [Gluster-devel] Regression testing report: Gluster v3.6.1 on CentOS 6.6

2014-11-12 Thread Pranith Kumar Karampuri
Kumar Karampuri pkara...@redhat.com wrote: On 11/11/2014 03:13 PM, Kiran Patil wrote: Test Summary Report -- ./tests/basic/quota-anon-fd-nfs.t(Wstat: 0 Tests: 16 Failed: 1) Failed test: 16 This is a spurious

Re: [Gluster-devel] IMPORTANT - Adding further volume types to our smoke tests

2014-11-12 Thread Pranith Kumar Karampuri
On 11/13/2014 03:23 AM, Justin Clift wrote: Hi all, At the moment, our smoke tests in Jenkins only run on a replicated volume. Extending that out to other volume types should (in theory :) help catch other simple gotchas. Xavi has put together a patch for doing just this, which I'd like to

Re: [Gluster-devel] quota and snapshot testcase failure (zfs on CentOS 6.6)

2014-11-18 Thread Pranith Kumar Karampuri
On 11/12/2014 04:52 PM, Kiran Patil wrote: I have created zpools named d and mnt and they appear in the filesystem as follows. d on /d type zfs (rw,xattr) mnt on /mnt type zfs (rw,xattr) Debug-enabled output of the quota.t testcase is at http://ur1.ca/irbt1. CC vijaikumar On Wed, Nov 12, 2014

Re: [Gluster-devel] quota and snapshot testcase failure (zfs on CentOS 6.6)

2014-11-18 Thread Pranith Kumar Karampuri
On 11/19/2014 10:30 AM, Atin Mukherjee wrote: On 11/18/2014 10:35 PM, Pranith Kumar Karampuri wrote: On 11/12/2014 04:52 PM, Kiran Patil wrote: I have created zpools named d and mnt and they appear in the filesystem as follows. d on /d type zfs (rw,xattr) mnt on /mnt type zfs (rw,xattr

Re: [Gluster-devel] How to resolve gfid (and .glusterfs symlink) for a deleted file

2014-11-21 Thread Pranith Kumar Karampuri
On 11/21/2014 09:04 PM, Nux! wrote: Hi, I deleted a file by mistake in a brick. I never managed to find out its gfid so now I have a rogue symlink in .glusterfs pointing to it (if I got how it works). Any way I can discover which is this file and get rid of it? symlinks exist in .glusterfs

Re: [Gluster-devel] How to resolve gfid (and .glusterfs symlink) for a deleted file

2014-11-21 Thread Pranith Kumar Karampuri
On 11/21/2014 09:50 PM, Ben England wrote: Nux, Those thousands of entries all would match -links 2 but not -links 1 The only entry in .glusterfs that would match is the entry where you deleted the file from the brick. That's how hardlinks work - when you create a regular file, the link

Re: [Gluster-devel] How to resolve gfid (and .glusterfs symlink) for a deleted file

2014-11-21 Thread Pranith Kumar Karampuri
On 11/22/2014 12:10 PM, Pranith Kumar Karampuri wrote: On 11/21/2014 09:50 PM, Ben England wrote: Nux, Those thousands of entries all would match -links 2 but not -links 1 The only entry in .glusterfs that would match is the entry where you deleted the file from the brick. That's how

Re: [Gluster-devel] failed heal

2015-02-03 Thread Pranith Kumar Karampuri
On 02/02/2015 03:34 AM, David F. Robinson wrote: I have several files that gluster says it cannot heal. I deleted the files from all of the bricks (/data/brick0*/hpc_shared/motorsports/gmics/Raven/p3/*) and ran a full heal using 'gluster volume heal homegfs full'. Even after the full heal,

Re: [Gluster-devel] 3.6.2 volume heal

2015-02-02 Thread Pranith Kumar Karampuri
On 02/03/2015 12:13 PM, Raghavendra Bhat wrote: On Monday 02 February 2015 09:07 PM, David F. Robinson wrote: I upgraded one of my bricks from 3.6.1 to 3.6.2 and I can no longer do a 'gluster volume heal homegfs info'. It hangs and never returns any information. I was trying to ensure that

Re: [Gluster-devel] missing files

2015-02-05 Thread Pranith Kumar Karampuri
19 2014 CURRENT STANDARD ARMORING.one -- Original Message -- From: Xavier Hernandez xhernan...@datalab.es To: David F. Robinson david.robin...@corvidtec.com; Benjamin Turner bennytu...@gmail.com; Pranith Kumar Karampuri pkara...@redhat.com Cc: gluster-us...@gluster.org gluster-us

Re: [Gluster-devel] Improvement of eager locking

2015-01-15 Thread Pranith Kumar Karampuri
On 01/15/2015 10:53 PM, Xavier Hernandez wrote: Hi, currently eager locking is implemented by checking the open-fd-count special xattr for each write. If there's more than one open on the same file, eager locking is disabled to avoid starvation. This works quite well for file writes, but

Re: [Gluster-devel] Problems with ec/nfs.t in regression tests

2015-02-12 Thread Pranith Kumar Karampuri
On 02/12/2015 08:15 PM, Xavier Hernandez wrote: I've made some more investigation and the problem seems worse. It seems that NFS sends a huge amount of requests without waiting for answers (I've had more than 1400 requests ongoing). Probably there will be many factors that can influence on

Re: [Gluster-devel] Problems with ec/nfs.t in regression tests

2015-02-12 Thread Pranith Kumar Karampuri
On 02/12/2015 11:34 PM, Pranith Kumar Karampuri wrote: On 02/12/2015 08:15 PM, Xavier Hernandez wrote: I've made some more investigation and the problem seems worse. It seems that NFS sends a huge amount of requests without waiting for answers (I've had more than 1400 requests ongoing

Re: [Gluster-devel] missing files

2015-02-12 Thread Pranith Kumar Karampuri
On 02/12/2015 03:05 PM, Pranith Kumar Karampuri wrote: On 02/12/2015 09:14 AM, Justin Clift wrote: On 12 Feb 2015, at 03:02, Shyam srang...@redhat.com wrote: On 02/11/2015 08:28 AM, David F. Robinson wrote: My base filesystem has 40-TB and the tar takes 19 minutes. I copied over 10-TB

Re: [Gluster-devel] missing files

2015-02-12 Thread Pranith Kumar Karampuri
On 02/12/2015 09:14 AM, Justin Clift wrote: On 12 Feb 2015, at 03:02, Shyam srang...@redhat.com wrote: On 02/11/2015 08:28 AM, David F. Robinson wrote: My base filesystem has 40-TB and the tar takes 19 minutes. I copied over 10-TB and it took the tar extraction from 1-minute to 7-minutes.

Re: [Gluster-devel] Problems with ec/nfs.t in regression tests

2015-02-12 Thread Pranith Kumar Karampuri
On 02/13/2015 12:07 AM, Niels de Vos wrote: On Thu, Feb 12, 2015 at 11:39:51PM +0530, Pranith Kumar Karampuri wrote: On 02/12/2015 11:34 PM, Pranith Kumar Karampuri wrote: On 02/12/2015 08:15 PM, Xavier Hernandez wrote: I've made some more investigation and the problem seems worse. It seems

Re: [Gluster-devel] How can we prevent GlusterFS packaging installation/update issues in future?

2015-02-19 Thread Pranith Kumar Karampuri
or upgrade/installation process better. On Thu, Feb 19, 2015 at 12:26:33PM +0530, Pranith Kumar Karampuri wrote: https://bugzilla.redhat.com/show_bug.cgi?id=1113778 https://bugzilla.redhat.com/show_bug.cgi?id=1191176 How can we make the process of giving good packages for things other than RPMs

Re: [Gluster-devel] v3.6.2

2015-01-26 Thread Pranith Kumar Karampuri
On 01/26/2015 09:41 PM, Justin Clift wrote: On 26 Jan 2015, at 14:50, David F. Robinson david.robin...@corvidtec.com wrote: I have a server with v3.6.2 from which I cannot mount using NFS. The FUSE mount works, however, I cannot get the NFS mount to work. From /var/log/message: Jan 26

Re: [Gluster-devel] [Gluster-users] v3.6.2

2015-01-26 Thread Pranith Kumar Karampuri
On 01/27/2015 07:33 AM, Pranith Kumar Karampuri wrote: On 01/26/2015 09:41 PM, Justin Clift wrote: On 26 Jan 2015, at 14:50, David F. Robinson david.robin...@corvidtec.com wrote: I have a server with v3.6.2 from which I cannot mount using NFS. The FUSE mount works, however, I cannot get

[Gluster-devel] Some thing is wrong with review.gluster.org

2015-02-01 Thread Pranith Kumar Karampuri
hi, I get the following errors when I try to do git fetch pk1@localhost - ~/workspace/gerrit-repo (ec-notify-1) 15:48:44 :( ⚡ git fetch; git rebase origin/master fatal: internal server error remote: internal server error fatal: protocol error: bad pack header What is worrisome is that when I click

Re: [Gluster-devel] Error in dht-common.c?

2015-02-01 Thread Pranith Kumar Karampuri
On 01/31/2015 10:49 PM, Dennis Schafroth wrote: Hi when compiling dht-common.c with clang (on mac, but I dont think that matters) some warnings seem to reveal an error: CC dht-common.lo dht-common.c:2997:57: warning: size argument in 'strncmp' call is a comparison

Re: [Gluster-devel] Some thing is wrong with review.gluster.org

2015-02-01 Thread Pranith Kumar Karampuri
Thanks vijay, it works now. Pranith On 02/01/2015 04:27 PM, Vijay Bellur wrote: On 02/01/2015 11:24 AM, Pranith Kumar Karampuri wrote: hi, I get following errors, when I try to do git fetch pk1@localhost - ~/workspace/gerrit-repo (ec-notify-1) 15:48:44 :( ⚡ git fetch git rebase origin/master

Re: [Gluster-devel] bit rot

2015-01-07 Thread Pranith Kumar Karampuri
On 01/07/2015 12:48 PM, Raghavendra Bhat wrote: Hi, As per the design discussion it was mentioned that there will be one BitD running per node which will take care of all the bricks of all the volumes running on that node. But here, one thing that becomes important is doing graph changes

Re: [Gluster-devel] GlusterFS 4.0 Call For Participation

2015-02-11 Thread Pranith Kumar Karampuri
On 02/10/2015 03:42 AM, Jeff Darcy wrote: Interest in 4.0 seems to be increasing. So is developer activity, but all of the developers involved in 4.0 are stretched a bit thin. As a result, some sub-projects still don't have anyone who's working on them often enough to make significant

Re: [Gluster-devel] failed heal

2015-02-04 Thread Pranith Kumar Karampuri
are not compatible with versions <= 3.5.3 and 3.6.1; that is the reason. From 3.5.4 and releases >= 3.6.2 it should work fine. Pranith David -- Original Message -- From: Pranith Kumar Karampuri pkara...@redhat.com To: David F. Robinson david.robin...@corvidtec.com

Re: [Gluster-devel] About split-brain-resolution.t

2015-03-30 Thread Pranith Kumar Karampuri
On 03/30/2015 06:01 PM, Emmanuel Dreyfus wrote: On Mon, Mar 30, 2015 at 05:44:23PM +0530, Pranith Kumar Karampuri wrote: Problem here is that ' inode_forget' is coming even before it gets to inspect the file. We initially thought we should 'ref' the inode when the user specifies the choice

Re: [Gluster-devel] About split-brain-resolution.t

2015-03-30 Thread Pranith Kumar Karampuri
On 03/30/2015 06:34 PM, Emmanuel Dreyfus wrote: Pranith Kumar Karampuri pkara...@redhat.com wrote: Since spb_choice is not saved as an attribute for the file on the bricks, it cannot be recovered when the context is reallocated. Either that save feature has been forgotten, or going

Re: [Gluster-devel] Responsibilities and expectations of our maintainers

2015-03-28 Thread Pranith Kumar Karampuri
On 03/28/2015 02:08 PM, Emmanuel Dreyfus wrote: Pranith Kumar Karampuri pkara...@redhat.com wrote: Emmanuel, What can we do to make it vote -2 when it fails? Things will automatically fall in place if it gives -2. I will do this once I have recovered. The changelog change

Re: [Gluster-devel] About split-brain-resolution.t

2015-03-28 Thread Pranith Kumar Karampuri
On 03/28/2015 09:51 PM, Emmanuel Dreyfus wrote: Hi I see split-brain-resolution.t uses the attribute replica.split-brain-choice to choose which replica should be used. This attribute is not in privileged space (trusted. prefixed). Is it on purpose? Yes, these are used as internal commands to make

Re: [Gluster-devel] Regression host hung on tests/basic/afr/split-brain-healing.t

2015-02-26 Thread Pranith Kumar Karampuri
On 02/26/2015 02:54 AM, Justin Clift wrote: Anyone have an interest in a regression test VM that's (presently) hung on tests/basic/afr/split-brain-healing.t? Likely to be a spurious error. I can either reboot the VM and put it back into service, or I can leave it for someone to log into and

Re: [Gluster-devel] Spurious failure report for master branch - 2015-03-03

2015-03-06 Thread Pranith Kumar Karampuri
On 03/04/2015 09:57 AM, Justin Clift wrote: Ran 20 x regression tests on our GlusterFS master branch code as of a few hours ago, commit 95d5e60afb29aedc29909340e7564d54a6a247c2. 5 of them were successful (25%), 15 of them failed in various ways (75%). We need to get this down to about 5% or

Re: [Gluster-devel] Responsibilities and expectations of our maintainers

2015-03-27 Thread Pranith Kumar Karampuri
On 03/25/2015 07:18 PM, Emmanuel Dreyfus wrote: On Wed, Mar 25, 2015 at 02:04:10PM +0100, Niels de Vos wrote: 1. Who is going to maintain the new features? 2. Maintainers should be active in responding to users 3. What about reported bugs, there is the Bug Triaging in place? 4. Maintainers

Re: [Gluster-devel] crypt xlator bug

2015-04-02 Thread Pranith Kumar Karampuri
On 04/02/2015 12:27 AM, Raghavendra Talur wrote: On Wed, Apr 1, 2015 at 10:34 PM, Justin Clift jus...@gluster.org wrote: On 1 Apr 2015, at 10:57, Emmanuel Dreyfus m...@netbsd.org wrote: Hi crypt.t was recently broken

Re: [Gluster-devel] crypt xlator bug

2015-04-02 Thread Pranith Kumar Karampuri
On 04/02/2015 07:27 PM, Raghavendra Bhat wrote: On Thursday 02 April 2015 05:50 PM, Jeff Darcy wrote: I think, crypt xlator should do a mem_put of local after doing STACK_UNWIND like other xlators which also use mem_get for local (such as AFR). I am suspecting crypt not doing mem_put might be

Re: [Gluster-devel] crypt xlator bug

2015-04-03 Thread Pranith Kumar Karampuri
On 04/01/2015 03:27 PM, Emmanuel Dreyfus wrote: Hi crypt.t was recently broken in NetBSD regression. The glusterfs returns a node with file type invalid to FUSE, and that breaks the test. After running a git bisect, I found the offending commit after which this behavior appeared:

[Gluster-devel] cluster syncop framework

2015-04-21 Thread Pranith Kumar Karampuri
hi, For implementing directory healing in ec I needed to generalize the cluster syncop implementation done in afr-v2 which makes things easy for implementing something like self-heal. The patch is at http://review.gluster.org/10240 Please feel free to let me know your comments.

Re: [Gluster-devel] spurious regression failures for ./tests/basic/fops-sanity.t

2015-05-01 Thread Pranith Kumar Karampuri
in 5-10 minutes. Very consistent failure :-) Pranith Regards, Nithya - Original Message - From: Pranith Kumar Karampuri pkara...@redhat.com To: Shyam srang...@redhat.com, Raghavendra Gowdappa rgowd...@redhat.com, Nithya Balachandran nbala...@redhat.com, Susant Palai spa

[Gluster-devel] spurious failures in tests/basic/afr/sparse-file-self-heal.t

2015-05-01 Thread Pranith Kumar Karampuri
hi, As per the etherpad: https://public.pad.fsfe.org/p/gluster-spurious-failures * tests/basic/afr/sparse-file-self-heal.t (Wstat: 0 Tests: 64 Failed: 35) * Failed tests: 1-6, 11, 20-30, 33-34, 36, 41, 50-61, 64 * Happens in master (Mon 30th March - git commit id

Re: [Gluster-devel] netbsd regression logs

2015-05-01 Thread Pranith Kumar Karampuri
Seems like glusterd failure from the looks of it: +glusterd folks. Running tests in file ./tests/basic/cdc.t volume delete: patchy: failed: Another transaction is in progress for patchy. Please try again after sometime. [18:16:40] ./tests/basic/cdc.t .. not ok 52 not ok 53 Got Started instead

Re: [Gluster-devel] spurious failures in tests/basic/afr/sparse-file-self-heal.t

2015-05-02 Thread Pranith Kumar Karampuri
On 05/02/2015 10:14 AM, Krishnan Parthasarathi wrote: If glusterd itself fails to come up, of course the test will fail :-). Is it still happening? Pranith, Did you get a chance to see glusterd logs and find why glusterd didn't come up? Please paste the relevant logs in this thread. No :-(.

[Gluster-devel] Spurious test failure in tests/bugs/distribute/bug-1122443.t

2015-05-01 Thread Pranith Kumar Karampuri
hi, Found the reason for this too: ok 8 not ok 9 Got in instead of completed FAILED COMMAND: completed remove_brick_status_completed_field patchy pranithk-laptop:/d/backends/patchy0 volume remove-brick commit: failed: use 'force' option as migration is in progress not ok 10 FAILED

[Gluster-devel] Need help with snapshot regression failures

2015-05-03 Thread Pranith Kumar Karampuri
hi Rajesh/Avra, I do not have good understanding of snapshot, so couldn't investigate any of the snapshot related spurious failures present in https://public.pad.fsfe.org/p/gluster-spurious-failures. Could you guys help out? Pranith ___

Re: [Gluster-devel] Need help with snapshot regression failures

2015-05-04 Thread Pranith Kumar Karampuri
for around half hour. Pranith Regards, Avra On 05/04/2015 11:27 AM, Pranith Kumar Karampuri wrote: hi Rajesh/Avra, I do not have good understanding of snapshot, so couldn't investigate any of the snapshot related spurious failures present in https://public.pad.fsfe.org/p/gluster-spurious

[Gluster-devel] regarding spurious failure tests/bugs/snapshot/bug-1162498.t

2015-05-03 Thread Pranith Kumar Karampuri
hi Vijai, I am not sure if you are maintaining this now, but git blame gives your name, so sending the mail to you. Could you please take a look at http://build.gluster.org/job/rackspace-regression-2GB-triggered/8148/consoleFull where the failure happened. If someone else is looking

[Gluster-devel] ec spurious regression failures

2015-05-01 Thread Pranith Kumar Karampuri
hi, I see that ec tests are failing because of a 'df -h' test failure. It is failing because df -h fails on a stale quota aux mount, from the looks of it. df: `/var/run/gluster/patchy': Transport endpoint is not connected - [07:38:37] ./tests/basic/ec/ec-3-1.t .. not ok 11

Re: [Gluster-devel] NetBSD regression status upate

2015-04-29 Thread Pranith Kumar Karampuri
On 04/30/2015 08:44 AM, Emmanuel Dreyfus wrote: Hi Here is NetBSD regression status update for broken tests: - tests/basic/afr/split-brain-resolution.t Anuradha Talur is working on it, the change being still under review http://review.gluster.org/10134 - tests/basic/ec/ This works but with

Re: [Gluster-devel] NetBSD regression status upate

2015-04-29 Thread Pranith Kumar Karampuri
On 04/30/2015 10:02 AM, Emmanuel Dreyfus wrote: Pranith Kumar Karampuri pkara...@redhat.com wrote: On a related note, I see glupy is failing spuriously as well: Know anything about it? glupy.t used to work and was broken quite recently. My investigation led to a free on an invalid pointer

[Gluster-devel] spurious regression failures for ./tests/basic/fops-sanity.t

2015-05-01 Thread Pranith Kumar Karampuri
hi, I see the following logs when the failure happens: [2015-05-01 10:37:44.157477] E [dht-helper.c:900:dht_migration_complete_check_task] 0-patchy-dht: (null): failed to get the 'linkto' xattr No data available [2015-05-01 10:37:44.157504] W [fuse-bridge.c:2190:fuse_readv_cbk]

Re: [Gluster-devel] Regression test failures - Call for Action

2015-05-04 Thread Pranith Kumar Karampuri
On 05/05/2015 12:58 AM, Justin Clift wrote: On 4 May 2015, at 08:06, Vijay Bellur vbel...@redhat.com wrote: Hi All, There has been a spate of regression test failures (due to broken tests or race conditions showing up) in the recent past [1] and I am inclined to block 3.7.0 GA along with

Re: [Gluster-devel] Regression test failures - Call for Action

2015-05-04 Thread Pranith Kumar Karampuri
On 05/05/2015 06:12 AM, Pranith Kumar Karampuri wrote: On 05/05/2015 12:58 AM, Justin Clift wrote: On 4 May 2015, at 08:06, Vijay Bellur vbel...@redhat.com wrote: Hi All, There has been a spate of regression test failures (due to broken tests or race conditions showing up) in the recent

Re: [Gluster-devel] Regression test failures - Call for Action

2015-05-04 Thread Pranith Kumar Karampuri
On 05/05/2015 08:10 AM, Jeff Darcy wrote: Jeff's patch failed again with same problem: http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/4531/console Wouldn't have expected anything different. This one looks like a problem in the Jenkins/Gerrit infrastructure. Sorry for the

[Gluster-devel] spurious failure in quota-nfs.t

2015-05-04 Thread Pranith Kumar Karampuri
hi Vijai/Sachin, http://build.gluster.org/job/rackspace-regression-2GB-triggered/8268/console Doesn't seem like an obvious failure. Know anything about it? Pranith ___ Gluster-devel mailing list Gluster-devel@gluster.org

Re: [Gluster-devel] ec spurious regression failures

2015-05-04 Thread Pranith Kumar Karampuri
On 05/05/2015 08:39 AM, Pranith Kumar Karampuri wrote: Vijai/Sachin, Did you get a chance to work on this? http://review.gluster.com/10166 failed Just now again in ec because http://review.gluster.org/10069 is merged yesterday which can lead to same problem. I sent http

[Gluster-devel] spurious failure in tests/geo-rep/georep-rsync-changelog.t

2015-05-04 Thread Pranith Kumar Karampuri
hi, Doesn't seem like an obvious failure. It does say there is a version mismatch; I wonder how? Could you look into it. Gluster version mismatch between master and slave. Geo-replication session between master and slave21.cloud.gluster.org::slave does not exist. [08:27:15]

Re: [Gluster-devel] Regression test failures - Call for Action

2015-05-04 Thread Pranith Kumar Karampuri
Just saw two more failures in the same place for netbsd regressions. I am ignoring NetBSD status for the test fixes for now. I am not sure how this needs to be fixed. Please help! Pranith On 05/05/2015 07:17 AM, Pranith Kumar Karampuri wrote: On 05/05/2015 06:12 AM, Pranith Kumar Karampuri

Re: [Gluster-devel] ec spurious regression failures

2015-05-04 Thread Pranith Kumar Karampuri
into this. Pranith On 05/01/2015 11:30 AM, Pranith Kumar Karampuri wrote: hi, I see that ec tests are failing because of 'df -h' test failure. It is failing because df -h fails on a stale quota aux mount, from the looks of it. df: `/var/run/gluster/patchy': Transport endpoint is not connected

[Gluster-devel] bitrot spurious test tests/bugs/bitrot/1207029-bitrot-daemon-should-start-on-valid-node.t

2015-05-04 Thread Pranith Kumar Karampuri
hi, I fixed it along with the patch on which this test failed @http://review.gluster.org/10391. Letting everyone know in case they face the same issue. Pranith ___ Gluster-devel mailing list Gluster-devel@gluster.org

Re: [Gluster-devel] spurious failure in tests/geo-rep/georep-rsync-changelog.t

2015-05-04 Thread Pranith Kumar Karampuri
. Is there any scenario the above command gives empty string? Thanks and Regards, Kotresh H R - Original Message - From: Pranith Kumar Karampuri pkara...@redhat.com To: Aravinda Vishwanathapura Krishna Murthy avish...@redhat.com, Kotresh Hiremath Ravishankar khire...@redhat.com Cc: Gluster

Re: [Gluster-devel] spurious failures in tests/basic/volume-snapshot-clone.t

2015-05-04 Thread Pranith Kumar Karampuri
-clone.t (Wstat: 0 Tests: 41 Failed: 3) Failed tests: 36, 38, 40 Pranith Regards, Avra On 05/05/2015 09:01 AM, Pranith Kumar Karampuri wrote: hi Avra/Rajesh, Any update on this test? * tests/basic/volume-snapshot-clone.t * http

Re: [Gluster-devel] spurious failures in tests/basic/volume-snapshot-clone.t

2015-05-04 Thread Pranith Kumar Karampuri
On 05/05/2015 10:48 AM, Avra Sengupta wrote: On 05/05/2015 10:43 AM, Pranith Kumar Karampuri wrote: On 05/05/2015 10:32 AM, Avra Sengupta wrote: Hi, As already discussed, if you encounter this or any other snapshot tests, it would be great to provide the regression run instance so that we

[Gluster-devel] spurious regression status

2015-05-05 Thread Pranith Kumar Karampuri
hi, Please backport the patches that fix spurious regressions to 3.7 as well. This is the status of regressions now: * ./tests/bugs/quota/bug-1035576.t (Wstat: 0 Tests: 24 Failed: 2) * Failed tests: 20-21 *

Re: [Gluster-devel] core while running tests/bugs/snapshot/bug-1112559.t

2015-05-05 Thread Pranith Kumar Karampuri
Looping in kotresh and aravinda Pranith On 05/06/2015 08:39 AM, Jeff Darcy wrote: Could you please look at this issue: http://build.gluster.org/job/rackspace-regression-2GB-triggered/8456/consoleFull I looked at this one for a while. It looks like a brick failed to start because

[Gluster-devel] new test failure in tests/basic/mount-nfs-auth.t

2015-05-05 Thread Pranith Kumar Karampuri
Niels, Any ideas? http://build.gluster.org/job/rackspace-regression-2GB-triggered/8462/consoleFull mount.nfs: access denied by server while mounting slave46.cloud.gluster.org:/patchy mount.nfs: access denied by server while mounting slave46.cloud.gluster.org:/patchy mount.nfs: access

[Gluster-devel] New glusterd crash, at least consistent on my laptop

2015-05-03 Thread Pranith Kumar Karampuri
Execute the following command on a replicate volume: root@pranithk-laptop - ~ 17:23:02 :( ⚡ gluster v set r2 cluster.client-log-level 0 Connection failed. Please check if gluster daemon is operational. Program terminated with signal SIGSEGV, Segmentation fault. #0 0x0038e480c860 in

[Gluster-devel] spurious failure in tests/bugs/cli/bug-1087487.t

2015-05-05 Thread Pranith Kumar Karampuri
Gaurav, Please look into http://build.gluster.org/job/rackspace-regression-2GB-triggered/8409/console Pranith ___ Gluster-devel mailing list Gluster-devel@gluster.org http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] spurious regression failure in tests/bugs/quota/inode-quota.t

2015-05-07 Thread Pranith Kumar Karampuri
hi, http://build.gluster.org/job/rackspace-regression-2GB-triggered/8621/consoleFull failed regression. Could you please look into it Pranith ___ Gluster-devel mailing list Gluster-devel@gluster.org
