Re: [Gluster-devel] read-only xlator: ro_open and O_APPEND

2015-03-12 Thread Anand Avati
We should probably check O_APPEND in open(). However, the check in writev() is more important, since open() may not always be issued (with NFS, for example). Thanks On Tue, 10 Mar 2015 at 23:52 Milind Changire wrote: > The ro_open(...) handler checks for O_WRONLY and O_RDWR flags but not the > O_APPEND flag.
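
A minimal sketch of the flag check being discussed (illustrative names, not the actual read-only xlator patch); the same O_APPEND test would also guard writev():

    #include <fcntl.h>
    #include <errno.h>

    /* Return 0 if the open flags are acceptable on a read-only volume,
     * -EROFS otherwise; O_APPEND is rejected alongside the write modes. */
    static int ro_check_open_flags(int flags)
    {
            if ((flags & O_ACCMODE) != O_RDONLY)
                    return -EROFS;
            if (flags & O_APPEND)
                    return -EROFS;
            return 0;
    }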

Re: [Gluster-devel] RDMA: Patch to make use of pre registered memory

2015-02-09 Thread Anand Avati
On Sun Feb 08 2015 at 10:16:27 PM Ben England wrote: > Avati, I'm all for your zero-copy RDMA API proposal, but I have a concern > about your proposed zero-copy fop below... > > - Original Message - > > From: "Anand Avati" > > To: "Mohammed R

Re: [Gluster-devel] Integrating liburcu source into the glusterfs source tree

2015-02-02 Thread Anand Avati
Apologies for the top post. Adopting RCU is a good step. Some questions and thoughts - Does urcu work on non-Linux systems, e.g. NetBSD? IIRC there were Linux-specific permissions on the RCU patent? Maybe only for the kernel? Would be good to confirm. Glusterd is a good place for the first prototype
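
A minimal liburcu usage sketch for such a prototype (assuming the default urcu flavour; reader threads must call rcu_register_thread() first, and the writer is assumed to have published an initial config):

    #include <urcu.h>
    #include <stdlib.h>

    struct config { int value; };
    static struct config *cur_config;

    /* Reader: lock-free traversal inside a read-side critical section */
    int reader_get_value(void)
    {
            rcu_read_lock();
            int v = rcu_dereference(cur_config)->value;
            rcu_read_unlock();
            return v;
    }

    /* Writer: publish a new version, then wait out a grace period
     * before freeing the old one */
    void writer_set_value(int value)
    {
            struct config *new_cfg = malloc(sizeof(*new_cfg));
            struct config *old_cfg = cur_config;
            new_cfg->value = value;
            rcu_assign_pointer(cur_config, new_cfg);
            synchronize_rcu();
            free(old_cfg);
    }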

Re: [Gluster-devel] RDMA: Patch to make use of pre registered memory

2015-01-23 Thread Anand Avati
Couple of comments - 1. rdma can register init/fini functions (via pointers) into iobuf_pool. Absolutely no need to introduce rdma dependency into libglusterfs. 2. It might be a good idea to take a holistic approach towards zero-copy with libgfapi + RDMA, rather than a narrow goal of "use pre-reg
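
Point 1 could look roughly like this (a hedged sketch; the hook names are illustrative, not existing iobuf code). transport/rdma would install the hooks once at init, and the pool would invoke arena_init() for every arena it creates or grows:

    #include <stddef.h>

    /* Hooks a transport may install into iobuf_pool at runtime, so
     * libglusterfs never links against RDMA libraries itself. */
    struct iobuf_arena_hooks {
            void *(*arena_init)(void *base, size_t len, void *priv); /* e.g. wraps ibv_reg_mr()   */
            void  (*arena_fini)(void *handle);                       /* e.g. wraps ibv_dereg_mr() */
            void  *priv;                                             /* e.g. the protection domain */
    };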

Re: [Gluster-devel] managing of THIS

2015-01-23 Thread Anand Avati
The problem you describe is very specific to glfs_new(), and not to gfapi in general. I guess we can handle this in glfs_new by initializing an appropriate value into THIS (save old_THIS and restore it before returning from glfs_new). That should avoid the need for all those new macros? Thanks On
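 
A sketch of that save/restore (illustrative, not the actual gfapi change):

    struct glfs *glfs_new(const char *volname)
    {
            xlator_t *old_THIS = THIS;   /* remember the caller's context */
            struct glfs *fs = NULL;

            /* ... allocate fs, create its glusterfs_ctx_t, and point THIS
             * at the new context's master xlator for the setup calls ... */

            THIS = old_THIS;             /* restore on all return paths */
            return fs;
    }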

Re: [Gluster-devel] Reg. multi thread epoll NetBSD failures

2015-01-23 Thread Anand Avati
Since all of the epoll code and its multithreading is under ifdefs, NetBSD should just continue working with single-threaded poll, unaffected by the patch. If NetBSD's kqueue supports single-shot event delivery and edge-triggered notification, we could have an equivalent implementation on NetBSD too. Even if
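
kqueue does offer both (EV_ONESHOT and EV_CLEAR); a minimal sketch of one-shot registration, analogous to epoll's EPOLLONESHOT:

    #include <sys/types.h>
    #include <sys/event.h>
    #include <sys/time.h>

    /* Arm fd for exactly one read-event delivery; the handler re-arms
     * it when done, so only one thread services the fd at a time. */
    int register_oneshot(int kq, int fd, void *udata)
    {
            struct kevent ev;
            EV_SET(&ev, fd, EVFILT_READ, EV_ADD | EV_ONESHOT, 0, 0, udata);
            return kevent(kq, &ev, 1, NULL, 0, NULL);
    }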

Re: [Gluster-devel] ctime weirdness

2015-01-14 Thread Anand Avati
I don't think the problem is with the handling of SETATTR in either NetBSD or Linux. I am guessing NetBSD FUSE is _using_ SETATTR to update atime upon open? Linux FUSE just leaves it to the backend filesystem to update atime. Whenever there is a SETATTR fop, ctime is _always_ bumped. Thanks On Mo

Re: [Gluster-devel] Suggestion needed to make use of iobuf_pool as rdma buffer.

2015-01-14 Thread Anand Avati
On Tue Jan 13 2015 at 11:57:53 PM Mohammed Rafi K C wrote: > > On 01/14/2015 12:11 AM, Anand Avati wrote: > > 3) Why not have a separate iobuf pool for RDMA? > > > Since every fops are using the default iobuf_pool, if we go with another > iobuf_pool dedicated to rdma, we

Re: [Gluster-devel] Suggestion needed to make use of iobuf_pool as rdma buffer.

2015-01-13 Thread Anand Avati
3) Why not have a separate iobuf pool for RDMA? On Tue Jan 13 2015 at 6:30:09 AM Mohammed Rafi K C wrote: > Hi All, > > When using RDMA protocol, we need to register the buffer which is going > to send through rdma with rdma device. In fact, it is a costly > operation, and a performance killer i
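
For reference, the registration cost in question is the per-buffer ibv_reg_mr() call; a hedged sketch of registering a pool arena once up front (illustrative, not the transport/rdma code):

    #include <infiniband/verbs.h>

    /* Register one iobuf arena with the RDMA device at pool-setup time,
     * so no per-I/O registration is needed on the fast path. */
    struct ibv_mr *register_arena(struct ibv_pd *pd, void *base, size_t len)
    {
            return ibv_reg_mr(pd, base, len,
                              IBV_ACCESS_LOCAL_WRITE |
                              IBV_ACCESS_REMOTE_READ |
                              IBV_ACCESS_REMOTE_WRITE);
    }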

Re: [Gluster-devel] Order of server-side xlators

2015-01-12 Thread Anand Avati
Valid questions. access-control had to be as close to posix as possible in its first implementation (to minimize the cost of the STAT calls originated by it), but since the introduction of posix-acl there are no extra STAT calls, and given the later introduction of quota, it certainly makes sense t

Re: [Gluster-devel] mandatory lock

2015-01-08 Thread Anand Avati
Or use an rsync-style .filename.rand tempfile: write the new version of the file, and rename that to filename. On Thu Jan 08 2015 at 12:21:18 PM Anand Avati wrote: > Ideally you want the clients to coordinate among themselves. Note that > this feature cannot be implemented foo
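
A minimal sketch of that pattern, assuming a simple suffix-based temp name rather than rsync's exact naming:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Write the full new contents to a temp name, fsync, then rename()
     * over the original so readers only ever see the old or new file. */
    int replace_file(const char *path, const char *buf, size_t len)
    {
            char tmp[4096];
            int fd;

            snprintf(tmp, sizeof(tmp), "%s.tmp.%d", path, getpid());
            fd = open(tmp, O_WRONLY | O_CREAT | O_EXCL, 0600);
            if (fd < 0)
                    return -1;
            if (write(fd, buf, len) != (ssize_t)len || fsync(fd) != 0) {
                    close(fd);
                    unlink(tmp);
                    return -1;
            }
            close(fd);
            return rename(tmp, path);   /* atomic within a filesystem */
    }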

Re: [Gluster-devel] mandatory lock

2015-01-08 Thread Anand Avati
Ideally you want the clients to coordinate among themselves. Note that this feature cannot be implemented foolproof (theoretically) in a system that supports NFSv3. On Thu Jan 08 2015 at 8:57:48 AM Harmeet Kalsi wrote: > Hi Anand, that was spot on. Any idea if there will be development on this >

Re: [Gluster-devel] Appending time to snap name in USS

2015-01-08 Thread Anand Avati
It would be convenient if the time were appended to the snap name on the fly (when receiving the list of snap names from glusterd?) so that the timezone can be applied dynamically (which is what users would expect). Thanks On Thu Jan 08 2015 at 3:21:15 AM Poornima Gurusiddaiah wrote: > Hi, > > Window

Re: [Gluster-devel] mandatory lock

2015-01-08 Thread Anand Avati
Note that the mandatory locks available in the locks translator are just the mandatory extensions for POSIX locks - at least one of the apps must be using locks to begin with. What Harmeet is asking for is something different - automatic exclusive access to edit files, i.e., if one app has opened a f

Re: [Gluster-devel] Readdir d_off encoding

2014-12-23 Thread Anand Avati
Please review http://review.gluster.org/9332/, as it undoes the introduction of itransform on d_off in AFR. This does not solve DHT-over-DHT or other future use cases, but at least fixes the regression in 3.6.x. Thanks On Tue Dec 23 2014 at 10:34:41 AM Anand Avati wrote: > Using GFID does

Re: [Gluster-devel] Readdir d_off encoding

2014-12-23 Thread Anand Avati
Using GFID does not work for d_off. The GFID represents an inode, and a d_off represents a directory entry. Therefore using GFID as an alternative to d_off breaks down when you have hardlinks to the same inode in a single directory. On Tue Dec 23 2014 at 2:20:34 AM Xavier Hernandez wrote: > On

Re: [Gluster-devel] patches for 3.6.2

2014-12-23 Thread Anand Avati
Please include a backport of http://review.gluster.org/#/c/9332/ (after merge) as it fixes the ext4 issue by undoing the change which introduced the incompatibility. On Mon Dec 22 2014 at 9:43:07 PM Raghavendra Bhat wrote: > On Tuesday 23 December 2014 11:09 AM, Atin Mukherjee wrote: > > Can you

Re: [Gluster-devel] Readdir d_off encoding

2014-12-16 Thread Anand Avati
On Tue Dec 16 2014 at 8:46:48 AM Shyam wrote: > On 12/15/2014 09:06 PM, Anand Avati wrote: > > Replies inline > > > > On Mon Dec 15 2014 at 12:46:41 PM Shyam wrote: > > > > With the changes present in [1] and [2],

Re: [Gluster-devel] Readdir d_off encoding

2014-12-15 Thread Anand Avati
Replies inline On Mon Dec 15 2014 at 12:46:41 PM Shyam wrote: > With the changes present in [1] and [2], > > A short explanation of the change would be, we encode the subvol ID in > the d_off, losing 'n + 1' bits in case the high order n+1 bits of the > underlying xlator returned d_off is not fr
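
A hedged sketch of the high-bit encoding scheme being described (illustrative helpers, not the actual itransform code): with n bits for the subvol ID plus one marker bit, the original d_off must fit in the remaining 63 - n bits:

    #include <stdint.h>

    #define DOFF_ENC_BIT 63   /* top bit marks "subvol id encoded here" */

    /* Fails (think EOVERFLOW) when the high n+1 bits are already in use */
    static inline int doff_encode(uint64_t off, uint64_t id, int n, uint64_t *out)
    {
            if (off >> (DOFF_ENC_BIT - n))
                    return -1;
            *out = off | (1ULL << DOFF_ENC_BIT) | (id << (DOFF_ENC_BIT - n));
            return 0;
    }

    static inline void doff_decode(uint64_t off, int n, uint64_t *id, uint64_t *orig)
    {
            *id   = (off >> (DOFF_ENC_BIT - n)) & ((1ULL << n) - 1);
            *orig = off & ((1ULL << (DOFF_ENC_BIT - n)) - 1);
    }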

Re: [Gluster-devel] pthread_mutex misusage in glusterd_op_sm

2014-11-26 Thread Anand Avati
This is indeed a misuse. A very similar bug used to be there in io-threads, but we moved to using pthread_cond there a while ago. To fix this problem we could use a pthread_mutex/pthread_cond pair + a boolean flag in place of the misused mutex. Or, we could just declare gd_op_sm_lock a
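
A minimal sketch of the mutex + condvar + flag replacement (illustrative names, not the glusterd patch); unlike pthread_mutex_unlock(), clearing the flag is legal from any thread:

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t sm_mutex = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  sm_cond  = PTHREAD_COND_INITIALIZER;
    static bool            sm_busy  = false;

    void sm_lock(void)
    {
            pthread_mutex_lock(&sm_mutex);
            while (sm_busy)
                    pthread_cond_wait(&sm_cond, &sm_mutex);
            sm_busy = true;
            pthread_mutex_unlock(&sm_mutex);
    }

    void sm_unlock(void)   /* may run on a different thread than sm_lock() */
    {
            pthread_mutex_lock(&sm_mutex);
            sm_busy = false;
            pthread_cond_signal(&sm_cond);
            pthread_mutex_unlock(&sm_mutex);
    }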

Re: [Gluster-devel] EHT / DHT

2014-11-25 Thread Anand Avati
On Tue Nov 25 2014 at 2:13:43 PM Jan H Holtzhausen wrote: > Yes we have deduplication at the filesystem layer > As things stand, it is not possible to achieve what you are looking for with DHT. By using regexes, you can influence the placement of files relative to other filenames *only within th

Re: [Gluster-devel] EHT / DHT

2014-11-25 Thread Anand Avati
Unless there is some sort of de-duplication under the covers happening in the brick, or the files are hardlinks to each other, there is no cache benefit whatsoever by having identical files placed on the same server. Thanks, Avati On Tue Nov 25 2014 at 12:59:25 PM Jan H Holtzhausen wrote: > As

Re: [Gluster-devel] Single layout at root (Was EHT / DHT)

2014-11-25 Thread Anand Avati
On Tue Nov 25 2014 at 1:28:59 PM Shyam wrote: > On 11/12/2014 01:55 AM, Anand Avati wrote: > > > > > > On Tue, Nov 11, 2014 at 1:56 PM, Jeff Darcy wrote: > > > > (Personally I would have > > done this by

Re: [Gluster-devel] EHT / DHT

2014-11-11 Thread Anand Avati
On Tue, Nov 11, 2014 at 1:56 PM, Jeff Darcy wrote: > (Personally I would have > done this by "mixing in" the parent GFID to the hash calculation, but > that alternative was ignored.) > Actually when DHT was implemented, the concept of GFID did not (yet) exist. Due to backward compatibility it h

Re: [Gluster-devel] Changing position of md-cache in xlator graph

2014-10-21 Thread Anand Avati
On Tue, Oct 21, 2014 at 2:58 AM, Raghavendra Gowdappa wrote: > > > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1138970 Posted a comment in the BZ

Re: [Gluster-devel] Invalid DIR * usage in quota xlator

2014-10-15 Thread Anand Avati
On Tue, Oct 14, 2014 at 7:22 PM, Emmanuel Dreyfus wrote: > J. Bruce Fields wrote: > > > Is the result on non-Linux really to fail any readdir using an offset > > not returned from the current open? > > Yes, but that non-Linux behavior is POSIX compliant. Linux just happens > to do more than the

Re: [Gluster-devel] if/else coding style :-)

2014-10-13 Thread Anand Avati
On Mon, Oct 13, 2014 at 2:00 PM, Shyam wrote: > (apologies, last one on the metrics from me :), as I believe it is more > about style than actual numbers at a point) > > _maybe_ this is better, and it is pretty close to call now ;) > > find -name '*.c' | xargs grep else | wc -l > 3719 > find -nam

Re: [Gluster-devel] How to fix wrong telldir/seekdir usage

2014-09-13 Thread Anand Avati
How does the NetBSD nfs server provide stable directory offsets, for the NFS client to resume reading from at a later point in time? Very similar problems are present in that scenario and it might be helpful to see what approaches are taken there (which are probably more tried and tested) Thanks

Re: [Gluster-devel] [fuse-devel] Feature proposal - FS-Cache support in FUSE

2014-09-11 Thread Anand Avati
On Thu, Sep 11, 2014 at 3:26 AM, Miklos Szeredi wrote: > On Wed, Sep 10, 2014 at 6:14 PM, Anand Avati wrote: > > > > > This is something beyond the libfuse API. Unless the kernel/user ABI is > > modified to present the entire 128bits with each call (i.e, both nodeID + &

Re: [Gluster-devel] [fuse-devel] Feature proposal - FS-Cache support in FUSE

2014-09-10 Thread Anand Avati
On Wed, Sep 10, 2014 at 7:20 AM, Miklos Szeredi wrote: > > >> This would be a challenge with any FUSE based filesystem which has > >> persistent filehandles larger than 64bit. > > Fuse does provide a total of 128bits of file handle identification > with nodeID + generation, with some of that numbe

Re: [Gluster-devel] Feature proposal - FS-Cache support in FUSE

2014-09-02 Thread Anand Avati
On Mon, Sep 1, 2014 at 6:07 AM, Vimal A R wrote: > Hello fuse-devel / fs-cache / gluster-devel lists, > > I would like to propose the idea of implementing FS-Cache support in the > fuse kernel module, which I am planning to do as part of my UG university > course. This proposal is by no means fin

Re: [Gluster-devel] patches missing in git even after gerrit says they are merged

2014-08-22 Thread Anand Avati
Whole of /review seems to be owned by gerrit already. Can someone re-post/remerge the patches? That would be the simplest. Are the patches available in the github mirror? On Fri, Aug 22, 2014 at 12:01 AM, Justin Clift wrote: > We *might* need to chown all of /review/ directory down first too,

Re: [Gluster-devel] in dict.c, this gets replace by environment

2014-08-13 Thread Anand Avati
Can you post a bt full? Thanks On Wed, Aug 13, 2014 at 9:35 PM, Emmanuel Dreyfus wrote: > Just in case someone has an idea why this happens: From time to time on > NetBSD, gluster randomly crashes because "this" gets replaced by process > environment. Here is an example: > > #3 0xbb75afb2 in

Re: [Gluster-devel] how does meta xlator work?

2014-08-13 Thread Anand Avati
On Wed, Aug 13, 2014 at 8:55 PM, Emmanuel Dreyfus wrote: > Anand Avati wrote: > > > That may / may not work well in practice depending on the number of > > concurrent apps working on a file. > > I am not sure what could make a FS decide that for the same file, one >

Re: [Gluster-devel] how does meta xlator work?

2014-08-13 Thread Anand Avati
On Tue, Aug 12, 2014 at 9:58 AM, Emmanuel Dreyfus wrote: > On Mon, Aug 11, 2014 at 09:53:19PM -0700, Anand Avati wrote: > > If FUSE implements proper direct_io semantics (somewhat like how O_DIRECT > > flag is handled) and allows the mode to be enabled by the FS in open_cbk, &

Re: [Gluster-devel] Transparent encryption in GlusterFS: Implications on manageability

2014-08-13 Thread Anand Avati
+1 for all the points. On Wed, Aug 13, 2014 at 11:22 AM, Jeff Darcy wrote: > > I.1 Generating the master volume key > > > > > > Master volume key should be generated by user on the trusted machine. > > Recommendations on master key generation provided at section 6.2 of > > the manpa

Re: [Gluster-devel] how does meta xlator work?

2014-08-12 Thread Anand Avati
On Mon, Aug 11, 2014 at 10:36 PM, Emmanuel Dreyfus wrote: > Anand Avati wrote: > > > If FUSE implements proper direct_io semantics (somewhat like how O_DIRECT > > flag is handled) and allows the mode to be enabled by the FS in open_cbk, > > then I guess such a special

Re: [Gluster-devel] how does meta xlator work?

2014-08-11 Thread Anand Avati
On Mon, Aug 11, 2014 at 9:14 PM, Emmanuel Dreyfus wrote: > Anand Avati wrote: > > > In meta all file sizes are 0 (since the contents of the inode are > generated > > dynamically on open()/read(), size is unknown during lookup() -- just > like > > /proc). And there

Re: [Gluster-devel] how does meta xlator work?

2014-08-11 Thread Anand Avati
On Mon, Aug 11, 2014 at 7:37 PM, Harshavardhana wrote: > > But there is something I don't get with the fix: > > - the code forces direct IO if (state->flags & O_ACCMODE) != O_RDONLY), > > but here the file is open read/only, hence I would expect fuse xlator to > > do nothing special > > direct_io_

Re: [Gluster-devel] how does meta xlator work?

2014-08-11 Thread Anand Avati
My guess would be that direct_io_mode works differently on *BSD. In Linux (and it appears in OS X as well), the VFS takes a hint from the file size (returned in lookup/stat) to avoid read()ing beyond that offset. So if a file size is returned as 0 in lookup, read() is never received even
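
With libfuse, the per-open escape hatch for such /proc-like files is the direct_io flag set in the open handler; a minimal sketch of the libfuse equivalent (gluster's fuse bridge implements this internally, so this is only an analogy):

    #define FUSE_USE_VERSION 26
    #include <fuse.h>

    /* direct_io makes the kernel bypass the page cache for this open,
     * so reads are not clamped at the (zero) size returned by lookup. */
    static int myfs_open(const char *path, struct fuse_file_info *fi)
    {
            fi->direct_io = 1;
            return 0;
    }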

Re: [Gluster-devel] Fw: Re: Corvid gluster testing

2014-08-07 Thread Anand Avati
David, is it possible to profile the app to understand the block sizes used for performing write() (using strace, source code inspection, etc.)? The block sizes reported by gluster volume profile are measured on the server side and are subject to some aggregation by the client-side write-behind xlator.

Re: [Gluster-devel] regarding fuse mount crash on graph-switch

2014-08-06 Thread Anand Avati
Can you add more logging to the fd migration failure path as well please (errno and possibly other details)? Thanks! On Wed, Aug 6, 2014 at 9:16 PM, Pranith Kumar Karampuri wrote: > hi, >Could you guys review http://review.gluster.com/#/c/8402. This fixes > crash reported by JoeJulian. We

Re: [Gluster-devel] Fw: Re: Corvid gluster testing

2014-08-06 Thread Anand Avati
PM, Pranith Kumar Karampuri wrote: > > On 08/07/2014 06:48 AM, Anand Avati wrote: > > > > > On Wed, Aug 6, 2014 at 6:05 PM, Pranith Kumar Karampuri < > pkara...@redhat.com> wrote: > >> We checked this performance with plain distribute as well and on nfs i

Re: [Gluster-devel] Fw: Re: Corvid gluster testing

2014-08-06 Thread Anand Avati
Can you please state it more clearly? Thanks > I was wondering if any of you guys know what could contribute to this > difference. > > Pranith > > On 08/07/2014 01:33 AM, Anand Avati wrote: > > Seems like heavy FINODELK contention. As a diagnostic step, can you try >

Re: [Gluster-devel] Fw: Re: Corvid gluster testing

2014-08-06 Thread Anand Avati
Seems like heavy FINODELK contention. As a diagnostic step, can you try disabling eager-locking and check the write performance again (gluster volume set $name cluster.eager-lock off)? On Tue, Aug 5, 2014 at 11:44 AM, David F. Robinson < david.robin...@corvidtec.com> wrote: > Forgot to attach p

Re: [Gluster-devel] regarding resolution for fuse/server

2014-08-01 Thread Anand Avati
There are subtle differences between fuse and server. In fuse the inode table does not use LRU pruning, so expected inodes are guaranteed to be cached. For example, when a mkdir() FOP arrives, fuse would have already checked with a lookup, and the kernel guarantees another thread would not have created mk

Re: [Gluster-devel] Reuse of frame?

2014-07-28 Thread Anand Avati
previous fop. > > > > On Mon, Jul 28, 2014 at 12:19 PM, Anand Avati wrote: > > call frames and stacks are re-used from a mem-pool. So pointers might > > repeat. Can you describe your use case a little more in detail, just to > be > > sure? > > > > >

Re: [Gluster-devel] Reuse of frame?

2014-07-28 Thread Anand Avati
call frames and stacks are re-used from a mem-pool. So pointers might repeat. Can you describe your use case a little more in detail, just to be sure? On Mon, Jul 28, 2014 at 11:27 AM, Matthew McKeen wrote: > Is it true that different fops will always have a different frame > (i.e. different fr
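
A toy illustration of why pointers repeat (not gluster's mem-pool code); pointer identity therefore cannot distinguish two fops over time:

    #include <stdio.h>
    #include <stdlib.h>

    /* One-slot recycling pool: a freed "frame" address can be handed
     * straight back out for the next, logically distinct, allocation. */
    static void *slot;

    void *pool_get(void)    { void *p = slot ? slot : malloc(64); slot = NULL; return p; }
    void  pool_put(void *p) { slot = p; }

    int main(void)
    {
            void *f1 = pool_get();
            pool_put(f1);
            void *f2 = pool_get();
            printf("same address: %s\n", f1 == f2 ? "yes" : "no"); /* yes */
            return 0;
    }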

Re: [Gluster-devel] Better organization for code documentation [Was: Developer Documentation for datastructures in gluster]

2014-07-22 Thread Anand Avati
On Tue, Jul 22, 2014 at 7:35 AM, Kaushal M wrote: > Hey everyone, > > While I was writing the documentation for the options framework, I > thought up of a way to better organize the code documentation we are > creating now. I've posted a patch for review that implements this > organization. [1] >

[Gluster-devel] Fwd: Re: can not build glusterfs3.5.1 on solaris because of missing sys/cdefs.h

2014-07-16 Thread Anand Avati
Copying gluster-devel@ Thanks for reporting Michael. I guess we need to forward port that old change. Can you please send out a patch to gerrit? Thanks! On 7/16/14, 2:36 AM, 马忠 wrote: > Hi Avati, > >I tried to build the latest glusterfs 3.5.1 on solaris11.1, but > it stopped because o

Re: [Gluster-devel] inode linking in GlusterFS NFS server

2014-07-07 Thread Anand Avati
On Mon, Jul 7, 2014 at 12:48 PM, Raghavendra Bhat wrote: > > Hi, > > As per my understanding nfs server is not doing inode linking in readdirp > callback. Because of this there might be some errors while dealing with > virtual inodes (or gfids). As of now meta, gfid-access and snapview-server > (
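
A hedged sketch of linking readdirp entries in the nfs xlator's callback, mirroring the pattern the fuse bridge uses (assumes libglusterfs headers and types; not the actual fix):

    static void link_readdirp_entries(inode_t *parent, gf_dirent_t *entries)
    {
            gf_dirent_t *entry  = NULL;
            inode_t     *linked = NULL;

            list_for_each_entry (entry, &entries->list, list) {
                    if (!entry->inode)
                            continue;  /* "." and ".." carry no inode */
                    linked = inode_link(entry->inode, parent,
                                        entry->d_name, &entry->d_stat);
                    if (linked) {
                            inode_lookup(linked); /* keep in active list */
                            inode_unref(linked);
                    }
            }
    }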

Re: [Gluster-devel] triggers for sending inode forgets

2014-07-04 Thread Anand Avati
On Fri, Jul 4, 2014 at 8:17 PM, Pranith Kumar Karampuri wrote: > > On 07/05/2014 08:17 AM, Anand Avati wrote: > > > > > On Fri, Jul 4, 2014 at 7:03 PM, Pranith Kumar Karampuri < > pkara...@redhat.com> wrote: > >> hi, >> I work on glusterfs an

Re: [Gluster-devel] triggers for sending inode forgets

2014-07-04 Thread Anand Avati
On Fri, Jul 4, 2014 at 7:03 PM, Pranith Kumar Karampuri wrote: > hi, > I work on glusterfs and was debugging a memory leak. Need your help in > figuring out if something is done properly or not. > When a file is looked up for the first time in gluster through fuse, > gluster remembers the par

[Gluster-devel] [PATCH] fuse: ignore entry-timeout on LOOKUP_REVAL

2014-06-26 Thread Anand Avati
finally the ESTALE is going back to the application. Fix: if revalidation is happening with LOOKUP_REVAL, then ignore entry-timeout and always do the up-call. Signed-off-by: Anand Avati --- fs/fuse/dir.c |2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/fs/fuse/dir.c b/f

Re: [Gluster-devel] [Gluster-users] Addition of GlusterFS Port Maintainers

2014-06-24 Thread Anand Avati
On Tue, Jun 24, 2014 at 10:43 AM, Justin Clift wrote: > On 24/06/2014, at 6:34 PM, Vijay Bellur wrote: > > Hi All, > > > > Since there has been traction for ports of GlusterFS to other unix > distributions, we thought of adding maintainers for the various ports that > are around. I am glad to ann

Re: [Gluster-devel] Rolling upgrades from glusterfs 3.4 to 3.5

2014-06-12 Thread Anand Avati
On Thu, Jun 12, 2014 at 10:47 AM, Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > > On 06/12/2014 11:16 PM, Anand Avati wrote: > > > > > On Thu, Jun 12, 2014 at 10:39 AM, Ravishankar N > wrote: > >> On 06/12/2014 08:19 PM, Justin Clift wrote: >&g

Re: [Gluster-devel] Rolling upgrades from glusterfs 3.4 to 3.5

2014-06-12 Thread Anand Avati
On Thu, Jun 12, 2014 at 10:39 AM, Ravishankar N wrote: > On 06/12/2014 08:19 PM, Justin Clift wrote: > >> On 12/06/2014, at 2:22 PM, Ravishankar N wrote: >> >> >>> But we will still hit the problem when rolling upgrade is performed >>> from 3.4 to 3.5, unless the clients are also upgraded to 3.

Re: [Gluster-devel] Rolling upgrades from glusterfs 3.4 to 3.5

2014-06-12 Thread Anand Avati
On Thu, Jun 12, 2014 at 10:33 AM, Vijay Bellur wrote: > On 06/12/2014 06:52 PM, Ravishankar N wrote: > >> Hi Vijay, >> >> Since glusterfs 3.5, posix_lookup() sends ESTALE instead of ENOENT [1] >> when when a parent gfid (entry) is not present on the brick . In a >> replicate set up, this causes a

Re: [Gluster-devel] Bootstrapping glusterfsiostat

2014-06-06 Thread Anand Avati
On Fri, Jun 6, 2014 at 10:13 AM, Vipul Nayyar wrote: > Hello, > > I'm Vipul and I'll be working on a tool called glusterfsiostat under GSOC > this summer with KP as my mentor. Based on our discussion, the plan for the > future is to build an initial working version of the tool in python and > imp

Re: [Gluster-devel] Regarding doing away with refkeeper in locks xlator

2014-06-05 Thread Anand Avati
On Thu, Jun 5, 2014 at 10:56 PM, Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > > On 06/06/2014 10:47 AM, Pranith Kumar Karampuri wrote: > > > On 06/06/2014 10:43 AM, Anand Avati wrote: > > > > > On Thu, Jun 5, 2014 at 9:54 PM, Pranith Kumar Kara

Re: [Gluster-devel] Regarding doing away with refkeeper in locks xlator

2014-06-05 Thread Anand Avati
On Thu, Jun 5, 2014 at 9:54 PM, Pranith Kumar Karampuri wrote: > > On 06/06/2014 10:02 AM, Anand Avati wrote: > > On Thu, Jun 5, 2014 at 7:52 PM, Pranith Kumar Karampuri < > pkara...@redhat.com> wrote: > >> >> This sounds a bit complicated. I th

Re: [Gluster-devel] Regarding doing away with refkeeper in locks xlator

2014-06-05 Thread Anand Avati
On Thu, Jun 5, 2014 at 7:52 PM, Pranith Kumar Karampuri wrote: > > This sounds a bit complicated. I think there is a much simpler solution: > > - First, make update_refkeeper() check for blocked locks (which I > mentioned as "optional" previously) > > - Make grant_blocked_locks() double up and

Re: [Gluster-devel] Regarding doing away with refkeeper in locks xlator

2014-06-05 Thread Anand Avati
On Thu, Jun 5, 2014 at 10:46 AM, Krutika Dhananjay wrote: > To summarize, the real "problems" are: > > - Deref of pl_inode->refkeeper as an inode_t in the cleanup logger. It > is an internal artifact of pl_update_refkeeper() working and nobody else > is supposed to "touch" it. > > For this, the s

Re: [Gluster-devel] doubts in posix_handle_path and posix_handle_pump

2014-06-05 Thread Anand Avati
On 6/3/14, 3:49 AM, Xavier Hernandez wrote: On Tuesday 03 June 2014 15:42:19 Pranith Kumar Karampuri wrote: On 06/03/2014 02:42 PM, Xavier Hernandez wrote: The possible problem I see is that in the comments it says that this function returns a path to an IA_IFDIR (it will return IA_IFDIR on an

Re: [Gluster-devel] Regarding doing away with refkeeper in locks xlator

2014-06-05 Thread Anand Avati
On 6/4/14, 9:43 PM, Krutika Dhananjay wrote:

Re: [Gluster-devel] Regarding doing away with refkeeper in locks xlator

2014-06-05 Thread Anand Avati
On 6/3/14, 11:32 PM, Pranith Kumar Karampuri wrote: On 06/04/2014 11:37 AM, Krutika Dhananjay wrote: Hi, Recently there was a crash in locks translator (BZ 1103347, BZ 1097102) with the following backtrace: (gdb) bt #0 uuid_unpack (in=0x8 , uu=0x7fffea6c6a60) at ../../contrib/uuid/unpack.c:44

Re: [Gluster-devel] Regarding doing away with refkeeper in locks xlator

2014-06-04 Thread Anand Avati
On 6/3/14, 11:07 PM, Krutika Dhananjay wrote: Hi, Recently there was a crash in locks translator (BZ 1103347, BZ 1097102) with the following backtrace: (gdb) bt #0 uuid_unpack (in=0x8 , uu=0x7fffea6c6a60) at ../../contrib/uuid/unpack.c:44 #1 0x7feeba9e19d6 in uuid_unparse_x (uu=, out=0x235

Re: [Gluster-devel] spurios failures in tests/encryption/crypt.t

2014-05-21 Thread Anand Avati
On Tue, May 20, 2014 at 10:54 PM, Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > > > - Original Message ----- > > From: "Anand Avati" > > To: "Pranith Kumar Karampuri" > > Cc: "Edward Shishkin" , "Gluster Devel"

Re: [Gluster-devel] spurios failures in tests/encryption/crypt.t

2014-05-20 Thread Anand Avati
There are a few suspicious things going on here.. On Tue, May 20, 2014 at 10:07 PM, Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > > > > hi, > > > crypt.t is failing regression builds once in a while and most of > > > the times it is because of the failures just after the remount in

Re: [Gluster-devel] Need sensible default value for detecting unclean client disconnects

2014-05-20 Thread Anand Avati
Niels, This is a good addition. While gluster clients do a reasonably good job at detecting dead/hung servers with ping-timeout, the server side detection has been rather weak. TCP_KEEPALIVE has helped to some extent, for cases where an idling client (which holds a lock) goes dead. However if an ac
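
For context, the server-side knobs involved look roughly like this on Linux (a sketch with illustrative default values, not the proposed patch); keepalive probes catch dead idle clients, while TCP_USER_TIMEOUT bounds how long written-but-unacknowledged data may linger before the peer is declared gone:

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    static int set_disconnect_detection(int sock)
    {
            int keep = 1, idle = 20, intvl = 2, cnt = 5;

            if (setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE, &keep, sizeof(keep)))
                    return -1;
            setsockopt(sock, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle));
            setsockopt(sock, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl));
            setsockopt(sock, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof(cnt));
    #ifdef TCP_USER_TIMEOUT
            unsigned int ms = 30000;   /* illustrative default */
            setsockopt(sock, IPPROTO_TCP, TCP_USER_TIMEOUT, &ms, sizeof(ms));
    #endif
            return 0;
    }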

Re: [Gluster-devel] New project on the Forge - gstatus

2014-05-16 Thread Anand Avati
KP, Vipul, It will be awesome to get io-stats like instrumentation on the client side. Here are some further thoughts on how to implement that. If you have a recent git HEAD build, I would suggest that you explore the latency stats on the client side exposed through meta at $MNT/.meta/graphs/activ

Re: [Gluster-devel] Spurious failures because of nfs and snapshots

2014-05-15 Thread Anand Avati
On Thu, May 15, 2014 at 5:49 PM, Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > hi, > In the latest build I fired for review.gluster.com/7766 ( > http://build.gluster.org/job/regression/4443/console) failed because of > spurious failure. The script doesn't wait for nfs export to be av

Re: [Gluster-devel] Automatically building RPMs upon patch submission?

2014-05-12 Thread Anand Avati
On Mon, May 12, 2014 at 7:20 PM, Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > > > - Original Message ----- > > From: "Anand Avati" > > To: "Justin Clift" > > Cc: gluster-devel@gluster.org > > Sent: Tuesday, May 13, 201

Re: [Gluster-devel] Automatically building RPMs upon patch submission?

2014-05-12 Thread Anand Avati
On Mon, May 12, 2014 at 5:16 PM, Justin Clift wrote: > On 13/05/2014, at 1:00 AM, Anand Avati wrote: > > On Mon, May 12, 2014 at 4:39 PM, Justin Clift > wrote: > > On 13/05/2014, at 12:27 AM, Anand Avati wrote: > > > > > http://build.gluster.org/job/regression/

Re: [Gluster-devel] Automatically building RPMs upon patch submission?

2014-05-12 Thread Anand Avati
On Mon, May 12, 2014 at 4:39 PM, Justin Clift wrote: > On 13/05/2014, at 12:27 AM, Anand Avati wrote: > > > http://build.gluster.org/job/regression/build - key in the gerrit patch > number for the CHANGE_ID field, and click 'Build'. > > Doesn't that just a

Re: [Gluster-devel] Automatically building RPMs upon patch submission?

2014-05-12 Thread Anand Avati
On Mon, May 12, 2014 at 4:23 PM, Justin Clift wrote: > On 12/05/2014, at 9:04 PM, Anand Avati wrote: > > > And yeah, the other reason: if a dev pushes a series/set of dependent > patches, regression needs to run only on the last one (regression > test/voting is cumulative for

Re: [Gluster-devel] Automatically building RPMs upon patch submission?

2014-05-12 Thread Anand Avati
On Mon, May 12, 2014 at 11:26 AM, Anand Avati wrote: > > > On Mon, May 12, 2014 at 11:21 AM, Kaleb S. KEITHLEY > wrote: > >> >> Then maybe we should run regression tests on check-in. I'm getting tired >> of queuing up regression tests. (And I know I

Re: [Gluster-devel] Automatically building RPMs upon patch submission?

2014-05-12 Thread Anand Avati
> Or if we can make regression tests trigger automatically, conditional on smoke-test passing, that would be great. Last time I checked I couldn't figure out how (did not look very deeply) and left it as a manual trigger. Avati > > > On 05/12/2014 02:17 PM, Anand Avati wrote: >

Re: [Gluster-devel] Automatically building RPMs upon patch submission?

2014-05-12 Thread Anand Avati
It is much better to review code after regression tests pass (the premise being that human eye time is more precious than build-server run time). On Mon, May 12, 2014 at 10:53 AM, Kaleb S. KEITHLEY wrote: > > How about also an auto run of regression at +1 or +2 code review? > > > > On 05/12/2014 01:49 P

Re: [Gluster-devel] [Gluster-users] User-serviceable snapshots design

2014-05-08 Thread Anand Avati
On Thu, May 8, 2014 at 12:20 PM, Jeff Darcy wrote: > > They were: a) snap view generation requires privileged ops to > > glusterd. So moving this task to the server side solves a lot of those > > challenges. > > Not really. A server-side component issuing privileged requests > whenever a client

Re: [Gluster-devel] [Gluster-users] User-serviceable snapshots design

2014-05-08 Thread Anand Avati
On Thu, May 8, 2014 at 11:48 AM, Jeff Darcy wrote: > > client graph is not dynamically modified. the snapview-client and > > protocol/server are inserted by volgen and no further changes are made on > > the client side. I believe Anand was referring to " Adding a > protocol/client > > instance to

Re: [Gluster-devel] [Gluster-users] User-serviceable snapshots design

2014-05-08 Thread Anand Avati
On Thu, May 8, 2014 at 4:53 AM, Jeff Darcy wrote: > > > * How do clients find it? Are we dynamically changing the client > > >side graph to add new protocol/client instances pointing to new > > >snapview-servers, or is snapview-client using RPC directly? Are > > >the snapview-server

Re: [Gluster-devel] [Gluster-users] User-serviceable snapshots design

2014-05-08 Thread Anand Avati
On Thu, May 8, 2014 at 4:48 AM, Jeff Darcy wrote: > > If snapview-server runs on all servers, how does a particular client > decide which one to use? Do we need to do something to avoid hot spots? > > Overall, it seems like having clients connect *directly* to the snapshot > volumes once they've

Re: [Gluster-devel] [Gluster-users] User-serviceable snapshots design

2014-05-08 Thread Anand Avati
On Thu, May 8, 2014 at 4:45 AM, Ira Cooper wrote: > Also inline. > > - Original Message - > > > The scalability factor I mentioned simply had to do with the core > > infrastructure (depending on very basic mechanisms like the epoll wait > > thread, the entire end-to-end flow of a single f

Re: [Gluster-devel] dht: selfheal of missing directories on nameless (by GFID) LOOKUP

2014-05-05 Thread Anand Avati
On Sun, May 4, 2014 at 10:10 PM, Raghavendra G wrote: > On Mon, May 5, 2014 at 12:32 AM, Anand Avati wrote: > >> >> >> >> On Sun, May 4, 2014 at 9:22 AM, Niels de Vos wrote: >> >>> Hi, >>> >>> bug 1093324 has been opened and we hav

Re: [Gluster-devel] dht: selfheal of missing directories on nameless (by GFID) LOOKUP

2014-05-04 Thread Anand Avati
On Sun, May 4, 2014 at 9:22 AM, Niels de Vos wrote: > Hi, > > bug 1093324 has been opened and we have identified the following cause: > > 1. an NFS-client does a LOOKUP of a directory on a volume > 2. the NFS-client receives a filehandle (contains volume-id + GFID) > 3. add-brick is executed, but

Re: [Gluster-devel] OS X porting merged

2014-05-03 Thread Anand Avati
Can you reproduce by recompiling with CFLAGS="-g -O0" ? On Sat, May 3, 2014 at 7:20 PM, Harshavardhana wrote: > Looks like something really strange happened here > > (gdb) bt > #0 0x7fff94cd06fe in write$NOCANCEL () > #1 0x7fff8ea5198b in _swrite () > #2 0x7fff8ea4cc0c in __sflu

Re: [Gluster-devel] libgfapi and any application

2014-04-30 Thread Anand Avati
Right, this is how the old booster worked. Booster was an LD_PRELOAD'able module which used libglusterfsclient (previous incarnation of libgfapi) to provide the exact functionality Joe is talking about. The libglusterfsclient had both a gfapi-like filesystem API and a thin VFS-like API on top. We w
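
The interposition technique looks roughly like this (a hedged sketch; the path prefix and the hand-off to the client library are placeholders):

    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <fcntl.h>
    #include <stdarg.h>
    #include <string.h>
    #include <sys/types.h>

    static int (*real_open)(const char *, int, ...);

    /* Intercept open(): route gluster paths to the userspace client,
     * pass everything else through to libc. */
    int open(const char *path, int flags, ...)
    {
            mode_t mode = 0;
            if (flags & O_CREAT) {
                    va_list ap;
                    va_start(ap, flags);
                    mode = va_arg(ap, int);
                    va_end(ap);
            }
            if (!real_open)
                    real_open = dlsym(RTLD_NEXT, "open");
            if (!strncmp(path, "/mnt/gluster/", 13)) {
                    /* booster would hand off to libglusterfsclient here */
            }
            return real_open(path, flags, mode);
    }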

Re: [Gluster-devel] Addition of sub-maintainers

2014-04-29 Thread Anand Avati
On Tue, Apr 29, 2014 at 8:58 PM, Vijay Bellur wrote: > Hi All, > > As discussed in the last community meeting [1], we are considering > addition of sub-maintainers to GlusterFS. The intent of this activity is to > streamline our patch management and make it more scalable. > > The responsibilities