Re: [Gluster-devel] bug-822830.t fails on release-3.5 branch
On 07/04/2014 11:20 AM, Pranith Kumar Karampuri wrote:
> On 07/04/2014 11:19 AM, Ravishankar N wrote:
>> On 07/04/2014 11:09 AM, Pranith Kumar Karampuri wrote:
>>> Ravi,
>>> I already sent a patch for it in the morning at
>>> http://review.gluster.com/8233 -- review please :-)
>> 830665.t is identical in master, where it succeeds. Looks like the
>> *match_subnet_v4() changes in master need to be backported to 3.5 as well.
> That is because Avati's patch, where EXPECT matches a regex, is not
> present on release-3.5:
>
>     commit 9a34ea6a0a95154013676cabf8528b2679fb36c4
>     Author: Anand Avati <av...@redhat.com>
>     Date:   Fri Jan 24 18:30:32 2014 -0800
>
>         tests: support regex in EXPECT constructs
>
>         Instead of just strings, provide the ability to specify a regex
>         of the pattern to expect
>
>         Change-Id: I6ada978197dceecc28490a2a40de73a04ab9abcd
>         Signed-off-by: Anand Avati <av...@redhat.com>
>         Reviewed-on: http://review.gluster.org/6788
>         Reviewed-by: Pranith Kumar Karampuri <pkara...@redhat.com>
>         Tested-by: Gluster Build System <jenk...@build.gluster.com>
>
> Shall we backport this?

I think we should; reviewed http://review.gluster.org/#/c/8235/.
Thanks for the fix :)

> On 07/04/2014 11:00 AM, Ravishankar N wrote:
>> Hi Niels/Santosh,
>> tests/bugs/bug-830665.t is consistently failing on the 3.5 branch:
>>
>>     not ok 17 Got "*.redhat.com" instead of "\*.redhat.com"
>>     not ok 19 Got "192.168.10.[1-5]" instead of "192.168.10.\[1-5]"
>>
>> and seems to be introduced by http://review.gluster.org/#/c/8223/.
>> Could you please look into it?
>>
>> Thanks,
>> Ravi

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel
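[Editor's note: the failures above come from the harness comparing the expected value as a literal string; with Avati's patch, EXPECT treats it as a regex. The real helper lives in the glusterfs tests/include.rc harness -- the sketch below only illustrates the idea, and the helper name and structure here are illustrative, not the actual code.]

```shell
#!/bin/sh
# Hypothetical sketch of an EXPECT-style test helper that treats the
# expected value as an anchored extended regex, in the spirit of
# "tests: support regex in EXPECT constructs".

EXPECT () {
    expected="$1"; shift
    actual=$("$@")
    # Anchor the pattern so e.g. "192.168.10.[1-5]" must match the
    # whole output, not just a substring of it.
    if printf '%s\n' "$actual" | grep -E -q "^${expected}$"; then
        echo "ok: got $actual"
        return 0
    else
        echo "not ok Got $actual instead of $expected"
        return 1
    fi
}

# A literal pattern still matches itself...
EXPECT 'Started' echo Started
# ...and a regex matches any value in the class, which a plain string
# comparison against the escaped form ("192.168.10.\[1-5]") would not.
EXPECT '192\.168\.10\.[1-5]' echo 192.168.10.3
```

Without regex support, a test has to escape the pattern ("\*.redhat.com", "192.168.10.\[1-5]") and compare it literally, which is exactly what tests 17 and 19 trip over on release-3.5.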
Re: [Gluster-devel] bug-822830.t fails on release-3.5 branch
On 07/04/2014 12:00 PM, Santosh Pradhan wrote:
> Thanks guys for looking into this. I am just wondering how this passed
> regression before Niels merged it in? The good part is that the test
> case needs modification, not the code ;)

There seems to be some bug in our regression testing code. Even though
the regression failed, it gave the verdict as SUCCESS:
http://build.gluster.org/job/rackspace-regression-2GB-triggered/97/consoleFull

Pranith
Re: [Gluster-devel] bug-822830.t fails on release-3.5 branch
On Thu, Jul 3, 2014 at 11:30 PM, Santosh Pradhan <sprad...@redhat.com> wrote:
> Thanks guys for looking into this. I am just wondering how this passed
> regression before Niels merged it in? The good part is that the test
> case needs modification, not the code ;)

We need a single maintainer just for the test cases, to keep them
stable across the board; failures like this will keep occurring when
changes introduce races as we add more and more test cases. For
example, chmod.t from posix-compliance fails once in a while, and it is
never maintained by us.

--
Religious confuse piety with mere ritual, the virtuous confuse
regulation with outcomes
Re: [Gluster-devel] bug-822830.t fails on release-3.5 branch
> There seems to be some bug in our regression testing code. Even though
> the regression failed it gave the verdict as SUCCESS
> http://build.gluster.org/job/rackspace-regression-2GB-triggered/97/consoleFull

This was fixed by Justin Clift recently.
Re: [Gluster-devel] bug-822830.t fails on release-3.5 branch
On 07/04/2014 12:06 PM, Harshavardhana wrote:
>> There seems to be some bug in our regression testing code. Even though
>> the regression failed it gave the verdict as SUCCESS
>> http://build.gluster.org/job/rackspace-regression-2GB-triggered/97/consoleFull
> This was fixed by Justin Clift recently

All is well then :-)

Pranith
Re: [Gluster-devel] bug-822830.t fails on release-3.5 branch
On Fri, Jul 04, 2014 at 11:51:45AM +0530, Ravishankar N wrote:
> On 07/04/2014 11:20 AM, Pranith Kumar Karampuri wrote:
>> That is because Avati's patch, where EXPECT matches a regex, is not
>> present on release-3.5 (commit 9a34ea6a0a95154013676cabf8528b2679fb36c4,
>> "tests: support regex in EXPECT constructs"). Shall we backport this?
> I think we should; reviewed http://review.gluster.org/#/c/8235/.
> Thanks for the fix :)

Thanks guys! Justin sent me a heads-up on the "should have failed
regression testing" issue yesterday, but I was a little tied up. I was
planning to look into the issue today; it seems you already found it,
wohoo! When this one passes regression tests, I'll merge it.

Many thanks,
Niels
Re: [Gluster-devel] bug-822830.t fails on release-3.5 branch
On 07/04/2014 12:04 PM, Harshavardhana wrote:
> On Thu, Jul 3, 2014 at 11:30 PM, Santosh Pradhan <sprad...@redhat.com> wrote:
>> Thanks guys for looking into this. I am just wondering how this passed
>> regression before Niels merged it in? The good part is that the test
>> case needs modification, not the code ;)
> We need a single maintainer just for the test cases, to keep them
> stable; failures like this will keep occurring when changes introduce
> races as we add more and more test cases.

I don't mind maintaining it along with Justin if people are okay with it.

Pranith

> For example, chmod.t from posix-compliance fails once in a while and it
> is never maintained by us.
Re: [Gluster-devel] bug-822830.t fails on release-3.5 branch
On 04/07/2014, at 7:30 AM, Santosh Pradhan wrote:
> Thanks guys for looking into this. I am just wondering how this passed
> regression before Niels merged it in?

It was due to stupidity on my part. ;)

I was adjusting the bash script in Jenkins the other day, attempting to
get the console output looking nicer, so I added a few echo statements
in places to space things out.

The previous (working) code was like this:

  ...
  sudo -E bash -x /opt/qa/regression.sh
  RET=$?

  if [ $RET = 0 ]; then
      V=+1
      VERDICT=SUCCESS
  else
      V=-1
      VERDICT=FAILED
  fi
  ...

With the brilliant addition of echo statements in exactly the wrong
place, it became:

  ...
  echo
  echo
  sudo -E bash -x /opt/qa/regression.sh
  echo
  echo
  RET=$?

  if [ $RET = 0 ]; then
      V=+1
      VERDICT=SUCCESS
  else
      V=-1
      VERDICT=FAILED
  fi
  ...

... and so it was using the return code from the echo statements instead
of regression.sh. Not my brightest moment. ;)

+ Justin

--
GlusterFS - http://www.gluster.org
An open source, distributed file system scaling to several petabytes,
and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift
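[Editor's note: the failure mode above is easy to reproduce: `$?` always holds the status of the most recently executed command, so any command inserted between the one you care about and the `RET=$?` capture silently replaces the status. A minimal sketch, where `false` stands in for a failing regression run (all names here are illustrative):]

```shell
#!/bin/sh
# $? holds the exit status of the MOST RECENT command, so an echo
# between a command and RET=$? masks the real exit code.

false                        # stands in for a failing regression.sh run
echo                         # cosmetic blank line -- but it resets $? to 0
RET=$?
echo "broken: RET=$RET"      # prints "broken: RET=0" -- failure is lost

# Fix: capture the exit status immediately, before any other command.
false
RET=$?
echo                         # cosmetic output can safely come after the capture
echo "fixed: RET=$RET"       # prints "fixed: RET=1"
```

The general rule: capture `$?` on the very next line after the command, or skip the intermediate variable entirely with `if sudo -E bash -x /opt/qa/regression.sh; then ...`.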
Re: [Gluster-devel] bug-822830.t fails on release-3.5 branch
On 04/07/2014, at 7:34 AM, Harshavardhana wrote:
> We need a single maintainer just for the test cases, to keep them
> stable; failures like this will keep occurring when changes introduce
> races as we add more and more test cases. For example, chmod.t from
> posix-compliance fails once in a while and it is never maintained by us.

Yeah, I'm not really sure what we should do about the smoke test stuff.

Possibly useful?
http://www.gluster.org/pipermail/gluster-infra/2014-March/68.html

+ Justin
Re: [Gluster-devel] regarding inode_link/unlink
On 07/04/2014 04:28 PM, Raghavendra Gowdappa wrote:
> ----- Original Message -----
> From: Pranith Kumar Karampuri <pkara...@redhat.com>
> To: Gluster Devel <gluster-devel@gluster.org>, Anand Avati
>     <av...@gluster.org>, Brian Foster <bfos...@redhat.com>, Raghavendra
>     Gowdappa <rgowd...@redhat.com>, Raghavendra Bhat <rab...@redhat.com>
> Sent: Friday, July 4, 2014 3:44:29 PM
> Subject: regarding inode_link/unlink
>
>> hi,
>> I have a doubt about when a particular dentry_unset, and thus
>> inode_unref on the parent directory, happens in fuse-bridge in gluster.
>> When a file is looked up for the first time, fuse_entry_cbk does
>> inode_link with parent-gfid/bname. Whenever an unlink/rmdir/(lookup
>> returning ENOENT) happens, the corresponding inode unlink happens.
>> The question is: will the following set of operations lead to leaks?
>>
>> 1) Mount 'M0' creates a file 'a'
>> 2) Mount 'M1' of the same volume deletes file 'a'
>>
>> M0 never touches 'a' anymore. When will inode_unlink happen in such
>> cases? Will it lead to memory leaks?
>
> The kernel will eventually send forget(a) on M0 and that will clean up
> the dentries and inode. It is equivalent to a file being looked up and
> never used again (deleting doesn't matter in this case).

Do you know the trigger points for that? When I do 'touch a' on the
mount point and leave the system like that, forget is not coming. If I
do unlink on the file, then forget comes.

Pranith
[Gluster-devel] Gerrit Statistics - June 2014
Hi All,

I have pulled together some statistics from gerrit, for fun. The
statistics that I have generated are limited by my understanding of
gerrit's gsql interface. I don't claim that these stats are 100%
accurate - if you notice any aberrations, please let me know :).

[1] has the number of patches sent by submitters in June 2014, [2] the
number of patches sent by submitters in 2014 up to June 30th, [3] the
number of reviews done by reviewers in June 2014, and [4] the number of
reviews done by reviewers in 2014.

Cheers,
Vijay

[1] http://employees.org/~vbellur/glusterfs/stats/patches-201406.txt
[2] http://employees.org/~vbellur/glusterfs/stats/patches-2014.txt
[3] http://employees.org/~vbellur/glusterfs/stats/reviews-201406.txt
[4] http://employees.org/~vbellur/glusterfs/stats/reviews-2014.txt
[Gluster-devel] spurious failure (bug-1112559.t)
Hi,

I think the regression test bug-1112559.t is causing some spurious
failures; I see some regression jobs failing because of it.

Regards,
Raghavendra Bhat
Re: [Gluster-devel] 3.6 Feature Freeze - move to mid next week?
On 2014-07-04 13:30, Vijay Bellur wrote:
> Hi All,
> Given the holiday weekend in the US, I feel that it would be
> appropriate to move the 3.6 feature freeze date to mid next week so
> that we can have more reviews done & address review comments too. We
> can still continue to track other milestones as per our release
> schedule [1]. What do you folks think?

Is 'git://git.gluster.org/glusterfs master' the right thing to clone if
I'm interested in testing the IPv6 support (and the possible fixes for
my outstanding bugs), or should I cherry-pick the appropriate bits and
apply them to 3.5.1? (I'm not planning to go live until some time in
September [or later ;-)])

/Anders

--
Anders Blomdell                  Email: anders.blomd...@control.lth.se
Department of Automatic Control
Lund University                  Phone: +46 46 222 4625
P.O. Box 118                     Fax:   +46 46 138118
SE-221 00 Lund, Sweden
Re: [Gluster-devel] 3.6 Feature Freeze - move to mid next week?
On 2014-07-04 17:14, Jeff Darcy wrote:
>> Given the holiday weekend in the US, I feel that it would be
>> appropriate to move the 3.6 feature freeze date to mid next week so
>> that we can have more reviews done & address review comments too. We
>> can still continue to track other milestones as per our release
>> schedule [1]. What do you folks think?
> I think the answer depends on what we can expect to change between now
> and then. Since the gluster.org feature page never got updated to
> reflect the real feature set for 3.6, I took the list from email sent
> after the planning meeting.
>
> * Better SSL - Two out of three patches merged, one still in review.
> * Data Classification - Design barely begun.
> * Heterogeneous Bricks - Patch has CR+1 V+1 but still stalled in review.
> * Trash - Ancient one is still there, probably doesn't even work.
> * Disperse - Patches still in very active review.
> * Persistent AFR Changelog Xattributes - Patches merged.
> * Better Peer Identification - Patch still in review (fails verification).
> * Gluster Volume Snapshot - Tons of patches merged, tons more still to come.
> * AFRv2 - Jammed in long ago.
> * Policy Based Split-Brain Resolver (PBSBR) - No patches, feature page
>   still says in design.
> * RDMA Improvements - No patches, feature page says work in progress.
> * Server-side Barrier Feature - Patches merged.
>
> That leaves us with a very short list of items that are likely to
> change state:
>
> * Better SSL
> * Heterogeneous Bricks
> * Disperse
> * Better Peer Identification
>
> Of those, I think only disperse is likely to benefit from an extension.
> The others just need people to step up and finish reviewing them, which
> could happen today if there were sufficient will. The real question is
> what to do about disperse. Some might argue that it's already complete
> enough to go in, so long as its limitations are documented
> appropriately. Others might argue that it's still months away from
> being usable (especially wrt performance). In a way it doesn't matter,
> because either way a few days won't make a difference. We just need to
> make a collective decision based on its current state (or close to it).
> If we need to wait a few days before people can come together for
> that, so be it.

OK, this probably answered my earlier question: since there is no IPv6
on this list (stated somewhere to depend on 'Better Peer
Identification'), I should stick to 3.5.1, only apply patches that
address my needs, and then check what needs to be done when 3.6.0 is
out.

/Anders
Re: [Gluster-devel] 3.6 Feature Freeze - move to mid next week?
----- Original Message -----
From: Jeff Darcy <jda...@redhat.com>
To: Vijay Bellur <vbel...@redhat.com>
Cc: Gluster Devel <gluster-devel@gluster.org>
Sent: Friday, July 4, 2014 10:14:32 AM
Subject: Re: [Gluster-devel] 3.6 Feature Freeze - move to mid next week?

> The real question is what to do about disperse. Some might argue that
> it's already complete enough to go in, so long as its limitations are
> documented appropriately. Others might argue that it's still months
> away from being usable (especially wrt performance). In a way it
> doesn't matter, because either way a few days won't make a difference.
> We just need to make a collective decision based on its current state
> (or close to it). If we need to wait a few days before people can come
> together for that, so be it.

The reliability and performance of the erasure code translator is
probably not at a level where we could guarantee the feature is bug-free
and ready. However, the feature could be added to the gluster code base
for people to begin to experiment with; as suggested, we would need to
document its limitations. The idea being that the more hands get on the
code, the more bugs are found and the more suggestions are made for
improvements, deeper integration, etc.

I do not believe the erasure code translator is called "disperse" any
longer.
Re: [Gluster-devel] 3.6 Feature Freeze - move to mid next week?
On 07/04/2014 08:44 PM, Jeff Darcy wrote:
>> Given the holiday weekend in the US, I feel that it would be
>> appropriate to move the 3.6 feature freeze date to mid next week so
>> that we can have more reviews done & address review comments too. We
>> can still continue to track other milestones as per our release
>> schedule [1]. What do you folks think?
> I think the answer depends on what we can expect to change between now
> and then. Since the gluster.org feature page never got updated to
> reflect the real feature set for 3.6, I took the list from email sent
> after the planning meeting.
>
> * Better SSL - Two out of three patches merged, one still in review.
> * Data Classification - Design barely begun.
> * Heterogeneous Bricks - Patch has CR+1 V+1 but still stalled in review.
> * Trash - Ancient one is still there, probably doesn't even work.

There is an improved implementation of trash in gerrit; it can get more
traction with more reviews, rebases etc.

> * Disperse - Patches still in very active review.
> * Persistent AFR Changelog Xattributes - Patches merged.
> * Better Peer Identification - Patch still in review (fails verification).
> * Gluster Volume Snapshot - Tons of patches merged, tons more still to come.
> * AFRv2 - Jammed in long ago.
> * Policy Based Split-Brain Resolver (PBSBR) - No patches, feature page
>   still says in design.
> * RDMA Improvements - No patches, feature page says work in progress.

Don't think there are any major pieces missing to get rdma functional.
More testing and bug fixes should be mostly it; that can happen in the
interval between feature freeze and code freeze.

> * Server-side Barrier Feature - Patches merged.

An update on a few other nice-to-have features that have either made
the cut or can become available with some effort:

* glusterd volume locks - already in
* better logging framework - a few xlators have adopted this
* Exports & Netgroups authentication - needs some review attention and
  rebases
* Gluster user serviceable snapshots - feature already in
* rest-api - early implementation available, needs some more reviews
* Object Count & Archipelago - have some code, need a little more
  effort to make them available

> That leaves us with a very short list of items that are likely to
> change state:
>
> * Better SSL
> * Heterogeneous Bricks
> * Disperse
> * Better Peer Identification
>
> Of those, I think only disperse is likely to benefit from an extension.
> The others just need people to step up and finish reviewing them, which
> could happen today if there were sufficient will. The real question is
> what to do about disperse. Some might argue that it's already complete
> enough to go in, so long as its limitations are documented
> appropriately. Others might argue that it's still months away from
> being usable (especially wrt performance). In a way it doesn't matter,
> because either way a few days won't make a difference. We just need to
> make a collective decision based on its current state (or close to it).
> If we need to wait a few days before people can come together for
> that, so be it.

My inclination with respect to disperse/ec is to get it into the code
base and mature it there (mostly from a performance perspective).
Better peer identification, trash and a subset of the features in the
above lists can benefit from a few days' extension. So it seems
worthwhile to me to push the feature freeze out till mid next week.

-Vijay
Re: [Gluster-devel] 3.6 Feature Freeze - move to mid next week?
> There is an improved implementation of trash in gerrit; it can get more
> traction with more reviews, rebases etc.

I see nine patches for this, all failing verification and all but one
inactive since March 10. Given our review rate, is this likely to
converge in only a week?

>> * RDMA Improvements - No patches, feature page says work in progress.
> Don't think there are any major pieces missing to get rdma functional.
> More testing and bug fixes should be mostly it; that can happen in the
> interval between feature freeze and code freeze.

The feature page specifically mentions co-existence with TCP, and
performance. I'm guessing those are addressed by 149, and possibly by
4378. Is that correct?
Re: [Gluster-devel] 3.6 Feature Freeze - move to mid next week?
On 07/04/2014 11:15 PM, Jeff Darcy wrote:
>> There is an improved implementation of trash in gerrit; it can get
>> more traction with more reviews, rebases etc.
> I see nine patches for this, all failing verification and all but one
> inactive since March 10. Given our review rate, is this likely to
> converge in only a week?

I think this re-factoring exercise has been starved of review attention
due to a lack of rebases and not passing regression tests. Anoop and
Jiffin are expected to be active on this next week. If all works well,
we can possibly get Trash in.

>>> * RDMA Improvements - No patches, feature page says work in progress.
>> Don't think there are any major pieces missing to get rdma functional.
>> More testing and bug fixes should be mostly it; that can happen in the
>> interval between feature freeze and code freeze.
> The feature page specifically mentions co-existence with TCP, and
> performance. I'm guessing those are addressed by 149, and possibly by
> 4378. Is that correct?

Yes, those address a few issues. I believe there are a few open bugs
(don't have bz IDs handy right now) in those areas which need to be
sorted out, and I am hopeful that we can address them by code freeze.
Raghavendra - please fill in here if you have more context on these
issues.

Thanks,
Vijay
Re: [Gluster-devel] Bug#751888 closed by Thomas Goirand z...@debian.org (Bug#751888: fixed in glusterfs 3.5.0-1.1)
On 07/02/2014 05:39 PM, Debian Bug Tracking System wrote:
> This is an automatic notification regarding your Bug report which was
> filed against the glusterfs-server package:
>
> #751888: glusterfs-server: creating symlinks generates errors
>
> It has been closed by Thomas Goirand <z...@debian.org>.

Hello,
I have just checked the version uploaded to unstable (3.5.0-1.1), and I
can confirm that the bug is fixed. Thank you all for the quick
resolution.

Bye,
Matteo C.
Re: [Gluster-devel] 3.6 Feature Freeze - move to mid next week?
On 04/07/2014, at 12:30 PM, Vijay Bellur wrote:
> Hi All,
> Given the holiday weekend in the US, I feel that it would be
> appropriate to move the 3.6 feature freeze date to mid next week so
> that we can have more reviews done & address review comments too. We
> can still continue to track other milestones as per our release
> schedule [1]. What do you folks think?

How about the end of next week? The extra few days could make a
positive difference as to what gets in. (?)

+ Justin
Re: [Gluster-devel] Gerrit Statistics - June 2014
On 04/07/2014, at 1:14 PM, Vijay Bellur wrote:
> Hi All,
> I have pulled together some statistics from gerrit, for fun. The
> statistics that I have generated are limited by my understanding of
> gerrit's gsql interface. I don't claim that these stats are 100%
> accurate - if you notice any aberrations, please let me know :).
>
> [1] has the number of patches sent by submitters in June 2014, [2] the
> number of patches sent by submitters in 2014 up to June 30th, [3] the
> number of reviews done by reviewers in June 2014, and [4] the number
> of reviews done by reviewers in 2014.
>
> [1] http://employees.org/~vbellur/glusterfs/stats/patches-201406.txt
> [2] http://employees.org/~vbellur/glusterfs/stats/patches-2014.txt
> [3] http://employees.org/~vbellur/glusterfs/stats/reviews-201406.txt
> [4] http://employees.org/~vbellur/glusterfs/stats/reviews-2014.txt

Awesome. Pranith appears to be a patch-generating machine, and you're
absolutely nailing reviews.

With the reviews, do you reckon that's accurate? I wonder if it's
picking up the "Change has been successfully cherry-picked as xxx"
messages and such as reviews?

+ Justin
[Gluster-devel] triggers for sending inode forgets
hi,
I work on glusterfs and was debugging a memory leak. I need your help
in figuring out whether something is done properly or not.

When a file is looked up for the first time in gluster through fuse,
gluster remembers the (parent-inode, basename) pair for that inode.
Whenever an unlink/rmdir/(lookup returning ENOENT) happens, the
corresponding forgetting of (parent-inode, basename) happens. In all
other cases it relies on fuse to send a forget on an inode to release
these associations.

I was wondering what the trigger points are for fuse sending forgets.
Let's say M0 and M1 are fuse mounts of the same volume:

1) Mount 'M0' creates a file 'a'
2) Mount 'M1' deletes file 'a'

M0 never touches 'a' anymore. Will a forget be sent on the inode of
'a'? If yes, when?

Pranith
Re: [Gluster-devel] Gerrit Statistics - June 2014
On 07/05/2014 01:59 AM, Justin Clift wrote:
> With the reviews, do you reckon that's accurate?

The review count includes those reviews that provide a CR vote (-2, -1,
+1, +2). If different versions of a patchset are reviewed by the same
reviewer, the review count does get incremented as many times. Since
there has been an effort put in to review multiple times, I think it is
worth counting that.

> I wonder if it's picking up the "Change has been successfully
> cherry-picked as xxx" messages and such as reviews?

Only if the CR accompanying that message has a +1 or +2 vote. If a
patch gets merged without any CR votes, the committer's review count
does not increment with the query I run. If review comments are passed
without a CR vote, neither do they get picked up by the query.

-Vijay
Re: [Gluster-devel] triggers for sending inode forgets
On 07/05/2014 08:17 AM, Anand Avati wrote: On Fri, Jul 4, 2014 at 7:03 PM, Pranith Kumar Karampuri pkara...@redhat.com mailto:pkara...@redhat.com wrote: hi, I work on glusterfs and was debugging a memory leak. I need your help in figuring out whether something is done properly or not. When a file is looked up for the first time in gluster through fuse, gluster remembers the (parent-inode, basename) pair for that inode. Whenever an unlink/rmdir/(lookup returning ENOENT) happens, the corresponding (parent-inode, basename) association is forgotten. This is because the path resolver explicitly calls d_invalidate() on a dentry when d_revalidate() fails on it. In all other cases gluster relies on fuse to send a forget for the inode to release these associations. I was wondering what the trigger points are for fuse to send forgets. Let's say M0 and M1 are fuse mounts of the same volume. 1) Mount 'M0' creates a file 'a' 2) Mount 'M1' deletes file 'a' M0 never touches 'a' anymore. Will a forget be sent on the inode of 'a'? If yes, when? It really depends on when the memory manager decides to start reclaiming memory from the dcache due to memory pressure. If the system is not under memory pressure, and if the stale dentry is never encountered by the path resolver, the inode may never receive a forget. To keep a tight utilization limit on the inode/dcache, you will have to proactively call fuse_notify_inval_entry on old/deleted files. Thanks for this info Avati. I see that in fuse-bridge for glusterfs there is a setxattr interface to do that. Is that what you are referring to? Pranith Thanks
Re: [Gluster-devel] triggers for sending inode forgets
On Fri, Jul 4, 2014 at 8:17 PM, Pranith Kumar Karampuri pkara...@redhat.com wrote: On 07/05/2014 08:17 AM, Anand Avati wrote: On Fri, Jul 4, 2014 at 7:03 PM, Pranith Kumar Karampuri pkara...@redhat.com wrote: hi, I work on glusterfs and was debugging a memory leak. I need your help in figuring out whether something is done properly or not. When a file is looked up for the first time in gluster through fuse, gluster remembers the (parent-inode, basename) pair for that inode. Whenever an unlink/rmdir/(lookup returning ENOENT) happens, the corresponding (parent-inode, basename) association is forgotten. This is because the path resolver explicitly calls d_invalidate() on a dentry when d_revalidate() fails on it. In all other cases gluster relies on fuse to send a forget for the inode to release these associations. I was wondering what the trigger points are for fuse to send forgets. Let's say M0 and M1 are fuse mounts of the same volume. 1) Mount 'M0' creates a file 'a' 2) Mount 'M1' deletes file 'a' M0 never touches 'a' anymore. Will a forget be sent on the inode of 'a'? If yes, when? It really depends on when the memory manager decides to start reclaiming memory from the dcache due to memory pressure. If the system is not under memory pressure, and if the stale dentry is never encountered by the path resolver, the inode may never receive a forget. To keep a tight utilization limit on the inode/dcache, you will have to proactively call fuse_notify_inval_entry on old/deleted files. Thanks for this info Avati. I see that in fuse-bridge for glusterfs there is a setxattr interface to do that. Is that what you are referring to? In glusterfs, fuse-bridge.c:fuse_invalidate_entry() is the function you want to look at. The setxattr() interface is just for testing the functionality.
[Gluster-devel] regarding spurious failure tests/bugs/bug-1112559.t
hi Joseph, The test above failed on a documentation patch, so it has to be a spurious failure. See http://build.gluster.org/job/rackspace-regression-2GB-triggered/150/consoleFull for more information. Pranith