[Gluster-devel] tests/basic/ec/ec-new-entry.t spurious failure
Test mentioned in $Subj failed here [1].

[1] https://build.gluster.org/job/rackspace-regression-2GB-triggered/14856/consoleFull

~Atin

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel
Re: [Gluster-devel] RFC: Gluster.Next: Where and how DHT2 work/code would be hosted
> I wonder if glusterd2 could also be a different directory in
> experimental/. We could add a new configure option, say something like
> --enable-glusterd2, that compiles & installs glusterd2 instead of the
> existing glusterd. Thoughts?

It might be a bit painful. Firstly, anything that involves configure.ac and its cronies is likely to induce a certain amount of nausea. Secondly, glusterd2 has a bunch of new dependencies that are not currently satisfied on our regression test machines (or on most developers' machines). It's not impossible, and most of those obstacles need to be overcome eventually, but I think I'd rather keep glusterd2 developers focused on the glusterd code itself for now and defer work on that other stuff until later.
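[Editor's sketch] The switch being discussed could look roughly like this in configure.ac. This is an illustration only, not actual Gluster code: the option name comes from the proposal above, while the shell variable and conditional names are invented.

```
dnl Hypothetical sketch of an --enable-glusterd2 switch that selects
dnl glusterd2 over the existing glusterd at build time.
AC_ARG_ENABLE([glusterd2],
              AS_HELP_STRING([--enable-glusterd2],
                             [build and install glusterd2 instead of glusterd]),
              [with_glusterd2="${enableval}"],
              [with_glusterd2="no"])
AM_CONDITIONAL([BUILD_GLUSTERD2], [test "x${with_glusterd2}" = "xyes"])
```

A Makefile.am could then pick which daemon's subdirectory to descend into based on the BUILD_GLUSTERD2 conditional. The dependency problem mentioned above (Go toolchain etc. on regression machines) is the part this sketch does not solve.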
Re: [Gluster-devel] RFC: Gluster.Next: Where and how DHT2 work/code would be hosted
On Friday 09 October 2015 10:39 PM, Shyam wrote:
> On 10/09/2015 11:26 AM, Jeff Darcy wrote:
>> My position is that we should maximize visibility for other developers
>> by doing all work on review.gluster.org. If it doesn't affect existing
>> tests, it should even go on master. This includes:
>>
>> * Minor changes (e.g. list.h or syncop.* in http://review.gluster.org/#/c/8913/)
>> * Refactoring that doesn't change functionality (e.g. all of http://review.gluster.org/#/c/9411/)
>> * Translators that no existing test will enable (e.g. DHT2)
>>
>> It's not hard to ensure that experimental translators get built but not
>> shipped, just by tweaking the specfile. I think it's something to do
>> with "ghost" but maybe someone who actually knows can just answer off
>> the top of their head before I spend 10x as much time investigating.
>>
>> The really sticky case is incompatible changes to permanent
>> infrastructure, such as GlusterD 2. My preference is for those to be on
>> review.gluster.org as well, but on a branch other than master. It's
>> tempting to make other things dependent on GlusterD 2 and put them on
>> the branch as well, but IMO that temptation should be avoided. Periodic
>> merges from master onto the branch *will* become a time sink,
>> *especially* if other people are following the advice above to make
>> other changes on master. That's exactly what happened with NSR before,
>> and I don't think it will be any different this time or with DHT2. It's
>> really not *that* much work to make something compatible with GlusterD 1
>> as well, and/or to make it selectable via an option. In the long run,
>> it's likely to be less work than either constant branch management or a
>> big-bang merge at the end.
>
> Overall I am fine with the "experimental" on master; I think nothing
> avoids a review problem better than having things in master. When
> something moves out of experimental, we should have had enough eyes on
> the code to make that last move less painful than it is today, i.e. a
> big merge request.
>
> So in short my vote is a +1 for the "experimental" manner of approaching
> this (esp. for DHT2). Anyway, the start of this would be this patch:
> http://review.gluster.org/#/c/12321

Thanks Shyam, this seems like the right approach to me for all ongoing development. Over time we could also establish graduation criteria from experimental to mainline.

I wonder if glusterd2 could also be a different directory in experimental/. We could add a new configure option, say something like --enable-glusterd2, that compiles & installs glusterd2 instead of the existing glusterd. Thoughts?

Regards,
Vijay
Re: [Gluster-devel] RFC: Gluster.Next: Where and how DHT2 work/code would be hosted
On 10/09/2015 11:26 AM, Jeff Darcy wrote:
> My position is that we should maximize visibility for other developers
> by doing all work on review.gluster.org. If it doesn't affect existing
> tests, it should even go on master. This includes:
>
> * Minor changes (e.g. list.h or syncop.* in http://review.gluster.org/#/c/8913/)
> * Refactoring that doesn't change functionality (e.g. all of http://review.gluster.org/#/c/9411/)
> * Translators that no existing test will enable (e.g. DHT2)
>
> It's not hard to ensure that experimental translators get built but not
> shipped, just by tweaking the specfile. I think it's something to do
> with "ghost" but maybe someone who actually knows can just answer off
> the top of their head before I spend 10x as much time investigating.
>
> The really sticky case is incompatible changes to permanent
> infrastructure, such as GlusterD 2. My preference is for those to be on
> review.gluster.org as well, but on a branch other than master. It's
> tempting to make other things dependent on GlusterD 2 and put them on
> the branch as well, but IMO that temptation should be avoided. Periodic
> merges from master onto the branch *will* become a time sink,
> *especially* if other people are following the advice above to make
> other changes on master. That's exactly what happened with NSR before,
> and I don't think it will be any different this time or with DHT2. It's
> really not *that* much work to make something compatible with GlusterD 1
> as well, and/or to make it selectable via an option. In the long run,
> it's likely to be less work than either constant branch management or a
> big-bang merge at the end.

Overall I am fine with the "experimental" on master; I think nothing avoids a review problem better than having things in master. When something moves out of experimental, we should have had enough eyes on the code to make that last move less painful than it is today, i.e. a big merge request.

So in short my vote is a +1 for the "experimental" manner of approaching this (esp. for DHT2). Anyway, the start of this would be this patch: http://review.gluster.org/#/c/12321
Re: [Gluster-devel] RFC: Gluster.Next: Where and how DHT2 work/code would be hosted
My position is that we should maximize visibility for other developers by doing all work on review.gluster.org. If it doesn't affect existing tests, it should even go on master. This includes:

* Minor changes (e.g. list.h or syncop.* in http://review.gluster.org/#/c/8913/)
* Refactoring that doesn't change functionality (e.g. all of http://review.gluster.org/#/c/9411/)
* Translators that no existing test will enable (e.g. DHT2)

It's not hard to ensure that experimental translators get built but not shipped, just by tweaking the specfile. I think it's something to do with "ghost" but maybe someone who actually knows can just answer off the top of their head before I spend 10x as much time investigating.

The really sticky case is incompatible changes to permanent infrastructure, such as GlusterD 2. My preference is for those to be on review.gluster.org as well, but on a branch other than master. It's tempting to make other things dependent on GlusterD 2 and put them on the branch as well, but IMO that temptation should be avoided. Periodic merges from master onto the branch *will* become a time sink, *especially* if other people are following the advice above to make other changes on master. That's exactly what happened with NSR before, and I don't think it will be any different this time or with DHT2. It's really not *that* much work to make something compatible with GlusterD 1 as well, and/or to make it selectable via an option. In the long run, it's likely to be less work than either constant branch management or a big-bang merge at the end.
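[Editor's sketch] On "built but not shipped": one common specfile approach is to let "make install" produce the experimental .so files in the buildroot but keep them out of the packaged file list with %exclude. This is a hedged illustration only; the path below is made up, and whether Gluster's spec should use %exclude or the half-remembered %ghost would need checking against the actual spec.

```
# Illustrative spec fragment: experimental xlators are compiled and
# installed into the buildroot, but excluded from the shipped package.
%files
%exclude %{_libdir}/glusterfs/%{version}/xlator/experimental/*.so
```

(%ghost, by contrast, marks a file as owned by the package without shipping its contents in the payload; either directive keeps the experimental binaries out of what users receive.)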
Re: [Gluster-devel] TEST FAILED ./tests/basic/mount-nfs-auth.t
October 9 2015 2:56 AM, "Milind Changire" wrote:
> https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/10776/consoleFull
>
> says
>
> [06:18:00] ./tests/basic/mount-nfs-auth.t ..
> not ok 62 Got "N" instead of "Y"
> not ok 64 Got "N" instead of "Y"
> not ok 65 Got "N" instead of "Y"
> not ok 67 Got "N" instead of "Y"
> Failed 4/87 subtests

This one fails pretty regularly for me in my own testing. It almost always succeeds on a second try, and doesn't seem to fail as much on the regression-test machines either. The tests involved are these:

TEST 62 (line 270): Y check_mount_success patchy
TEST 64 (line 272): Y umount_nfs /mnt/nfs/0
TEST 65 (line 276): Y check_mount_success patchy/
TEST 67 (line 278): Y umount_nfs /mnt/nfs/0

This is almost certainly a race involving nfs.auth-refresh-interval-sec and how quickly the test runs. Changing the EXPECT_WITHIN lines to use an interval slightly longer than AUTH_REFRESH_INTERVAL might help.
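[Editor's sketch] The race can be made concrete with a small standalone stand-in for the polling that EXPECT_WITHIN-style checks perform (the function and check below are hypothetical, not the actual test framework): if the poll timeout is shorter than the interval after which the condition becomes true, the check reports "N" instead of "Y" even though nothing is broken.

```shell
#!/bin/sh
# Hypothetical stand-in for EXPECT_WITHIN: re-run a check once per second
# until it prints the expected value or the timeout expires.
expect_within() {
    timeout=$1; expected=$2; shift 2
    i=0
    while [ "$i" -lt "$timeout" ]; do
        [ "$("$@")" = "$expected" ] && return 0
        sleep 1
        i=$((i + 1))
    done
    return 1
}

# A check that only starts passing after 2 seconds, standing in for an
# NFS auth change that takes AUTH_REFRESH_INTERVAL to be picked up.
start=$(date +%s)
check() { if [ $(( $(date +%s) - start )) -ge 2 ]; then echo Y; else echo N; fi; }

expect_within 1 Y check && echo ok || echo "not ok"  # timeout shorter than the interval: fails
expect_within 5 Y check && echo ok || echo "not ok"  # timeout longer than the interval: passes
```

This is why bumping the EXPECT_WITHIN interval above AUTH_REFRESH_INTERVAL should remove the spurious failures: the poll then outlives the refresh.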
Re: [Gluster-devel] RFC: Gluster.Next: Where and how DHT2 work/code would be hosted
On 10/09/2015 12:07 AM, Atin Mukherjee wrote:
> First of all my apologies for not going through the meeting blog before
> sending my thoughts on how we plan to maintain GlusterD 2.0 [1]. This
> approach seems fine to me as long as we don't touch any existing
> xlators. How do we handle cases where other xlators get impacted by
> certain changes? Are we going to copy the whole translator into
> xlators/experimental and start working on it?

Nope, we should send a change request for that xlator as a separate commit when possible. The counter-example to this is point (4) below (where DHT2 needs a bit of change in glusterd, but ...). I suggest such changes be maintained as .patch files inside the xlator, till a point when this can be merged is decided.

> Instead of all this, wouldn't it be simpler to have development under a
> separate branch, say "4.0-unstable", and disable CI on this branch till
> it becomes stable? Are we worried about pulling in the changes from
> this to master once the branch becomes stable? I guess the worry is
> *bulk* changes appearing in master (as per meeting minutes).

I share the same concern (on bulk changes), but I am unsure of review stringency on experimental, as things will evolve here, rather than each commit being ready for a clean review from day 1. So this is an open confusion in my head as well: when we want to move an xlator from experimental to supported, what would be the criteria? Would we not be doing bulk reviews then as well? What do others think?

> This is just my thought and I would like to get clarity on this.
>
> Thanks,
> Atin
>
> [1] http://www.gluster.org/pipermail/gluster-devel/2015-October/046872.html
>
> On 10/08/2015 11:35 PM, Shyam wrote:
>> Hi,
>>
>> On checking yesterday's gluster meeting AIs and (later) reading the
>> minutes, for DHT2 here is what I gather and propose to do for
>> $SUBJECT. Feel free to add/negate any plans. (This can also be
>> discussed at [2])
>>
>> 1) Create a directory under the glusterfs master branch as follows:
>>      ./xlators/*experimental*/dht2
>>      ./xlators/*experimental*/posix2
>>    See the patch request at [2]. All code, design documents (work
>>    products in general) would go into this directory.
>>
>> 2) Code that compiles and does not cause CI failures could
>>    *potentially* be merged with the assent of very few DHT2 devs.
>>    There would possibly be no CI integration till we get something
>>    working, so merges would initially be gated on a passing compile.
>>    Soon there would be an attempt at getting unit testing integrated,
>>    so that code being submitted is not abysmally horrendous.
>>
>> 3) Common framework code changes (if any) would be presented as a
>>    separate commit request.
>>
>> 4) (Big problem) DHT2 requires glusterd changes to create a volume as
>>    DHT2 and not DHT; this would be maintained as a .patch in the dht2
>>    directory as above, so that people can play with DHT2 volumes if
>>    interested. Integration of this piece comes either with glusterd
>>    2.0 or, based on the timelines of other events, in the current
>>    version of glusterd. (If you are interested in seeing the current
>>    version of this patch, go here [1].)
>>
>> If there is some key disagreement on certain points like (2) above,
>> then we would need to bring in DHT2 code in parts so that it makes
>> sense. This is fine too, just that we would have 2 repos till we
>> reach a point of maturity in development.
>>
>> *Some issues with the approach:*
>>
>> A) We need to ensure we do not ship xlators compiled from the
>>    experimental directory.
>>
>> B) We possibly need to add a buddy maintainer for experimental
>>    translator owners, who can help with the process of merging their
>>    changes.
>>
>> C) I am not sure how this helps the review process, as initially
>>    xlator development can be iffy and so we do not expect reviews to
>>    be stringent. Later, when we want to move this out of the
>>    experimental category, how do we review it, and what actions do we
>>    take to ensure quality? Won't we have the same bulk code review
>>    issue?
>>
>> Shameless plug: for quality, and to see whether an xlator plays well
>> with other parts of gluster, the distaf framework of testing against
>> possible graphs and access protocols can be of immense help.
>>
>> Shyam
>>
>> [1] https://github.com/ShyamsundarR/glusterfs/commit/663eeb98f6a51384c8745b8882e7c6c4f7b58a7c
>> [2] http://review.gluster.org/#/c/12321/1
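[Editor's sketch] Point (4), carrying the glusterd change as a .patch file inside the dht2 directory, would look something like this for a developer who wants to try DHT2 volumes. This is a self-contained mock-up: the repository layout mimics the glusterfs tree, but the file, patch name, and contents are invented for illustration.

```shell
#!/bin/sh
set -e
# Mock up a tree shaped like the glusterfs repo (contents invented).
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
mkdir -p xlators/mgmt/glusterd xlators/experimental/dht2
echo 'type = DHT' > xlators/mgmt/glusterd/volgen.conf
git add -A
git -c user.email=dev@example.com -c user.name=dev commit -qm 'base tree'

# The DHT2 directory carries the glusterd tweak as a plain .patch file
# instead of a merged commit (patch name and hunk are hypothetical).
cat > xlators/experimental/dht2/glusterd-dht2.patch <<'EOF'
--- a/xlators/mgmt/glusterd/volgen.conf
+++ b/xlators/mgmt/glusterd/volgen.conf
@@ -1 +1 @@
-type = DHT
+type = DHT2
EOF

# Anyone who wants to play with DHT2 volumes applies it locally,
# leaving the committed glusterd code untouched:
git apply xlators/experimental/dht2/glusterd-dht2.patch
cat xlators/mgmt/glusterd/volgen.conf   # now reads: type = DHT2
```

The change stays visible and reviewable alongside the DHT2 code, while the real integration decision (glusterd 2.0 or current glusterd) is deferred.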
[Gluster-devel] Gluster.Next Design discussion - Event report
If you missed the design discussions we held over a couple of days last week, here is the event report.

Day 1, 28th September 2015

*DHT 2*

We started the design discussion with DHT2, where Shyam explained the motivation behind DHT2 and the current pitfalls with respect to scalability requirements. He touched on core principles and concepts like no duplication of directories and centralized, granular layouts. The on-disk format was also discussed in detail. In between we had questions coming from the community, which Shyam addressed periodically. Some flowcharts were covered to discuss how the fops will flow under the DHT2 scheme. A significant amount of time was also spent discussing the impact on other translators like posix, quota, changelog, etc. The DHT2 hangout session is available at [1].

*Heketi*

Post lunch we began with a session on Heketi [2], an intelligent, on-demand, automated GlusterFS volume manager. Luis Pabon explained why Heketi will play a crucial role in positioning Gluster as a cloud storage system. The architecture of Heketi [3] was discussed in detail, and a small Heketi demo was much appreciated. Luis also pointed out that in the near term resilience has to be taken care of by the admin; as a future goal, Heketi needs to ensure its own resiliency. Heketi also plans to handle events to take care of failures. The first cut of Heketi was released a few days back and is available for use.

*GlusterD 2.0*

After an eventful discussion around Heketi we moved to GlusterD 2.0, and KP discussed the motivations behind (re)designing GlusterD. The existing design doesn't scale well as the number of nodes in the cluster increases. The amount of configuration data exchanged when a new node is added to the cluster is quadratic in the number of nodes. The configuration store is replicated on all the nodes in the cluster and is not guaranteed to be consistent; replicating on all nodes doesn't scale as the node count grows.

GlusterD 2.0 is focused on making the configuration store resilient to node failures and able to scale with the number of nodes. It will also make integration of existing and new feature-specific commands (say, quota-limit-usage) simpler and separate from the internals of GlusterD. It was decided that the GlusterD team will send out a proposal for an interface that feature-specific commands need to implement. The hangout recording for Heketi & GlusterD 2.0 is available at [4].

Day 2, 29th September 2015

*NSR*

The NSR discussion kicked off with background on the project and the use cases behind it, before deep diving into the details. Jeff spoke at length about the basic principles on which NSR is based, and then moved on to explain its various architectural components. He covered the journal, terms, and the NSR client, before handing over to Avra, who gave a walkthrough of the NSR server and the journal states. Jeff resumed with reconciliation, we had an open-table discussion about the in-memory journal view, and the discussions ended with how NSR can provide flexible consistency depending on the use case. You can watch the entire discussion at [5].

*Gluster Eventing*

Post lunch, the discussion on the eventing framework started with Samikshan giving an overview of StorageD, DBus, and the list of events this framework aims to support. He then spoke about the architecture: how StorageD can retrieve Gluster state from individual nodes and expose it as DBus objects implementing corresponding interfaces. Hook scripts would be used to notify StorageD of changes on the Gluster front, so that StorageD can update itself and send out the necessary change notifications from individual nodes. There were questions regarding how these events from individual nodes could be converted into one stream of events for the entire cluster; Samikshan will be looking at event buses like Salt's to address this aspect.
Watch this discussion offline at [6]. If you have any questions on these, feel free to reach us.

Regards,
Gluster.Next team

[1] https://www.youtube.com/watch?v=HM_0PeG0tFI
[2] https://github.com/heketi/heketi
[3] https://github.com/heketi/heketi/wiki/Architecture
[4] https://www.youtube.com/watch?v=iBFfHv4bne8
[5] https://www.youtube.com/watch?v=oa7468Rfsbw
[6] https://www.youtube.com/watch?v=ToWwfBKxWCQ