Re: [Gluster-devel] Spurious failures
Hi Krutika, It's failing with ++ gluster --mode=script --wignore volume geo-rep master slave21.cloud.gluster.org::slave create push-pem Gluster version mismatch between master and slave. I will look into it. Thanks and Regards, Kotresh H R - Original Message - > From: "Krutika Dhananjay" > To: "Atin Mukherjee" > Cc: "Gluster Devel" , "Gaurav Garg" > , "Aravinda" , > "Kotresh Hiremath Ravishankar" > Sent: Tuesday, September 22, 2015 9:03:44 PM > Subject: Re: Spurious failures > > Ah! Sorry. I didn't read that line. :) > > Just figured even ./tests/geo-rep/georep-basic-dr-rsync.t is added to bad > tests list. > > So it's just /tests/geo-rep/georep-basic-dr-tarssh.t for now. > > Thanks Atin! > > -Krutika > > - Original Message - > > > From: "Atin Mukherjee" > > To: "Krutika Dhananjay" > > Cc: "Gluster Devel" , "Gaurav Garg" > > , "Aravinda" , "Kotresh Hiremath > > Ravishankar" > > Sent: Tuesday, September 22, 2015 8:51:22 PM > > Subject: Re: Spurious failures > > > ./tests/bugs/glusterd/bug-1238706-daemons-stop-on-peer-cleanup.t (Wstat: > > 0 Tests: 8 Failed: 2) > > Failed tests: 6, 8 > > Files=1, Tests=8, 48 wallclock secs ( 0.01 usr 0.01 sys + 0.88 cusr > > 0.56 csys = 1.46 CPU) > > Result: FAIL > > ./tests/bugs/glusterd/bug-1238706-daemons-stop-on-peer-cleanup.t: bad > > status 1 > > *Ignoring failure from known-bad test > > ./tests/bugs/glusterd/bug-1238706-daemons-stop-on-peer-cleanup.t* > > [11:24:16] ./tests/bugs/glusterd/bug-1242543-replace-brick.t .. ok > > 17587 ms > > [11:24:16] > > All tests successful > > > On 09/22/2015 08:46 PM, Krutika Dhananjay wrote: > > > https://build.gluster.org/job/rackspace-regression-2GB-triggered/14421/consoleFull > > > > > > Ctrl + f 'not ok'. 
> > > > > > -Krutika > > > > > > > > > > > > *From: *"Atin Mukherjee" > > > *To: *"Krutika Dhananjay" , "Gluster Devel" > > > > > > *Cc: *"Gaurav Garg" , "Aravinda" > > > , "Kotresh Hiremath Ravishankar" > > > > > > *Sent: *Tuesday, September 22, 2015 8:39:56 PM > > > *Subject: *Re: Spurious failures > > > > > > Krutika, > > > > > > ./tests/bugs/glusterd/bug-1238706-daemons-stop-on-peer-cleanup.t is > > > already a part of bad_tests () in both mainline and 3.7. Could you > > > provide me the link where this test has failed explicitly and that has > > > caused the regression to fail? > > > > > > ~Atin > > > > > > > > > On 09/22/2015 07:27 PM, Krutika Dhananjay wrote: > > > > Hi, > > > > > > > > The following tests seem to be failing consistently on the build > > > > machines in Linux: > > > > > > > > ./tests/bugs/glusterd/bug-1238706-daemons-stop-on-peer-cleanup.t .. > > > > > > > > ./tests/geo-rep/georep-basic-dr-rsync.t .. > > > > > > > > ./tests/geo-rep/georep-basic-dr-tarssh.t .. > > > > > > > > I have added these tests into the tracker etherpad. > > > > > > > > Meanwhile could someone from geo-rep and glusterd team take a look or > > > > perhaps move them to bad tests list? > > > > > > > > > > > > Here is one place where the three tests failed: > > > > > > > https://build.gluster.org/job/rackspace-regression-2GB-triggered/14421/consoleFull > > > > > > > > -Krutika > > > > > > > > > > > ___ Gluster-devel mailing list Gluster-devel@gluster.org http://www.gluster.org/mailman/listinfo/gluster-devel
Re: [Gluster-devel] Spurious failures
Hi Krutika,

Looks like the prerequisites for geo-replication to work have changed on slave21.

Hi Michael,

Could you please check that the following settings are in place on all the Linux regression machines? Or provide me with the root password so that I can verify.

1. Set up passwordless SSH for the root user.
2. Add the line below to /root/.bashrc. This is required because geo-rep runs "gluster --version" via ssh, and ssh can't find the gluster PATH otherwise.

   export PATH=$PATH:/build/install/sbin:/build/install/bin

Once the above settings are done, the following script should output the proper version.

---
#!/bin/bash

function SSHM()
{
    ssh -q \
        -oPasswordAuthentication=no \
        -oStrictHostKeyChecking=no \
        -oControlMaster=yes \
        "$@";
}

function cmd_slave()
{
    local cmd_line;
    cmd_line=$(cat <<EOF
function do_verify() {
    ver=\$(gluster --version | head -1 | cut -f2 -d " ");
    echo \$ver;
};
source /etc/profile && do_verify;
EOF
);
    echo $cmd_line;
}

HOST=$1
cmd_line=$(cmd_slave);
ver=`SSHM root@$HOST bash -c "'$cmd_line'"`;
echo $ver
---

I could verify for slave32:

[root@slave32 ~]# vi /tmp/gver.sh
[root@slave32 ~]# /tmp/gver.sh slave32
3.8dev

Please help me in verifying the same for all the Linux regression machines.

- Original Message - > From: "Kotresh Hiremath Ravishankar" > To: "Krutika Dhananjay" > Cc: "Atin Mukherjee" , "Gluster Devel" > , "Gaurav Garg" > , "Aravinda" > Sent: Wednesday, September 23, 2015 12:31:12 PM > Subject: Re: Spurious failures > > Hi Krutika, > > It's failing with > > ++ gluster --mode=script --wignore volume geo-rep master > slave21.cloud.gluster.org::slave create push-pem > Gluster version mismatch between master and slave. > > I will look into it. > > Thanks and Regards, > Kotresh H R > > - Original Message - > > From: "Krutika Dhananjay" > > To: "Atin Mukherjee" > > Cc: "Gluster Devel" , "Gaurav Garg" > > , "Aravinda" , > > "Kotresh Hiremath Ravishankar" > > Sent: Tuesday, September 22, 2015 9:03:44 PM > > Subject: Re: Spurious failures > > > > Ah! Sorry. I didn't read that line. :) > > > > Just figured even ./tests/geo-rep/georep-basic-dr-rsync.t is added to bad > > tests list. > > > > So it's just /tests/geo-rep/georep-basic-dr-tarssh.t for now. > > > > Thanks Atin!
> > > > -Krutika > > > > - Original Message - > > > > > From: "Atin Mukherjee" > > > To: "Krutika Dhananjay" > > > Cc: "Gluster Devel" , "Gaurav Garg" > > > , "Aravinda" , "Kotresh Hiremath > > > Ravishankar" > > > Sent: Tuesday, September 22, 2015 8:51:22 PM > > > Subject: Re: Spurious failures > > > > > ./tests/bugs/glusterd/bug-1238706-daemons-stop-on-peer-cleanup.t (Wstat: > > > 0 Tests: 8 Failed: 2) > > > Failed tests: 6, 8 > > > Files=1, Tests=8, 48 wallclock secs ( 0.01 usr 0.01 sys + 0.88 cusr > > > 0.56 csys = 1.46 CPU) > > > Result: FAIL > > > ./tests/bugs/glusterd/bug-1238706-daemons-stop-on-peer-cleanup.t: bad > > > status 1 > > > *Ignoring failure from known-bad test > > > ./tests/bugs/glusterd/bug-1238706-daemons-stop-on-peer-cleanup.t* > > > [11:24:16] ./tests/bugs/glusterd/bug-1242543-replace-brick.t .. ok > > > 17587 ms > > > [11:24:16] > > > All tests successful > > > > > On 09/22/2015 08:46 PM, Krutika Dhananjay wrote: > > > > https://build.gluster.org/job/rackspace-regression-2GB-triggered/14421/consoleFull > > > > > > > > Ctrl + f 'not ok'. > > > > > > > > -Krutika > > > > > > > > > > > > > > > > *From: *"Atin Mukherjee" > > > > *To: *"Krutika Dhananjay" , "Gluster Devel" > > > > > > > > *Cc: *"Gaurav Garg" , "Aravinda" > > > > , "Kotresh Hiremath Ravishankar" > > > > > > > > *Sent: *Tuesday, September 22, 2015 8:39:56 PM > > > > *Subject: *Re: Spurious failures > > > > > > > > Krutika, > > > > > > > > ./tests/bugs/glusterd/bug-1238706-daemons-stop-on-peer-cleanup.t is > > > > already a part of bad_tests () in both mainline and 3.7. Could you > > > > provide me the link where this test has failed explicitly and that has > > > > caused the regression to fail? 
> > > > > > > > ~Atin > > > > > > > > > > > > On 09/22/2015 07:27 PM, Krutika Dhananjay wrote: > > > > > Hi, > > > > > > > > > > The following tests seem to be failing consistently on the build > > > > > machines in Linux: > > > > > > > > > > ./tests/bugs/glusterd/bug-1238706-daemons-stop-on-peer-cleanup.t .. > > > > > > > > > > ./tests/geo-rep/georep-basic-dr-rsync.t .. > > > > > > > > > > ./tests/geo-rep/georep-basic-dr-tarssh.t .. > > > > > > > > > > I have added these tests into the tracker etherpad. > > > > > > > > > > Meanwhile could someone from geo-rep and glusterd team take a look or > > > > > perhaps move them to bad tests list? > > > > > > > > > > > > > > > Here is one place where the three tests failed: > > > > > > > > > https://build.gluster.org/job/rackspace-regression-2GB-triggered/14421/consoleFull > > > > > > > > > > -Krutika > > > > > > > > > > > > > > > > ___ Gluster-devel mailing list Gluster-devel@gluster.org http://www.gluster.org/mailman/listinfo/gluster-devel
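For context, the version-extraction step used by the verification script in this thread (`gluster --version | head -1 | cut -f2 -d " "`) can be exercised locally without SSH. A minimal sketch; the function name and the sample version string are illustrative, not captured from the build machines:

```shell
#!/bin/bash
# Extract the version token from the first line of `gluster --version`
# output, exactly as the do_verify helper in the thread does
# (second space-delimited field of the first line).
parse_gluster_version() {
    echo "$1" | head -1 | cut -f2 -d " "
}

# Illustrative first line; a real one comes from running `gluster --version`.
sample="glusterfs 3.8dev built on Sep 23 2015 12:00:00"
parse_gluster_version "$sample"   # prints: 3.8dev
```

If two machines print different tokens here, geo-rep's "Gluster version mismatch between master and slave" check fires.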
[Gluster-devel] Regarding message ids for quota
Hi,

While working on http://review.gluster.org/#/c/12217/ , I figured that the message ids in the doxygen-friendly comments in quota-messages.h are supposed to start from 12 whereas they seem to start from 11. This would cause them to conflict with the message ids for the upcall xlator. What this means is that extracting the messages with the doxygen tool for documentation purposes could leave message-ids in the 11+ series with multiple diagnoses and recommended actions. I have reported this issue at https://bugzilla.redhat.com/show_bug.cgi?id=1265531 .

Thanks,
Krutika
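The conflict described above is an interval-overlap problem: each component reserves a contiguous block of message ids starting at some base, and two components must never share a block. A small sketch of the overlap check; the bases and segment sizes below are made-up illustration values, not the actual glusterfs allocations:

```shell
#!/bin/bash
# Return "yes" if two message-id segments [base, base+count) overlap,
# "no" otherwise. Two half-open intervals are disjoint iff one ends
# at or before the other begins.
segments_overlap() {
    local b1=$1 c1=$2 b2=$3 c2=$4
    if [ $((b1 + c1)) -le "$b2" ] || [ $((b2 + c2)) -le "$b1" ]; then
        echo no
    else
        echo yes
    fi
}

# Hypothetical allocations: two components mistakenly using the same base.
segments_overlap 110000 1000 110000 1000   # prints: yes
# After moving one component to its own base, the ranges are disjoint.
segments_overlap 110000 1000 120000 1000   # prints: no
```

Running a check like this over every component's (base, count) pair would catch a collision like the quota/upcall one at build time.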
Re: [Gluster-devel] Spurious failures
On Wednesday, September 23, 2015 at 03:25 -0400, Kotresh Hiremath Ravishankar wrote:

> Hi Krutika, > > Looks like the prerequisites for geo-replication to work is changed > in slave21 > > Hi Michael,

Hi,

> Could you please check following settings are made in all linux regression > machines?

Yeah, I will add it to salt.

> Or provide me with root password so that I can verify.

Root login using a password should be disabled, so no. If that's still working and people use it, that's going to change soon; too many problems with it.

> 1. Setup Passwordless SSH for the root user:

Can you be more explicit on where the user should come from, so I can properly integrate that?

There is something adding lots of lines to /root/.ssh/authorized_keys on the slave, and this makes me quite uncomfortable, so if that's it, I'd rather have it done cleanly, and for that, I need to understand the test and the requirement.

> 2. Add below line in /root/.bashrc. This is required as geo-rep does "gluster > --version" via ssh > and it can't find the gluster PATH via ssh. > export PATH=$PATH:/build/install/sbin:/build/install/bin

I will do this one.

Is georep supposed to work on other platforms like FreeBSD? (FreeBSD does not have bash, so I have to adapt to the local way, but if that's not going to be tested, I'd rather not spend too much time reading the handbook for now.)

> Once above settings are done, the following script should output proper > version. > > --- > #!/bin/bash > > function SSHM() > { > ssh -q \ > -oPasswordAuthentication=no \ > -oStrictHostKeyChecking=no \ > -oControlMaster=yes \ > "$@"; > } > > function cmd_slave() > { > local cmd_line; > cmd_line=$(cat <<EOF > function do_verify() { > ver=\$(gluster --version | head -1 | cut -f2 -d " "); > echo \$ver; > }; > source /etc/profile && do_verify; > EOF > ); > echo $cmd_line; > } > > HOST=$1 > cmd_line=$(cmd_slave); > ver=`SSHM root@$HOST bash -c "'$cmd_line'"`; > echo $ver > - > > I could verify for slave32.
> [root@slave32 ~]# vi /tmp/gver.sh > [root@slave32 ~]# /tmp/gver.sh slave32 > 3.8dev > > Please help me in verifying the same for all the linux regression machines. -- Michael Scherer Sysadmin, Community Infrastructure and Platform, OSAS
Re: [Gluster-devel] Spurious failures
Hi Michael,

Please find my replies below.

>>> Root login using password should be disabled, so no. If that's still >>> working and people use it, that's gonna change soon, too much problems >>> with it.

Ok.

>>>Can you be more explicit on where should the user come from so I can >>>properly integrate that ?

It's just passwordless SSH from root to root on the same host.
1. Generate an ssh key: #ssh-keygen
2. Add it to /root/.ssh/authorized_keys: #ssh-copy-id -i root@host

Requirement by geo-replication: 'ssh root@host' should not ask for a password.

>>>There is something adding lots of line to /root/.ssh/authorized_keys on >>>the slave, and this make me quite unconfortable, so if that's it, I >>>rather have it done cleanly, and for that, I need to understand the >>>test, and the requirement.

Yes, geo-rep is doing that. It adds only one entry per session, but since the tests run continuously for different patches, the entries build up. I will submit a patch to clean them up in the geo-rep test suite itself.

>>>I will do this one.

Thank you!

>>>Is georep supposed to work on other platform like freebsd ? ( because >>>freebsd do not have bash, so I have to adapt to local way, but if that's >>>not gonna be tested, I rather not spend too much time on reading the >>>handbook for now )

As of now it is supported only on Linux; it has known issues on other platforms such as NetBSD...

Thanks and Regards,
Kotresh H R

- Original Message - > From: "Michael Scherer" > To: "Kotresh Hiremath Ravishankar" > Cc: "Krutika Dhananjay" , "Atin Mukherjee" > , "Gaurav Garg" > , "Aravinda" , "Gluster Devel" > > Sent: Wednesday, September 23, 2015 3:30:39 PM > Subject: Re: Spurious failures > > On Wednesday, September 23, 2015 at 03:25 -0400, Kotresh Hiremath Ravishankar wrote: > > Hi Krutika, > > > > Looks like the prerequisites for geo-replication to work is changed > > in slave21 > > > > Hi Michael, > > Hi, > > > Could you please check following settings are made in all linux regression > > machines?
> > Yeah, I will add to salt. > > > Or provide me with root password so that I can verify. > > Root login using password should be disabled, so no. If that's still > working and people use it, that's gonna change soon, too much problems > with it. > > > 1. Setup Passwordless SSH for the root user: > > Can you be more explicit on where should the user come from so I can > properly integrate that ? > > There is something adding lots of line to /root/.ssh/authorized_keys on > the slave, and this make me quite unconfortable, so if that's it, I > rather have it done cleanly, and for that, I need to understand the > test, and the requirement. > > > 2. Add below line in /root/.bashrc. This is required as geo-rep does > > "gluster --version" via ssh > >and it can't find the gluster PATH via ssh. > > export PATH=$PATH:/build/install/sbin:/build/install/bin > > I will do this one. > > Is georep supposed to work on other platform like freebsd ? ( because > freebsd do not have bash, so I have to adapt to local way, but if that's > not gonna be tested, I rather not spend too much time on reading the > handbook for now ) > > > Once above settings are done, the following script should output proper > > version. > > > > --- > > #!/bin/bash > > > > function SSHM() > > { > > ssh -q \ > > -oPasswordAuthentication=no \ > > -oStrictHostKeyChecking=no \ > > -oControlMaster=yes \ > > "$@"; > > } > > > > function cmd_slave() > > { > > local cmd_line; > > cmd_line=$(cat <<EOF > > function do_verify() { > > ver=\$(gluster --version | head -1 | cut -f2 -d " "); > > echo \$ver; > > }; > > source /etc/profile && do_verify; > > EOF > > ); > > echo $cmd_line; > > } > > > > HOST=$1 > > cmd_line=$(cmd_slave); > > ver=`SSHM root@$HOST bash -c "'$cmd_line'"`; > > echo $ver > > - > > > > I could verify for slave32.
> > > > -- > Michael Scherer > Sysadmin, Community Infrastructure and Platform, OSAS
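The root-to-root requirement described above can be checked non-interactively, since BatchMode makes ssh fail instead of prompting for a password. A minimal sketch; the function name is hypothetical:

```shell
#!/bin/bash
# Verify that `ssh root@host` works without a password prompt.
# -o BatchMode=yes forces ssh to fail rather than ask interactively,
# so the check can run unattended on every slave.
check_passwordless_ssh() {
    local host=$1
    if ssh -o BatchMode=yes -o StrictHostKeyChecking=no "root@$host" true 2>/dev/null; then
        echo "passwordless SSH to $host: OK"
    else
        echo "passwordless SSH to $host: FAILED"
        return 1
    fi
}
```

Running `check_passwordless_ssh $(hostname)` on each slave would tell at a glance whether step 1 of the setup is complete.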
Re: [Gluster-devel] Spurious failures
On Wednesday, September 23, 2015 at 06:24 -0400, Kotresh Hiremath Ravishankar wrote: > Hi Michael, > > Please find my replies below. > > >>> Root login using password should be disabled, so no. If that's still > >>> working and people use it, that's gonna change soon, too much problems > >>> with it. > > Ok > > >>>Can you be more explicit on where should the user come from so I can > >>>properly integrate that ? > > It's just passwordless SSH from root to root on the same host. > 1. Generate ssh key: > #ssh-keygen > 2. Add it to /root/.ssh/authorized_keys > #ssh-copy-id -i root@host > > Requirement by geo-replication: > 'ssh root@host' should not ask for password

So, is it ok if I restrict that to be used only on 127.0.0.1?

-- Michael Scherer Sysadmin, Community Infrastructure and Platform, OSAS
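For reference, a restriction like the one proposed above is usually expressed with an authorized_keys `from=` option rather than a separate key. A sketch with placeholder key material; note that, as the follow-up in this thread points out, geo-rep connects to the machine's hostname, so loopback alone may not be enough:

```shell
#!/bin/bash
# Prefix a public key with a from= option so sshd only accepts it
# from the listed source addresses (authorized_keys syntax).
restrict_key_to_loopback() {
    local pubkey=$1
    printf 'from="127.0.0.1,::1" %s\n' "$pubkey"
}

# Placeholder key material, for illustration only.
restrict_key_to_loopback "ssh-rsa AAAAB3...placeholder root@slave21"
# prints: from="127.0.0.1,::1" ssh-rsa AAAAB3...placeholder root@slave21
```

To also satisfy the geo-rep tests, the hostname's resolved address would need to be added to the `from=` list.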
[Gluster-devel] REMINDER: Weekly gluster community meeting today at 1200 UTC
Hi All, At 1200 UTC we will have the weekly Gluster Community meeting. Meeting details: - location: #gluster-meeting on Freenode IRC - date: every Wednesday - time: 12:00 UTC, 14:00 CEST, 17:30 IST (in your terminal, run: date -d "12:00 UTC") - agenda: https://public.pad.fsfe.org/p/gluster-community-meetings Currently the following items are listed: * Roll Call * Status of last week's action items * Gluster 3.7 * Gluster 3.8 * Gluster 3.6 * Gluster 3.5 * Gluster 4.0 * Open Floor - bring your own topic! The last topic has space for additions. If you have a suitable topic to discuss, please add it to the agenda. Regards, Vijay
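The `date -d "12:00 UTC"` tip above generalizes to any zone with GNU date; a small sketch (zone names are standard tzdata identifiers):

```shell
#!/bin/bash
# Print 12:00 UTC expressed in a given time zone (requires GNU date,
# as used in the "in your terminal, run" tip above).
meeting_local_time() {
    TZ=$1 date -d "12:00 UTC" +%H:%M
}

meeting_local_time UTC           # prints: 12:00
meeting_local_time Asia/Kolkata  # prints: 17:30 (IST has a fixed UTC+5:30 offset)
```

The CEST value in the announcement (14:00) only holds while daylight saving is in effect, which is why the announcement pins the meeting to UTC.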
[Gluster-devel] REMINDER: Weekly gluster community meeting about to start
Hi All, In a few minutes from now we will have the regular weekly Gluster Community meeting. Meeting details: - location: #gluster-meeting on Freenode IRC - date: every Wednesday - time: 12:00 UTC, 14:00 CEST, 17:30 IST (in your terminal, run: date -d "12:00 UTC") - agenda: https://public.pad.fsfe.org/p/gluster-community-meetings Currently the following items are listed: * Roll Call * Status of last week's action items * Gluster 3.7 * Gluster 3.8 * Gluster 3.6 * Gluster 3.5 * Gluster 4.0 * Open Floor - bring your own topic! The last topic has space for additions. If you have a suitable topic to discuss, please add it to the agenda. Thanks, Niels
Re: [Gluster-devel] REMINDER: Weekly gluster community meeting about to start
Hi all, thanks for the participation today. In case you missed the meeting, remind yourself to join next week Wednesday at 12:00 UTC. More details in the agenda: https://public.pad.fsfe.org/p/gluster-community-meetings

A lot of action items have been added. If you volunteered (or got volunteered) to take something on, please attend next week's meeting or leave a status note in the agenda.

Thanks, Niels

Meeting summary
1. a. Agenda: https://public.pad.fsfe.org/p/gluster-community-meetings (ndevos, 12:01:38)
2. Roll Call (ndevos, 12:01:43)
3. Action Items of last week (ndevos, 12:04:22)
4. kshlm to check back with misc on the new jenkins slaves (ndevos, 12:04:26)
5. krishnan_p to update Gluster News about Gluster.next progress (ndevos, 12:05:48)
   a. ACTION: krishnan_p will add information about GlusterD-2.0 to the weekly news (ndevos, 12:15:52)
   b. ACTION: krishnan_p and atinmu will remind developers to not work in personal repositories, but request one for github.com/gluster (ndevos, 12:17:24)
   c. ACTION: krishnan_p will send an email to the -devel list about merging the glusterd-2.0 work into the main glusterfs repo (ndevos, 12:18:09)
6. poornimag to send a mail on gluster-devel asking for volunteers to backport glfs_fini patches to release-3.5 (ndevos, 12:18:21)
7. kkeithley will send an email to the list to get opinions on how to close/move EOL'd bugs (ndevos, 12:19:35)
   a. ACTION: kkeithley will reply to his previous email, confirming that End-Of-Life bugs will be closed (ndevos, 12:24:04)
   b. ACTION: kkeithley will close all the EOL'd bugs with a note (ndevos, 12:25:36)
8. jdarcy (and/or others) will post version of the NSR spec "pretty soon" (ndevos, 12:25:51)
9. overclk will get the dht-scalability doc in glusterfs-specs update to the latest design (ndevos, 12:26:49)
10. GlusterFS 3.7 (ndevos, 12:28:01)
   a. https://bugzilla.redhat.com/showdependencytree.cgi?maxdepth=2&id=glusterfs-3.7.4&hide_resolved=1 (ndevos, 12:28:11)
   b. ACTION: kshlm to clean up 3.7.4 tracker bug (hagarth, 12:31:53)
11. GlusterFS 3.6 (ndevos, 12:32:07)
   a. http://download.gluster.org/pub/gluster/glusterfs/3.6/LATEST/ (raghu, 12:32:29)
   b. http://download.gluster.org/pub/gluster/glusterfs/3.6/LATEST/ (hchiramm_, 12:33:52)
   c. if you are on Fedora 22, you can do 'dnf --enablerepo=updates-testing update glusterfs' (ndevos, 12:36:41)
12. GlusterFS 3.5 (ndevos, 12:38:51)
   a. https://bugzilla.redhat.com/showdependencytree.cgi?maxdepth=2&hide_resolved=1&id=glusterfs-3.5.7 (ndevos, 12:38:56)
   b. ACTION: ndevos send out a reminder to the maintainers about more actively enforcing backports of bugfixes (ndevos, 12:48:40)
13. GlusterFS 3.8 (ndevos, 12:48:51)
   a. http://www.gluster.org/pipermail/gluster-devel/2015-September/046791.html (ndevos, 12:49:35)
   b. https://public.pad.fsfe.org/p/gluster-3.8-features (hagarth, 12:49:41)
14. Gluster 4.0 (ndevos, 12:50:56)
   a. https://bugzilla.redhat.com/showdependencytree.cgi?id=glusterfs-4.0 (ndevos, 12:51:14)
15. Open Floor (ndevos, 12:58:11)
16. Testing for various releases (ndevos, 12:58:18)
   a. ACTION: hagarth will add a topic to the agenda for next weeks meeting about release testing (ndevos, 13:00:02)
17. posting http://blog.gluster.org/2015/08/welcome-to-the-new-gluster-community-lead/ to the mailing lists and general introduction for Amye (ndevos, 13:00:13)
   a. ACTION: amye will post http://blog.gluster.org/2015/08/welcome-to-the-new-gluster-community-lead/ to the mailing lists and provide a general intro (hagarth, 13:01:26)
18. Moderator for the next meeting(s) (ndevos, 13:01:58)
   a. hagarth will host next weeks meeting (ndevos, 13:03:08)
   b. Weekly reminder to announce Gluster attendance of events: https://public.pad.fsfe.org/p/gluster-events (ndevos, 13:03:18)
   c. REMINDER to put (even minor) interesting topics on https://public.pad.fsfe.org/p/gluster-weekly-news (ndevos, 13:03:31)

Meeting ended at 13:04:45 UTC (full logs).

Action items
1. krishnan_p will add information about GlusterD-2.0 to the weekly news
2. krishnan_p and atinmu will remind developers to not work in personal repositories, but request one for github.com/gluster
3. krishnan_p will send an email to the -devel list about merging the glusterd-2.0 work into the main glusterfs repo
4. kkeithley will reply to his previous email, confirming that End-Of-Life bugs will be closed
5. kkeithley will close all the EOL'd bugs with a note
6. kshlm to clean up 3.7.4 tracker bug
7. ndevos send out a reminder to the maintainers about more actively enforcing backports of bugfixes
8. hagarth will add a topic to the agenda for next weeks meeting about release testing
9. amye will post http://blog.gluster.org/2015/08/welcome-to-the-new-glust
Re: [Gluster-devel] question about how to handle bugs filed against End-Of-Life versions of glusterfs
At today's Gluster Community meeting[3] it was decided — taking into consideration the opinions expressed on the gluster-users and gluster-devel mailing lists — that all bugs filed against end-of-life versions of glusterfs will simply be closed, i.e. CLOSED/WONTFIX. Bug submitters may reopen a bug if they believe it still exists in a newer version, updating the bug to indicate the newer version in which it exists.

On 09/16/2015 09:05 AM, Kaleb S. KEITHLEY wrote: > Hi, > > A question was raised during Tuesday's (2015-09-15) Gluster Bug Triage > meeting[1], and discussed today (2015-09-16) at the Gluster Community > meeting[2] about how to handle currently open bugs and new bugs filed > against GlusterFS versions which have reached end-of-life (EOL). > > As an example, Fedora simply closes any remaining open bugs when the > version reaches EOL. It's incumbent on the person who filed the bug to > reopen it if it still exists in newer versions. > > Option A is: create a new set of 'umbrella' Versions, e.g. > 3.4-end-of-life, 3.3-end-of-life, etc.; _reassign_ all bugs filed > against 3.4.x to 3.4.x-end-of-life; then delete the 3.4.x Versions from > bugzilla. Any new bugs filed against, e.g., any 3.4.x version are > assigned to 3.4-end-of-life. > > Option B is: create a new set of 'umbrella' Versions, e.g. > 3.4-end-of-life, 3.3-end-of-life, etc.; _close_ all bugs filed against > 3.4.x; then delete the 3.4.x Versions from bugzilla. Any new bugs filed > against, e.g., any 3.4.x version are assigned to 3.4-end-of-life. > > The main difference is whether existing bugs are reassigned or simply > closed. In either case, if a new bug is filed against an EOL version then > during bug triage the bug will be checked to see if it still exists in > newer versions and reassigned to the later version, or closed as > appropriate. > > You may reply to this email — Reply-to: is set to > mailto:gluster-devel@gluster.org — to register your opinion. > > Thanks, > > > [1] > http://meetbot.fedoraproject.org/gluster-meeting/2015-09-15/gluster-meeting.2015-09-15-12.02.log.html > [2] > http://meetbot.fedoraproject.org/gluster-meeting/2015-09-16/gluster-meeting.2015-09-16-12.01.log.html

[3] http://meetbot.fedoraproject.org/gluster-meeting/2015-09-23/gluster-meeting.2015-09-23-12.01.log.html

-- Kaleb
[Gluster-devel] Gluster.Next design discussion - Call for participation
Hello All,

We are going to have two full days of design discussion sessions on the different initiatives planned for Gluster.Next, from 28th September to 29th September. We look forward to your participation and valuable feedback during this time.

The list of initiatives and their respective time slots is as follows:
1. DHT 2.0 (9:00 EDT - 12:00 EDT, 28th Sept 2015)
2. Heketi & GlusterD 2.0 (13:00 EDT - 16:00 EDT, 28th Sept 2015)
3. NSR (9:00 EDT - 12:00 EDT, 29th Sept 2015)
4. Eventing (13:00 EDT - 16:00 EDT, 29th Sept 2015)

Google hangout invite links:
For 28th Sept - https://plus.google.com/u/0/events/ccrvfih3l0fs4r9d3rv47ads1rg
For 29th Sept - https://plus.google.com/u/0/events/cemtimvs7i3nittgm876c0mn0cs

Also, we will be opening up a collaborative editor (to be shared shortly) to capture important notes from these discussions.

We look forward to seeing you all.

Regards, Atin
[Gluster-devel] Introducing Amye Scavarda - Gluster Community Lead
Hello! You've seen me lurking around the IRC channels and a bit in meetings - but I realized that even though we'd put out blog posts earlier in August when I joined, I hadn't officially introduced myself! I'm Amye Scavarda and I'll be your community catalyst. As a community lead, you'll see me a lot online and at events. I'll be helping to define our community goals with the technical leadership, helping to create pathways for new contributors, and helping our existing community grow. In practice, I'm interested in infrastructure, documentation and how we can gather together to make Gluster successful. There are some blog posts out announcing that I'm here, both on Gluster.org and Red Hat's blog: http://blog.gluster.org/2015/08/welcome-to-the-new-gluster-community-lead/ http://community.redhat.com/blog/2015/08/welcome-to-the-new-gluster-community-lead/ I'm looking for your stories and ideas for how you'd grow Gluster. Drop me an email here or find me in IRC, I'm 'amye'. (Cross-posting to: gluster-us...@gluster.org, gluster-devel@gluster.org to make sure I don't miss anyone.) -- Amye Scavarda | a...@redhat.com | Gluster Community Lead
Re: [Gluster-devel] Spurious failures
Hi,

>>>So, it is ok if I restrict that to be used only on 127.0.0.1 ?

I think no; the test cases use 'H0' to create volumes:

H0=${H0:=`hostname`};

Geo-rep expects passwordless SSH to 'H0'.

Thanks and Regards,
Kotresh H R

- Original Message - > From: "Michael Scherer" > To: "Kotresh Hiremath Ravishankar" > Cc: "Krutika Dhananjay" , "Atin Mukherjee" > , "Gaurav Garg" > , "Aravinda" , "Gluster Devel" > > Sent: Wednesday, 23 September, 2015 5:05:58 PM > Subject: Re: Spurious failures > > On Wednesday, September 23, 2015 at 06:24 -0400, Kotresh Hiremath Ravishankar wrote: > > Hi Michael, > > > > Please find my replies below. > > > > >>> Root login using password should be disabled, so no. If that's still > > >>> working and people use it, that's gonna change soon, too much problems > > >>> with it. > > > > Ok > > > > >>>Can you be more explicit on where should the user come from so I can > > >>>properly integrate that ? > > > > It's just PasswordLess SSH from root to root on to same host. > > 1. Generate ssh key: > > #ssh-keygen > > 2. Add it to /root/.ssh/authorized_keys > > #ssh-copy-id -i root@host > > > > Requirement by geo-replication: > > 'ssh root@host' should not ask for password > > So, it is ok if I restrict that to be used only on 127.0.0.1 ? > > -- > Michael Scherer > Sysadmin, Community Infrastructure and Platform, OSAS
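A minimal sketch of the H0 resolution quoted above, mirroring the harness default (the function name is hypothetical; the `${H0:=...}` expansion is the one from the thread):

```shell
#!/bin/bash
# Mirror of the test-harness default quoted above: H0 keeps any value
# already set in the environment, otherwise falls back to `hostname`.
resolve_h0() {
    H0=${H0:=$(hostname)}
    echo "$H0"
}
```

So the authorized_keys entry has to permit connections addressed to root@$(hostname), which is why a 127.0.0.1-only restriction would break the geo-rep tests.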