[Gluster-devel] GlusterFS-3.4.4beta1 and GlusterFS-3.5.1beta1 RPMs on download.gluster.org
GlusterFS-3.4.4beta1 and GlusterFS-3.5.1beta1 RPMs for el5-7 (RHEL, CentOS, etc.) and Fedora 19-21 are now available in the YUM repos at:

  http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.4.4beta1/
  http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.5.1beta1/

-- Kaleb

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel
[Gluster-devel] Running regression tests in docker container.
Hi All,

I have been playing with Docker for a while to see if we can use it to improve our regression test times. Here is the GitHub repo with the Dockerfile and related scripts:
https://github.com/raghavendra-talur/Gluster-in-Docker/tree/master

Out of the 191 tests we have in the tests folder right now, these are the tests that fail for me in a container, after removing most of the Docker-related issues like the container not having some packages installed, or the host not having some kernel modules (I know this is why Docker is meant more for applications than for system software like ours):

* tests/basic/rpm.t -- Failed test: 4. I think we can ignore this for now, considering that we had some problems with it recently.
* tests/bugs/886998/strict-readdir.t -- Failed tests: 10, 24. Looks like a bug in readdirp fuse support.
* tests/features/glupy.t -- Failed tests: 2, 6. "Failed to import" error in init of glupy.c; should be a minor issue with respect to path.
* tests/bugs/bug-802417.t -- Failed tests: 27-28. Changelog and AFR related. Yet to investigate.
* tests/bugs/bug-808400-dist.t -- Failed test: 10. Failure in fcntl test.
* tests/bugs/bug-808400-repl.t -- Failed test: 10. Failure in fcntl test.
* tests/bugs/bug-808400-stripe.t -- Failed test: 10. Failure in fcntl test.
* tests/bugs/bug-808400.t -- Failed test: 12. Failure in fcntl test. Yet to investigate.
* tests/bugs/bug-990028.t -- Failed tests: 164-165. I have no idea why this is failing; it indicates a bug in glusterfs code :(

Running all the tests in one container is of course no faster. Running with a combination of xargs and ssh across 5 containers, I was able to get these times:

  On local machine:  ~90 mins
  With 5 containers: ~45 mins

Although I admit my system was really slow with all 5 containers running tests. Sending this mail so that anyone who wants to test gluster in a container can take it from here. I will meanwhile continue to debug failures until I get them down to 0!

-- Thanks!
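The xargs fan-out described above could be sketched roughly as follows. This is a hypothetical stand-in, not the actual scripts from the repo: the demo test files and the echo runner are placeholders, and the commented-out `docker exec` line shows where a real per-container runner would go.

```shell
#!/bin/sh
# Sketch: fan .t test files out to up to 5 parallel workers with xargs.
# The demo-tests directory and the echo are stand-ins; a real runner
# would ssh or docker-exec into one of the 5 containers instead.
mkdir -p demo-tests
touch demo-tests/a.t demo-tests/b.t demo-tests/c.t   # placeholder tests

# -n1: one test per invocation; -P5: at most 5 invocations in parallel.
find demo-tests -name '*.t' -print0 |
    xargs -0 -n1 -P5 sh -c 'echo "would run $0 in a container"'

# A real version might look like (hypothetical container name):
#   xargs -0 -n1 -P5 sh -c 'docker exec gluster-worker prove -v "$0"'
```

The ordering of the output lines is nondeterministic, since up to five workers run concurrently.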
Raghavendra Talur | Red Hat Storage Developer | Bangalore | +918039245176
[Gluster-devel] Switching from OpenSSL to PolarSSL
One of my tasks for 3.6 is to update/improve the SSL code. Long ago, I had decided that part of the next major update to SSL should include switching from OpenSSL to PolarSSL. Why? Two reasons.

(1) The OpenSSL API is awful, and poorly documented to boot. We have to go through some rather unpleasant contortions in the socket module to accommodate it. AFAICT, this would be less of a problem with PolarSSL.

(2) OpenSSL is less secure. Since I had this thought, I've been paying attention to which SSL implementations respond first to each exploit. For BEAST and CRIME, PolarSSL was first. OpenSSL was consistently last, with GnuTLS and NSS in between. Heartbleed was an *entirely OpenSSL-specific* bug that never affected PolarSSL in the first place.

The BSD-style OpenSSL license has also caused some concern before. While those concerns have been minor, PolarSSL is straight GPLv2+, so even those should go away. The one negative I've found is that, while PolarSSL is in Fedora 20 and EPEL, it doesn't seem to have made it into RHEL (including RHEL7) yet.

So, before I expend a ton of effort replacing this code, does anyone else think it shouldn't be done, and that the enhancements should be made to the current OpenSSL code instead?
Re: [Gluster-devel] Switching from OpenSSL to PolarSSL
I think the main question regards CentOS support, with further questions about Debian/Ubuntu support. If we have to ship PolarSSL packages with our releases to support major distros, is that too much of a burden?

-JM
Re: [Gluster-devel] Switching from OpenSSL to PolarSSL
> I think the main question regards CentOS support, with further questions about Debian/Ubuntu support.

I believe CentOS would leverage the EPEL support. PolarSSL is already packaged for Debian (Wheezy) and Ubuntu (Trusty), so we should be set.

> If we have to ship PolarSSL packages with our releases to support major distros, is that too much of a burden?

Nothing we haven't had to deal with before, but so far I think RHEL (without EPEL) is the only distro that even has a problem. This being an upstream mailing list, I think I can safely say that one downstream's problems don't change what's best for the project as a whole.
Re: [Gluster-devel] Switching from OpenSSL to PolarSSL
On 05/27/2014 09:43 AM, Jeff Darcy wrote:
> So, before I expend a ton of effort replacing this code, does anyone else think it shouldn't be done and that the enhancements should be made to the current OpenSSL code instead?

The most compelling arguments — to me — are the speed with which things are fixed and the lack of the Heartbleed vuln. PolarSSL appears to be the clear winner on both counts.

My only concern is its 'pure' GPLv2+ license — is that compatible with our 'GPLv2 or LGPLv3+' license? I'm not sure why the BSD-style OpenSSL license was an issue; perhaps just the GPL compatibility due to what looks like a weak advertising clause. In any event, it's license didn't pollute our code. Do we need to have our attorney bless the change?

-- Kaleb
Re: [Gluster-devel] Switching from OpenSSL to PolarSSL
On 05/27/2014 11:00 AM, Kaleb KEITHLEY wrote:
> In any event, it's license didn't pollute our code. Do we need to have our attorney bless the change?

_its_ license didn't pollute our code.

-- Kaleb
Re: [Gluster-devel] Switching from OpenSSL to PolarSSL
> My only concern is its 'pure' GPLv2+ license — is that compatible with our 'GPLv2 or LGPLv3+' license?

The answer that matters, as always, is that only a real lawyer can say. My own uninformed guess is that we would be considered a derivative of them (instead of vice versa), and thus we'd be OK as long as we had GPLv2 as a (not necessarily only) option. The thornier question is what would happen with a piece of code that was derivative of both. In that case it might need to be GPLv2 exactly to be redistributable with both, but -- again -- that's for the lawyers to say.

> I'm not sure why the BSD-style OpenSSL license was an issue; perhaps just the GPL compatibility due to what looks like a weak advertising clause. In any event, it's license didn't pollute our code. Do we need to have our attorney bless the change?

We'd need to do that anyway, as we should with every incorporation of new code under new licenses. On the other hand, I'd be amazed if PolarSSL's license, from the same family as ours, was more problematic than OpenSSL's unique one.
Re: [Gluster-devel] Switching from OpenSSL to PolarSSL
It has a specific exclusion for GPL 3.0.
https://polarssl.org/foss-license-exception

On May 27, 2014 8:01:51 AM PDT, Kaleb KEITHLEY kkeit...@redhat.com wrote:
> _its_ license didn't pollute our code.
Re: [Gluster-devel] Switching from OpenSSL to PolarSSL
Also, IANAL, but their code is GPL compatible, even if they are being dicks and requiring copyright assignment for their proprietary dual licensing. But at least their code is GPL compatible, which OpenSSL's is not. So I say +1, use this.

On Tue, May 27, 2014 at 11:44 AM, Joe Julian j...@julianfamily.org wrote:
> It has a specific exclusion for GPL 3.0.
> https://polarssl.org/foss-license-exception
Re: [Gluster-devel] Switching from OpenSSL to PolarSSL
The only thing that I find that may be an issue for some use cases is
https://polarssl.org/kb/generic/is-polarssl-fips-certified

On May 27, 2014 6:43:54 AM PDT, Jeff Darcy jda...@redhat.com wrote:
> So, before I expend a ton of effort replacing this code, does anyone else think it shouldn't be done and that the enhancements should be made to the current OpenSSL code instead?
Re: [Gluster-devel] Switching from OpenSSL to PolarSSL
> The only thing that I find that may be an issue for some use cases is https://polarssl.org/kb/generic/is-polarssl-fips-certified

Not meaning to sound flippant, but if we ever did seek FIPS certification, I suspect that our choice of SSL library would be the least of our worries.
Re: [Gluster-devel] Spurious failure ./tests/bugs/bug-1049834.t [16]
CC gluster-devel
Pranith

- Original Message -
From: Pranith Kumar Karampuri pkara...@redhat.com
To: Avra Sengupta aseng...@redhat.com
Sent: Wednesday, May 28, 2014 6:42:53 AM
Subject: Spurious failure ./tests/bugs/bug-1049834.t [16]

hi Avra,
Could you look into it?

Patch              == http://review.gluster.com/7889/1
Author             == Avra Sengupta aseng...@redhat.com
Build triggered by == amarts
Build-url          == http://build.gluster.org/job/regression/4586/consoleFull
Download-log-at    == http://build.gluster.org:443/logs/regression/glusterfs-logs-20140527:14:51:09.tgz
Test written by    == Avra Sengupta aseng...@redhat.com

./tests/bugs/bug-1049834.t [16]

#!/bin/bash

. $(dirname $0)/../include.rc
. $(dirname $0)/../cluster.rc
. $(dirname $0)/../volume.rc
. $(dirname $0)/../snapshot.rc

cleanup;

1  TEST verify_lvm_version
2  TEST launch_cluster 2
3  TEST setup_lvm 2

4  TEST $CLI_1 peer probe $H2
5  EXPECT_WITHIN $PROBE_TIMEOUT 1 peer_count

6  TEST $CLI_1 volume create $V0 $H1:$L1 $H2:$L2
7  EXPECT 'Created' volinfo_field $V0 'Status'

8  TEST $CLI_1 volume start $V0
9  EXPECT 'Started' volinfo_field $V0 'Status'

# Setting the snap-max-hard-limit to 4
10 TEST $CLI_1 snapshot config $V0 snap-max-hard-limit 4
PID_1=$!
wait $PID_1

# Creating 3 snapshots on the volume (which is the soft-limit)
11 TEST create_n_snapshots $V0 3 $V0_snap
12 TEST snapshot_n_exists $V0 3 $V0_snap

# Creating the 4th snapshot on the volume and expecting it to be created
# but with the deletion of the oldest snapshot, i.e. the 1st snapshot
13 TEST $CLI_1 snapshot create ${V0}_snap4 ${V0}
14 TEST snapshot_exists 1 ${V0}_snap4
15 TEST ! snapshot_exists 1 ${V0}_snap1
***16 TEST $CLI_1 snapshot delete ${V0}_snap4
17 TEST $CLI_1 snapshot create ${V0}_snap1 ${V0}
18 TEST snapshot_exists 1 ${V0}_snap1

# Deleting the 4 snaps
#TEST delete_n_snapshots $V0 4 $V0_snap
#TEST ! snapshot_n_exists $V0 4 $V0_snap

cleanup;

Pranith
[Gluster-devel] Spurious failure in ./tests/bugs/bug-948686.t [14, 15, 16]
hi KP,
Could you look into it?

Patch              == http://review.gluster.com/7889/1
Author             == Avra Sengupta aseng...@redhat.com
Build triggered by == amarts
Build-url          == http://build.gluster.org/job/regression/4586/consoleFull
Download-log-at    == http://build.gluster.org:443/logs/regression/glusterfs-logs-20140527:14:51:09.tgz
Test written by    == Krishnan Parthasarathi kpart...@redhat.com

./tests/bugs/bug-948686.t [14, 15, 16]

#!/bin/bash

. $(dirname $0)/../include.rc
. $(dirname $0)/../volume.rc
. $(dirname $0)/../cluster.rc

function check_peers {
    $CLI_1 peer status | grep 'Peer in Cluster (Connected)' | wc -l
}

cleanup;

# setup cluster and test volume
1  TEST launch_cluster 3;        # start 3-node virtual cluster
2  TEST $CLI_1 peer probe $H2;   # peer probe server 2 from server 1 cli
3  TEST $CLI_1 peer probe $H3;   # peer probe server 3 from server 1 cli
4  EXPECT_WITHIN $PROBE_TIMEOUT 2 check_peers;
5  TEST $CLI_1 volume create $V0 replica 2 $H1:$B1/$V0 $H1:$B1/${V0}_1 $H2:$B2/$V0 $H3:$B3/$V0
6  TEST $CLI_1 volume start $V0
7  TEST glusterfs --volfile-server=$H1 --volfile-id=$V0 $M0

# kill a node
8  TEST kill_node 3

# modify volume config to see change in volume-sync
9  TEST $CLI_1 volume set $V0 write-behind off

# add some files to the volume to see effect of volume-heal cmd
10 TEST touch $M0/{1..100};
11 TEST $CLI_1 volume stop $V0;
12 TEST $glusterd_3;
13 EXPECT_WITHIN $PROBE_TIMEOUT 2 check_peers;
***14 TEST $CLI_3 volume start $V0;
***15 TEST $CLI_2 volume stop $V0;
***16 TEST $CLI_2 volume delete $V0;

cleanup;

17 TEST glusterd;
18 TEST $CLI volume create $V0 $H0:$B0/$V0
19 TEST $CLI volume start $V0

pkill glusterd;
pkill glusterfsd;

20 TEST glusterd
21 TEST $CLI volume status $V0

cleanup;
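For chasing spurious failures like the ones reported above, a small retry loop is often enough to measure how often a test flakes. This is a hedged sketch: `prove` and the test path in the usage comment come from the glusterfs source tree, and the loop count is arbitrary.

```shell
#!/bin/sh
# Rerun a command repeatedly, stopping at the first failure; useful
# for reproducing a spurious (intermittent) regression-test failure.
retry_until_fail() {
    cmd=$1
    max=$2
    i=1
    while [ "$i" -le "$max" ]; do
        if ! sh -c "$cmd" >/dev/null 2>&1; then
            echo "failed on run $i"
            return 1
        fi
        i=$((i + 1))
    done
    echo "passed all $max runs"
    return 0
}

# Against the test in question, something like:
#   retry_until_fail "prove -v ./tests/bugs/bug-948686.t" 20
retry_until_fail "true" 3
```

If the test never fails locally, the problem is more likely timing-dependent on the build slaves than a plain logic bug.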
Re: [Gluster-devel] IP addresses in peer status/pool list
This is surprising. There hasn't been any change that could cause this kind of behaviour, AFAICT. Normally, the peer from which the probes were performed would be shown by IP on the other nodes, but a reverse probe with hostnames should fix this. Could you share more information on your setup (version, logs, etc.)?

~kaushal

On Wed, May 28, 2014 at 5:27 AM, Paul Cuzner pcuz...@redhat.com wrote:
> Hi,
> Can anyone shed any light on why I see IP addresses in peer status or pool list output instead of names? In clusters where the names were used in the probes, and volumes were built with node names, I see IPs in the peer status output. This has been the case for a while -- I'd just like to understand why. I've even seen some output where 2 nodes are listed by IP instead of names.
>
> Cheers,
> Paul C
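The reverse probe mentioned above could be run along these lines. This is a sketch under stated assumptions: `gluster peer probe` and `gluster peer status` are the real CLI commands, but the hostname is a placeholder and a live glusterd is required on both nodes, so the helper falls back to a dry run when the CLI isn't available.

```shell
#!/bin/sh
# From any OTHER node in the pool, probe the first node back by its
# hostname, so glusterd records the name instead of the IP it saw on
# the original incoming connection. Hostname below is a placeholder.
probe_back() {
    host=$1
    if command -v gluster >/dev/null 2>&1; then
        gluster peer probe "$host" && gluster peer status
    else
        echo "dry-run: gluster peer probe $host"
    fi
}

probe_back node1.example.com
```

After a successful reverse probe, `gluster peer status` should list the node by hostname rather than IP.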