Re: [sage-devel] Help us test Cython?
On Sat, Aug 21, 2010 at 6:17 PM, Mitesh Patel wrote:

> On 08/12/2010 05:01 PM, Mitesh Patel wrote:
>> On 08/11/2010 07:19 PM, Robert Bradshaw wrote:
>>> Thanks for looking into this, another data point is really helpful. I put a vanilla Sage in hudson and for a while it was passing all of its tests every time, then all of the sudden it started failing too. Very strange... For now I've resorted to starting up Sage in a loop (as the segfault always happened during startup) and am seeing about a 0.5% failure rate (which is the same that I see with a vanilla Sage). Hopefully we can get the parallel testing to work much more reliably so we can use it as a good indicator in our Cython build farm to keep people from breaking Sage (and I'm honestly really surprised we haven't run into these issues during release management as well...)
>>
>> Strange, indeed.
>>
>> Could the startup segfault be distinct from and not present among the doctest faults? Testing about 2500 files with a 0.5% startup failure rate would give us about 13 extra, random(?) faults per run, which we don't see.
>
> Or should that be 0.05%?

Sorry, yes, 0.05%.

> Some more data: I batch-tested 'sage -c "quit"' about 1.2e6 times with vanilla 4.5.3.alpha1 on sage.math. Each run ended with exit status 0 and no cores or discernible errors.

Nice. Not sure what changed, but sounds like something did.

- Robert

--
To post to this group, send an email to sage-devel@googlegroups.com
To unsubscribe from this group, send an email to sage-devel+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/sage-devel
URL: http://www.sagemath.org
Re: [sage-devel] Help us test Cython?
On 08/12/2010 05:01 PM, Mitesh Patel wrote:

> On 08/11/2010 07:19 PM, Robert Bradshaw wrote:
>> Thanks for looking into this, another data point is really helpful. I put a vanilla Sage in hudson and for a while it was passing all of its tests every time, then all of the sudden it started failing too. Very strange... For now I've resorted to starting up Sage in a loop (as the segfault always happened during startup) and am seeing about a 0.5% failure rate (which is the same that I see with a vanilla Sage). Hopefully we can get the parallel testing to work much more reliably so we can use it as a good indicator in our Cython build farm to keep people from breaking Sage (and I'm honestly really surprised we haven't run into these issues during release management as well...)
>
> Strange, indeed.
>
> Could the startup segfault be distinct from and not present among the doctest faults? Testing about 2500 files with a 0.5% startup failure rate would give us about 13 extra, random(?) faults per run, which we don't see.

Or should that be 0.05%?

Some more data: I batch-tested 'sage -c "quit"' about 1.2e6 times with vanilla 4.5.3.alpha1 on sage.math. Each run ended with exit status 0 and no cores or discernible errors.
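[Editorial note: a quick sanity check of the estimate discussed above. This is a sketch added for illustration, not part of the thread; the function name is made up.]

```python
# Expected number of extra startup faults per parallel doctest run,
# assuming each of the ~2500 test files starts Sage independently
# and fails with the given probability.
def expected_faults(n_files, failure_rate):
    return n_files * failure_rate

# At the 0.5% rate first quoted in the thread:
print(expected_faults(2500, 0.005))   # 12.5 -- the "about 13" above

# At the corrected 0.05% rate: few enough extra faults per run
# to plausibly go unnoticed.
print(expected_faults(2500, 0.0005))  # 1.25
```

This matches the thread's arithmetic: at 0.5% one would expect roughly 13 visible faults per run, which were not observed, while 0.05% predicts about one.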
Re: [sage-devel] Help us test Cython?
William Stein or I wrote:

>> So sage-ptest may be responsible, somehow, for the faults that leave cores in apparently random directories and perhaps also for random test failures.
>>
>> Here's one problem: When we test
>>
>> /path/to/foo.py
>>
>> sage-doctest writes
>>
>> SAGE_TESTDIR/.doctest_foo.py
>>
>> runs the new file through 'python', and deletes it. This can cause collisions when we test in parallel multiple files with the same basename, e.g., __init__, all, misc, conf, constructor, morphism, index, tests, homset, element, twist, tutorial, sagetex, crystals, cartesian_product, template, ring, etc. (There's a similar problem with testing non-library files, which sage-doctest first effectively copies to SAGE_TESTDIR.) We could instead use
>>
>> .doctest_path_to_foo.py
>>
>> or
>>
>> .doctest_path_to_foo_ABC123.py
>>
>> where ABC123 is unique. With the latter we could run multiple simultaneous tests of the same file. I'll open a ticket or maybe use an existing one.

I've opened http://trac.sagemath.org/sage_trac/ticket/9739 specifically for this problem.

> Could we use the tempfile module instead of using SAGE_TESTDIR? The tempfile module makes files and directories by default that are unique and are *designed* to live on a fast filesystem, which gets cleaned regularly.
>
> sage: import tempfile

Sure. I've added a comment about this to #9739.
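[Editorial note: the tempfile suggestion above can be sketched concretely. The doctest_scratch_file helper below is hypothetical, not part of sage-doctest; it only illustrates how tempfile's unique names avoid the basename collisions described in the quoted message.]

```python
import os
import tempfile

def doctest_scratch_file(source_path):
    """Create a unique scratch file for doctesting source_path.

    Unlike a fixed SAGE_TESTDIR/.doctest_foo.py, two parallel tests of
    files both named foo.py can never collide: tempfile.mkstemp
    guarantees a fresh, unique filename for every call.
    """
    base = os.path.splitext(os.path.basename(source_path))[0]
    fd, path = tempfile.mkstemp(prefix=".doctest_%s_" % base, suffix=".py")
    os.close(fd)
    return path

# Two simultaneous tests of same-basename files get distinct files:
a = doctest_scratch_file("/path/one/misc.py")
b = doctest_scratch_file("/path/two/misc.py")
assert a != b
os.remove(a)
os.remove(b)
```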
Re: [sage-devel] Help us test Cython?
On 08/12/2010 05:18 PM, William Stein wrote:

> On Thu, Aug 12, 2010 at 3:01 PM, Mitesh Patel wrote:
>> I made 20 copies of 4.5.3.alpha0. In each copy, I ran the long doctest suite serially with the alternate tester, which I modified to rename any new cores after each test. All copies end up with a "stealth" core in
>>
>> data/extcode/genus2reduction/
>>
>> and point to
>>
>> sage/interfaces/genus2reduction.py
>>
>> as the only source of this core. I'll open a ticket.

This is now http://trac.sagemath.org/sage_trac/ticket/9738
Re: [sage-devel] Help us test Cython?
On Thu, Aug 12, 2010 at 3:01 PM, Mitesh Patel wrote:

> On 08/11/2010 07:19 PM, Robert Bradshaw wrote:
>> [...]
>
> I made 20 copies of 4.5.3.alpha0. In each copy, I ran the long doctest suite serially with the alternate tester, which I modified to rename any new cores after each test. All copies end up with a "stealth" core in
>
> data/extcode/genus2reduction/
>
> and point to
>
> sage/interfaces/genus2reduction.py
>
> as the only source of this core. I'll open a ticket.
>
>>> Moreover, the only failed doctests are in startup.py (19 times) and decorate.py (once). The logs maketestlong-j20-* don't explicitly mention segmentation faults.
>>>
>>> So sage-ptest may be responsible, somehow, for the faults that leave cores in apparently random directories and perhaps also for random test failures.
>
> Here's one problem: When we test
>
> /path/to/foo.py
>
> sage-doctest writes
>
> SAGE_TESTDIR/.doctest_foo.py
>
> runs the new file through 'python', and deletes it. This can cause collisions when we test in parallel multiple files with the same basename, e.g., __init__, all, misc, conf, constructor, morphism, index, tests, homset, element, twist, tutorial, sagetex, crystals, cartesian_product, template, ring, etc. (There's a similar problem with testing non-library files, which sage-doctest first effectively copies to SAGE_TESTDIR.) We could instead use
>
> .doctest_path_to_foo.py
>
> or
>
> .doctest_path_to_foo_ABC123.py
>
> where ABC123 is unique. With the latter we could run multiple simultaneous tests of the same file. I'll open a ticket or maybe use an existing one.

Could we use the tempfile module instead of using SAGE_TESTDIR? The tempfile module makes files and directories by default that are unique and are *designed* to live on a fast filesystem, which gets cleaned regularly.

sage: import tempfile

William
Re: [sage-devel] Help us test Cython?
On 08/11/2010 07:19 PM, Robert Bradshaw wrote:

> On Wed, Aug 11, 2010 at 4:15 PM, Mitesh Patel wrote:
>> On 08/11/2010 03:25 AM, Mitesh Patel wrote:
>>> [...]
>>>
>>> So the problem could indeed lie elsewhere, e.g., in the doctesting system.
>>>
>>> I'll try to run some experiments with the attached make-based parallel doctester.
>>
>> With the alternate tester and vanilla 4.5.3.alpha0, I get "only" the 20 cores in
>>
>> data/extcode/genus2reduction/
>>
>> (I don't know yet how many times and with which file(s) this fault happens during each long doctest run nor whether certain files reproducibly trigger the fault.)

I made 20 copies of 4.5.3.alpha0. In each copy, I ran the long doctest suite serially with the alternate tester, which I modified to rename any new cores after each test. All copies end up with a "stealth" core in

data/extcode/genus2reduction/

and point to

sage/interfaces/genus2reduction.py

as the only source of this core. I'll open a ticket.

>> Moreover, the only failed doctests are in startup.py (19 times) and decorate.py (once). The logs maketestlong-j20-* don't explicitly mention segmentation faults.
>>
>> So sage-ptest may be responsible, somehow, for the faults that leave cores in apparently random directories and perhaps also for random test failures.

Here's one problem: When we test

/path/to/foo.py

sage-doctest writes

SAGE_TESTDIR/.doctest_foo.py

runs the new file through 'python', and deletes it. This can cause collisions when we test in parallel multiple files with the same basename, e.g., __init__, all, misc, conf, constructor, morphism, index, tests, homset, element, twist, tutorial, sagetex, crystals, cartesian_product, template, ring, etc. (There's a similar problem with testing non-library files, which sage-doctest first effectively copies to SAGE_TESTDIR.) We could instead use

.doctest_path_to_foo.py

or

.doctest_path_to_foo_ABC123.py

where ABC123 is unique. With the latter we could run multiple simultaneous tests of the same file. I'll open a ticket or maybe use an existing one.

>> The Cython beta, at least, may be off the hook. But I'll check with the alternate tester.

I get the same results with 4.5.3.alpha0 + the Cython beta (rev 3629).

> Thanks for looking into this, another data point is really helpful. I put a vanilla Sage in hudson and for a while it was passing all of its tests every time, then all of the sudden it started failing too. Very strange... For now I've resorted to starting up Sage in a loop (as the segfault always happened during startup) and am seeing about a 0.5% failure rate (which is the same that I see with a vanilla Sage). Hopefully we can get the parallel testing to work much more reliably so we can use it as a good indicator in our Cython build farm to keep people from breaking Sage (and I'm honestly really surprised we haven't run into these issues during release management as well...)

Strange, indeed.

Could the startup segfault be distinct from and not present among the doctest faults? Testing about 2500 files with a 0.5% startup failure rate would give us about 13 extra, random(?) faults per run, which we don't see.
Re: [sage-devel] Help us test Cython?
On Wed, Aug 11, 2010 at 4:15 PM, Mitesh Patel wrote:

> On 08/11/2010 03:25 AM, Mitesh Patel wrote:
>> [...]
>>
>> I get similar results on sage.math with the released 4.5.3.alpha0:
>>
>> $ cd /scratch/mpatel/tmp/cython/sage-4.5.3.alpha0-segs
>> $ find -name core_x\* -type f | wc
>>      30      30    1315
>> $ grep egmentation ptestlong-j20-*log | wc
>>      11      70     939
>>
>> So the problem could indeed lie elsewhere, e.g., in the doctesting system.
>>
>> I'll try to run some experiments with the attached make-based parallel doctester.
>
> With the alternate tester and vanilla 4.5.3.alpha0, I get "only" the 20 cores in
>
> data/extcode/genus2reduction/
>
> (I don't know yet how many times and with which file(s) this fault happens during each long doctest run nor whether certain files reproducibly trigger the fault.)
>
> Moreover, the only failed doctests are in startup.py (19 times) and decorate.py (once). The logs maketestlong-j20-* don't explicitly mention segmentation faults.
>
> So sage-ptest may be responsible, somehow, for the faults that leave cores in apparently random directories and perhaps also for random test failures.
>
> The Cython beta, at least, may be off the hook. But I'll check with the alternate tester.

Thanks for looking into this, another data point is really helpful. I put a vanilla Sage in hudson and for a while it was passing all of its tests every time, then all of the sudden it started failing too. Very strange... For now I've resorted to starting up Sage in a loop (as the segfault always happened during startup) and am seeing about a 0.5% failure rate (which is the same that I see with a vanilla Sage). Hopefully we can get the parallel testing to work much more reliably so we can use it as a good indicator in our Cython build farm to keep people from breaking Sage (and I'm honestly really surprised we haven't run into these issues during release management as well...)

- Robert
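[Editorial note: the startup-in-a-loop measurement described above can be sketched as follows. This is an illustration added to the archive, not Robert's actual Hudson setup; the function name and the stand-in command are made up.]

```python
import subprocess

def startup_failure_count(cmd, runs):
    """Run cmd (a list of argv strings) `runs` times and count
    non-zero exits, e.g. to estimate a startup segfault rate.

    For the measurement described in the thread, cmd would be
    something like ["./sage", "-c", "quit"].
    """
    failures = 0
    for _ in range(runs):
        result = subprocess.run(cmd, stdout=subprocess.DEVNULL,
                                stderr=subprocess.DEVNULL)
        if result.returncode != 0:
            failures += 1
    return failures

# Stand-in command ("true" always exits 0), so no failures:
print(startup_failure_count(["true"], 5))  # 0
```

Dividing the returned count by the number of runs gives the observed failure rate (about 0.005 in the Hudson runs described above).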
Re: [sage-devel] Help us test Cython?
On 08/11/2010 03:25 AM, Mitesh Patel wrote:

> On 08/05/2010 11:12 PM, Robert Bradshaw wrote:
>> On Thu, Aug 5, 2010 at 4:26 AM, Mitesh Patel wrote:
>>> On 08/04/2010 03:10 AM, Robert Bradshaw wrote:
>>>> So it looks like you're getting segfaults all over the place as well... Hmm... Could you test with https://sage.math.washington.edu:8091/hudson/job/sage-build/163/artifact/cython-devel.spkg ?
>>>
>>> With the new package, I get similar results, i.e., apparently random segfaults. The core score is about the same:
>>
>> Well, it's clear there's something going on. What about testing on a plain-vanilla sage (with the old Cython)?
>
> I get similar results on sage.math with the released 4.5.3.alpha0:
>
> $ cd /scratch/mpatel/tmp/cython/sage-4.5.3.alpha0-segs
> $ find -name core_x\* -type f | wc
>      30      30    1315
> $ grep egmentation ptestlong-j20-*log | wc
>      11      70     939
>
> So the problem could indeed lie elsewhere, e.g., in the doctesting system.
>
> I'll try to run some experiments with the attached make-based parallel doctester.

With the alternate tester and vanilla 4.5.3.alpha0, I get "only" the 20 cores in

data/extcode/genus2reduction/

(I don't know yet how many times and with which file(s) this fault happens during each long doctest run nor whether certain files reproducibly trigger the fault.)

Moreover, the only failed doctests are in startup.py (19 times) and decorate.py (once). The logs maketestlong-j20-* don't explicitly mention segmentation faults.

So sage-ptest may be responsible, somehow, for the faults that leave cores in apparently random directories and perhaps also for random test failures.

The Cython beta, at least, may be off the hook. But I'll check with the alternate tester.
Re: [sage-devel] Help us test Cython?
On 08/05/2010 11:12 PM, Robert Bradshaw wrote:

> On Thu, Aug 5, 2010 at 4:26 AM, Mitesh Patel wrote:
>> On 08/04/2010 03:10 AM, Robert Bradshaw wrote:
>>> [...]
>>>
>>> So it looks like you're getting segfaults all over the place as well... Hmm... Could you test with https://sage.math.washington.edu:8091/hudson/job/sage-build/163/artifact/cython-devel.spkg ?
>>
>> With the new package, I get similar results, i.e., apparently random segfaults. The core score is about the same:
>
> Well, it's clear there's something going on. What about testing on a plain-vanilla sage (with the old Cython)?

I get similar results on sage.math with the released 4.5.3.alpha0:

$ cd /scratch/mpatel/tmp/cython/sage-4.5.3.alpha0-segs
$ find -name core_x\* -type f | wc
     30      30    1315
$ grep egmentation ptestlong-j20-*log | wc
     11      70     939

So the problem could indeed lie elsewhere, e.g., in the doctesting system.

I'll try to run some experiments with the attached make-based parallel doctester.

# -*- Makefile -*-
# Make-based Sage parallel doctester.
#
# Usage: make -j [jobs] -B -f Makefile.doctest
#
# Other options: -i (ignore errors), -k (keep going),
#                -l [load] (load average)

TEST = sage -t -long
#TEST = . local/bin/sage-env && local/bin/sage-doctest -long

FINDDIRS = devel/sage/sage \
    devel/sage/doc/common devel/sage/doc/en devel/sage/doc/fr \
    local/lib/python2.6/site-packages/sagenb-0.8.2-py2.6.egg

FINDOPTS = -name \*.sage -o -name \*.py -o -name \*.pyx \
    -o -name \*.pxi -o -name \*.tex -o -name \*.rst

# 'shell' replaces newlines in the output with spaces:
FILES = $(shell find ${FINDDIRS} ${FINDOPTS})

all: $(FILES)

$(FILES):
	@${TEST} "$@"
#	@echo "Exit code for $@: $$?"
Re: [sage-devel] Help us test Cython?
On Thu, Aug 5, 2010 at 4:26 AM, Mitesh Patel wrote:

> On 08/04/2010 03:10 AM, Robert Bradshaw wrote:
>> On Sat, Jul 31, 2010 at 2:51 AM, Mitesh Patel wrote:
>>> On 07/30/2010 01:54 AM, Craig Citro wrote:
>>>> [...]
>>>
>>> Below are some results from parallel doctests on sage.math. [...]
>>>
>>> Should I test differently?
>>
>> So it looks like you're getting segfaults all over the place as well... Hmm... Could you test with https://sage.math.washington.edu:8091/hudson/job/sage-build/163/artifact/cython-devel.spkg ?
>
> With the new package, I get similar results, i.e.
Re: [sage-devel] Help us test Cython?
On 08/04/2010 03:10 AM, Robert Bradshaw wrote:

> On Sat, Jul 31, 2010 at 2:51 AM, Mitesh Patel wrote:
>> On 07/30/2010 01:54 AM, Craig Citro wrote:
>>> So we're currently working on a long-overdue release of Cython with all kinds of snazzy new features. However, our automated testing system seems to keep turning up sporadic segfaults when running the sage doctest suite. This is obviously bad, but we're having a hard time reproducing this -- they seem to be *very* occasional failures while starting up sage, and thus far the only consistent appearance has been *within* our automated testing system (hudson). We've got a pile of dumped cores, which have mostly led us to the conclusions that (1) the problem occurs at a seemingly random point, so we should suspect some sort of memory corruption, and (2) sage does a *whole* lot of stuff when it starts up. ;)
>>> [...]
>>> After that, run the full test suite as many times as you're willing, hopefully with and without parallel doctesting (i.e. sage -tp). Then let us know what you turn up -- lots of random failures, or does everything pass? Points for machines we can ssh into and generated core files (ulimit -c unlimited), and even more points for anyone seeing consistent/repeatable failures. I'd also be very interested in reports that you've run the test suite N times with no failures.
>>
>> Below are some results from parallel doctests on sage.math. In each of
>>
>> /mnt/usb1/scratch/mpatel/tmp/sage-4.4.4-cython
>> /mnt/usb1/scratch/mpatel/tmp/sage-4.5.1-cython
>>
>> I have run (or am still running)
>>
>> ./tester | tee -a ztester &
>>
>> where 'tester' contains
>>
>> #!/bin/bash
>> ulimit -c unlimited
>>
>> RUNS=20
>> for I in `seq 1 $RUNS`;
>> do
>>   LOG="ptestlong-j20-$I.log"
>>   if [ ! -f "$LOG" ]; then
>>     echo "Run $I of $RUNS"
>>     nice ./sage -tp 20 -long -sagenb devel/sage > "$LOG" 2>&1
>>
>>     # grep -A2 -B1 dumped "$LOG"
>>     ls -lsFtr `find -type f -name core` | grep core | tee -a "$LOG"
>>
>>     # Rename each core to core_cy.$I
>>     rm -f _ren
>>     find -name core -type f | awk '{print "mv "$0" "$0"_cy.'${I}'"}' > _ren
>>     . _ren
>>   fi
>> done
>>
>> The log files and cores (renamed to core_cy.1, etc.) are still in/under SAGE_ROOT.
>>
>> I don't know if the results tell you more than you already know. For example,
>>
>> sage-4.5.1-cython$ for x in `\ls ptestlong-j20-*`; do grep "doctests failed" $x; done | grep -v "0 doctests failed" | sort | uniq -c
>>       1 sage -t -long devel/sage/sage/graphs/graph.py # 2 doctests failed
>>      19 sage -t -long devel/sage/sage/tests/startup.py # 1 doctests failed
>>
>> But
>>
>> sage-4.5.1-cython$ find -name core_cy\* | sort
>> ./data/extcode/genus2reduction/core_cy.1
>> ./data/extcode/genus2reduction/core_cy.10
>> ./data/extcode/genus2reduction/core_cy.11
>> ./data/extcode/genus2reduction/core_cy.12
>> ./data/extcode/genus2reduction/core_cy.13
>> ./data/extcode/genus2reduction/core_cy.14
>> ./data/extcode/genus2reduction/core_cy.15
>> ./data/extcode/genus2reduction/core_cy.16
>> ./data/extcode/genus2reduction/core_cy.17
>> ./data/extcode/genus2reduction/core_cy.18
>> ./data/extcode/genus2reduction/core_cy.19
>> ./data/extcode/genus2reduction/core_cy.2
>> ./data/extcode/genus2reduction/core_cy.20
>> ./data/extcode/genus2reduction/core_cy.3
>> ./data/extcode/genus2reduction/core_cy.4
>> ./data/extcode/genus2reduction/core_cy.5
>> ./data/extcode/genus2reduction/core_cy.6
>> ./data/extcode/genus2reduction/core_cy.7
>> ./data/extcode/genus2reduction/core_cy.8
>> ./data/extcode/genus2reduction/core_cy.9
>> ./devel/sage-main/doc/fr/tutorial/core_cy.17
>> ./devel/sage-main/sage/algebras/core_cy.17
>> ./devel/sage-main/sage/categories/core_cy.4
>> ./devel/sage-main/sage/categories/core_cy.6
>> ./devel/sage-main/sage/combinat/root_system/core_cy.12
>> ./devel/sage-main/sage/databases/core_cy.1
>> ./devel/sage-main/sage/databases/core_cy.18
>> ./devel/sage-main/sage/ext/core_cy.18
>> ./devel/sage-main/sage/groups/matrix_gps/core_cy.5
>> ./devel/sage-main/sage/gsl/core_cy.4
>> ./devel/sage-main/sage/misc/core_cy.10
>> ./devel/sage-main/sage/misc/core_cy.17
>> ./devel/sage-main/sage/misc/core_cy.2
>> ./devel/sage-main/sage/modular/abvar/core_cy.7
>> ./devel/sage-main/sage/plot/plot3d/core_cy.19
>> ./devel/sage-main/sage/rings/core_cy.20
>> ./local/lib/python2.6/site-packages/sagenb-0.8.1-py2.6.egg/sagenb/testing/tests/core_cy.19
>>
>> Should I test differently?
>
> So it looks like you're getting segfaults all over the place as well... Hmm... Could you test with https://sage.math.washington.edu:8091/hudson/job/sage-build/163/artifact/cython-devel.spkg ?

With the new package, I get similar results, i.e., apparently random segfaults. The core score is about the same:

$ find -name core_cy2.\* | sort
./data/extcode/genus2reduction/core_cy2.1
./data/extcode/genus2reduction/
Re: [sage-devel] Help us test Cython?
On Sat, Jul 31, 2010 at 2:51 AM, Mitesh Patel wrote: > On 07/30/2010 01:54 AM, Craig Citro wrote: >> So we're currently working on a long-overdue release of Cython with >> all kinds of snazzy new features. However, our automated testing >> system seems to keep turning up sporadic segfaults when running the >> sage doctest suite. This is obviously bad, but we're having a hard >> time reproducing this -- they seem to be *very* occasional failures >> while starting up sage, and thus far the only consistent appearance >> has been *within* our automated testing system (hudson). We've got a >> pile of dumped cores, which have mostly led us to the conclusions that >> (1) the problem occurs at a seemingly random point, so we should >> suspect some sort of memory corruption, and (2) sage does a *whole* >> lot of stuff when it starts up. ;) >> [...] >> After that, run the full test suite as many times as you're willing, >> hopefully with and without parallel doctesting (i.e. sage -tp). Then >> let us know what you turn up -- lots of random failures, or does >> everything pass? Points for machines we can ssh into and generated >> core files (ulimit -c unlimited), and even more points for anyone >> seeing consistent/repeatable failures. I'd also be very interested of >> reports that you've run the test suite N times with no failures. > > Below are some results from parallel doctests on sage.math. In each of > > /mnt/usb1/scratch/mpatel/tmp/sage-4.4.4-cython > /mnt/usb1/scratch/mpatel/tmp/sage-4.5.1-cython > > I have run (or am still running) > > ./tester | tee -a ztester & > > where 'tester' contains > > #!/bin/bash > ulimit -c unlimited > > RUNS=20 > for I in `seq 1 $RUNS`; > do > LOG="ptestlong-j20-$I.log" > if [ ! 
-f "$LOG" ]; then > echo "Run $I of $RUNS" > nice ./sage -tp 20 -long -sagenb devel/sage > "$LOG" 2>&1 > > # grep -A2 -B1 dumped "$LOG" > ls -lsFtr `find -type f -name core` | grep core | tee -a "$LOG" > > # Rename each core to core_cy.$I > rm -f _ren > find -name core -type f | awk '{print "mv "$0" "$0"_cy.'${I}'"}' >> _ren > . _ren > fi > done > > The log files and cores (renamed to core_cy.1, etc.) are still in/under > SAGE_ROOT. > > I don't know if the results tell you more than you already know. For > example, > > sage-4.5.1-cython$ for x in `\ls ptestlong-j20-*`; do grep "doctests > failed" $x; done | grep -v "0 doctests failed" | sort | uniq -c > 1 sage -t -long devel/sage/sage/graphs/graph.py # 2 > doctests failed > 19 sage -t -long devel/sage/sage/tests/startup.py # 1 > doctests failed > > But > > sage-4.5.1-cython$ find -name core_cy\* | sort > ./data/extcode/genus2reduction/core_cy.1 > ./data/extcode/genus2reduction/core_cy.10 > ./data/extcode/genus2reduction/core_cy.11 > ./data/extcode/genus2reduction/core_cy.12 > ./data/extcode/genus2reduction/core_cy.13 > ./data/extcode/genus2reduction/core_cy.14 > ./data/extcode/genus2reduction/core_cy.15 > ./data/extcode/genus2reduction/core_cy.16 > ./data/extcode/genus2reduction/core_cy.17 > ./data/extcode/genus2reduction/core_cy.18 > ./data/extcode/genus2reduction/core_cy.19 > ./data/extcode/genus2reduction/core_cy.2 > ./data/extcode/genus2reduction/core_cy.20 > ./data/extcode/genus2reduction/core_cy.3 > ./data/extcode/genus2reduction/core_cy.4 > ./data/extcode/genus2reduction/core_cy.5 > ./data/extcode/genus2reduction/core_cy.6 > ./data/extcode/genus2reduction/core_cy.7 > ./data/extcode/genus2reduction/core_cy.8 > ./data/extcode/genus2reduction/core_cy.9 > ./devel/sage-main/doc/fr/tutorial/core_cy.17 > ./devel/sage-main/sage/algebras/core_cy.17 > ./devel/sage-main/sage/categories/core_cy.4 > ./devel/sage-main/sage/categories/core_cy.6 > ./devel/sage-main/sage/combinat/root_system/core_cy.12 > 
./devel/sage-main/sage/databases/core_cy.1 > ./devel/sage-main/sage/databases/core_cy.18 > ./devel/sage-main/sage/ext/core_cy.18 > ./devel/sage-main/sage/groups/matrix_gps/core_cy.5 > ./devel/sage-main/sage/gsl/core_cy.4 > ./devel/sage-main/sage/misc/core_cy.10 > ./devel/sage-main/sage/misc/core_cy.17 > ./devel/sage-main/sage/misc/core_cy.2 > ./devel/sage-main/sage/modular/abvar/core_cy.7 > ./devel/sage-main/sage/plot/plot3d/core_cy.19 > ./devel/sage-main/sage/rings/core_cy.20 > ./local/lib/python2.6/site-packages/sagenb-0.8.1-py2.6.egg/sagenb/testing/tests/core_cy.19 > > Should I test differently? So it looks like you're getting segfaults all over the place as well... Hmm... Could you test with https://sage.math.washington.edu:8091/hudson/job/sage-build/163/artifact/cython-devel.spkg ? - Robert -- To post to this group, send an email to sage-devel@googlegroups.com To unsubscribe from this group, send an email to sage-devel+unsubscr...@googlegroups.com For more options, visit this group at http://groups.google.com/group/sage-devel URL: http://www.sagemath.org
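Since the segfaults in this thread cluster around Sage startup, a small stress loop is a cheap way to estimate the per-start failure rate outside the full doctest run. The sketch below is illustrative only: `startup_loop` is a made-up helper, and the defaults (`N=100`, `./sage -c quit`, enabling core dumps with `ulimit -c unlimited`) are assumptions to adjust for your setup.

```shell
# Sketch: run a command N times and report how many runs exited nonzero.
# CMD and N are placeholders; override them for your machine.
ulimit -c unlimited 2>/dev/null   # let any crash leave a core file

startup_loop() {                  # usage: startup_loop N command [args...]
    n=$1; shift
    fails=0
    i=1
    while [ "$i" -le "$n" ]; do
        "$@" >/dev/null 2>&1 || fails=$((fails + 1))
        i=$((i + 1))
    done
    echo "failures: $fails/$n"
}

startup_loop "${N:-100}" ${CMD:-./sage -c quit}
```

Note that at a failure rate on the order of 0.05% per start, thousands of iterations may be needed before a single crash becomes likely.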
Re: [sage-devel] Help us test Cython?
On 07/30/2010 01:54 AM, Craig Citro wrote: > So we're currently working on a long-overdue release of Cython with > all kinds of snazzy new features. However, our automated testing > system seems to keep turning up sporadic segfaults when running the > sage doctest suite. This is obviously bad, but we're having a hard > time reproducing this -- they seem to be *very* occasional failures > while starting up sage, and thus far the only consistent appearance > has been *within* our automated testing system (hudson). We've got a > pile of dumped cores, which have mostly led us to the conclusions that > (1) the problem occurs at a seemingly random point, so we should > suspect some sort of memory corruption, and (2) sage does a *whole* > lot of stuff when it starts up. ;) > [...] > After that, run the full test suite as many times as you're willing, > hopefully with and without parallel doctesting (i.e. sage -tp). Then > let us know what you turn up -- lots of random failures, or does > everything pass? Points for machines we can ssh into and generated > core files (ulimit -c unlimited), and even more points for anyone > seeing consistent/repeatable failures. I'd also be very interested of > reports that you've run the test suite N times with no failures. Below are some results from parallel doctests on sage.math. In each of /mnt/usb1/scratch/mpatel/tmp/sage-4.4.4-cython /mnt/usb1/scratch/mpatel/tmp/sage-4.5.1-cython I have run (or am still running) ./tester | tee -a ztester & where 'tester' contains #!/bin/bash ulimit -c unlimited RUNS=20 for I in `seq 1 $RUNS`; do LOG="ptestlong-j20-$I.log" if [ ! -f "$LOG" ]; then echo "Run $I of $RUNS" nice ./sage -tp 20 -long -sagenb devel/sage > "$LOG" 2>&1 # grep -A2 -B1 dumped "$LOG" ls -lsFtr `find -type f -name core` | grep core | tee -a "$LOG" # Rename each core to core_cy.$I rm -f _ren find -name core -type f | awk '{print "mv "$0" "$0"_cy.'${I}'"}' > _ren . _ren fi done The log files and cores (renamed to core_cy.1, etc.) 
are still in/under SAGE_ROOT. I don't know if the results tell you more than you already know. For example, sage-4.5.1-cython$ for x in `\ls ptestlong-j20-*`; do grep "doctests failed" $x; done | grep -v "0 doctests failed" | sort | uniq -c 1 sage -t -long devel/sage/sage/graphs/graph.py # 2 doctests failed 19 sage -t -long devel/sage/sage/tests/startup.py # 1 doctests failed But sage-4.5.1-cython$ find -name core_cy\* | sort ./data/extcode/genus2reduction/core_cy.1 ./data/extcode/genus2reduction/core_cy.10 ./data/extcode/genus2reduction/core_cy.11 ./data/extcode/genus2reduction/core_cy.12 ./data/extcode/genus2reduction/core_cy.13 ./data/extcode/genus2reduction/core_cy.14 ./data/extcode/genus2reduction/core_cy.15 ./data/extcode/genus2reduction/core_cy.16 ./data/extcode/genus2reduction/core_cy.17 ./data/extcode/genus2reduction/core_cy.18 ./data/extcode/genus2reduction/core_cy.19 ./data/extcode/genus2reduction/core_cy.2 ./data/extcode/genus2reduction/core_cy.20 ./data/extcode/genus2reduction/core_cy.3 ./data/extcode/genus2reduction/core_cy.4 ./data/extcode/genus2reduction/core_cy.5 ./data/extcode/genus2reduction/core_cy.6 ./data/extcode/genus2reduction/core_cy.7 ./data/extcode/genus2reduction/core_cy.8 ./data/extcode/genus2reduction/core_cy.9 ./devel/sage-main/doc/fr/tutorial/core_cy.17 ./devel/sage-main/sage/algebras/core_cy.17 ./devel/sage-main/sage/categories/core_cy.4 ./devel/sage-main/sage/categories/core_cy.6 ./devel/sage-main/sage/combinat/root_system/core_cy.12 ./devel/sage-main/sage/databases/core_cy.1 ./devel/sage-main/sage/databases/core_cy.18 ./devel/sage-main/sage/ext/core_cy.18 ./devel/sage-main/sage/groups/matrix_gps/core_cy.5 ./devel/sage-main/sage/gsl/core_cy.4 ./devel/sage-main/sage/misc/core_cy.10 ./devel/sage-main/sage/misc/core_cy.17 ./devel/sage-main/sage/misc/core_cy.2 ./devel/sage-main/sage/modular/abvar/core_cy.7 ./devel/sage-main/sage/plot/plot3d/core_cy.19 ./devel/sage-main/sage/rings/core_cy.20 
./local/lib/python2.6/site-packages/sagenb-0.8.1-py2.6.egg/sagenb/testing/tests/core_cy.19 Should I test differently?
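Raw `find` listings like the one above are easier to scan when the cores are tallied per directory. A sketch (GNU findutils assumed; `core_cy*` matches the renamed cores, and `core_tally` is a made-up helper name):

```shell
# Sketch: count core_cy* files per directory, most affected directories first.
core_tally() {    # usage: core_tally <root-dir>
    find "$1" -type f -name 'core_cy*' \
        | xargs -r -n1 dirname \
        | sort | uniq -c | sort -rn
}

core_tally "${ROOT:-.}"   # ROOT is a placeholder for SAGE_ROOT
```

On the listing above this would immediately surface data/extcode/genus2reduction as the dominant crash site.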
Re: [sage-devel] Help us test Cython?
On Fri, Jul 30, 2010 at 1:44 AM, Mitesh Patel wrote: > On 07/30/2010 01:54 AM, Craig Citro wrote: >> So we're currently working on a long-overdue release of Cython with >> all kinds of snazzy new features. However, our automated testing >> system seems to keep turning up sporadic segfaults when running the >> sage doctest suite. This is obviously bad, but we're having a hard >> time reproducing this -- they seem to be *very* occasional failures >> while starting up sage, and thus far the only consistent appearance >> has been *within* our automated testing system (hudson). We've got a >> pile of dumped cores, which have mostly led us to the conclusions that >> (1) the problem occurs at a seemingly random point, so we should >> suspect some sort of memory corruption, and (2) sage does a *whole* >> lot of stuff when it starts up. ;) >> >> So we'd love to see if other people see these same failures. Anyone >> want to try out the new cython? You can grab all the files you need >> here: >> >> http://sage.math.washington.edu/home/craigcitro/cython-0.13-beta/ >> >> There's a new spkg and 6 patches against the sage library. You can add >> the patches, sage -i the spkg, and then do a sage -ba, and voila! you >> should have a sage running the bleeding edge cython. (If that doesn't >> build, it means I forgot some patch somewhere -- there's a working >> sage-4.4.4 with the new cython in /scratch/craigcitro/cy-work/fcubed >> on sage.math if anyone wants to root around.) > > I haven't done any testing yet, but in order to get './sage -ba' to > finish, I applied a seventh patch (attached), which I discovered with > > hg diff -R /scratch/craigcitro/cy-work/fcubed/devel/sage Thanks! Yes, you'd probably need that too. 
- Robert
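For anyone following along, Craig's recipe (apply the library patches, install the spkg, rebuild) can be scripted. The sketch below only echoes the commands by default (`DRY_RUN=1`), and every file name in it is a hypothetical placeholder; use the actual patch and spkg names from the download page.

```shell
#!/bin/sh
# Sketch of the install steps; all file names below are hypothetical.
set -e
DRY_RUN=${DRY_RUN:-1}

run() {    # echo the command in a dry run, otherwise execute it
    if [ "$DRY_RUN" = "1" ]; then
        echo "+ $*"
    else
        "$@"
    fi
}

for p in patches/*.patch; do              # the 6 (or 7) library patches
    run hg -R devel/sage import "$p"
done
run ./sage -i cython-0.13-beta.spkg       # assumed spkg file name
run ./sage -ba                            # rebuild the Sage library
```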
Re: [sage-devel] Help us test Cython?
On 07/30/2010 01:54 AM, Craig Citro wrote: > So we're currently working on a long-overdue release of Cython with > all kinds of snazzy new features. However, our automated testing > system seems to keep turning up sporadic segfaults when running the > sage doctest suite. This is obviously bad, but we're having a hard > time reproducing this -- they seem to be *very* occasional failures > while starting up sage, and thus far the only consistent appearance > has been *within* our automated testing system (hudson). We've got a > pile of dumped cores, which have mostly led us to the conclusions that > (1) the problem occurs at a seemingly random point, so we should > suspect some sort of memory corruption, and (2) sage does a *whole* > lot of stuff when it starts up. ;) > > So we'd love to see if other people see these same failures. Anyone > want to try out the new cython? You can grab all the files you need > here: > > http://sage.math.washington.edu/home/craigcitro/cython-0.13-beta/ > > There's a new spkg and 6 patches against the sage library. You can add > the patches, sage -i the spkg, and then do a sage -ba, and voila! you > should have a sage running the bleeding edge cython. (If that doesn't > build, it means I forgot some patch somewhere -- there's a working > sage-4.4.4 with the new cython in /scratch/craigcitro/cy-work/fcubed > on sage.math if anyone wants to root around.) I haven't done any testing yet, but in order to get './sage -ba' to finish, I applied a seventh patch (attached), which I discovered with hg diff -R /scratch/craigcitro/cy-work/fcubed/devel/sage > After that, run the full test suite as many times as you're willing, > hopefully with and without parallel doctesting (i.e. sage -tp). Then > let us know what you turn up -- lots of random failures, or does > everything pass? Points for machines we can ssh into and generated > core files (ulimit -c unlimited), and even more points for anyone > seeing consistent/repeatable failures. 
I'd also be very interested in
> reports that you've run the test suite N times with no failures.

# HG changeset patch
# User Mitesh Patel
# Date 1280479098 25200
# Node ID 82cbd7d1972a74357d6cebda757f42e6a58fde64
# Parent 41eae77c6bd6d1fb54827a96c1a019c036904a46
Update CYTHON_INCLUDE_DIRS

diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -464,7 +464,11 @@ class sage_build_ext(build_ext):
 
 ## Dependency checking
 #
-CYTHON_INCLUDE_DIRS=[ SAGE_LOCAL + '/lib/python/site-packages/Cython/Includes/' ]
+#CYTHON_INCLUDE_DIRS=[ SAGE_LOCAL + '/lib/python/site-packages/Cython/Includes/' ]
+CYTHON_INCLUDE_DIRS=[
+SAGE_LOCAL + '/lib/python/site-packages/Cython/Includes/',
+SAGE_LOCAL + '/lib/python/site-packages/Cython/Includes/Deprecated/'
+]
 
 # matches any dependency
 import re
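The attached patch adds the `Deprecated/` subdirectory to `CYTHON_INCLUDE_DIRS`. A quick sanity check that both directories actually exist in an install tree (a sketch: the path layout follows the patch, and `check_includes` is a made-up helper name):

```shell
# Sketch: report whether the two Cython include dirs from the patch exist.
check_includes() {    # usage: check_includes <SAGE_LOCAL>
    base="$1/lib/python/site-packages/Cython/Includes"
    for d in "$base" "$base/Deprecated"; do
        if [ -d "$d" ]; then
            echo "ok:      $d"
        else
            echo "missing: $d"
        fi
    done
}

check_includes "${SAGE_LOCAL:-./local}"
```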