[sage-combinat-devel] Re: [sage-algebra] Re: Introducing a framework for is_* methods

2010-10-25 Thread Nicolas M. Thiery
Dear Simon,

On Sun, Oct 24, 2010 at 02:34:53PM -0700, Simon King wrote:
 As you may have noticed, I posted on sage-devel and asked if there is
 interest in the automatic dynamical update of categories, at the
 expense of slowing down sage -testall -long by 1.5%. I could imagine
 that many people wouldn't like to slow things down if the only gain is
 a generalisation of an abstract framework. But let us see what people
 think.

I have been swamped by sage-devel lately, but I will have a look! I
would bet on strong opposition, unless the slowdown only concerns
code specifically using the feature.

 In either case, I think that a method update_category is non-
 controversial and clearly a missing feature. Using my experience with
 *automatically* updating the category and subclassing both parent and
 element classes exploiting the abc module, I would certainly be able
 to implement the update_category functionality (simply drop the word
 "automatically" in this sentence...).

Looking forward to that! This feature, especially combined with better
join treatment for Finite/Commutative/..., will be really powerful.

Cheers,
Nicolas

PS: By the way, the pushout construction mechanism overlaps quite a
bit with the functorial constructions in the categories
(Algebras/Subquotients/...). At some point we will need to
investigate exactly how much they overlap, and devise a plan to merge
the two features or at least make them interact smoothly. I know the
general principle of the pushout mechanism, but you have much more
practical experience with it. Would you volunteer to start
this investigation?

--
Nicolas M. Thiéry Isil nthi...@users.sf.net
http://Nicolas.Thiery.name/

-- 
You received this message because you are subscribed to the Google Groups 
sage-combinat-devel group.
To post to this group, send email to sage-combinat-de...@googlegroups.com.
To unsubscribe from this group, send email to 
sage-combinat-devel+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sage-combinat-devel?hl=en.



[sage-combinat-devel] Re: Posets in Sage

2010-10-25 Thread Christian Stump
  Whereas points 2) and 3) are ok, point 1) seems to be broken right
  now.

I hope the bug is now fixed. Could you please recheck?

I also moved all recent work on posets by Franco and myself
together...

Christian




Re: [sage-combinat-devel] error in queue

2010-10-25 Thread Anne Schilling

I am not an expert with that. It seems to me that you added a new module
in a new folder. I am not sure it will fix your problem, but you should
try to declare this new folder (if that is the case)
in /sage/devel/sage-combinat/sage/setup.py.
Look at the lines around line 790 and add your folder there.

For the imports:
from rigged_configurations.all import *   (in all.py in /combinat)
from filename import bla   (in all.py in the new module)

Also don't forget an __init__.py file in your new folder (with just a
space inside this file...)
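The layout described above can be sketched as a shell session. This is only an illustration of the advice, not an existing Sage script; the folder name `rigged_configurations` follows the example in the message, and the contents of `all.py` are placeholders:

```shell
# Create the new package folder with the files the advice asks for.
mkdir -p rigged_configurations
# An (almost) empty __init__.py marks the folder as a Python package.
echo " " > rigged_configurations/__init__.py
# all.py re-exports the module's public names (placeholder contents).
cat > rigged_configurations/all.py <<'EOF'
# from filename import bla
EOF
ls rigged_configurations
```

The parent package's all.py would then add `from rigged_configurations.all import *`, as described above.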


Brilliant! Thank you, this works. I pushed the code.

Cheers,

Anne




[sage-combinat-devel] Re: Posets in Sage

2010-10-25 Thread Christian Stump
Salut,

 ok, order ideal now works. But my procedure order_ideal_lattice is
 still broken. Here is where the problem sits:

 B = Posets.BooleanLattice(4)
 B.order_ideal(B.antichains()[1])

 This does not work. Shouldn't we be able to take the order ideal of
 an antichain?

Can you check if everything works well again?

I undid the improvements, so it is much slower again (which doesn't
matter for small cases), but I didn't know how to solve the problem
properly...




[sage-combinat-devel] sage notebook

2010-10-25 Thread Anne Schilling

Hi,

Sage-combinat might not be the right place to ask this:

I just gave a number theory lecture and did a demonstration using the
sage notebook. However, since the classroom had no internet access and
I had started the session in my office (using sage installed on my laptop),
it then complained that no sage server was found and I could not run any
computations.

Is this a bug? I thought the notebook just runs locally on my machine.

Best,

Anne




[sage-devel] Re: Has the number of doctests fallen ?

2010-10-25 Thread koffie
Are you testing all doctests (i.e. also the long ones)? It could be
that some long doctests have been tagged as such in an update.

Then there is of course still the possibility that a ticket was
merged which improves general Sage performance.

The recently proposed regression testing package would have been
useful right now :).

Kind regards,
Maarten Derickx

On Oct 24, 4:12 pm, Dr. David Kirkby david.kir...@onetel.net
wrote:
 On 10/24/10 02:55 PM, Dan Drake wrote:
  On Sun, 24 Oct 2010 at 08:52AM +0100, David Kirkby wrote:
  It used to take about 1800 seconds to doctest Sage on my Sun Ultra 27
  which runs OpenSolaris. That has now dropped to about 1600 seconds in
  the latest version (4.6.rc0).

  You can use sage -coverageall to give you information on the number of
  doctests. With Sage 4.5, I see:

  Overall weighted coverage score:  82.7%
  Total number of functions:  25764

  With 4.6.rc0, I see:

  Overall weighted coverage score:  84.3%
  Total number of functions:  26592

  More functions, higher coverage -- so more doctests.

  Dan

  --
  ---  Dan Drake
  -  http://mathsci.kaist.ac.kr/~drake
  ---

 Thank you. Interesting. I wonder if one of the longer tests has been dropped?
 For some reason I can now run the tests in 1600 s or so, instead of 1800 s.

 Anyway, I guess it is better than the computer slowing down!!

 dave

-- 
To post to this group, send an email to sage-devel@googlegroups.com
To unsubscribe from this group, send an email to 
sage-devel+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/sage-devel
URL: http://www.sagemath.org


Re: [sage-devel] Re: Has the number of doctests fallen ?

2010-10-25 Thread Dr. David Kirkby

On 10/25/10 08:35 AM, koffie wrote:



Yes,

I always run 'make ptestlong' but have noticed the testing time has fallen from 
around 1800 s to around 1600 s.


My OpenSolaris machine is a buildbot slave. If you look at the test time for the 
latest buildbot


http://build.sagemath.org/sage/builders/hawk%20full/builds/8/steps/shell_6/logs/stdio

it is 1566 seconds. Yet I once tested Sage 100 times in a loop, and know it was 
taking around 1800 s to test then. I think it's dropped by around 200 seconds, 
though I can't precisely pin down when the change occurred.


It would be useful if we could collect some statistics on the time to run the 
tests on certain hardware, though of course we need to look at CPU time, not 
wall time, as the latter will change with system load. But I know my system was 
idle(ish) when tests were taking 1800 s.


I always tend to have VirtualBox running, which does eat up a bit of CPU time, 
but I have that running now and still the tests take at least 200 s less than on 
some previous versions of Sage.


Dave



Re: [sage-devel] Trac keywords

2010-10-25 Thread Burcin Erocal
On Sat, 23 Oct 2010 18:58:05 +0200
Jeroen Demeyer jdeme...@cage.ugent.be wrote:

 I have a page on the wiki with a proposal for standard Trac keywords.
 Feel free to edit: http://wiki.sagemath.org/TracKeywords
 
 I'm not sure that all these keywords are useful, but some of them
 certainly are.

I really like your suggestion. IMHO, if trac provided a way for
people to watch keywords instead of components, these could easily
replace the current list of components.

This suggests that roundup has such a feature:

http://trac.edgewall.org/ticket/2954

Perhaps we can achieve something similar with these plugins to trac:

http://trac-hacks.org/wiki/AnnouncerPlugin

http://trac-hacks.org/wiki/WatchlistPlugin

This also looks useful:

http://trac-hacks.org/wiki/KeywordSuggestPlugin


Cheers,
Burcin



[sage-devel] Re: spanish translation of the tutorial

2010-10-25 Thread pang
On 24 oct, 17:48, Pablo De Napoli pden...@gmail.com wrote:
 Hi

 I've seen the video contributing to Sage where W. Stein mentioned
 the translation of the tutorial to Spanish as
 a possible contribution to sage.
 Indeed, some parts of the tutorial have already been translated into
 Spanish by Luis V. (12 months ago). However, the translations are
 scattered across several tickets on trac.

 I'm planning to review his translation (I've read some parts, and it
 needs some improvement: it is too literal; some parts do not sound
 like Spanish but like a word-by-word translation from English)


Do you plan to translate the other sections of the tutorial too? I can
help you with some. I need to prepare some notes in Spanish for my
course anyway. Let's say I translate the section on linear algebra:
where should I put it to make your life easier?



Re: [sage-devel] Re: libpari segfault related to sage-4.5.3?i

2010-10-25 Thread Jan Groenewald
Hi

After some ubuntu 10.04.1 upgrades:

   Aptitude 0.4.11.11: log report
   Sun, Oct 24 2010 16:00:35 +0200

   IMPORTANT: this log only lists intended actions; actions which fail due to
   dpkg problems may not be completed.

   Will install 14 packages, and remove 0 packages.
   4096B of disk space will be used
   
===
   [UPGRADE] comerr-dev 2.1-1.41.11-1ubuntu2 - 2.1-1.41.11-1ubuntu2.1
   [UPGRADE] e2fslibs 1.41.11-1ubuntu2 - 1.41.11-1ubuntu2.1
   [UPGRADE] e2fsprogs 1.41.11-1ubuntu2 - 1.41.11-1ubuntu2.1
   [UPGRADE] libc-bin 2.11.1-0ubuntu7.4 - 2.11.1-0ubuntu7.5
   [UPGRADE] libc-dev-bin 2.11.1-0ubuntu7.4 - 2.11.1-0ubuntu7.5
   [UPGRADE] libc6 2.11.1-0ubuntu7.4 - 2.11.1-0ubuntu7.5
   [UPGRADE] libc6-dev 2.11.1-0ubuntu7.4 - 2.11.1-0ubuntu7.5
   [UPGRADE] libc6-i386 2.11.1-0ubuntu7.4 - 2.11.1-0ubuntu7.5
   [UPGRADE] libcomerr2 1.41.11-1ubuntu2 - 1.41.11-1ubuntu2.1
   [UPGRADE] libss2 1.41.11-1ubuntu2 - 1.41.11-1ubuntu2.1
   [UPGRADE] nscd 2.11.1-0ubuntu7.4 - 2.11.1-0ubuntu7.5
   [UPGRADE] python-papyon 0.4.8-0ubuntu1 - 0.4.8-0ubuntu2
   [UPGRADE] update-manager 1:0.134.10 - 1:0.134.11
   [UPGRADE] update-manager-core 1:0.134.10 - 1:0.134.11
   
===

   Log complete.

The segfault no longer occurs on *every* run of sage -t 
devel/sage/sage/interfaces/sage0.py, only on *most* runs. With sage-4.5.3
I got 70 segfaults in 83 runs. I'm pretty sure that before these upgrades
every run of sage -t caused a segfault, but I could be wrong.

On another desktop I did sage -upgrade 
http://sage.math.washington.edu/home/release/sage-4.6.rc0/sage-4.6.rc0/
and afterwards out of 100+ runs no segfault.

I guess we'll never know now. 

regards,
Jan

-- 
   .~. 
   /V\ Jan Groenewald
  /( )\www.aims.ac.za
  ^^-^^ 



[sage-devel] Supported platforms - once again

2010-10-25 Thread David Kirkby
As has been remarked before, Sage has a number of lists of supported
platforms, no two of which agree with each other.

I proposed some time ago that we break the list into three:

1) Fully supported - every Sage release is tested on it.
2) Expected to work.
3) Probably will not work, but porting work is ongoing.

See

http://wiki.sagemath.org/suggested-for-supported-platforms

Now that we have a buildbot for Sage, it is relatively easy to test every
release of Sage on a number of systems. Currently there are 17 systems
on which Sage is being built.

http://build.sagemath.org/sage/waterfall

I suggest that we provide a page like

http://wiki.sagemath.org/suggested-for-supported-platforms

but put those 17 systems into the "Fully supported" category. That means the
exact versions of the operating systems would be given, and not just
Fedora, Ubuntu, OS X or Solaris.

Then we would move into the "Expected to work" category recent
distributions of these systems, and any older ones we might expect to
work, but do not actually test on.

Any attempt to say we support "the latest release" of a distribution
is IMHO unwise, as we can't possibly do this. Linux distributions come
out all the time, and often break. Apparently Sage has been broken for
some time on OpenSUSE 11.2 and 11.3.

We should then have an errata page like

http://wiki.sagemath.org/errata

to let people know of any issues that are discovered after the release.

Does this sound reasonable to everyone? If so, I am willing to collect
the exact information about all the systems in the buildbot, and add
them to the "Fully supported" section. (I'm assuming that Sage can be made
to pass all tests on all the hardware on the buildbots; if that is
not so, then that system would obviously not be placed in the "Fully
supported" section.)

Given we have a buildbot, it should be fairly easy to create binaries
for all these systems too, and make the binaries available.

We really *must* get rid of all these different lists of supported
systems and have one single list, with as many links to that list as we
want. Then the list only needs to be updated in one place.

If we can get agreement on this, I'll do the work, but I'm not going
to waste my time finding out the right information if there are going
to be endless arguments about what we support. To me, fully supporting
what we can easily test on is the right way to proceed.

Since Minh has been using an external server (I think run by GNU) for
Debian, we can probably add Debian at some point if we can get
permission to run a buildbot slave there.



Dave



Re: [sage-devel] Re: spanish translation of the tutorial

2010-10-25 Thread Pablo De Napoli
Yes, it appears that only the first section of the tutorial has been
translated so far (but please check on trac to avoid duplicating efforts).

I can coordinate the Spanish translation.
It would be great if you can help by translating a section (like the
section on linear algebra).
Perhaps others can help with other sections.

You can put it in a ticket on trac, but please put a link on ticket
#10165 so we have all the parts together.

In any case, please send me a private e-mail (you can write to me in Spanish)

regards,
Pablo



[sage-devel] problem generating the documentation (spanish translation)

2010-10-25 Thread Pablo De Napoli
Hi,

I have a problem generating the documentation (Spanish translation
of the tutorial).

I converted the attachment of ticket #7222 to UTF-8 using iconv:

iconv -f latin1 -t utf8 < tour_help.rst > tour_help-utf8.rst

and after that

mv tour_help-utf8.rst tour_help.rst

(This seems to be needed; the Sage documentation building tools
appear to expect the files in UTF-8 encoding.)
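The re-encoding that iconv performs here can also be sketched in plain Python; the helper name is mine, not part of the Sage docbuild tools:

```python
def latin1_to_utf8(raw: bytes) -> bytes:
    """Re-encode Latin-1 bytes as UTF-8 (what `iconv -f latin1 -t utf8` does)."""
    return raw.decode("latin-1").encode("utf-8")

# 'é' is the single byte 0xE9 in Latin-1, and the two bytes 0xC3 0xA9 in UTF-8.
print(latin1_to_utf8(b"caf\xe9"))  # b'caf\xc3\xa9'
```

Applied to a whole file, this reads the bytes, decodes once, and writes the UTF-8 result back out.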

When I run

sage -docbuild es/tutorial html

I got an error message

sphinx-build -b html -d
/home/pablo/sage/sage-4.5.3/devel/sage/doc/output/doctrees/es/tutorial
   /home/pablo/sage/sage-4.5.3/devel/sage/doc/es/tutorial
/home/pablo/sage/sage-4.5.3/devel/sage/doc/output/html/es/tutorial
Running Sphinx v0.6.3
loading translations [es]... done
loading pickled environment... done
building [html]: targets for 5 source files that are out of date
updating environment: 0 added, 0 changed, 0 removed
looking for now-outdated files... none found
preparing documents... done
writing output... [100%] tour_help
Exception occurred:
  File 
/home/pablo/sage/sage-4.5.3/local/lib/python2.6/site-packages/docutils/nodes.py,
line 331, in __new__
return reprunicode.__new__(cls, data)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position
1697: ordinal not in range(128)
The full traceback has been saved in /tmp/sphinx-err-8XRLfY.log, if
you want to report the issue to the author.
Please also report this if it was a user error, so that a better error
message can be provided next time.
Send reports to sphinx-...@googlegroups.com. Thanks!
Build finished.  The built documents can be found in
/home/pablo/sage/sage-4.5.3/devel/sage/doc/output/html/es/tutorial

However, if I run for example

sage -docbuild es/tutorial pdf

everything works Ok.

I thought at the beginning that the problem might be the rather old
docutils version in Sage,
so I tried to upgrade docutils (ticket #10166, please review it!).

However, that didn't work either.

So what might be the cause of the trouble?
Perhaps the old version of Sphinx in Sage?

please help me!

regards
Pablo



[sage-devel] sqrt() returns wrong result for large inputs

2010-10-25 Thread G Hahn
Hi

while calculating the integer part of square roots I realized that
sqrt() returns wrong results for large inputs (although the sqrt()
command itself accepts bignum values).
Example: int(sqrt(2^94533))
I guess that this is due to the fact that Sage simplifies the
expression above as sqrt(2) * 2^47266 and approximates sqrt(2) using
10 or so digits.
Is there a way to compute sqrt() correctly for large inputs (at least
the integer part)?
Best regards,
Georg Hahn



Re: [sage-devel] Regression testing

2010-10-25 Thread David Kirkby
On 21 October 2010 01:33, David Roe r...@math.harvard.edu wrote:
 There are a number of tickets in trac about performance regressions in
 Sage.  I'm sure there are far more performance regressions which we don't
 know about because nobody noticed.


I agree, and I've seen some comments from William that writing code
one way or another can change things by a factor of 100.

 As someone writing library code, it's generally not obvious that one is
 about to introduce a performance regression (otherwise you'd probably not do
 it).

Agreed.

 Consequently, I've been thinking recently about how to detect performance
 regressions automatically.  There are really two parts to the problem:
 gathering timing data on the Sage library, and analyzing that data to
 determine if regression have occurred (and how serious they are).


 Data gathering:

 One could modify local/bin/sage-doctest to allow the option of changing each
 doctest by wrapping it in a timeit() call.  This would then generate a
 timing datum for each doctest line.
 * these timings vary from run to run (presumably due to differing levels of
 load on the machine).  I don't know how to account for this, but usually
 it's a fairly small effect (on the order of 10% error).

They would differ by a lot more than 10%. One of my machines is a Sage
buildbot client. If that is building Sage, and I'm not, Sage will take
about an hour to build and test. If I'm building Sage at the same or a
similar time, it will increase that by a factor of at least two.

What is needed is to measure CPU time used. That should be relatively
stable and not depend too much on system load, though even there I
would not be surprised by changes of +/- 10%.
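Measuring CPU time rather than wall time can be sketched with Python's standard library; `time.process_time` counts only CPU seconds consumed by the process, so it is far less sensitive to system load. The helper below is illustrative, not an existing Sage function:

```python
import time

def cpu_seconds(fn, *args, **kwargs):
    """Return the CPU time (user + system) consumed by one call to fn."""
    start = time.process_time()          # excludes time spent sleeping/waiting
    fn(*args, **kwargs)
    return time.process_time() - start

elapsed = cpu_seconds(sum, range(10**6))
print(elapsed >= 0.0)
```

Under heavy load the wall time of the same call could grow severalfold while this measurement stays roughly stable.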

 * if you're testing against a previous version of sage, the doctest
 structure will have changed because people wrote more doctests.  And doctest
 lines depend on each other: you define variables that are used in later
 lines.  So inserting a line could make timings of later lines incomparable
 to the exact same line without the inserted line.  We might be able to parse
 the lines and check that various objects are actually the same (across
 different versions of sage, so this would require either a version-invariant
 hash or saving in one version, loading in the other and comparing.  And you
 would have to do that for each variable that occurs in the line), but that
 seems to be getting too complicated...

Getting a checksum of each doctest would be easy. I suggest we use:

$ cksum sometest.py  | awk '{print $1}'

because that will be totally portable across all platforms. 'cksum' is a
32-bit checksum that's part of the POSIX standard and the algorithm is
defined. So there's no worry about whether one has an md5 program, and
if so what it's called.

 One way of handling these problems is to create a relational database to put
 timings in.  This could also be interesting for benchmarketing purposes: we
 could have timings on various machines, and we highlight performance
 improvements, in addition to watching for performance regressions.


Sounds good. But it could be a lot of work to implement.

 So, here's a first draft for a database schema to put timing data into.
 I've included a description of each table, along with a description of
 columns I thought were non-obvious.  I'm definitely interested in
 suggestsion for improving this schema.

 Table: Hosts
 # computer information; including identifying data to determine when running
 on same host
 col: id
 col: identifying_data # some way to uniquely identify the computer on which
 a test is being run. Presumably the output of some unix function, but I
 don't know what.

One needs to be a bit careful here, as any of these changes could
cause a change in system performance

1) Upgrade of CPU speed.
2) Upgrade of operating system

Mathworks list on their site methods they use for licensing purposes

http://www.mathworks.com/support/solutions/en/data/1-171PI/

we could use some of them as a starting point.

 Table: Sage_version
 # a table giving each existing version of Sage an id
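A minimal sqlite3 sketch of the schema drafted above; the column names beyond those listed (cpu_seconds, doctest_hash) are my guesses at what the draft intends:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE hosts (
    id INTEGER PRIMARY KEY,
    identifying_data TEXT              -- unique fingerprint of the machine
);
CREATE TABLE sage_versions (
    id INTEGER PRIMARY KEY,
    version TEXT                       -- e.g. '4.6.rc0'
);
CREATE TABLE timings (
    host_id INTEGER REFERENCES hosts(id),
    version_id INTEGER REFERENCES sage_versions(id),
    doctest_hash TEXT,                 -- checksum identifying the doctest line
    cpu_seconds REAL                   -- CPU time, not wall time
);
""")
conn.execute("INSERT INTO sage_versions (version) VALUES ('4.6.rc0')")
print(conn.execute("SELECT version FROM sage_versions").fetchone()[0])
```

A regression query would then join timings to itself across two version_ids on the same host_id and doctest_hash.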

 There are also questions of how this would be distributed.  Presumably the
 data part wouldn't come standard with sage.  Maybe an optional spkg?  Of
 course, you're mainly interested in comparing DIFFERENT versions of sage on
 the SAME computer, which doesn't really fit with how we normally distribute
 code and data.


 Analysis:
 Once we have the database set up and have gathered some data, you can do all
 kinds of things with it.  I'm most interested in how to find speed
 regressions, and I can certainly imagine writing code to do so.  You have
 data from a previous version of sage (or your own, before you applied some
 patch(es)) and you run a regression test with sage -rt or something; this
 generates the same kind of timing data and then you look for slowdowns,
 either absolute or relative to the average ratio of your current run to the
 previous run for a given line of code.  It can then print out a 

[sage-devel] Re: Regression testing

2010-10-25 Thread Donald Alan Morrison


On Oct 25, 8:19 am, David Kirkby david.kir...@onetel.net wrote:
 Getting a checksum of each doctest would be easy. I suggest we use:

 $ cksum sometest.py  | awk '{print $1}'

 because that will be totally portable across all platforms. 'cksum' is
 32-bit checksum that's part of the POSIX standard and the algorithm is
 defined. So there's no worry about whether one has an md5 program, and
 if so what it's called.

http://docs.python.org/library/hashlib.html#module-hashlib

Python's standard library hashlib contains both MD5 and SHA-1 message
digests.

Their advantage over a checksum (CRC) algorithm is that the output
digest changes dramatically when only one input bit changes.
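Fingerprinting a doctest's source text with hashlib, as suggested, could look like this sketch (the helper name is illustrative):

```python
import hashlib

def doctest_digest(text: str) -> str:
    """SHA-1 hex digest of a doctest's source text."""
    return hashlib.sha1(text.encode("utf-8")).hexdigest()

d1 = doctest_digest("sage: 1 + 1\n2")
d2 = doctest_digest("sage: 1 + 2\n3")
print(d1 != d2)  # a one-character change flips the whole digest
```

Unlike `cksum`, this needs no external program, so it is equally portable wherever Sage's Python runs.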



Re: [sage-devel] sqrt() returns wrong result for large inputs

2010-10-25 Thread Francois Maltey

Georg wrote :

while calculating the integer part of square roots I realized that
sqrt() returns wrong results for large inputs (although the sqrt()
command itself accepts bignum values).
example: int(sqrt(2^94533))
  

int isn't a mathematical Sage type, but Integer is a Sage type.
And Integer(sqrt(2^1234567)) fails.

But floor over Integer seems fine:

n=10001 ; res=floor(sqrt(2^n)) ; sign(res^2-2^n) ; sign((res+1)^2-2^n)
I get -1 and 1.

but it fails around n=3 or 4.

You may also get a precise numerical approximation by the method
_.n(digits=).
For example: sqrt(2).n(digits=1). But in this case you must compute
the digit value by hand.

A very similar exercise: is the number of digits of 123^456^789 even or
odd?

Of course you must read 123^(456^789), not (123^456)^789!

I hope this helps you...

F. (in France)



Re: [sage-devel] Supported platforms - once again

2010-10-25 Thread John Cremona
Your suggestions all look very sensible to me -- go for it (provided
several other people agree, of course).

John




Re: [sage-devel] sqrt() returns wrong result for large inputs

2010-10-25 Thread John Cremona
When you do sqrt(2^m) with m odd, say m=2*k+1, the returned value
is symbolically 2^k * sqrt(2):

sage: sqrt(2^101)
1125899906842624*sqrt(2)

Now using Integer() to round that will evaluate sqrt(2)
approximately to standard precision, which is not enough.  Instead,
use the isqrt() method for Integers:

sage: a = 2^94533
sage: b = a.isqrt()
sage: a > b^2
True
sage: (b+1)^2 > a
True

John

On Mon, Oct 25, 2010 at 4:50 PM, Francois Maltey fmal...@nerim.fr wrote:
 Georg wrote :

 while calculating the integer part of square roots I realized that
 sqrt() returns wrong results for large inputs (although the sqrt()
 command itself accepts bignum values).
 example: int(sqrt(2^94533))


 int isn't a mathematical Sage type, but Integer is.
 And Integer(sqrt(2^1234567)) fails

 But floor over Integer seems fine :

 n=10001 ; res=floor(sqrt(2^n)) ; sign(res^2-2^n) ; sign((res+1)^2-2^n)
 I get -1 and 1.

 but it fails around n=3 or 4.

 You may also get a precise numerical approximation by the method
 _.n(digits=).
 For example: sqrt(2).n(digits=1). But in this case you must work out the
 digits value by hand.

 A very similar exercise: Is the number of digits of 123^456^789 even or odd?
 Of course you must read 123^(456^789), not (123^456)^789 !

 I hope this helps you...

 F. (in France)





Re: [sage-devel] Supported platforms - once again

2010-10-25 Thread Gonzalo Tornaria
On Mon, Oct 25, 2010 at 12:09 PM, David Kirkby david.kir...@onetel.net wrote:
 Since Minh has been using an external server (I think run by GNU) for
 Debian, we can probably add Debian at some point if we can get
 permission to run a buildbot slave there.

1. Is there a reason for not running debian on a vm on boxen?

2. what is needed to run a buildbot?

(it would feel pretty awkward not to support debian)

Gonzalo



Re: [sage-devel] Supported platforms - once again

2010-10-25 Thread David Kirkby
On 25 October 2010 17:26, Gonzalo Tornaria torna...@math.utexas.edu wrote:
 On Mon, Oct 25, 2010 at 12:09 PM, David Kirkby david.kir...@onetel.net 
 wrote:
 Since Minh has been using an external server (I think run by GNU) for
 Debian, we can probably add Debian at some point if we can get
 permission to run a buildbot slave there.

 1. Is there a reason for not running debian on a vm on boxen?

To my knowledge there is currently no buildbot slave running on a VM,
though I believe there is a plan to do this. As far as I'm aware, the
machines are either:

a) *.math.washington.edu hosts
b) hosts on skynet
c) 'hawk', which is my personal machine.

 2. what is needed to run a buildbot?

Mitesh will know more, since he has set them up.

I know for me personally I just gave him an account (username
buildbot), and an IP address into which the buildbot can ssh.

 (it would feel pretty awkward not to support debian)

Agreed, but at this moment in time there is not a regular Debian
machine. Hence I suggest we do not consider Debian fully supported
until such time as we test on Debian.

 Gonzalo

Dave



Re: [sage-devel] problem generating the documentation (spanish translation)

2010-10-25 Thread Minh Nguyen
Hi Pablo

On Tue, Oct 26, 2010 at 1:56 AM, Pablo De Napoli pden...@gmail.com wrote:
 Hi,

 I've a problem with generating the documentation (Spanish translation
 of the tutorial)

See if the following page helps:

http://wiki.sagemath.org/devel/nonASCII

-- 
Regards
Minh Van Nguyen



[sage-devel] Re: Supported platforms - once again

2010-10-25 Thread kcrisman
  2. what is needed to run a buildbot?

 Mitesh will know more, since he has set them up.

 I know for me personally I just gave him an account (username
 buildbot), and an IP address into which the buildbot can ssh.


Was there an announcement of this recently?  I might be able to make a
machine available for this, depending on what the buildbot would
actually do.  Mitesh or you can email offlist if desired.

- kcrisman



Re: [sage-devel] Supported platforms - once again

2010-10-25 Thread Robert Bradshaw
On Mon, Oct 25, 2010 at 7:09 AM, David Kirkby david.kir...@onetel.net wrote:
 As has been remarked before, Sage has a number of lists of supported
 platforms, no two of which agree with each other.

 I proposed some time ago we break the list into 3

 1) Fully supported - every Sage release is tested on it.
 2) Expected to work
 3) Probably will not work, but porting work is ongoing

 See

 http://wiki.sagemath.org/suggested-for-supported-platforms

 Now we have a build bot for Sage, it is relatively easy to test every
 release of Sage on a number of systems. Currently there are 17 systems
 on which Sage is being built.

 http://build.sagemath.org/sage/waterfall

 I suggest that we provide a page like

 http://wiki.sagemath.org/suggested-for-supported-platforms

 but put those 17 systems into the "Fully supported" list. That means the
 exact versions of the operating systems would be given, and not just
 Fedora or Ubuntu, OS X or Solaris.

 Then we move into the "Expected to work" category: recent
 distributions of these systems, and any older ones we might expect to
 work, but do not actually test on.

 Any attempt to say we support the latest release of a distribution
 is IMHO unwise, as we can't possibly do this. Linux distributions come
 out all the time, and often break. Apparently Sage has been broken for
 some time on OpenSUSE 11.2 and 11.3.

 We should then have an errata page like

 http://wiki.sagemath.org/errata

 to let people know of any issues that are discovered after the release.

 Does this sound reasonable to everyone? If so, I am willing to collect
 the exact information about all the systems in the buildbot, and add
 them to the "Fully supported" list. (I'm assuming that Sage can be made to
 pass all tests on all the hardware on the buildbots, though if that is
 not so, then that system would obviously not be placed in the "Fully
 supported" section).

 Given we have a buildbot, it should be fairly easy to create binaries
 for all these systems too, and make the binaries available.

+1

 We really *must* get rid of all these different lists of supported
 systems and have one single list, and as many links to that list as we
 want. Then the list only needs to get updated in one place.

+1

 If we can get agreement on this, I'll do the work, but I'm not going
 to waste my time finding out the right information, if there are going
 to be endless arguments of what we support. To me, fully supporting
 what we can easily test on is the right way to proceed.

As I've stated in the past, I'm very supportive of basing our
supported platform list on an automated build process, like the build
bot we have now set up.

- Robert



Re: [sage-devel] Regression testing

2010-10-25 Thread Robert Bradshaw
On Mon, Oct 25, 2010 at 8:19 AM, David Kirkby david.kir...@onetel.net wrote:
 On 21 October 2010 01:33, David Roe r...@math.harvard.edu wrote:
 There are a number of tickets in trac about performance regressions in
 Sage.  I'm sure there are far more performance regressions which we don't
 know about because nobody noticed.


 I agree, and I've seen some comments from William that writing code
 one way or another can change things by a factor of 100.

 As someone writing library code, it's generally not obvious that one is
 about to introduce a performance regression (otherwise you'd probably not do
 it).

 Agreed.

 Consequently, I've been thinking recently about how to detect performance
 regressions automatically.  There are really two parts to the problem:
 gathering timing data on the Sage library, and analyzing that data to
 determine if regressions have occurred (and how serious they are).


 Data gathering:

 One could modify local/bin/sage-doctest to allow the option of changing each
 doctest by wrapping it in a timeit() call.  This would then generate a
 timing datum for each doctest line.
 * these timings vary from run to run (presumably due to differing levels of
 load on the machine).  I don't know how to account for this, but usually
 it's a fairly small effect (on the order of 10% error).

 They would differ by a lot more than 10%. One of my machines is a Sage
 buildbot client. If that is building Sage, and I'm not, Sage will take
 about an hour to build and test. If I'm building Sage at the same or
 similar time, it will increase that by a factor of at least two.

 What is needed is to measure CPU time used. That should be relatively
 stable and not depend too much on system load, though even there I
 would not be surprised by changes of +/- 10%.

Yes, for sure, though it's probably worth having both. Of course when
we move from a pexpect interface to doing something natively that
would make the CPU time go up because the real work is no longer
hidden in a parallel process.
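The wall-clock versus CPU-time distinction Robert and David are discussing is easy to capture with the standard library; a sketch (the helper name is my own):

```python
import time

def measure(fn):
    """Return (wall_seconds, cpu_seconds) for one call of fn (sketch)."""
    w0 = time.perf_counter()   # wall-clock timer
    c0 = time.process_time()   # CPU time of this process only
    fn()
    return time.perf_counter() - w0, time.process_time() - c0

wall, cpu = measure(lambda: sum(i * i for i in range(100000)))
# For CPU-bound work the two are close; for work delegated to a
# subprocess (e.g. a pexpect interface) cpu stays far below wall.
```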

 * if you're testing against a previous version of sage, the doctest
 structure will have changed because people wrote more doctests.  And doctest
 lines depend on each other: you define variables that are used in later
 lines.  So inserting a line could make timings of later lines incomparable
 to the exact same line without the inserted line.  We might be able to parse
 the lines and check that various objects are actually the same (across
 different versions of sage, so this would require either a version-invariant
 hash or saving in one version, loading in the other and comparing.  And you
 would have to do that for each variable that occurs in the line), but that
 seems to be getting too complicated...

 Getting a checksum of each doctest would be easy. I suggest we use:

 $ cksum sometest.py  | awk '{print $1}'

 because that will be totally portable across all platforms. 'cksum' is a
 32-bit checksum that's part of the POSIX standard and the algorithm is
 defined. So there's no worry about whether one has an md5 program, and
 if so what it's called.

To be very useful, I think we need to be more granular than having
per-file tests. Just think about the number of files that get touched,
even a little bit, each release... Full doctest blocks should be
independent (though of course when looking at a doctest a line-by-line
time breakdown could be helpful.). It shouldn't be too hard to add
hooks into the unit test framework itself. With 1.5K test files and
several dozen doctests per file, changing from version to version, I
could easily see the birthday paradox being a problem with cksum (even
if it weren't weak), but from python we have md5 and sha1.
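A minimal sketch of hashing one doctest block from Python, as suggested above (the helper name is my own):

```python
import hashlib

def block_digest(block_source):
    """Content-based identifier for one doctest block (hypothetical
    helper): the SHA-1 hex digest of the block's source text."""
    return hashlib.sha1(block_source.encode("utf-8")).hexdigest()

d1 = block_digest("sage: factor(2^127+1)")
d2 = block_digest("sage: factor(2^127 + 1)")
# A one-character change yields an unrelated 40-hex-digit digest,
# so blocks can be matched across releases by content, not position.
```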

Also, I was talking to Craig Citro about this and he had the
interesting idea of creating some kind of a test object which would
be saved and could then be loaded into future versions of Sage and
re-run. The idea of saving the tests that are run, and then running the
exact same tests (rather than worrying about correlation of files and
tests) will make catching regressions much easier.

- Robert



Re: [sage-devel] Regression testing

2010-10-25 Thread William Stein
 Also, I was talking to Craig Citro about this and he had the
 interesting idea of creating some kind of a test object which would
 be saved and could then be loaded into future versions of Sage and
 re-run. The idea of saving the tests that are run, and then running the
 exact same tests (rather than worrying about correlation of files and
 tests) will make catching regressions much easier.

Hi,

Wow, that's an *extremely* good idea!  Nice work, Craig.
Basically, we could have one object that has:

(a) list of tests that got run.
(b) for each of several machines and sage versions:
- how long each test took

Regarding (a), this gets extracted from the doctests somehow for
starters, though could have some other tests thrown in if we want.

I could easily imagine storing the above as a single entry in a
MongoDB collection (say):

   {'tests': [ordered list of input blocks of code that could be
              extracted from doctests],
    'timings': [{'machine': 'sage.math.washington.edu',
                 'version': 'sage-4.6.alpha3', 'timings': [a list of floats]},
                {'machine': 'bsd.math.washington.edu',
                 'version': 'sage-4.5.3', 'timings': [a list of floats]}]}

Note that the ordered list of input blocks could be stored using GridFS,
since it's bigger than 4MB:

wst...@sage:~/build/sage-4.6.alpha3/devel/sage$ sage -grep "sage:" > a
wst...@sage:~/build/sage-4.6.alpha3/devel/sage$ ls -lh a
-rw-r--r-- 1 wstein wstein 9.7M 2010-10-25 11:41 a
wst...@sage:~/build/sage-4.6.alpha3/devel/sage$ wc -l a
133579 a

Alternatively, the list of input blocks could be stored in its own
collection, which would just get named by the tests field:

{'tests':'williams_test_suite_2010-10-25'}

The latter is nice, since it would make it much easier to make a web
app that allows for browsing through the timing results, e.g., sorting
them from slowest to fastest, and easily clicking through to the input
that took a long time.

Another option:  have exactly one collection for each test suite, and
have all other data be in that collection:

Collection name: williams_test_suite-2010-10-25

Documents:

  * A document with a unique id, starting at 0, for each actual test
   {'id':0, 'code':'factor(2^127+1)'}

  * A document for each result of running the tests on an actual platform:
   {'machine':'bsd.math.washington.edu', 'version':'sage-4.5.3',
'timings':{0:1.3, 1:0.5,...} }
Here, the timings are stored as a mapping from id's to floats.
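In plain Python, the two document shapes just described are ordinary dictionaries; a sketch with illustrative values only:

```python
# Illustrative only: the per-test and per-run document shapes above.
test_doc = {"id": 0, "code": "factor(2^127+1)"}
run_doc = {
    "machine": "bsd.math.washington.edu",
    "version": "sage-4.5.3",
    # mapping from test id to seconds
    "timings": {0: 1.3, 1: 0.5},
}

# Looking up the timing of one test on one platform:
t = run_doc["timings"][test_doc["id"]]
```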

I think timing should all be done using the timeit module, since
that is the Python standard module designed for exactly the purpose of
benchmarking. That ends the discussion about CPU time versus walltime,
etc.
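For reference, a minimal timeit-based measurement of one test body might look like this (the statement being timed is illustrative):

```python
import timeit

# Best of five repeats, each running the statement 1000 times; taking
# the minimum suppresses noise from transient system load.
per_call = min(timeit.repeat("sum(range(1000))",
                             number=1000, repeat=5)) / 1000
```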

The architecture of separating out the tests from the
timing/recording/web_view framework is very nice, because it makes it
easy to add in additional test suites.  E.g., I've had companies ask
me: Could paid support include the statement: 'in all future releases
of Sage, the following commands will run in at most the following
times on the following hardware'?  And they have specific commands
they care about.

 -- William



[sage-devel] Re: Has the number of doctests fallen ?

2010-10-25 Thread javier
IIRC in the latest Cython release there was a bugfix that reduced the
access time for all cython non-cdef classes [1]. Given the amount of
time that is spent on function calls I wouldn't be surprised if that
change propagated to a general speedup in the doctests.

Cheers
J

[1] 
http://groups.google.com/group/sage-devel/browse_thread/thread/c97d36d23131f4c5/783574a2911c4be2?


On 25 oct, 08:55, Dr. David Kirkby david.kir...@onetel.net wrote:
 On 10/25/10 08:35 AM, koffie wrote:

  Are you testing all doctests (i.e. also the long ones)? It could be
  that some long doctests have been tagged as such in an update.

  Then there is of course still the possibility that there was a merge
  of a ticket which improves general Sage performance.

  The recently proposed regression testing package would have been
  useful right now :).

  Kind regards,
  Maarten Derickx

 Yes,

 I always run 'make ptestlong' but have noticed the testing time has fallen 
 from
 around 1800 s to around 1600 s.

 My OpenSolaris machine is a buildbot slave. If you look at the test time for 
 the
 latest buildbot

 http://build.sagemath.org/sage/builders/hawk%20full/builds/8/steps/sh...

 it is 1566 seconds. Yet I once tested Sage 100 times in a loop, and know it 
 was
 taking around 1800 s to test then. I think it's dropped by around 200 seconds,
 though I can't precisely pin down when the change occurred.

 It would be useful if we could collect some statistics on the time to run the
 tests on certain hardware, though of course we need to look at CPU time, not
 wall time, as the latter will change with system load. But I know my system 
 was
 idle(ish) when tests were taking 1800 s.

 I always tend to have VirtualBox running, which does eat up a bit of CPU time,
 but I have that running now, and still the tests are taking 200 s less than
 with at least some previous versions of Sage.

 Dave



Re: [sage-devel] Regression testing

2010-10-25 Thread Jeroen Demeyer
On 2010-10-25 20:06, Robert Bradshaw wrote:
 To be very useful, I think we need to be more granular than having
 per-file tests. Just think about the number of files that get touched,
 even a little bit, each release... Full doctest blocks should be
 independent (though of course when looking at a doctest a line-by-line
 time breakdown could be helpful.). It shouldn't be too hard to add
 hooks into the unit test framework itself.
I agree: checksum every doctest block and time that. I guess this can
probably all be done from Python.

Jeroen.



Re: [sage-devel] sqrt() returns wrong result for large inputs

2010-10-25 Thread David Roe
This is a good workaround, but the original problem can be traced to the
function sage.symbolic.expression.Expression.__int__

def __int__(self):
    #FIXME: can we do better?
    return int(self.n(prec=100))

Presumably you could adaptively estimate to higher precision until your
error interval included only one integer...
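That adaptive-precision idea can be sketched in plain Python with the decimal module (the helper name is hypothetical; for the floor case, Sage's own isqrt() remains the right tool):

```python
from decimal import Decimal, localcontext

def floor_sqrt_adaptive(n):
    """Hypothetical sketch: floor(sqrt(n)) for a non-negative int n.

    Evaluate sqrt numerically at some working precision, then verify
    the candidate with exact integer arithmetic; if the candidate is
    wrong, the precision was too low, so double it and retry.
    """
    prec = 30
    while True:
        with localcontext() as ctx:
            ctx.prec = prec
            c = int(Decimal(n).sqrt())   # candidate floor
        # Exact check: c is floor(sqrt(n)) iff c^2 <= n < (c+1)^2.
        if c * c <= n < (c + 1) * (c + 1):
            return c
        prec *= 2
```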
David

On Mon, Oct 25, 2010 at 12:13, John Cremona john.crem...@gmail.com wrote:

 When you do sqrt(2^m) with m odd, say m=2*k+1, the returned value
 is symbolically 2^k * sqrt(2):

 sage: sqrt(2^101)
 1125899906842624*sqrt(2)

 Now using Integer() to round that will evaluate sqrt(2)
 approximately to standard precision, which is not enough.  Instead,
 use the isqrt() method for Integers:

 sage: a = 2^94533
 sage: b = a.isqrt()
 sage: a > b^2
 True
 sage: (b+1)^2 > a
 True

 John

 On Mon, Oct 25, 2010 at 4:50 PM, Francois Maltey fmal...@nerim.fr wrote:
  Georg wrote :
 
  while calculating the integer part of square roots I realized that
  sqrt() returns wrong results for large inputs (although the sqrt()
  command itself accepts bignum values).
  example: int(sqrt(2^94533))
 
 
  int isn't a mathematical Sage type, but Integer is.
  And Integer(sqrt(2^1234567)) fails
 
  But floor over Integer seems fine :
 
  n=10001 ; res=floor(sqrt(2^n)) ; sign(res^2-2^n) ; sign((res+1)^2-2^n)
  I get -1 and 1.
 
  but it fails around n=3 or 4.
 
  You may also get a precise numerical approximation by the method
  _.n(digits=).
  For example: sqrt(2).n(digits=1). But in this case you must work out
  the digits value by hand.
 
  A very similar exercise: Is the number of digits of 123^456^789 even
  or odd?
  Of course you must read 123^(456^789), not (123^456)^789 !
 
  I hope this helps you...
 
  F. (in France)
 
 





Re: [sage-devel] sqrt() returns wrong result for large inputs

2010-10-25 Thread Burcin Erocal
On Mon, 25 Oct 2010 17:00:39 -0400
David Roe r...@math.harvard.edu wrote:

 This is a good workaround, but the original problem can be traced to
 the function sage.symbolic.expression.Expression.__int__
 
  def __int__(self):
      #FIXME: can we do better?
      return int(self.n(prec=100))
 
 Presumably you could adaptively estimate to higher precision until
 your error interval included only one integer...

This is #9953 on trac:

http://trac.sagemath.org/sage_trac/ticket/9953


Cheers,
Burcin



Re: [sage-devel] sqrt() returns wrong result for large inputs

2010-10-25 Thread David Roe
I posted a patch there that should fix it; I have to work on other stuff,
but if someone else wants to take over and write some doctests, make sure it
works in lots of cases...
David

On Mon, Oct 25, 2010 at 17:14, Burcin Erocal bur...@erocal.org wrote:

 On Mon, 25 Oct 2010 17:00:39 -0400
 David Roe r...@math.harvard.edu wrote:

  This is a good workaround, but the original problem can be traced to
  the function sage.symbolic.expression.Expression.__int__
 
   def __int__(self):
       #FIXME: can we do better?
       return int(self.n(prec=100))
 
  Presumably you could adaptively estimate to higher precision until
  your error interval included only one integer...

 This is #9953 on trac:

 http://trac.sagemath.org/sage_trac/ticket/9953


 Cheers,
 Burcin





Re: [sage-devel] Re: Regression testing

2010-10-25 Thread Dr. David Kirkby

On 10/25/10 04:50 PM, Donald Alan Morrison wrote:



On Oct 25, 8:19 am, David Kirkby david.kir...@onetel.net wrote:

Getting a checksum of each doctest would be easy. I suggest we use:

$ cksum sometest.py  | awk '{print $1}'

because that will be totally portable across all platforms. 'cksum' is a
32-bit checksum that's part of the POSIX standard and the algorithm is
defined. So there's no worry about whether one has an md5 program, and
if so what it's called.


http://docs.python.org/library/hashlib.html#module-hashlib

Python's standard library hashlib contains both MD5 and SHA1 Message
Digests.

Their advantage over the checksum (CRC) algorithm is that the output
digest changes dramatically when only one input bit changes.



I'm not convinced it's important how many bits change in the output if the input 
changes by one bit.


But if Python has it, then by all means use that.

I was thinking it would be fairly trivial to process the ptestlong.log file to 
get a set of times and checksums. I think I could do it in a 20-30 line shell 
script.


But really we need CPU time for doctests to make this useful.

IMHO, it would be good if the output from the doctests shows real time, CPU 
time, and actual time/date. Then we could correlate failures with system logs, 
to see if we have run out of swap space or similar.


dave



Re: [sage-devel] Re: Supported platforms - once again

2010-10-25 Thread Dr. David Kirkby

On 10/25/10 06:21 PM, kcrisman wrote:

2. what is needed to run a buildbot?


Mitesh will know more, since he has set them up.

I know for me personally I just gave him an account (username
buildbot), and an IP address into which the buildbot can ssh.



Was there an announcement of this recently?  I might be able to make a
machine available for this, depending on what the buildbot would
actually do.  Mitesh or you can email offlist if desired.

- kcrisman



I can't recall if there was an announcement, but I was cc'ed on a ticket about 
this. There's some quite interesting looking graphs


Go here:

http://build.sagemath.org/sage/

then pick one of several sorts.

I find this Waterfall most informative

http://build.sagemath.org/sage/waterfall

I don't think the 'console'

http://build.sagemath.org/sage/console

is too useful - I suspect there's some information missing from that.

The buildslaves page

http://build.sagemath.org/sage/buildslaves

could perhaps be improved a bit, with more detailed information, though perhaps 
there's not the space.


I just emailed Mitesh and offered to make my machine available, which is working 
quite well as far as I can see. Mitesh is happy with it, and it's not having any 
significant impact on me.


I might have more of an issue with uploading binaries from my machine, as the 
binaries are large and my upload bandwidth (which would be important for 
uploading binaries), is only 1/8th of my download bandwidth. That might be an 
issue for me. But whilst a release manager will be build-testing quite often, it 
is only rarely that binaries will be created - only one per Sage release, so one 
a month or so.




Dave



[sage-devel] Re: Regression testing

2010-10-25 Thread Donald Alan Morrison


On Oct 25, 2:47 pm, Dr. David Kirkby david.kir...@onetel.net
wrote:
 On 10/25/10 04:50 PM, Donald Alan Morrison wrote:
  On Oct 25, 8:19 am, David Kirkbydavid.kir...@onetel.net  wrote:
  Getting a checksum of each doctest would be easy. I suggest we use:
  $ cksum sometest.py  | awk '{print $1}'
   because that will be totally portable across all platforms. 'cksum' is a
   32-bit checksum that's part of the POSIX standard and the algorithm is
   defined. So there's no worry about whether one has an md5 program, and
   if so what it's called.
 http://docs.python.org/library/hashlib.html#module-hashlib

  Python's standard library hashlib contains both MD5 and SHA1 Message
  Digests.

   Their advantage over the checksum (CRC) algorithm is that the output
   digest changes dramatically when only one input bit changes.

 I'm not convinced it's important how many bits change in the output if the 
 input
 changes by one bit.

http://selenic.com/pipermail/mercurial/2009-April/025526.html
http://mercurial.selenic.com/wiki/Nodeid



Re: [sage-devel] Re: Supported platforms - once again

2010-10-25 Thread Robert Bradshaw
On Mon, Oct 25, 2010 at 3:04 PM, Dr. David Kirkby
david.kir...@onetel.net wrote:
 On 10/25/10 06:21 PM, kcrisman wrote:

 2. what is needed to run a buildbot?

 Mitesh will know more, since he has set them up.

 I know for me personally I just gave him an account (username
 buildbot), and an IP address into which the buildbot can ssh.


 Was there an announcement of this recently?  I might be able to make a
 machine available for this, depending on what the buildbot would
 actually do.  Mitesh or you can email offlist if desired.

 - kcrisman


 I can't recall if there was an announcement, but I was cc'ed on a ticket
 about this. There's some quite interesting looking graphs

 Go to here

 http://build.sagemath.org/sage/

 then pick one of several sorts.

 I find this Waterfall most informative

 http://build.sagemath.org/sage/waterfall

 I don't think the 'console'

 http://build.sagemath.org/sage/console

 is too useful - I suspect there's some information missing from that.

 The buildslaves page

 http://build.sagemath.org/sage/buildslaves

 could perhaps be improved a bit, with more detailed information, though
 perhaps there's not the space.

 I just emailed Mitesh and offered to make my machine available, which is
 working quite well as far as I can see. Mitesh is happy with it, and it's
 not having any significant impact on me.

 I might have more of an issue with uploading binaries from my machine, as
 the binaries are large and my upload bandwidth (which would be important
 for uploading binaries), is only 1/8th of my download bandwidth. That might
 be an issue for me.

If it's uploaded via script, just have it upload it overnight when
you're not online. If you can't upload a Sage binary in 8 hours
(=twice dial-up speed) then, yeah, you might have issues :)
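
Such a scripted overnight upload might look like this (a hypothetical sketch; the host, paths, and schedule here are made up, not the project's actual setup):

```shell
# crontab entry: kick off the upload at 2am local time
# 0 2 * * * /home/buildbot/upload-binary.sh

# upload-binary.sh -- resumable, bandwidth-capped so it stays polite
rsync --partial --bwlimit=500 \
    /home/buildbot/sage-binaries/sage-4.6-x86_64.tar.gz \
    buildbot@boxen.example.org:/uploads/
```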

 But whilst a release manager will be build-testing quite
 often, it is only rarely that binaries will be created - only one per Sage
 release, so one a month or so.

Yep.

- Robert



Re: [sage-devel] Re: Has the number of doctests fallen ?

2010-10-25 Thread Robert Bradshaw
On Mon, Oct 25, 2010 at 1:33 PM, javier vengor...@gmail.com wrote:
 IIRC in the latest Cython release there was a bugfix that reduced the
 access time for all cython non-cdef classes [1]. Given the amount of
 time that is spent on function calls I wouldn't be surprised if that
 change propagated to a general speedup in the doctests.

True. On that note, speeding up Sage startup by 100ms or so (which the
new Cython could have contributed to) would have a similar effect.
Another plug for
http://trac.sagemath.org/sage_trac/ticket/8254 ...

I, for one, am happy that Sage is getting faster :).

 [1] 
 http://groups.google.com/group/sage-devel/browse_thread/thread/c97d36d23131f4c5/783574a2911c4be2?


 On 25 oct, 08:55, Dr. David Kirkby david.kir...@onetel.net wrote:
 On 10/25/10 08:35 AM, koffie wrote:

  Are you testing all doctests (i.e. also the long ones)? It could be
  that some long doctests have been tagged as such in an update.

  Then there is of course still the possibility that there was a merge
  of a ticket which improves general Sage performance.

  The recently proposed regression testing package would have been
  useful right now :).

  Kind regards,
  Maarten Derickx

 Yes,

 I always run 'make ptestlong' but have noticed the testing time has fallen 
 from
 around 1800 s to around 1600 s.

 My OpenSolaris machine is a buildbot slave. If you look at the test time for 
 the
 latest buildbot

 http://build.sagemath.org/sage/builders/hawk%20full/builds/8/steps/sh...

 it is 1566 seconds. Yet I once tested Sage 100 times in a loop, and know it 
 was
 taking around 1800 s to test then. I think it's dropped by around 200 seconds,
 though I can't precisely pin down when the change occurred.

 It would be useful if we could collect some statistics on the time to run the
 tests on certain hardware, though of course we need to look at CPU time, not
 wall time, as the latter will change with system load. But I know my system 
 was
 idle(ish) when tests were taking 1800 s.

 I always tend to have VirtualBox running, which does eat up a bit of CPU
 time, but I have that running now, and still the tests are taking at
 least 200 s less than on some previous versions of Sage.

 Dave





[sage-devel] Google code: copy of the Sage source repository

2010-10-25 Thread William Stein
Hi,

Just for fun, I created this page:

   http://code.google.com/p/sagemath/

It has the Sage source code repo, so you can browse the history of Sage:

   http://code.google.com/p/sagemath/source/list

and files:

   http://code.google.com/p/sagemath/source/browse/

It's interesting because it's a completely different view of the HG
repo history of Sage than that provided by HG's own web frontend,
since Google wrote their own new web frontend (and also storage
backend) for HG.

It might also at some point be worth playing around with the code
review tools that are integrated into this.

 -- William

-- 
William Stein
Professor of Mathematics
University of Washington
http://wstein.org



Re: [sage-devel] Regression testing

2010-10-25 Thread Mitesh Patel
On 10/25/2010 01:54 PM, William Stein wrote:
 Also, I was talking to Craig Citro about this and he had the
 interesting idea of creating some kind of a test object which would
 be saved and then re-run in future versions of Sage. The idea of
 saving the tests that are run, and then running the exact same tests
 (rather than worrying about correlating files and tests) will make
 catching regressions much easier.
 
 Wow, that's an *extremely* good idea!  Nice work, Craig.
 Basically, we could have one object that has:
 
 (a) list of tests that got run.
 (b) for each of several machines and sage versions:
 - how long each test took
 
 Regarding (a), this gets extracted from the doctests somehow for
 starters, though could have some other tests thrown if we want.
 
 I could easily imagine storing the above as a single entry in a
 MongoDB collection (say):
 
{'tests':[ordered list of input blocks of code that could be
 extracted from doctests],
 'timings':[{'machine':'sage.math.washington.edu',
 'version':'sage-4.6.alpha3', 'timings':[a list of floats]},
{'machine':'bsd.math.washington.edu',
 'version':'sage-4.5.3', 'timings':[a list of floats]}]}
 
 Note that the ordered list of input blocks could be stored using GridFS,
 since it's bigger than 4MB:
 
 wst...@sage:~/build/sage-4.6.alpha3/devel/sage$ sage -grep "sage:" > a
 wst...@sage:~/build/sage-4.6.alpha3/devel/sage$ ls -lh a
 -rw-r--r-- 1 wstein wstein 9.7M 2010-10-25 11:41 a
 wst...@sage:~/build/sage-4.6.alpha3/devel/sage$ wc -l a
 133579 a
 
 Alternatively, the list of input blocks could be stored in its own
 collection, which would just get named by the tests field:
 
 {'tests':'williams_test_suite_2010-10-25'}
 
 The latter is nice, since it would make it much easier to make a web
 app that allows for browsing through the timing results, e.g., sorting
 them from slowest to fastest, and easily clicking through to the input
 that took a long time.
 
 Another option:  have exactly one collection for each test suite, and
 have all other data be in that collection:
 
 Collection name: williams_test_suite-2010-10-25
 
 Documents:
 
   * A document with a unique id, starting at 0, for each actual test
{'id':0, 'code':'factor(2^127+1)'}
 
   * A document for each result of running the tests on an actual platform:
{'machine':'bsd.math.washington.edu', 'version':'sage-4.5.3',
 'timings':{0:1.3, 1:0.5,...} }
 Here, the timings are stored as a mapping from id's to floats.


This last option seems most natural to me, though identical inputs
that appear in multiple suites would generally(?) get different ids in
the collections.  Would it be better to use a hash of the 'code' for the
'id', or can the database automatically ensure that different ids imply
different inputs?

Disclaimer: I'm not familiar with MongoDB.  Here's a brief introduction:

http://www.mongodb.org/display/DOCS/Introduction

In your experience, are queries fast?  For example, if we wanted to see
how timings vary across Sage versions and machines for a specific input?


 I think timing should all be done using the timeit module, since
 that is the Python standard module designed for exactly the purpose of
 benchmarking. That ends the discussion about CPU time versus walltime,
 etc.
 
 The architecture of separating out the tests from the
 timing/recording/web_view framework is very nice, because it makes it
 easy to add in additional test suites.  E.g., I've had companies ask
 me: Could paid support include the statement: 'in all future releases
 of Sage, the following commands will run in at most the following
 times on the following hardware'?  And they have specific commands
 they care about.



[sage-devel] Re: Regression testing

2010-10-25 Thread Donald Alan Morrison


On Oct 25, 4:23 pm, Mitesh Patel qed...@gmail.com wrote:
[...]
 On 10/25/2010 01:54 PM, William Stein wrote:
    * A document with a unique id, starting at 0, for each actual test
         {'id':0, 'code':'factor(2^127+1)'}

    * A document for each result of running the tests on an actual platform:
         {'machine':'bsd.math.washington.edu', 'version':'sage-4.5.3',
  'timings':{0:1.3, 1:0.5,...} }
  Here, the timings are stored as a mapping from id's to floats.

 This last option seems most natural to me, though identical inputs
 that appear in multiple suites would generally(?) get different ids in
 the collections.  Would it be better to use a hash of the 'code' for the
 'id', or can the database automatically ensure that different ids imply
 different inputs?

http://www.mongodb.org/display/DOCS/Indexes#Indexes-UniqueIndexes



Re: [sage-devel] Regression testing

2010-10-25 Thread William Stein
On Mon, Oct 25, 2010 at 4:23 PM, Mitesh Patel qed...@gmail.com wrote:
 On 10/25/2010 01:54 PM, William Stein wrote:
 Also, I was talking to Craig Citro about this and he had the
 interesting idea of creating some kind of a test object which would
 be saved and then could be run into future versions of Sage and re-run
 in. The idea of saving the tests that are run, and then running the
 exact same tests (rather than worrying about correlation  of files and
 tests) will make catching regressions much easier.

 Wow, that's an *extremely* good idea!  Nice work, Craig.
 Basically, we could have one object that has:

     (a) list of tests that got run.
     (b) for each of several machines and sage versions:
             - how long each test took

 Regarding (a), this gets extracted from the doctests somehow for
 starters, though could have some other tests thrown if we want.

 I could easily imagine storing the above as a single entry in a
 MongoDB collection (say):

    {'tests':[ordered list of input blocks of code that could be
 extracted from doctests],
     'timings':[{'machine':'sage.math.washington.edu',
 'version':'sage-4.6.alpha3', 'timings':[a list of floats]},
                    {'machine':'bsd.math.washington.edu',
 'version':'sage-4.5.3', 'timings':[a list of floats]}]

 Note that the ordered list of input blocks could stored using GridFS,
 since it's bigger than 4MB:

 wst...@sage:~/build/sage-4.6.alpha3/devel/sage$ sage -grep sage:  a
 wst...@sage:~/build/sage-4.6.alpha3/devel/sage$ ls -lh a
 -rw-r--r-- 1 wstein wstein 9.7M 2010-10-25 11:41 a
 wst...@sage:~/build/sage-4.6.alpha3/devel/sage$ wc -l a
 133579 a

 Alternatively, the list of input blocks could be stored in its own
 collection, which would just get named by the tests field:

     {'tests':'williams_test_suite_2010-10-25'}

 The latter is nice, since it would make it much easier to make a web
 app that allows for browsing through
 the timing results, e.g,. sort them by longest to slowest, and easily
 click to get the input that took a long time.

 Another option:  have exactly one collection for each test suite, and
 have all other data be in that collection:

 Collection name: williams_test_suite-2010-10-25

 Documents:

   * A document with a unique id, starting at 0, for each actual test
        {'id':0, 'code':'factor(2^127+1)'}

   * A document for each result of running the tests on an actual platform:
        {'machine':'bsd.math.washington.edu', 'version':'sage-4.5.3',
 'timings':{0:1.3, 1:0.5,...} }
 Here, the timings are stored as a mapping from id's to floats.


 This last option seems most natural to me, though identical inputs
 that appear in multiple suites would generally(?) get different ids in
 the collections.  Would it be better to use a hash of the 'code' for the
 'id', or can the database automatically ensure that different ids imply
 different inputs?

Yes, the database can automatically ensure that different ids imply
different inputs.

So your change is to store all inputs in a single collection, with a
unique id for each.
Then in the collection corresponding to a given test suite, you have
one document that has an ordered list of ids of inputs, and the rest
is as before.  Thus:

1. A collection named tests with documents of the form:

{'id':5, 'code':'for i in range(100,120):\n   factor(2^i+1)'}

This collection can grow larger and larger over time, and could start
with the input blocks of current sage doctests.

2. A collection named timings with documents of the form:

{'machine':'sage.math.washington.edu', 'version':'sage-4.6.alpha3',
'timings':{5:1.2, 7:1.1, 10:'crash', ...}}

and that's it.

Given those two collections, one can do queries to extract whatever we
want, run tests using a subset of the test id's, and also easily input
new information.

3. Oh, there could be a third collection that groups together some of
the tests, and it could be named test_groups, and could just contain
stuff like:

  {'name':'sage-4.6.alpha3', 'ids':[1,2,5,18, 97]}

and

  {'name':'boeing', 'ids':[95,96,97]}

so that it would be easy to run just the tests in a given group, or to
select those tests for a report.
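
The three collections above can be mocked up in plain Python to see how queries against them would read (dicts standing in for MongoDB documents; all names and numbers are illustrative):

```python
# Collection 'tests': one document per input, keyed by unique id.
tests = {5: "for i in range(100,120):\n    factor(2^i+1)",
         7: "factor(2^127+1)"}

# Collection 'timings': one document per (machine, version) run.
timings = [
    {"machine": "sage.math", "version": "sage-4.6.alpha3",
     "timings": {5: 1.2, 7: 1.1, 10: "crash"}},
    {"machine": "bsd.math", "version": "sage-4.5.3",
     "timings": {5: 1.5, 7: 1.3}},
]

# Collection 'test_groups': named subsets of test ids.
test_groups = [{"name": "boeing", "ids": [5, 7]}]

def history(test_id):
    """All recorded timings for one input, across machines and versions."""
    return [(d["machine"], d["version"], d["timings"][test_id])
            for d in timings if test_id in d["timings"]]

assert history(5) == [("sage.math", "sage-4.6.alpha3", 1.2),
                      ("bsd.math", "sage-4.5.3", 1.5)]
```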

 -- William



 Disclaimer: I'm not familiar with MongoDB.  Here's a brief introduction:

 http://www.mongodb.org/display/DOCS/Introduction

 In your experience, are queries fast?  For example, if we wanted to see
 how timings vary across Sage versions and machines for a specific input?

In short, yes, if you know what you're doing.  If you make an index,
then queries are super fast.  Otherwise they take what you would expect
(i.e., linear time...).



-- 
William Stein
Professor of Mathematics
University of Washington
http://wstein.org



Re: [sage-devel] Google code: copy of the Sage source repository

2010-10-25 Thread François Bissey
 Hi,
 
 Just for fun, I created this page:
 
http://code.google.com/p/sagemath/
 
 It has the Sage source code repo, so you can browse the history of Sage:
 
http://code.google.com/p/sagemath/source/list
 

I actually installed hgview to inspect changes on my machine and it looks a 
bit like that.

Francois



Re: [sage-devel] Supported platforms - once again

2010-10-25 Thread Mitesh Patel
On 10/25/2010 11:55 AM, David Kirkby wrote:
 On 25 October 2010 17:26, Gonzalo Tornaria torna...@math.utexas.edu wrote:
 On Mon, Oct 25, 2010 at 12:09 PM, David Kirkby david.kir...@onetel.net 
 wrote:
 Since Minh has been using an external server (I think run by GNU) for
 Debian, we can probably add Debian at some point if we can get
 permission to run a buildbot slave there.

 1. Is there a reason for not running debian on a vm on boxen?
 
 To my knowledge there is currently not a buildbot on any VM. I believe
 there is a plan to do this. But at this very moment there are not any

There are several possible VMs, including Debian (32 and 64-bit), we
could add to or update on boxen's farm.  It's a matter of someone (or
several people) setting them up, installing the operating systems,
keeping them up to date, etc.  Unfortunately, this is a lot of work,
unless, perhaps, we distribute it.  Alternatively, we can run
buildslaves on machines administered by others.

 to my knowledge. Here's the list. As far as I'm aware the machines are
 either on
 
 a) *.math.washington.edu hosts
 b) hosts on skynet
  c) 'hawk', which is my personal machine.
 
 2. what is needed to run a buildbot?
 
 Mitesh will know more, since he has set them up.

Basically, a machine should

 0. Already be set up to build Sage.

 1. Have an ssh-accessible 'buildbot' user account in which Sage builds.
 ssh is for setup, maintenance, and build/test postmortems.  Buildbot
uses a different protocol for master-slave communication.

 2. Have a system-wide installation of Python 2.4 or later, including
the Python development headers.  With this, I can 'easy_install
buildbot-slave' into a Python virtual environment [1], which makes it
easy to upgrade the package from the buildbot account.  But the machine
administrator could instead install the package globally.

[1] http://pypi.python.org/pypi/virtualenv

 I know for me personally I just gave him an account (username
 buildbot), and an IP address into which the buildbot can ssh.

 (it would feel pretty awkward not to support debian)
 
 Agreed, but at this moment in time there is not a regular Debian
 machine. Hence I suggest we do not consider Debian fully supported
 until such time as we test on Debian.



Re: [sage-devel] Re: Supported platforms - once again

2010-10-25 Thread Mitesh Patel
On 10/25/2010 05:04 PM, Dr. David Kirkby wrote:
 On 10/25/10 06:21 PM, kcrisman wrote:
 2. what is needed to run a buildbot?

 Mitesh will know more, since he has set them up.

 I know for me personally I just gave him an account (username
 buildbot), and an IP address into which the buildbot can ssh.


 Was there an announcement of this recently?  I might be able to make a

No, not really.

 machine available for this, depending on what the buildbot would
 actually do.  Mitesh or you can email offlist if desired.

Thanks!  I'll try to start adding new contributed hosts in the coming
weeks.  If you're interested, please let me know by emailing me directly.

If I don't reply immediately, it's probably because I'm a bit "Saged
out" and should decompress for a spell after 4.6 is out.

 I can't recall if there was an announcement, but I was cc'ed on a ticket
 about this. There's some quite interesting looking graphs
 
 Go to here
 
 http://build.sagemath.org/sage/
 
 then pick one of several sorts.
 
 I find this Waterfall most informative
 
 http://build.sagemath.org/sage/waterfall
 
 I don't think the 'console'
 
 http://build.sagemath.org/sage/console
 
 is too useful - I suspect there's some information missing from that.

We're not yet using a few potentially useful Buildbot features.  These
would populate some of the empty pages/fields.  There's an evolving,
informal, and non-binding TODO list at

http://boxen.math.washington.edu/home/buildbot/TODO

See also

http://trac.sagemath.org/sage_trac/ticket/3524

 The buildslaves page
 
 http://build.sagemath.org/sage/buildslaves
 
 could perhaps be improved a bit, with more detailed information, though
 perhaps there's not the space.

There's more space on the individual pages, e.g.,

http://build.sagemath.org/sage/buildslaves/hawk-1

But I haven't yet filled out this information for most.



[sage-devel] question about calculus/riemann.pyx extension

2010-10-25 Thread François Bissey
Hi,

I am looking at the code and tests for the class Riemann_Map in 
calculus/riemann.pyx and I have a hard time understanding how it can be 
working at all.

The init method starts with:

def __init__(self, fs, fprimes, a, int N=500, int ncorners=4, opp=False):
    r"""
    Initializes the ``Riemann_Map`` class. See the class ``Riemann_Map``
    for full documentation on the input of this initialization method.

    TESTS::

        sage: m = Riemann_Map([lambda t: e^(I*t) - 0.5*e^(-I*t)], [lambda
        t: I*e^(I*t) + 0.5*I*e^(-I*t)], 0)  # long time (4 sec)
    """


--
so basically fs is a complex function and fprimes is its derivative and the 
test included (as well as some other in the riemann.pyx and interpolators.pyx 
files) shows that perfectly. fs is given as a lambda function and in some tests
fprimes is explicitly defined as its derivative.
Then in the code of the init method we have:

 self.f = fs[0]

and later

self.B = len(fs) # number of boundaries of the figure


which suggests that fs should instead be an array. If I try to go through the
steps of the initialization process by hand using the data from the test I get 
error messages:
sage: import sage.calculus.riemann
sage: fs = lambda t: e^(I*t) - 0.5*e^(-I*t)
sage: fs[0]
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
/home/francois/<ipython console> in <module>()
TypeError: 'function' object is unsubscriptable
sage: len(fs)
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
/home/francois/<ipython console> in <module>()
TypeError: object of type 'function' has no len()

Can anyone enlighten me on this matter?

Francois



Re: [sage-devel] question about calculus/riemann.pyx extension

2010-10-25 Thread David Roe
Note that fs in the example is a list of length 1.
David
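
David's point can be checked in plain Python (a sketch using `cmath` in place of Sage's symbolic `e^(I*t)`; note the doctest wraps the lambda in square brackets, making `fs` a one-element list):

```python
import cmath

# Riemann_Map expects fs to be a *list* of boundary functions, one per
# boundary component -- here a list of length 1, as in the doctest.
fs = [lambda t: cmath.exp(1j * t) - 0.5 * cmath.exp(-1j * t)]

assert len(fs) == 1     # works: fs is a list, so len() is defined
z = fs[0](0.0)          # works: fs[0] is the first boundary function
# A bare lambda (not wrapped in a list) would raise TypeError on fs[0],
# which is exactly the error Francois hit when trying it by hand.
```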

On Mon, Oct 25, 2010 at 23:14, François Bissey f.r.bis...@massey.ac.nzwrote:

 Hi,

 I am looking at the code and tests for the class Riemann_Map in
 calculus/riemann.pyx and I have a hard time understanding how it can be
 working at all.

 The init method starts with:

def __init__(self, fs, fprimes, a, int N=500, int ncorners=4,
 opp=False):

Initializes the ``Riemann_Map`` class. See the class ``Riemann_Map``
for full documentation on the input of this initialization method.

TESTS::

sage: m = Riemann_Map([lambda t: e^(I*t) - 0.5*e^(-I*t)],
 [lambda
 t: I*e^(I*t) + 0.5*I*e^(-I*t)], 0)  # long time (4 sec)


 --
 so basically fs is a complex function and fprimes is its derivative and the
 test included (as well as some other in the riemann.pyx and
 interpolators.pyx
 files) shows that perfectly. fs is given as a lambda function and in some
 tests
 fprimes is explicitly defined as its derivative.
 Then in the code of the init method we have:

 self.f = fs[0]

 and later

self.B = len(fs) # number of boundaries of the figure


 which suggests that fs should instead be an array. If I try to go through
 the
 steps of the initialization process by hand using the data from the test I
 get
 error messages:
 sage: import sage.calculus.riemann
 sage: fs = lambda t: e^(I*t) - 0.5*e^(-I*t)
 sage: fs[0]
 ---
 TypeError Traceback (most recent call last)

 /home/francois/ipython console in module()

 TypeError: 'function' object is unsubscriptable
 sage: len(fs)
 ---
 TypeError Traceback (most recent call last)

 /home/francois/ipython console in module()

 TypeError: object of type 'function' has no len()


 Anyone can enlighten me on this matter?

 Francois





Re: [sage-devel] Regression testing

2010-10-25 Thread Robert Bradshaw
On Mon, Oct 25, 2010 at 11:54 AM, William Stein wst...@gmail.com wrote:
 Also, I was talking to Craig Citro about this and he had the
 interesting idea of creating some kind of a test object which would
 be saved and then could be run into future versions of Sage and re-run
 in. The idea of saving the tests that are run, and then running the
 exact same tests (rather than worrying about correlation  of files and
 tests) will make catching regressions much easier.

 Hi,

 Wow, that's an *extremely* good idea!  Nice work, Craig.
 Basically, we could have one object that has:

    (a) list of tests that got run.
    (b) for each of several machines and sage versions:
            - how long each test took

 Regarding (a), this gets extracted from the doctests somehow for
 starters, though could have some other tests thrown if we want.

 I could easily imagine storing the above as a single entry in a
 MongoDB collection (say):

   {'tests':[ordered list of input blocks of code that could be
 extracted from doctests],
    'timings':[{'machine':'sage.math.washington.edu',
 'version':'sage-4.6.alpha3', 'timings':[a list of floats]},
                   {'machine':'bsd.math.washington.edu',
 'version':'sage-4.5.3', 'timings':[a list of floats]}]

 Note that the ordered list of input blocks could stored using GridFS,
 since it's bigger than 4MB:

 wst...@sage:~/build/sage-4.6.alpha3/devel/sage$ sage -grep sage:  a
 wst...@sage:~/build/sage-4.6.alpha3/devel/sage$ ls -lh a
 -rw-r--r-- 1 wstein wstein 9.7M 2010-10-25 11:41 a
 wst...@sage:~/build/sage-4.6.alpha3/devel/sage$ wc -l a
 133579 a

 Alternatively, the list of input blocks could be stored in its own
 collection, which would just get named by the tests field:

    {'tests':'williams_test_suite_2010-10-25'}

 The latter is nice, since it would make it much easier to make a web
 app that allows for browsing through
 the timing results, e.g,. sort them by longest to slowest, and easily
 click to get the input that took a long time.

 Another option:  have exactly one collection for each test suite, and
 have all other data be in that collection:

 Collection name: williams_test_suite-2010-10-25

 Documents:

  * A document with a unique id, starting at 0, for each actual test
       {'id':0, 'code':'factor(2^127+1)'}

  * A document for each result of running the tests on an actual platform:
       {'machine':'bsd.math.washington.edu', 'version':'sage-4.5.3',
 'timings':{0:1.3, 1:0.5,...} }
 Here, the timings are stored as a mapping from id's to floats.

+1. My only hesitance with this is that it requires either an internet
connection or mongodb to participate, both of which are optional
features of Sage. Of course, people could store their timings locally
as lists of dicts, and then optionally upload to mongodb (requiring
only pymongo, a much lighter dependency, or even via a json front end).
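
That fallback is easy to sketch: record each run as a plain dict, serialize it to JSON on disk, and defer any upload (document layout echoes the proposal above; names and numbers are illustrative):

```python
import json
import os
import tempfile

# One timing run, as a plain dict (JSON object keys must be strings).
run = {"machine": "hawk", "version": "sage-4.6.alpha3",
       "timings": {"5": 1.2, "7": 1.1}}

# Store locally; a later step could POST this to a server or mongodb.
path = os.path.join(tempfile.gettempdir(), "timings.json")
with open(path, "w") as f:
    json.dump(run, f)

with open(path) as f:
    loaded = json.load(f)
assert loaded == run   # round-trips cleanly; upload can happen any time
```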

 I think timing should all be done using the timeit modulo,  since
 that is the Python standard module designed for exactly the purpose of
 benchmarking. That ends the discussion about CPU time versus walltime,
 etc.

 The architecture of separating out the tests from the
 timing/recording/web_view framework is very nice, because it makes it
 easy to add in additional test suites.  E.g., I've had companies ask
 me: Could paid support include the statement: 'in all future releases
 of Sage, the following commands will run in at most the following
 times on the following hardware'?  And they have specific commands
 they care about.

OTOH, I think that generating timing data as part of a standard test
run could be very valuable. This precludes the use of timeit in
general for long-ish running tests. Not that the two ideas are
incompatible.

- Robert



[sage-devel] Re: Supported platforms - once again

2010-10-25 Thread Dima Pasechnik
I have been testing Sage on Debian (64 bits) on an ad hoc basis, and I
have enough hardware power (a virtual host on a VMWare server)
to run a testbot, if a setup is available and not too hard to install.
(Unfortunately it's behind a campus firewall, so it can't be ssh'd
into from outside without university VPN access, blah blah blah, so it
doesn't make much sense for me to offer access to it to other Sage
people.)


On Oct 26, 9:47 am, Mitesh Patel qed...@gmail.com wrote:
 On 10/25/2010 11:55 AM, David Kirkby wrote:

  On 25 October 2010 17:26, Gonzalo Tornaria torna...@math.utexas.edu wrote:
  On Mon, Oct 25, 2010 at 12:09 PM, David Kirkby david.kir...@onetel.net 
  wrote:
  Since Minh has been using an external server (I think run by GNU) for
  Debian, we can probably add Debian at some point if we can get
  permission to run a buildbot slave there.

  1. Is there a reason for not running debian on a vm on boxen?

  To my knowledge there is currently not a buildbot on any VM. I believe
  there is a plan to do this. But at this very moment there are not any

 There are several possible VMs, including Debian (32 and 64-bit), we
 could add to or update on boxen's farm.  It's a matter of someone (or
 several people) setting them up, installing the operating systems,
 keeping them up to date, etc.  Unfortunately, this is a lot of work,
 unless, perhaps, we distribute it.  Alternatively, we can run
 buildslaves on machines administered by others.

  to my knowledge. Here's the list. As far as I'm aware the machines are
  either on

  a) *.math.washington.edu hosts
  b) hosts on skynet
  c) 'hawk which is my personal machine.

  2. what is needed to run a buildbot?

  Mitesh will know more, since he has set them up.

 Basically, a machine should

  0. Already be set up to build Sage.

  1. Have an ssh-accessible 'buildbot' user account in which Sage builds.
  ssh is for setup, maintenance, and build/test postmortems.  Buildbot
 uses a different protocol for master-slave communication.

  2. Have a system-wide installation of Python 2.4 or later, including
 the Python development headers.  With this, I can 'easy_install
 buildbot-slave' into a Python virtual environment [1], which makes it
 easy to upgrade the package from the buildbot account.  But the machine
 administrator could instead install the package globally.

 [1] http://pypi.python.org/pypi/virtualenv



  I know for me personally I just gave him an account (username
  buildbot), and an IP address into which the buildbot can ssh.
  (it would feel pretty awkward not to support debian)

  Agreed, but at this moment in time there is not a regular Debian
  machine. Hence I suggest we do not consider Debian fully supported
  until such as time as we test on Debian.



Re: [sage-devel] Regression testing

2010-10-25 Thread David Roe
I think if you set both number and repeat to 1 in sage.misc.sage_timeit, it
will only run once (though I could be wrong).
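For reference, here is what a single-run measurement looks like with the stdlib timeit module (which sage_timeit wraps); whether sage_timeit forwards these parameters unchanged is an assumption:

```python
import timeit

# One measurement (repeat=1), each executing the statement once
# (number=1), so the statement runs exactly once in total.
timer = timeit.Timer("sum(range(10000))")
timings = timer.repeat(repeat=1, number=1)

print(len(timings))  # -> 1 (a single float, in seconds)
```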

We should think about a way to automate uploading of timing data if someone
doesn't have MongoDB installed.  For example, we could have the test script
which ran doctests have the option of sending an e-mail somewhere.  Or make
pymongo standard in Sage.
David

On Mon, Oct 25, 2010 at 23:26, Robert Bradshaw rober...@math.washington.edu
 wrote:

 On Mon, Oct 25, 2010 at 11:54 AM, William Stein wst...@gmail.com wrote:
  Also, I was talking to Craig Citro about this and he had the
  interesting idea of creating some kind of test object which would
  be saved and then could be loaded into future versions of Sage and
  re-run. The idea of saving the tests that are run, and then running
  the exact same tests (rather than worrying about correlating files
  and tests), will make catching regressions much easier.
 
  Hi,
 
  Wow, that's an *extremely* good idea!  Nice work, Craig.
  Basically, we could have one object that has:
 
     (a) list of tests that got run.
     (b) for each of several machines and sage versions:
             - how long each test took
 
  Regarding (a), this gets extracted from the doctests somehow for
  starters, though we could throw in some other tests if we want.
 
  I could easily imagine storing the above as a single entry in a
  MongoDB collection (say):
 
    {'tests':[ordered list of input blocks of code that could be
  extracted from doctests],
     'timings':[{'machine':'sage.math.washington.edu',
  'version':'sage-4.6.alpha3', 'timings':[a list of floats]},
                {'machine':'bsd.math.washington.edu',
  'version':'sage-4.5.3', 'timings':[a list of floats]}]}
 
  Note that the ordered list of input blocks could be stored using GridFS,
  since it's bigger than 4MB:
 
  wst...@sage:~/build/sage-4.6.alpha3/devel/sage$ sage -grep sage: > a
  wst...@sage:~/build/sage-4.6.alpha3/devel/sage$ ls -lh a
  -rw-r--r-- 1 wstein wstein 9.7M 2010-10-25 11:41 a
  wst...@sage:~/build/sage-4.6.alpha3/devel/sage$ wc -l a
  133579 a
 
  Alternatively, the list of input blocks could be stored in its own
  collection, which would just get named by the tests field:
 
 {'tests':'williams_test_suite_2010-10-25'}
 
  The latter is nice, since it would make it much easier to make a web
  app that allows for browsing through the timing results, e.g., sorting
  them from slowest to fastest, and easily clicking through to the input
  that took a long time.
 
  Another option:  have exactly one collection for each test suite, and
  have all other data be in that collection:
 
  Collection name: williams_test_suite-2010-10-25
 
  Documents:
 
   * A document with a unique id, starting at 0, for each actual test
{'id':0, 'code':'factor(2^127+1)'}
 
   * A document for each result of running the tests on an actual platform:
{'machine':'bsd.math.washington.edu', 'version':'sage-4.5.3',
  'timings':{0:1.3, 1:0.5,...} }
  Here, the timings are stored as a mapping from id's to floats.
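As a concrete sketch, the per-suite layout above can be mocked with plain dicts (no MongoDB required); the example values are invented:

```python
# One document per test, with ids starting at 0.
tests = [
    {'id': 0, 'code': 'factor(2^127+1)'},
    {'id': 1, 'code': 'integrate(sin(x)^2, x)'},
]

# One document per (machine, version) run; 'timings' maps test id -> seconds.
result = {
    'machine': 'bsd.math.washington.edu',
    'version': 'sage-4.5.3',
    'timings': {0: 1.3, 1: 0.5},
}

# The slowest-first ordering the web-app idea needs is a one-liner.
slowest = sorted(result['timings'].items(), key=lambda kv: kv[1], reverse=True)
print(slowest)  # -> [(0, 1.3), (1, 0.5)]
```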

 +1. My only hesitance with this is that it requires either an internet
 connection or mongodb to participate, both of which are optional
 features of Sage. Of course, people could store their timings locally
 as lists of dicts, and then optionally upload to mongodb (requiring
 only pymongo, a much lighter dependency, or even via a json front end).

  I think timing should all be done using the timeit module, since
  that is the Python standard module designed for exactly the purpose of
  benchmarking. That ends the discussion about CPU time versus walltime,
  etc.
 
  The architecture of separating out the tests from the
  timing/recording/web_view framework is very nice, because it makes it
  easy to add in additional test suites.  E.g., I've had companies ask
  me: Could paid support include the statement: 'in all future releases
  of Sage, the following commands will run in at most the following
  times on the following hardware'?  And they have specific commands
  they care about.

 OTOH, I think that generating timing data as part of a standard test
 run could be very valuable. This precludes the use of timeit in
 general for long-ish running tests. Not that the two ideas are
 incompatible.

 - Robert





Re: [sage-devel] Regression testing

2010-10-25 Thread Robert Bradshaw
On Mon, Oct 25, 2010 at 8:39 PM, David Roe r...@math.harvard.edu wrote:
 I think if you set both number and repeat to 1 in sage.misc.sage_timeit, it
 will only run once (though I could be wrong).

Yes, though it'd probably be both cheap and valuable to run fast
commands more than once (but fewer times than the default timeit
parameters, unless this is explicitly a timing run).

 We should think about a way to automate uploading of timing data if someone
 doesn't have MongoDB installed.  For example, we could have the test script
 which ran doctests have the option of sending an e-mail somewhere.  Or make
 pymongo standard in Sage.

With the way the lmfdb group is going, it may make sense to make
PyMongo a standard package (despite being so easy to install
manually). A simple http server accepting json data wouldn't be too
hard to throw up either, now that we've entered the realm of running a
service (mongod).
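A json front end for uploads could be as simple as POSTing a serialized record; the shape below follows the dicts discussed earlier in the thread, and the endpoint is hypothetical:

```python
import json

# A timing record in the shape discussed earlier in the thread.
# Note JSON object keys are strings, so test ids are stringified here.
record = {
    'machine': 'sage.math.washington.edu',
    'version': 'sage-4.6.alpha3',
    'timings': {'0': 1.3, '1': 0.5},
}

# The client serializes and POSTs this to the (hypothetical) service,
# which decodes it and inserts the document into MongoDB.
payload = json.dumps(record)
assert json.loads(payload) == record  # round-trips losslessly
```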

- Robert


[sage-devel] Re: Regression testing

2010-10-25 Thread Nick Alexander
 One could modify local/bin/sage-doctest to allow the option of changing each
 doctest by wrapping it in a timeit() call.  This would then generate a
 timing datum for each doctest line.

I did this, a long long time ago.  Not clear whether it was ever
merged.  See:

http://trac.sagemath.org/sage_trac/ticket/3476
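The per-line wrapping the quoted suggestion describes might look like the sketch below; the function name and wrapping strategy are assumptions, not necessarily what #3476 implements:

```python
import timeit

def time_doctest_line(stmt, setup="pass"):
    # Run one doctest statement a single time under timeit and
    # return the elapsed wall-clock time in seconds.
    return timeit.Timer(stmt, setup=setup).timeit(number=1)

elapsed = time_doctest_line("[i * i for i in range(1000)]")
print(elapsed >= 0.0)  # -> True
```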

Nick
