[sage-devel] Re: possible licence issue raised by GPL-v3

2007-07-30 Thread William Stein

On 7/29/07, Alec Mihailovs [EMAIL PROTECTED] wrote:
 From: Bobby Moretti [EMAIL PROTECTED]
 
  It would be one thing if SAGE was just a distribution of software,
  with a package management system. But SAGE contains (lots) of code
  that wraps these libraries and provides a unified interface to them.
  I'm fairly confident that this falls under the GPL's concept of
  'linking'.

 That's not exactly clear (at least to me). Anyway - it seems mostly a
 theoretical problem. From a practical point of view, if SAGE used
 different FSF licences for different parts of it, it seems impossible that,
 say, PARI or GAP would sue it for that. Axiom - maybe (just a joke :), but
 it doesn't seem to be a part of SAGE.

I take the copyright and licensing issues with SAGE extremely seriously,
and I am committed to not violating any copyright or license statements
in anything released as part of SAGE.  This is a basic principle of
respect for other open source software authors and projects, to which
I believe the SAGE project should very carefully adhere.

Also, SAGE is three separate but complementary things:
  (1) a distribution of open source math software,
  (2) a new mathematical software library that ties together (1), and
  (3) a way to use most existing mathematical software via a common
  interface.

It would be possible to distribute (1) without very many worries
about licenses.  It is not possible to legally distribute (2) without
carefully respecting what various software licenses say about derived
works.  I sometimes worry about how (3) fits into things, but hopefully
the situation is similar to how one can legally use a program from
bash -- but are there weird legal issues with doing this:
 sage: mathematica(2) + gap(2)
 4

 -- William




[sage-devel] gmp and mpfr performance in sage

2007-07-30 Thread Jonathan Bober

Hello.

While timing the code that I wrote to compute p(n), I noticed that, in
the latest version, it computes p(10^9) in:

- approximately 2m 30s if I link to the gmp and mpfr included in Ubuntu
(gmp version 3.something, I think)

- approximately 3m 30s if I link to the gmp and mpfr included in sage
2.7.1 (built from source)

- approximately 2m 18s if I link to the newest versions of gmp and mpfr
that I just downloaded and compiled.

I am fairly certain that sage uses the newest or almost-newest
versions of gmp and mpfr, which leads me to believe that they must not
be compiled with ideal optimizations turned on. (I did compile sage from
source.)

I haven't looked into this too much, but I would guess the build settings
are such that gmp and mpfr are built for a generic processor, so that
binaries can be compiled and posted for download. If this is the case, I
think that it would be much better if the default build settings were to
use code optimized for the specific processor that the code is built on,
and to have a generic build option to use for compiling binaries for
distribution.

Also, there is an --enable-fat option for gmp's ./configure script that
apparently compiles processor-specific code for all x86 cpus and
selects the right code at run time. I would guess that this is what the
Ubuntu build uses, and if my assumptions about how sage builds gmp are
right, then that would explain why the Ubuntu version of gmp is faster.

Anyway, I could be wrong about the reasons, but there is almost
definitely something that causes my code to run slower when I link it to
the sage build of gmp, and if this problem is widespread, then it
probably causes a lot of slowdown throughout sage.
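
For anyone who would rather reproduce the comparison from inside Sage than
by relinking a standalone program, the call being timed is the one used
later in this thread (the argument is shrunk here so a test run stays short,
and the algorithm='bober' flag follows the usage shown further down; timings
will of course depend on the machine and on which gmp/mpfr Sage was linked
against):

{{{
sage: # times whichever gmp/mpfr this Sage build was linked against
sage: time v = number_of_partitions(10^8, algorithm='bober')
sage: v % 11    # cheap sanity check that different builds agree
}}}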





[sage-devel] Re: 3 feature requests for multivariate polynomials

2007-07-30 Thread Martin Albrecht

Hi Didier,

I hope you don't mind that I have some remarks about your patches.

The f.coefficients() patch is only against MPolynomial_libsingular, but it is
implemented generally enough to be pushed down to MPolynomial, such that
MPolynomial_polydict may benefit from it as well. Also, using f.dict() and
f.exponents() is very slow from an MPolynomial_libsingular point of view. To
make it faster, one could just walk through Singular's internal C list of
monomials directly.
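
A minimal generic sketch of such a pushed-down implementation, assuming only
the MPolynomial interface (in particular that f.exponents() lists the
exponents in the ring's term order, which matches the printed order in the
examples elsewhere in this thread); a fast libsingular version would instead
walk Singular's monomial list in Cython:

{{{
def coefficients(f):
    # coefficients in the same order as f.exponents(), i.e. in term order
    D = f.dict()                      # {exponent tuple: coefficient}
    return [D[e] for e in f.exponents()]
}}}

With didier's lex example, coefficients(23*x^6*y^7 + x^3*y + 6) would then
give [23, 1, 6].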

The R.random_element method, on the other hand, seems to be specialized for 
MPolynomial_polydict only, i.e. you'd lose the speed advantage of 
MPolynomial_libsingular by constructing an MPolynomial_polydict in any case. 
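
A rough sketch of a ring-agnostic alternative (the parameter names and
defaults are assumptions, not part of the actual patch): build the random
polynomial directly in R, so that for a libsingular ring all arithmetic
stays in libsingular and never goes through MPolynomial_polydict.

{{{
import random

def random_element(R, degree=2, terms=5):
    # random polynomial in R with at most `terms` monomials of total
    # degree at most `degree` (fewer if a random coefficient is 0)
    n = R.ngens()
    K = R.base_ring()
    f = R(0)
    for _ in range(terms):
        exps = [0] * n
        for _ in range(random.randint(0, degree)):
            exps[random.randrange(n)] += 1   # random exponent vector
        monomial = R(1)
        for g, e in zip(R.gens(), exps):
            monomial *= g ** e
        f += K.random_element() * monomial
    return f
}}}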

Martin

-- 
name: Martin Albrecht
_pgp: http://pgp.mit.edu:11371/pks/lookup?op=getsearch=0x8EF0DC99
_www: http://www.informatik.uni-bremen.de/~malb
_jab: [EMAIL PROTECTED]





[sage-devel] Re: gmp and mpfr performance in sage

2007-07-30 Thread William Stein

On 7/30/07, Jonathan Bober [EMAIL PROTECTED] wrote:

 While timing the code that I wrote to compute p(n), I noticed that, in
 the latest version, it computes p(10^9) in:

 - approximately 2m 30s if I link to the gmp and mpfr included in Ubuntu
 (gmp version 3.something, I think)

 - approximately 3m 30s if I link to the gmp and mpfr included in sage
 2.7.1 (built from source)

 - approximately 2m 18s if I link to the newest versions of gmp and mpfr
 that I just downloaded and compiled.

Precisely which operating system and processor are you using?

 I am fairly certain that that sage uses the newest or almost newest
 versions of gmp and mpfr, which leads me to believe that they must not
 compiled with ideal optimizations turned on. (I did compile sage from
 source.)

You might want to take a look at the spkg-install scripts in the gmp and
mpfr spkg's.

 I haven't looked into this too much, but I would guess the build setting
 are such that gmp and mpfr are built for a generic processor, so that
 binaries can be compiled and posted for download. If this is the case, I
 think that it would be much better if the default build settings were to
 use code optimized for the specific processor that the code is built on,
 and to have a generic build option to use for compiled binaries for
 distribution.

If so, it is not intentional.  If that were the case, the slowdown
would likely be even more than you observed.

 Also, there is an --enable-fat for the ./configure script for gmp, that
 apparently compiles processor specific code for all x86 cpus, and
 selects the right code at run time. I would guess that this is what the
 Ubuntu build uses, and if my assumptions about how sage builds gmp are
 right, then that would explain why the Ubuntu version of gmp is faster.

 Anyway, I could be wrong about the reasons, but there is almost
 definitely something that causes my code to run slower when I link it to
 the sage build of gmp, and if this problem is widespread, then it
 probably causes a lot of slowdown throughout sage.

There is clearly something seriously wrong based on your above timings,
and I hope we get to the bottom of it.   I'm really glad you pointed
this out and have a clearly reproducible test case.

By the way, on my MacBook Pro 2.33GHz running OS X (and SAGE's GMP),
using your latest partitions code, it does p(10^9) in 1m 35s!!  Wow.

sage: time v=number_of_partitions(10^9)
CPU times: user 94.72 s, sys: 0.27 s, total: 94.99 s
Wall time: 95.32
sage: len(str(v))
35219
sage: v%11
4


By the way -- people wanting to upgrade sage-2.7.2 to have the latest
version of Jonathan's code can just do hg_sage.pull() and
sage -br.

 -- William




[sage-devel] SAGE and ATLAS

2007-07-30 Thread Kate Minola

William,

In the discussion

Problem building linbox on Gentoo Linux (gcc 4.2.0)

you stated:

: There is also http://sagemath.org/SAGEbin/linux/64bit/
: however that binary is not built against ATLAS, whereas if
: you have ATLAS on your system and build SAGE from source
: you'll get a SAGE that is faster at linear algebra.

To get SAGE to build using ATLAS, must ATLAS be installed
in a standard place?  If so, what is that place?  And what version
of ATLAS are you using?

-- 
Kate Minola
University of Maryland, College Park




[sage-devel] Re: 3 feature requests for multivariate polynomials

2007-07-30 Thread Carl Witty

On Jul 27, 9:20 pm, didier  deshommes [EMAIL PROTECTED] wrote:
 Hi there,
 I'm trying to work with multivariate polynomials in SAGE and here are
 3 features that I would like. Assume f is a multi-poly:
  * f.coefficients() for multivariate polynomials. I would like to get
 all the coefficients of f in a list, according to the term order
 attached to its ring (this would basically be the equivalent of the
 univariate case). For example:
 {{{
 sage: # lex ordering
 sage: R.<x,y> = MPolynomialRing(QQ,2,order='lex')
 sage: f=23*x^6*y^7 + x^3*y+6
 sage: f
 23*x^6*y^7 + x^3*y+6
 sage: f.coefficients()
  [23, 1, 6]

 }}}

 Another example where we use revlex ordering:
 {{{
 sage: # revlex ordering
 sage: R.<x,y> = MPolynomialRing(QQ,2,order='revlex')
 sage: f=23*x^6*y^7 + x^3*y+6
 sage: f
  6 + x^3*y + 23*x^6*y^7
 sage: f.coefficients()
 [6,1,23]

 }}}

 Does such a function make sense?

It seems pretty strange to me, mostly because you lose too much
information by eliding zeroes.  As far as I can tell, given
MPolynomialRing(QQ,2,order='lex'), all of the following polynomials:

  3*x^2 + 1
  3*x^5 + x
  3*y^7 + 1
  3*y + 1

would have a coefficients() list of [3, 1].  Is that true, and if so,
is this really a useful function?

Carl





[sage-devel] Re: gmp and mpfr performance in sage

2007-07-30 Thread William Stein

On 7/30/07, David Harvey [EMAIL PROTECTED] wrote:
 Hi, I haven't been following closely, but I wonder if it's a static vs
 shared thing. But usually that shouldn't account for such a large
 difference, so that's probably not the issue.

Do you build and link in the dynamic version of GMP?  There
might be a speed difference between static and dynamic.

William




[sage-devel] Re: 3 feature requests for multivariate polynomials

2007-07-30 Thread Carl Witty

On Jul 30, 12:26 pm, didier deshommes [EMAIL PROTECTED] wrote:
 2007/7/30, Carl Witty [EMAIL PROTECTED]:

  It seems pretty strange to me, mostly because you lose too much
  information by eliding zeroes.  As far as I can tell, given
  MPolynomialRing(QQ,2,order='lex'), all of the following polynomials:

3*x^2 + 1
3*x^5 + x
3*y^7 + 1
3*y + 1

  would have a coefficients() list of [3, 1].  Is that true, and if so,
  is this really a useful function?

 For me it makes sense because I just need a method that iterates over
 the coefficients of a polynomial. Having the ordering respected is a
 little extra that I think helps the user. I could put the zeros in
 there, but here are my own subjective reasons not to:
  - I think of multivariate polynomials as sparse polynomials, so I
 think coefficients() with the 0s omitted is OK.
  - Maple does the same thing :) (I know, I know: not an argument...)
  - Putting these zeros involves generating all the degree exponents,
 which is slower. It can be done, but generating all the coefficients
 this way for something like
 f = x^6*y^12*z^2
 makes a big list made mostly of zeros.

OK, that all makes sense.  I guess I just had a failure of
imagination...in my code (dealing with univariate polynomials), when I
want the coefficients of a polynomial, I expect this coefficient list
(which I extract with list(P)) to have all the information from the
original polynomial, and I couldn't think of any uses for just the non-
zero coefficients.

 Here's a compromise: a parameter (say all_coefficients) could be
 specified to get an explicit list. Thoughts?

This could be done for degrevlex ordering, but usually not for lex
ordering...you'd need to insert an infinite number of zeroes.
Probably it's best to just go with your original specification.
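
To make the degrevlex vs. lex point concrete, here is a rough sketch of what
such an all_coefficients could do for a degrevlex-ordered ring (the name and
behaviour are assumptions about the proposed option, not an existing Sage
method): list the coefficient of every monomial of total degree at most
deg(f), in decreasing degrevlex order, zeros included. For lex the
corresponding list would be infinite, as noted above.

{{{
from itertools import product

def degrevlex_key(e):
    # sorting with reverse=True on this key gives decreasing degrevlex
    # order: higher total degree first, ties broken by the rightmost
    # differing exponent being smaller
    return (sum(e), tuple(-x for x in reversed(e)))

def all_coefficients(f):
    n = f.parent().ngens()
    d = f.degree()                       # total degree
    zero = f.parent().base_ring()(0)
    D = dict((tuple(e), c) for e, c in f.dict().items())
    exps = [e for e in product(range(d + 1), repeat=n) if sum(e) <= d]
    exps.sort(key=degrevlex_key, reverse=True)
    return [D.get(e, zero) for e in exps]
}}}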

Carl





[sage-devel] Re: gmp and mpfr performance in sage

2007-07-30 Thread Jonathan Bober

Short answer: This has occurred to me, and I don't think that it is the
problem.

I'll try to document this carefully and give a more detailed answer
later.

On Mon, 2007-07-30 at 14:25 -0700, William Stein wrote:
 On 7/30/07, David Harvey [EMAIL PROTECTED] wrote:
  Hi, I haven't been following closely, but I wonder if it's a static vs
  shared thing. But usually that shouldn't account for such a large
  difference, so that's probably not the issue.
 
 Do you build and link in the dynamic version of GMP?  There
 might be a speed difference between static and dynamic.
 
 William
 
  
 
 





[sage-devel] Re: computing the number of partitions of an integer

2007-07-30 Thread Bill Hart

Wow!! Excellent work indeed.

In fact on 64 bit X86 systems you could actually use the 128 bit long
doubles to get a little bit more precision (I believe they only
give you 80 bits including exponent and sign, so probably a 64 bit
mantissa).

It would be interesting to see the time for Mathematica on a 32 bit
X86 machine, since this would tell us if that is what they do.

Certainly I think you are right that any remaining optimization would
be in making sure it uses no unnecessary precision. Here is a page
giving information about the remainder of the series after N terms:

http://mathworld.wolfram.com/PartitionFunctionP.html (see eqn 26).

Also in Pari, I noted that the computation of the Psi function could
dramatically slow the whole computation. I was surprised to find it
figured in the runtime. It may be worthwhile checking if this is
slowing things down at all. It should be computed relatively quickly
if implemented correctly. The main issue was again using the minimum
possible precision.

Bill.

On 28 Jul, 23:51, Jonathan Bober [EMAIL PROTECTED] wrote:
 I've been working on a from-scratch implementation (attached). Right now
 it runs faster than Ralf Stephan's part.c, but not as fast as we would
 like. (And it seems to work, although I can't guarantee that right
 now.)

 On my Core Duo 1.8 ghz , it computes p(10^8) in about 17 seconds,
 compared to about 70 seconds for the part.c previously posted. However,
 it took about 270 seconds for p(10^9). (I don't have Mathematica, so I
 can't give that comparison.) On the other hand, I don't know how much
 faster sage.math is than my laptop, but since it is 64 bit, it might run
 the code much faster.

 I think that there is still a good amount of optimization that can be
 done to make this faster. Some things that might a lot help include
 better error estimates for the tail end of the series (if such estimates
 exist) and, in general, going over everything carefully to try to make
 sure that no unneeded precision is ever used. (Once the code decides
 that no more than 53 bits of precision are needed, it switches over to
 computing with C doubles, and the rest of the computation finishes
 instantly.)

 Note that this is C++ code, but it could be switched to pure C quite
 easily, since it doesn't actually use any real C++.

  [attachment: part.cc, 22K]





[sage-devel] Re: gmp and mpfr performance in sage

2007-07-30 Thread Jonathan Bober

Here are some examples of timings with different compilation options.
(I'm using 3*10^8 here because it takes long enough to see the
difference, but is short enough to conveniently run lots of tests.)

After running hg_sage.pull() to get the newest version of the code, I
get:

sage: time a = number_of_partitions(3, algorithm='bober')
CPU times: user 46.81 s, sys: 0.04 s, total: 46.85 s
Wall time: 47.19

Now I copy the code somewhere else and compile it, linking it to the
Ubuntu-installed libraries:

[EMAIL PROTECTED]:~/sage-2.7.1/sage-2.7.1/devel/sage-bober/sage/combinat$ cp 
partitions_c.cc ~/temp/
[EMAIL PROTECTED]:~/sage-2.7.1/sage-2.7.1/devel/sage-bober/sage/combinat$ cd 
~/temp
[EMAIL PROTECTED]:~/temp$ g++ partitions_c.cc -O3 -lgmp -lmpfr
[EMAIL PROTECTED]:~/temp$ ls -l a.out
-rwxr-xr-x 1 bober bober 27529 2007-07-30 20:01 a.out
    -- Look at size of file to make sure we aren't linking statically
[EMAIL PROTECTED]:~/temp$ time ./a.out 3
[...]
real0m36.171s
user0m36.110s
sys 0m0.016s

Now do the same thing, but link statically:

[EMAIL PROTECTED]:~/temp$ g++ partitions_c.cc -O3 -lmpfr -lgmp -static
[EMAIL PROTECTED]:~/temp$ ls -l a.out
-rwxr-xr-x 1 bober bober 1452497 2007-07-30 20:08 a.out
    -- Much bigger binary
[EMAIL PROTECTED]:~/temp$ time ./a.out 3
[...]
real0m34.240s
user0m34.146s
sys 0m0.020s

Now we build with the libraries included in sage. (Note that
sage does not build a shared library version of mpfr, so
this binary is bigger than the first one we built.) The time this
takes to run is comparable to the time it took to run from
within sage, so we know the overhead isn't from sage.

[EMAIL PROTECTED]:~/temp$ g++ partitions_c.cc -O3 
-L/home/bober/sage-2.7.1/sage-2.7.1/local/lib 
-I/home/bober/sage-2.7.1/sage-2.7.1/local/lib -lmpfr -lgmp
[EMAIL PROTECTED]:~/temp$ ls -l a.out
-rwxr-xr-x 1 bober bober 150003 2007-07-30 20:14 a.out
[EMAIL PROTECTED]:~/temp$ time ./a.out 3
[...]
real0m46.675s
user0m46.515s
sys 0m0.116s

Now build a statically linked binary. (It looks like sage
only builds a shared library version of gmp, so I'm not sure if
this really works the way it is supposed to, but it runs at a
similar speed.)

[EMAIL PROTECTED]:~/temp$ g++ partitions_c.cc -O3 
-L/home/bober/sage-2.7.1/sage-2.7.1/local/lib 
-I/home/bober/sage-2.7.1/sage-2.7.1/local/lib -lmpfr -lgmp -static
[EMAIL PROTECTED]:~/temp$ ls -l a.out
-rwxr-xr-x 1 bober bober 1479519 2007-07-30 20:19 a.out
[EMAIL PROTECTED]:~/temp$ time ./a.out 3
[...]
real0m43.592s
user0m43.443s
sys 0m0.012s

Now we build using the gmp 4.2.1 and mpfr 2.2.1 that I just built.

[EMAIL PROTECTED]:~/temp$ g++ partitions_c.cc -O3 -L/home/bober/local/lib/ 
-I/home/bober/local/include/ -lmpfr -lgmp
[EMAIL PROTECTED]:~/temp$ ls -l a.out
-rwxr-xr-x 1 bober bober 114849 2007-07-30 20:26 a.out
[EMAIL PROTECTED]:~/temp$ time ./a.out 3
[...]
real0m35.630s
user0m35.206s
sys 0m0.060s

And one more time, linking statically to those libraries

[EMAIL PROTECTED]:~/temp$ g++ partitions_c.cc -O3 -L/home/bober/local/lib/ 
-I/home/bober/local/include/ -lmpfr -lgmp -static
[EMAIL PROTECTED]:~/temp$ ls -l a.out
-rwxr-xr-x 1 bober bober 1401979 2007-07-30 20:29 a.out
[EMAIL PROTECTED]:~/temp$ time ./a.out 3
[...]
real0m33.924s
user0m33.354s
sys 0m0.052s



On Mon, 2007-07-30 at 17:32 -0400, Jonathan Bober wrote:
 Short answer: This has occurred to me, and I don't think that it is the
 problem.
 
 I'll try to document this carefully and give a more detailed answer
 later.
 
 On Mon, 2007-07-30 at 14:25 -0700, William Stein wrote:
  On 7/30/07, David Harvey [EMAIL PROTECTED] wrote:
   Hi, I haven't been following closely, but I wonder if it's a static vs
   shared thing. But usually that shouldn't account for such a large
   difference, so that's probably not the issue.
  
  Do you build and link in the dynamic version of GMP?  There
  might be a speed difference between static and dynamic.
  
  William
  
   
  
  
 
 
  
 
 





[sage-devel] Re: computing the number of partitions of an integer

2007-07-30 Thread Bill Hart



On 31 Jul, 01:24, Bill Hart [EMAIL PROTECTED] wrote:
 It would be interesting to see the time for Mathematica on a 32 bit
 X86 machine, since this would tell us if that is what they do.

Doh! I should have read William's timings more carefully. He gives the
times for a 32 bit machine. So I guess Mathematica doesn't use 80 bit
long doubles on a 64 bit X86 then. Still it is an option for us.

Once there is a stable version of the new code which seems to give
correct results, I'll take a closer look and see if I can spot any
obvious speed improvements. I can't promise anything. I suspect my
fundamental mistake was not realising that you still needed quite a
bit of multi-precision code for quite a few terms. In fact now that I
think about it, I don't see why I thought you could compute all the
s(h,k)'s using single limb arithmetic.

It is the multi-precision stuff that is slowing it down, no doubt.
mpfr has a 15x overhead over ordinary double precision, even at 53
bits, or so I have read. I guess there is a lot of branching to ensure
the accuracy of arithmetic. Whilst that is needed for many
applications, it probably isn't here. Sadly there don't seem to be any
decent open source alternatives for when that accuracy is not
required. I have a similar problem in some code I am currently
writing. I need precisely quad precision, so mpfr is out of the
question.

Bill.





[sage-devel] Re: computing the number of partitions of an integer

2007-07-30 Thread didier deshommes

2007/7/30, Bill Hart [EMAIL PROTECTED]:
 I have a similar problem in some code I am currently
 writing. I need precisely quad precision, so mpfr is out of the
 question.

Hi Bill,
You might want to consider Yozo Hida's quaddouble C/C++ package here:
http://www.cs.berkeley.edu/~yozo/

There is also a wrapper for it in SAGE.
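
For context, quad-double represents a number as an unevaluated sum of four
machine doubles, giving roughly a 212-bit mantissa (about 63 decimal digits)
without the full generality of mpfr. A hedged sketch of what using the Sage
wrapper might look like (RQDF is a guess at the field's name and is not
verified here, so treat everything below as an assumption to check against
the reference manual):

{{{
# Sketch only: RQDF is an assumed name for Sage's quaddouble wrapper.
R = RQDF                # quad-double reals: four doubles, ~63 digits
a = R(2).sqrt()
err = a*a - R(2)        # should be zero to quad-double accuracy
}}}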

didier




[sage-devel] Re: gmp and mpfr performance in sage

2007-07-30 Thread David Harvey

Did you compile the Ubuntu GMP library yourself, or does it come as a 
packaged binary? (Sorry, I don't know anything about Ubuntu.)

If you compiled them yourself, what is the CFLAGS string that GMP's 
configure program produces? Is it the same as what the GMP inside SAGE 
produces? In fact it would be interesting to see some other settings, 
like which mpn subdirectory is activated, etc.

This is a very puzzling problem, I hope we can get to the bottom of it.

david

On Jul 30, 2007, at 5:36 PM, Jonathan Bober wrote:


 Here are some examples of timings with different compilation options.
 (I'm using 3*10^8) here because it takes long enough to see the
 difference, but short enough to conveniently run lots of tests.

 After running hg_sage.pull() to get the newest version, of the code, I
 get:

 sage: time a = number_of_partitions(3, algorithm='bober')
 CPU times: user 46.81 s, sys: 0.04 s, total: 46.85 s
 Wall time: 47.19

 Now I copy the code somewhere else and compile it, linking it to the
 Ubuntu-installed libraries:

 [EMAIL PROTECTED]:~/sage-2.7.1/sage-2.7.1/devel/sage-bober/sage/combinat$ cp 
 partitions_c.cc ~/temp/
 [EMAIL PROTECTED]:~/sage-2.7.1/sage-2.7.1/devel/sage-bober/sage/combinat$ cd 
 ~/temp
 [EMAIL PROTECTED]:~/temp$ g++ partitions_c.cc -O3 -lgmp -lmpfr
 [EMAIL PROTECTED]:~/temp$ ls -l a.out
 -rwxr-xr-x 1 bober bober 27529 2007-07-30 20:01 a.out -- Look at size 
 of file to make sure we aren't linking statically
 [EMAIL PROTECTED]:~/temp$ time ./a.out 3
 [...]
 real0m36.171s
 user0m36.110s
 sys 0m0.016s

 Now do the same thing, but link statically:

 [EMAIL PROTECTED]:~/temp$ g++ partitions_c.cc -O3 -lmpfr -lgmp -static
 [EMAIL PROTECTED]:~/temp$ ls -l a.out
 -rwxr-xr-x 1 bober bober 1452497 2007-07-30 20:08 a.out-- Much 
 bigger binary
 [EMAIL PROTECTED]:~/temp$ time ./a.out 3
 [...]
 real0m34.240s
 user0m34.146s
 sys 0m0.020s

 Now we build with the libraries included in sage. (Note that
 sage does not build a shared library version of mprf, so
 this binary is bigger than first we build.) The time this
 takes to run is comparable to the time it took to run from
 within sage, so we know the overhead isn't from sage.

 [EMAIL PROTECTED]:~/temp$ g++ partitions_c.cc -O3 
 -L/home/bober/sage-2.7.1/sage-2.7.1/local/lib 
 -I/home/bober/sage-2.7.1/sage-2.7.1/local/lib -lmpfr -lgmp
 [EMAIL PROTECTED]:~/temp$ ls -l a.out
 -rwxr-xr-x 1 bober bober 150003 2007-07-30 20:14 a.out
 [EMAIL PROTECTED]:~/temp$ time ./a.out 3
 [...]
 real0m46.675s
 user0m46.515s
 sys 0m0.116s

 Now build a static version of the library. (It looks like sage
 only builds a shared library version of gmp, so I'm not sure if
 this really works the way it is supposed to, but it runs at a
 similar speed.)

 [EMAIL PROTECTED]:~/temp$ g++ partitions_c.cc -O3 
 -L/home/bober/sage-2.7.1/sage-2.7.1/local/lib 
 -I/home/bober/sage-2.7.1/sage-2.7.1/local/lib -lmpfr -lgmp -static
 [EMAIL PROTECTED]:~/temp$ ls -l a.out
 -rwxr-xr-x 1 bober bober 1479519 2007-07-30 20:19 a.out
 [EMAIL PROTECTED]:~/temp$ time ./a.out 3
 [...]
 real0m43.592s
 user0m43.443s
 sys 0m0.012s

 Now we build using the gmp 4.2.1 and mfpr 2.2.1 that I just built.

 [EMAIL PROTECTED]:~/temp$ g++ partitions_c.cc -O3 -L/home/bober/local/lib/ 
 -I/home/bober/local/include/ -lmpfr -lgmp
 [EMAIL PROTECTED]:~/temp$ ls -l a.out
 -rwxr-xr-x 1 bober bober 114849 2007-07-30 20:26 a.out
 [EMAIL PROTECTED]:~/temp$ time ./a.out 3
 [...]
 real0m35.630s
 user0m35.206s
 sys 0m0.060s

 And one more time, linking statically to those libraries

 [EMAIL PROTECTED]:~/temp$ g++ partitions_c.cc -O3 -L/home/bober/local/lib/ 
 -I/home/bober/local/include/ -lmpfr -lgmp -static
 [EMAIL PROTECTED]:~/temp$ ls -l a.out
 -rwxr-xr-x 1 bober bober 1401979 2007-07-30 20:29 a.out
 [EMAIL PROTECTED]:~/temp$ time ./a.out 3
 [...]
 real0m33.924s
 user0m33.354s
 sys 0m0.052s






[sage-devel] Re: computing the number of partitions of an integer

2007-07-30 Thread Bill Hart

Hi Didier,

Thanks. I also just found:

http://www.nongnu.org/hpalib/

which fascinates me. Has anyone used it?

Bill.


On 31 Jul, 01:46, didier deshommes [EMAIL PROTECTED] wrote:
 2007/7/30, Bill Hart [EMAIL PROTECTED]:

  I have a similar problem in some code I am currently
  writing. I need precisely quad precision, so mpfr is out of the
  question.

 Hi Bill,
 You might want to consider Yozo Hida's quaddouble C/C++ package 
 here:http://www.cs.berkeley.edu/~yozo/

 There is also a wrapper for it in SAGE.

 didier





[sage-devel] Re: gmp and mpfr performance in sage

2007-07-30 Thread Jonathan Bober

I didn't compile the Ubuntu version myself, but I did compile the
versions whose timings are listed last in the email.

I don't want to attach all of this to the list, so see

http://www.math.lsa.umich.edu/~bober/sage_stuff/

for the output from configure and make for these builds of gmp and mpfr,
and also for the install.log file from my sage installation (I also
split off the relevant gmp and mpfr parts of the install log.)

The compiler options seem to basically be the same, except for the
different targets. I don't know what is going on.

On Mon, 2007-07-30 at 17:47 -0700, David Harvey wrote:
 Did you compile the ubuntu GMP library yourself, or do they come as 
 packaged binaries? (sorry I don't know anything about ubuntu)
 
 If you compiled them yourself, what is the CFLAGS string that GMP's 
 configure program produces? Is it the same as what the GMP inside SAGE 
 produces? In fact it would be interesting to see some other settings, 
 like which mpn subdirectory is activated, etc.
 
 This is a very puzzling problem, I hope we can get to the bottom of it.
 
 david
 
 On Jul 30, 2007, at 5:36 PM, Jonathan Bober wrote:
 
 
  Here are some examples of timings with different compilation options.
  (I'm using 3*10^8) here because it takes long enough to see the
  difference, but short enough to conveniently run lots of tests.
 
  After running hg_sage.pull() to get the newest version, of the code, I
  get:
 
  sage: time a = number_of_partitions(3, algorithm='bober')
  CPU times: user 46.81 s, sys: 0.04 s, total: 46.85 s
  Wall time: 47.19
 
  Now I copy the code somewhere else and compile it, linking it to the
  Ubuntu-installed libraries:
 
  [EMAIL PROTECTED]:~/sage-2.7.1/sage-2.7.1/devel/sage-bober/sage/combinat$ 
  cp 
  partitions_c.cc ~/temp/
  [EMAIL PROTECTED]:~/sage-2.7.1/sage-2.7.1/devel/sage-bober/sage/combinat$ 
  cd 
  ~/temp
  [EMAIL PROTECTED]:~/temp$ g++ partitions_c.cc -O3 -lgmp -lmpfr
  [EMAIL PROTECTED]:~/temp$ ls -l a.out
  -rwxr-xr-x 1 bober bober 27529 2007-07-30 20:01 a.out   -- Look at 
  size 
  of file to make sure we aren't linking statically
  [EMAIL PROTECTED]:~/temp$ time ./a.out 3
  [...]
  real0m36.171s
  user0m36.110s
  sys 0m0.016s
 
  Now do the same thing, but link statically:
 
  [EMAIL PROTECTED]:~/temp$ g++ partitions_c.cc -O3 -lmpfr -lgmp -static
  [EMAIL PROTECTED]:~/temp$ ls -l a.out
  -rwxr-xr-x 1 bober bober 1452497 2007-07-30 20:08 a.out-- Much 
  bigger binary
  [EMAIL PROTECTED]:~/temp$ time ./a.out 3
  [...]
  real0m34.240s
  user0m34.146s
  sys 0m0.020s
 
  Now we build with the libraries included in sage. (Note that
  sage does not build a shared library version of mprf, so
  this binary is bigger than first we build.) The time this
  takes to run is comparable to the time it took to run from
  within sage, so we know the overhead isn't from sage.
 
  [EMAIL PROTECTED]:~/temp$ g++ partitions_c.cc -O3 
  -L/home/bober/sage-2.7.1/sage-2.7.1/local/lib 
  -I/home/bober/sage-2.7.1/sage-2.7.1/local/lib -lmpfr -lgmp
  [EMAIL PROTECTED]:~/temp$ ls -l a.out
  -rwxr-xr-x 1 bober bober 150003 2007-07-30 20:14 a.out
  [EMAIL PROTECTED]:~/temp$ time ./a.out 3
  [...]
  real0m46.675s
  user0m46.515s
  sys 0m0.116s
 
  Now build a static version of the library. (It looks like sage
  only builds a shared library version of gmp, so I'm not sure if
  this really works the way it is supposed to, but it runs at a
  similar speed.)
 
  [EMAIL PROTECTED]:~/temp$ g++ partitions_c.cc -O3 
  -L/home/bober/sage-2.7.1/sage-2.7.1/local/lib 
  -I/home/bober/sage-2.7.1/sage-2.7.1/local/lib -lmpfr -lgmp -static
  [EMAIL PROTECTED]:~/temp$ ls -l a.out
  -rwxr-xr-x 1 bober bober 1479519 2007-07-30 20:19 a.out
  [EMAIL PROTECTED]:~/temp$ time ./a.out 3
  [...]
  real0m43.592s
  user0m43.443s
  sys 0m0.012s
 
  Now we build using the gmp 4.2.1 and mfpr 2.2.1 that I just built.
 
  [EMAIL PROTECTED]:~/temp$ g++ partitions_c.cc -O3 -L/home/bober/local/lib/ 
  -I/home/bober/local/include/ -lmpfr -lgmp
  [EMAIL PROTECTED]:~/temp$ ls -l a.out
  -rwxr-xr-x 1 bober bober 114849 2007-07-30 20:26 a.out
  [EMAIL PROTECTED]:~/temp$ time ./a.out 3
  [...]
  real0m35.630s
  user0m35.206s
  sys 0m0.060s
 
  And one more time, linking statically to those libraries
 
  [EMAIL PROTECTED]:~/temp$ g++ partitions_c.cc -O3 -L/home/bober/local/lib/ 
  -I/home/bober/local/include/ -lmpfr -lgmp -static
  [EMAIL PROTECTED]:~/temp$ ls -l a.out
  -rwxr-xr-x 1 bober bober 1401979 2007-07-30 20:29 a.out
  [EMAIL PROTECTED]:~/temp$ time ./a.out 3
  [...]
  real0m33.924s
  user0m33.354s
  sys 0m0.052s
 
 
 
  
 
 

