Re: [sage-devel] multithreading performance issues

2016-10-06 Thread Jonathan Bober
I understand the reasons why OpenBLAS shouldn't multithread everything,
and why it shouldn't necessarily use all available CPU cores when it does
multithread, but the point is: it currently uses either all cores or
one, and it sometimes chooses multithreading even when using 2 threads
doesn't really seem to give me a benefit. So I guess there are two
points to consider. One is a "public service announcement" that if things
don't change, then in Sage 7.4 users might want to strongly consider
setting OpenBLAS to be single threaded. The other is that we may want to
reconsider the OpenBLAS defaults in Sage.

One possibility might be to expose the openblas_set_num_threads function at
the top level, and keep the default at 1. Another possibility is to build
OpenBLAS single threaded by default and force someone compiling Sage to
pass some option for multithreaded OpenBLAS; that way, at least, only
"advanced" users will run into and have to deal with the sub-par
multithreading behavior.

On Wed, Oct 5, 2016 at 10:34 AM, Clement Pernet <clement.per...@gmail.com>
wrote:

> To follow up on Jean-Pierre summary of the situation:
>
> The current version of fflas-ffpack in sage (v2.2.2) uses the BLAS
> provided as is. Running it with a multithreaded BLAS may result in slower
> code than with a single-threaded BLAS. This is very likely due to memory
> transfer and coherence problems.
>
> More generally, we strongly suggest using a single-threaded BLAS and
> letting fflas-ffpack handle the parallelization. This is common practice,
> for example, with parallel versions of LAPACK.
>
> Therefore, after the discussion at https://trac.sagemath.org/ticket/21323 we
> have decided to give fflas-ffpack the ability to force the number of
> threads that OpenBLAS can use at runtime. In this context we will force it
> to 1.
> This is available upstream and I plan to update Sage's fflas-ffpack
> when we release v2.3.0.
>
> Clément
>
>
> Le 05/10/2016 à 11:24, Jean-Pierre Flori a écrit :
>
>> Currently OpenBlas does what it wants for multithreading.
>> We hesitated to disable it but preferred to wait and think about it:
>> see https://trac.sagemath.org/ticket/21323.
>>
>> You can still influence its use of threads by setting OPENBLAS_NUM_THREADS.
>> See the trac ticket, just note that this is not Sage specific.
>> And as you discovered, it seems it is also influenced by
>> OMP_NUM_THREADS...
>>
>> On Wednesday, October 5, 2016 at 9:28:23 AM UTC+2, tdumont wrote:
>>
>> What is the size of the matrix you use?
>> Whatever you do, OpenMP in BLAS is interesting only if you compute
>> with
>> large matrices.
>> If your computations are embedded in an @parallel and launch n
>> processes, make sure that your OMP_NUM_THREADS is less than or equal to
>> ncores/n.
>>
>> My experience (I am doing numerical computations) is that there are
>> very few cases where using OpenMP in BLAS libraries is interesting.
>> Parallelism should generally be sought at a higher level.
>>
>> One of the interests of multithreaded BLAS is for hardware vendors: with
>> Intel's MKL BLAS, you can obtain the maximum possible performance of
>> the machine when you use DGEMM (i.e. matrix-matrix product), due to the
>> high arithmetic intensity of matrix-matrix products. On my 2x8-core
>> Sandy Bridge at 2.7 GHz, I have obtained more than 300 gigaflops, but
>> only with matrices of size > 1000! And this is only true for DGEMM.
>>
>> t.d.
>>
>> Le 04/10/2016 à 20:26, Jonathan Bober a écrit :
>> > See the following timings: If I start Sage with OMP_NUM_THREADS=1, a
>> > particular computation takes 1.52 cpu seconds and 1.56 wall seconds.
>> >
>> > The same computation without OMP_NUM_THREADS set takes 12.8 cpu
>> seconds
>> > and 1.69 wall seconds. This is particularly devastating when I'm
>> running
>> > with @parallel to use all of my cpu cores.
>> >
>> > My guess is that this is Linbox related, since these computations do
>> > some exact linear algebra, and Linbox can do some multithreading,
>> which
>> > perhaps uses OpenMP.
>> >
>> > jb12407@lmfdb1:~$ OMP_NUM_THREADS=1 sage
>> > [...]
>> > SageMath version 7.4.beta6, Release Date: 2016-09-24
>> > [...]
>> > Warning: this is a prerelease version, and it may be unstable.
>> > [...]
>> > sage: %time M = ModularSymbols(5113, 2, -1)
>> > CPU times: user 509 ms, sys: 21 ms, total: 530 ms

Re: [sage-devel] multithreading performance issues

2016-10-04 Thread Jonathan Bober
I've done a few more tests finding bad performance (and some decent
improvements with a few threads). Also, I double checked that the default
behavior for me seems to be the same as setting OMP_NUM_THREADS=64. I
wonder if others who have a recent development version of Sage see similar
results. I'm using OpenBLAS 0.2.19, which is now the subject of
https://trac.sagemath.org/ticket/21627. (I suppose I ought to try this on
my laptop.)

(These tests are a bit messed up because I neglected to exclude the time
spent generating the random matrices.)

I don't know what these say about what sensible defaults should be set, but
I think I'm adding OMP_NUM_THREADS=1 to my bashrc, since I don't think I
use OpenMP for anything else.

Computing eigenvalues:

jb12407@lmfdb1:~/test$ cat omptest.py
import os
import sys

size = sys.argv[1]

for n in range(1, 65):
    os.system('OMP_NUM_THREADS={n} sage -c "import time; m = random_matrix(RDF,{size}); s = time.time(); e = m.eigenvalues(); print({n}, time.clock(), time.time() - s)"'.format(n=n, size=size))

jb12407@lmfdb1:~/test$ python omptest.py 1000
(1, 8.28, 5.560720920562744)
(2, 13.71, 5.4581358432769775)
(3, 18.12, 5.155802011489868)
(4, 24.12, 5.381717205047607)
(5, 29.33, 5.332219123840332)
(6, 34.29, 5.307264089584351)
(7, 38.93, 5.198814153671265)
(8, 44.84, 5.271445989608765)
(9, 51.63, 5.453015089035034)
(10, 57.66, 5.515641927719116)
[...]
(61, 422.21, 6.9586780071258545)
(62, 419.21, 6.779545068740845)
(63, 427.15, 6.788045167922974)
(64, 448.9, 7.169056177139282)

Matrix multiplication:

jb12407@lmfdb1:~/test$ cat omptest2.py
import os
import sys

size = sys.argv[1]

for n in range(1, 65):
    os.system('OMP_NUM_THREADS={n} sage -c "import time; m = random_matrix(RDF,{size}); s = time.time(); m2 = m^30; print({n}, time.clock(), time.time() - s)"'.format(n=n, size=size))

(1, 3.52, 0.7552590370178223)
(2, 3.66, 0.41131114959716797)
(3, 3.82, 0.31482601165771484)
(4, 4.02, 0.2474370002746582)
(5, 4.2, 0.21387481689453125)
(6, 4.47, 0.19179105758666992)
(7, 4.53, 0.17720603942871094)
(8, 4.89, 0.17597389221191406)
(9, 5.15, 0.17040705680847168)
(10, 5.26, 0.17317700386047363)
[...]
(60, 18.88, 0.17498207092285156)
(61, 18.49, 0.1627058982849121)
(62, 20.46, 0.19742107391357422)
(63, 20.07, 0.18258190155029297)
(64, 20.76, 0.18776202201843262)

Matrix multiplication with a bigger matrix:

jb12407@lmfdb1:~/test$ python omptest2.py 5000
(1, 99.97, 90.38103914260864)
(2, 101.71, 46.28921890258789)
(3, 103.96, 31.841789960861206)
(4, 107.98, 24.800616025924683)
(5, 108.59, 20.051285982131958)
(6, 112.46, 17.170204877853394)
(7, 116.25, 15.497264862060547)
(8, 125.38, 14.533391952514648)
(9, 130.57, 13.497469902038574)
(10, 123.67, 11.505426168441772)
[...]
(60, 779.12, 12.92886209487915)
(61, 875.74, 14.310442924499512)
(62, 869.82, 14.241307973861694)
(63, 813.99, 13.089143991470337)
(64, 728.52, 11.443121910095215)
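To address the caveat above about matrix-generation time polluting the numbers, the timed region can be isolated so setup cost is excluded from both the CPU and wall measurements. A hedged, plain-Python sketch of the pattern (the sort of a list stands in as a toy workload, since the Sage calls aren't needed to illustrate it):

```python
# Sketch: time only the kernel, not the setup. process_time() counts CPU
# time across all threads of this process, so a threaded BLAS would show
# up as cpu >> wall; time() measures wall-clock time.
import time

def timed(setup, kernel):
    data = setup()  # setup cost deliberately excluded from the timings
    cpu0, wall0 = time.process_time(), time.time()
    kernel(data)
    return time.process_time() - cpu0, time.time() - wall0

# toy stand-in for the random_matrix / eigenvalues workload:
cpu, wall = timed(lambda: list(range(200000)),
                  lambda xs: sorted(xs, reverse=True))
```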



On Tue, Oct 4, 2016 at 9:40 PM, Jonathan Bober <jwbo...@gmail.com> wrote:

> On Tue, Oct 4, 2016 at 9:03 PM, William Stein <wst...@gmail.com> wrote:
>
>> On Tue, Oct 4, 2016 at 12:58 PM, Jonathan Bober <jwbo...@gmail.com>
>> wrote:
>> > No, in 7.3 Sage isn't multithreading in this example:
>> >
>> > jb12407@lmfdb1:~$ sage73
>> > sage: %time M = ModularSymbols(5113, 2, -1)
>> > CPU times: user 599 ms, sys: 25 ms, total: 624 ms
>> > Wall time: 612 ms
>> > sage: %time S = M.cuspidal_subspace().new_subspace()
>> > CPU times: user 1.32 s, sys: 89 ms, total: 1.41 s
>> > Wall time: 1.44 s
>> >
>> > I guess the issue may be OpenBLAS rather than Linbox, then, since LinBox
>> > uses BLAS. I misread https://trac.sagemath.org/ticket/21323, which I
>> now
>> > realize says "LinBox parallel routines (not yet exposed in SageMath)",
>> when
>> > I thought that the cause may be LinBox. My Sage 7.3. uses the system
>> ATLAS,
>> > and I don't know whether that might sometimes use multithreading.
>>
>> If you care about performance you should build ATLAS from source.  You
>> can (somehow... I can't remember how) specify how many cores it will
>> use, and it will greatly benefit from multithreading.
>>
>>
> Yes, I probably should. But for linear algebra I've generally been happy
> with "reasonable" performance. (And the system ATLAS, though not
> specifically tuned, is at least compiled for sse3 and x86_64.)
>
> Also, setting the number of threads isn't so easy. In this case I want to
> minimize cpu time, rather than wall time, because I can run 64 processes in
> parallel. Single threading should always do that, and in some problems some
> extra threads don't hurt, but here they definitely do.
>
> Possibly part of the issue here is that OpenBLAS seems to use either 1
> thread or #CPU threads, which is 64 in this case.

Re: [sage-devel] multithreading performance issues

2016-10-04 Thread Jonathan Bober
On Tue, Oct 4, 2016 at 9:03 PM, William Stein <wst...@gmail.com> wrote:

> On Tue, Oct 4, 2016 at 12:58 PM, Jonathan Bober <jwbo...@gmail.com> wrote:
> > No, in 7.3 Sage isn't multithreading in this example:
> >
> > jb12407@lmfdb1:~$ sage73
> > sage: %time M = ModularSymbols(5113, 2, -1)
> > CPU times: user 599 ms, sys: 25 ms, total: 624 ms
> > Wall time: 612 ms
> > sage: %time S = M.cuspidal_subspace().new_subspace()
> > CPU times: user 1.32 s, sys: 89 ms, total: 1.41 s
> > Wall time: 1.44 s
> >
> > I guess the issue may be OpenBLAS rather than Linbox, then, since LinBox
> > uses BLAS. I misread https://trac.sagemath.org/ticket/21323, which I now
> > realize says "LinBox parallel routines (not yet exposed in SageMath)",
> when
> > I thought that the cause may be LinBox. My Sage 7.3. uses the system
> ATLAS,
> > and I don't know whether that might sometimes use multithreading.
>
> If you care about performance you should build ATLAS from source.  You
> can (somehow... I can't remember how) specify how many cores it will
> use, and it will greatly benefit from multithreading.
>
>
Yes, I probably should. But for linear algebra I've generally been happy
with "reasonable" performance. (And the system ATLAS, though not
specifically tuned, is at least compiled for sse3 and x86_64.)

Also, setting the number of threads isn't so easy. In this case I want to
minimize cpu time, rather than wall time, because I can run 64 processes in
parallel. Single threading should always do that, and in some problems some
extra threads don't hurt, but here they definitely do.

Possibly part of the issue here is that OpenBLAS seems to use either 1
thread or #CPU threads, which is 64 in this case. In my case using 2
threads might improve the wall time a trifle for a single process, but is
bad for total performance.
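One workaround for the all-or-one behavior when combining @parallel with a threaded BLAS is to cap the thread count per worker process through the environment. A sketch of the idea (names like nworkers are illustrative, not a Sage API):

```python
# Sketch: give each of n workers at most ncores/n BLAS/OpenMP threads so
# that the workers together never oversubscribe the machine.
import os
import subprocess
import sys

ncores = os.cpu_count() or 1
nworkers = 8                             # illustrative choice
per_worker = max(1, ncores // nworkers)  # threads allotted to each worker

env = dict(os.environ,
           OPENBLAS_NUM_THREADS=str(per_worker),
           OMP_NUM_THREADS=str(per_worker))
# each child process inherits the capped environment:
out = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['OMP_NUM_THREADS'])"],
    env=env, capture_output=True, text=True)
```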


> >
> >
> > On Tue, Oct 4, 2016 at 8:06 PM, Francois Bissey
> > <francois.bis...@canterbury.ac.nz> wrote:
> >>
> >> openmp is disabled in linbox/ffpack-fflas so it must come from somewhere
> >> else.
> >> Only R seems to be linked to libgomp (openmp) on my vanilla install.
> >> Curiosity: do you observe the same behaviour in 7.3?
> >>
> >> François
> >>
> >> > On 5/10/2016, at 07:26, Jonathan Bober <jwbo...@gmail.com> wrote:
> >> >
> >> > See the following timings: If I start Sage with OMP_NUM_THREADS=1, a
> >> > particular computation takes 1.52 cpu seconds and 1.56 wall seconds.
> >> >
> >> > The same computation without OMP_NUM_THREADS set takes 12.8 cpu
> seconds
> >> > and 1.69 wall seconds. This is particularly devastating when I'm
> running
> >> > with @parallel to use all of my cpu cores.
> >> >
> >> > My guess is that this is Linbox related, since these computations do
> >> > some exact linear algebra, and Linbox can do some multithreading,
> which
> >> > perhaps uses OpenMP.
> >> >
> >> > jb12407@lmfdb1:~$ OMP_NUM_THREADS=1 sage
> >> > [...]
> >> > SageMath version 7.4.beta6, Release Date: 2016-09-24
> >> > [...]
> >> > Warning: this is a prerelease version, and it may be unstable.
> >> > [...]
> >> > sage: %time M = ModularSymbols(5113, 2, -1)
> >> > CPU times: user 509 ms, sys: 21 ms, total: 530 ms
> >> > Wall time: 530 ms
> >> > sage: %time S = M.cuspidal_subspace().new_subspace()
> >> > CPU times: user 1.42 s, sys: 97 ms, total: 1.52 s
> >> > Wall time: 1.56 s
> >> >
> >> >
> >> > jb12407@lmfdb1:~$ sage
> >> > [...]
> >> > SageMath version 7.4.beta6, Release Date: 2016-09-24
> >> > [...]
> >> > sage: %time M = ModularSymbols(5113, 2, -1)
> >> > CPU times: user 570 ms, sys: 18 ms, total: 588 ms
> >> > Wall time: 591 ms
> >> > sage: %time S = M.cuspidal_subspace().new_subspace()
> >> > CPU times: user 3.76 s, sys: 9.01 s, total: 12.8 s
> >> > Wall time: 1.69 s
> >> >
> >> >
> >> > --
> >> > You received this message because you are subscribed to the Google
> >> > Groups "sage-devel" group.
> >> > To unsubscribe from this group and stop receiving emails from it, send
> >> > an email to sage-devel+unsubscr...@googlegroups.com.
> >> > To post to this group, send email to sage-devel@googlegroups.com.
> >> > Visit this group at https://groups.google.com/group/sage-devel.
> >> > For more options, visit https://groups.google.com/d/optout.

Re: [sage-devel] multithreading performance issues

2016-10-04 Thread Jonathan Bober
No, in 7.3 Sage isn't multithreading in this example:

jb12407@lmfdb1:~$ sage73
sage: %time M = ModularSymbols(5113, 2, -1)
CPU times: user 599 ms, sys: 25 ms, total: 624 ms
Wall time: 612 ms
sage: %time S = M.cuspidal_subspace().new_subspace()
CPU times: user 1.32 s, sys: 89 ms, total: 1.41 s
Wall time: 1.44 s

I guess the issue may be OpenBLAS rather than Linbox, then, since LinBox
uses BLAS. I misread https://trac.sagemath.org/ticket/21323, which I now
realize says "LinBox parallel routines (not yet exposed in SageMath)", when
I thought that the cause may be LinBox. My Sage 7.3 uses the system ATLAS,
and I don't know whether that might sometimes use multithreading.


On Tue, Oct 4, 2016 at 8:06 PM, Francois Bissey <
francois.bis...@canterbury.ac.nz> wrote:

> openmp is disabled in linbox/ffpack-fflas so it must come from somewhere
> else.
> Only R seems to be linked to libgomp (openmp) on my vanilla install.
> Curiosity: do you observe the same behaviour in 7.3?
>
> François
>
> > On 5/10/2016, at 07:26, Jonathan Bober <jwbo...@gmail.com> wrote:
> >
> > See the following timings: If I start Sage with OMP_NUM_THREADS=1, a
> particular computation takes 1.52 cpu seconds and 1.56 wall seconds.
> >
> > The same computation without OMP_NUM_THREADS set takes 12.8 cpu seconds
> and 1.69 wall seconds. This is particularly devastating when I'm running
> with @parallel to use all of my cpu cores.
> >
> > My guess is that this is Linbox related, since these computations do
> some exact linear algebra, and Linbox can do some multithreading, which
> perhaps uses OpenMP.
> >
> > jb12407@lmfdb1:~$ OMP_NUM_THREADS=1 sage
> > [...]
> > SageMath version 7.4.beta6, Release Date: 2016-09-24
> > [...]
> > Warning: this is a prerelease version, and it may be unstable.
> > [...]
> > sage: %time M = ModularSymbols(5113, 2, -1)
> > CPU times: user 509 ms, sys: 21 ms, total: 530 ms
> > Wall time: 530 ms
> > sage: %time S = M.cuspidal_subspace().new_subspace()
> > CPU times: user 1.42 s, sys: 97 ms, total: 1.52 s
> > Wall time: 1.56 s
> >
> >
> > jb12407@lmfdb1:~$ sage
> > [...]
> > SageMath version 7.4.beta6, Release Date: 2016-09-24
> > [...]
> > sage: %time M = ModularSymbols(5113, 2, -1)
> > CPU times: user 570 ms, sys: 18 ms, total: 588 ms
> > Wall time: 591 ms
> > sage: %time S = M.cuspidal_subspace().new_subspace()
> > CPU times: user 3.76 s, sys: 9.01 s, total: 12.8 s
> > Wall time: 1.69 s
> >
> >
>



[sage-devel] multithreading performance issues

2016-10-04 Thread Jonathan Bober
See the following timings: If I start Sage with OMP_NUM_THREADS=1, a
particular computation takes 1.52 cpu seconds and 1.56 wall seconds.

The same computation without OMP_NUM_THREADS set takes 12.8 cpu seconds and
1.69 wall seconds. This is particularly devastating when I'm running with
@parallel to use all of my cpu cores.

My guess is that this is Linbox related, since these computations do some
exact linear algebra, and Linbox can do some multithreading, which perhaps
uses OpenMP.

jb12407@lmfdb1:~$ OMP_NUM_THREADS=1 sage
[...]
SageMath version 7.4.beta6, Release Date: 2016-09-24
[...]
Warning: this is a prerelease version, and it may be unstable.
[...]
sage: %time M = ModularSymbols(5113, 2, -1)
CPU times: user 509 ms, sys: 21 ms, total: 530 ms
Wall time: 530 ms
sage: %time S = M.cuspidal_subspace().new_subspace()
CPU times: user 1.42 s, sys: 97 ms, total: 1.52 s
Wall time: 1.56 s


jb12407@lmfdb1:~$ sage
[...]
SageMath version 7.4.beta6, Release Date: 2016-09-24
[...]
sage: %time M = ModularSymbols(5113, 2, -1)
CPU times: user 570 ms, sys: 18 ms, total: 588 ms
Wall time: 591 ms
sage: %time S = M.cuspidal_subspace().new_subspace()
CPU times: user 3.76 s, sys: 9.01 s, total: 12.8 s
Wall time: 1.69 s



Re: [sage-devel] Re: openblas segfault?

2016-09-30 Thread Jonathan Bober
I got around to trying the latest OpenBLAS as Jean-Pierre Flori suggested,
and the segfaults went away. (And all tests pass.) I guess that from my
perspective this means I should open an "update to OpenBLAS 0.2.19" ticket.
Or maybe it should be an "Update OpenBLAS to some working version" ticket.
I don't know much about OpenBLAS, so I don't really know whether 0.2.19 will
work for me but be broken for someone else.

(OpenBLAS 0.2.15, the current version in Sage, is from 11 months ago.)

On Tue, Sep 27, 2016 at 9:48 AM, Jean-Pierre Flori <jpfl...@gmail.com>
wrote:

>
>
> On Monday, September 26, 2016 at 11:23:49 PM UTC+2, Jonathan Bober wrote:
>>
>> On Mon, Sep 26, 2016 at 10:10 PM, Jean-Pierre Flori <jpf...@gmail.com>
>> wrote:
>>
>>> I suspect that perhaps the copy I have that works is working because I
>>>> built it as Sage 7.3 at some point with SAGE_ATLAS_LIB set and then rebuilt
>>>> it on the develop branch, which didn't get rid of the ATLAS symlinks that
>>>> were already set up. So maybe it isn't actually using OpenBLAS.
>>>>
>>> SAGE_ATLAS_LIB is just used for ATLAS, not OpenBlas.
>>>
>>
>> 1. So does that mean that on a clean build of the current development
>> branch (or a recent enough beta) SAGE_ATLAS_LIB has, by default, no effect?
>>
> Yes.
>
>>
>> 2. Either way, I suspect that doesn't necessarily mean that nothing
>> strange happens if I build Sage 7.3 (with SAGE_ATLAS_LIB set) and then do a
>> 'git checkout develop', then build again without first doing a distclean.
>>
>  Yes.
> I'm not exactly sure what situation you end up in when going this way,
> because:
> * at 7.3, you ran configure/make, which set up ATLAS as the BLAS provider,
> built it, and linked everything to it;
> * when checking out develop, OpenBLAS became the default, but I'm not sure
> this actually changed anything if configure was not run again before
> make... the best way to know for sure is to see whether the libraries
> depending on BLAS got rebuilt (in particular, ATLAS and OpenBLAS do not
> provide binary-compatible libraries; you have to link to libs with
> different names).
> Maybe Jeroen or Volker have a better idea of what situation you end up in.
>
> But if you run "make distclean" then you can be sure that OpenBLAS will be
> built and linked to...
>
>



Re: [sage-devel] linbox 64-bit charpoly

2016-09-27 Thread Jonathan Bober
On Tue, Sep 27, 2016 at 8:34 PM, 'Bill Hart' via sage-devel <
sage-devel@googlegroups.com> wrote:

>
>
> On Tuesday, 27 September 2016 20:53:28 UTC+2, Jonathan Bober wrote:
>>
>> On Tue, Sep 27, 2016 at 7:18 PM, 'Bill Hart' via sage-devel <
>> sage-...@googlegroups.com> wrote:
>>
>>> I'm pretty sure the charpoly routine in Flint is much more recent than 2
>>> years old. Are you referring to a Sage implementation on top of Flint
>>> arithmetic or something?
>>>
>>
>> It is just a problem with Sage.
>>
>
> Sure, I realised the problem was in Sage. I just wasn't sure if the
> algorithm itself is implemented in Flint or Sage.
>
>
>> Sorry, I thought I was clear about that. I assume that no one has been
>> using the algorithm='flint' option in Sage in the last two years, which
>> makes sense, because most people aren't going to bother changing the
>> default.
>>
>>
>>> The only timing that I can find right at the moment had us about 5x
>>> faster than Sage. It's not in a released version of Flint though, just in
>>> master.
>>>
>>
>> That sounds really nice. On my laptop with current Sage, it might be the
>> other way around. With Sage 7.3 on my laptop, with this particular matrix,
>> I get
>>
>
> Yes, Sage/Linbox was about 2.5x faster than the old charpoly routine
> in Flint, I believe. The new one is quite recent and much quicker.
>
>
>> sage: %time f = A.charpoly(algorithm='flint')
>> CPU times: user 1min 24s, sys: 24 ms, total: 1min 24s
>> Wall time: 1min 24s
>>
>> sage: %time f = A.charpoly(algorithm='linbox')
>> CPU times: user 13.3 s, sys: 4 ms, total: 13.3 s
>> Wall time: 13.3 s
>>
>> However, perhaps the average runtime with linbox is infinity. (Also, this
>> in an out of date Linbox.)
>>
>> I think that Linbox may be "cheating" in a way that Flint is not. I'm
>> pretty sure both implementations work mod p (or p^n?) for a bunch of p and
>> reconstruct. From my reading of the Flint source code (actually, I didn't
>> check the version in Sage) and comments from Clement Pernet, I think that
>> Flint uses an explicit Hadamard bound to determine how many primes to use,
>> while Linbox just waits for the CRT'd polynomial to stabilize for a few
>> primes.
>>
>
> Ouch!
>
> Yes, in the new code we use an explicit proven bound. I can't quite recall
> all the details now, but I recall it is multimodular.
>
> I would give it a good amount of testing before trusting it. We've done
> quite a lot of serious testing of it and the test code is nontrivial, but
> some real world tests are much more likely to shake out any bugs, including
> the possibility I screwed up the implementation of the bound.
>

Ah, yes, I'm wrong again, as the multimodular code in Flint is pretty new. I
didn't look at what Sage has until now (Flint 2.5.2, which looks like it
uses a fairly simple O(n^4) algorithm). I had previously looked at the
source code of the version of Flint that I've actually been using myself,
which is from June. As I now recall (after reading an email I sent in June),
I'm using a "non-released" version precisely for the nmod_mat_charpoly()
function, which doesn't exist in the most recent release (which I guess
might be 2.5.2, but flintlib.org seems to be having problems at the moment).

I've actually done some fairly extensive real-world semi-testing of
nmod_mat_charpoly() in the last few months (for almost the same reasons
that have led me to investigate Sage/Linbox) but not fmpz_mat_charpoly().
"semi" means that I haven't actually checked that the answers are correct.
I'm actually computing characteristic polynomials of integer matrices, but
writing down the integer matrices is too expensive, so I'm computing the
polynomials more efficiently mod p and then CRTing. Also, I'm doing exactly
what I think Linbox does, in that I am just waiting for the polynomials to
stabilize. Step 2, when it eventually happens, will separately compute the
roots of these polynomials numerically, which will (heuristically) verify
that they are correct. (Step 3 might involve actually proving somehow that
everything is correct, but I strongly fear that it might involve confessing
that everything is actually only "obviously" correct.) Once step 2 happens,
I'll either report some problems or let you know that everything went well.
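For what it's worth, the "wait for the CRT'd value to stabilize" heuristic described above can be sketched in a few lines. This is a toy reconstruction of a single integer, not LinBox's actual code; the window parameter (my name) is the number of consecutive agreeing primes demanded before stopping:

```python
# Toy sketch of heuristic early-terminated CRT: combine residues prime by
# prime and stop once the balanced (signed) lift is unchanged for a few
# consecutive primes. Not LinBox's implementation.

def crt_pair(r1, m1, r2, m2):
    # combine x = r1 (mod m1) and x = r2 (mod m2); m1, m2 coprime
    t = ((r2 - r1) * pow(m1, -1, m2)) % m2
    return r1 + m1 * t, m1 * m2

def balanced(r, m):
    # signed representative of r mod m, roughly in (-m/2, m/2]
    return r - m if r > m // 2 else r

def stable_crt(value, primes, window=3):
    r, m = 0, 1
    last, streak = None, 0
    for p in primes:
        r, m = crt_pair(r, m, value % p, p)
        lift = balanced(r, m)
        streak = streak + 1 if lift == last else 1
        last = lift
        if streak >= window:
            return lift  # heuristic: stable for `window` primes in a row
    return last
```

A Hadamard-style bound would instead fix the number of primes up front, making the result provably correct; the heuristic can terminate earlier but can, in principle, stop too soon.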


>
>
>> I have no idea how much of a difference that makes in this case.
>>
>>
>>> Bill.
>>>
>>> On Tuesday, 27 September 2016 05:49:47 UTC+2, Jonathan Bober wrote:
>>>>
>>>> On Tue, Sep 27, 2016 at 4:18 AM, William Stein

Re: [sage-devel] linbox 64-bit charpoly

2016-09-27 Thread Jonathan Bober
On Tue, Sep 27, 2016 at 8:02 PM, William Stein <wst...@gmail.com> wrote:

> On Tue, Sep 27, 2016 at 11:53 AM, Jonathan Bober <jwbo...@gmail.com>
> wrote:
> > On Tue, Sep 27, 2016 at 7:18 PM, 'Bill Hart' via sage-devel
> > <sage-devel@googlegroups.com> wrote:
> >>
> >> I'm pretty sure the charpoly routine in Flint is much more recent that 2
> >> years. Are you referring to a Sage implementation on top of Flint
> arithmetic
> >> or something?
> >
> >
> > It is just a problem with Sage. Sorry, I thought I was clear about that.
> I
> > assume that no one has been using the algorithm='flint' option in Sage in
> > the last two years, which makes sense, because most people aren't going
> to
> > bother changing the default.
> >
> >>
> >> The only timing that I can find right at the moment had us about 5x
> faster
> >> than Sage. It's not in a released version of Flint though, just in
> master.
> >
> >
> > That sounds really nice. On my laptop with current Sage, it might be the
> > other way around. With Sage 7.3 on my laptop, with this particular
> matrix, I
> > get
> >
> > sage: %time f = A.charpoly(algorithm='flint')
> > CPU times: user 1min 24s, sys: 24 ms, total: 1min 24s
> > Wall time: 1min 24s
> >
> > sage: %time f = A.charpoly(algorithm='linbox')
> > CPU times: user 13.3 s, sys: 4 ms, total: 13.3 s
> > Wall time: 13.3 s
> >
> > However, perhaps the average runtime with linbox is infinity. (Also,
> this in
> > an out of date Linbox.)
> >
> > I think that Linbox may be "cheating" in a way that Flint is not. I'm
> pretty
> > sure both implementations work mod p (or p^n?) for a bunch of p and
> > reconstruct. From my reading of the Flint source code (actually, I didn't
> > check the version in Sage) and comments from Clement Pernet, I think that
> > Flint uses an explicit Hadamard bound to determine how many primes to
> use,
> > while Linbox just waits for the CRT'd polynomial to stabilize for a few
>
> If it is really doing this, then it should definitely not be the
> default algorithm for Sage, unless proof=False is explicitly
> specified.   Not good.
>
>
Yes, I've had the same thought, which is actually part of the reason I took
the time to write this. I hope that Clement, or someone else who knows,
will notice and confirm or deny. Also, eventually I will probably try to
read the Linbox source code. It is possible that I am wrong. (I guess that
there is some certification step, and that it is somewhat heuristic, but
maybe it is more definitive than that.)


> William
>
> > primes. I have no idea how much of a difference that makes in this case.
> >
> >>
> >> Bill.
> >>
> >> On Tuesday, 27 September 2016 05:49:47 UTC+2, Jonathan Bober wrote:
> >>>
> >>> On Tue, Sep 27, 2016 at 4:18 AM, William Stein <wst...@gmail.com>
> wrote:
> >>>>
> >>>> On Mon, Sep 26, 2016 at 6:55 PM, Jonathan Bober <jwb...@gmail.com>
> >>>> wrote:
> >>>> > On Mon, Sep 26, 2016 at 11:52 PM, William Stein <wst...@gmail.com>
> >>>> > wrote:
> >>>>
> >>>> >>
> >>>> >> On Mon, Sep 26, 2016 at 3:27 PM, Jonathan Bober <jwb...@gmail.com>
> >>>> >> wrote:
> >>>> >> > In the matrix_integer_dense charpoly() function, there is a note
> in
> >>>> >> > the
> >>>> >> > docstring which says "Linbox charpoly disabled on 64-bit
> machines,
> >>>> >> > since
> >>>> >> > it
> >>>> >> > hangs in many cases."
> >>>> >> >
> >>>> >> > As far as I can tell, that is not true, in the sense that (1) I
> >>>> >> > have
> >>>> >> > 64-bit
> >>>> >> > machines, and Linbox charpoly is not disabled, (2)
> >>>> >> > charpoly(algorithm='flint') is so horribly broken that if it were
> >>>> >> > ever
> >>>> >> > used
> >>>> >> > it should be quickly noticed that it is broken, and (3) I can't
> see
> >>>> >> > anywhere
> >>>> >> > where it is actually disabled.
> >>>> >> >
> >>>> >> > So I actually just submitted a patch which removes this note
> while
> >>>> >> > fixing
> >>>>

Re: [sage-devel] Re: memory management issue: deleted variables not released

2016-09-27 Thread Jonathan Bober
I just noticed this thread because of your recent reply, and happened to
read through. (I haven't regularly read sage-devel for a while.)

As to your original email: I think there is a subtle python memory
management issue there. If you run

sage: BIG=myfunction(somevars)
sage: BIG=myfunction(somevars)

then on the second invocation of the function, I'm pretty sure that the way
Python works, it calculates the result of the function call and then
assigns it to the variable BIG. In between, the garbage collector will
probably run sometimes, but because the variable BIG has not yet been
reassigned, the garbage collector might not clean it up. So it seems
reasonable to me that

sage: BIG=myfunction(somevars)
sage: BIG = 0
sage: BIG=myfunction(somevars)

may behave differently.
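That evaluation order (build the new result first, rebind the name afterwards) can be seen directly with a toy tracker; Big and myfunction below are hypothetical stand-ins, not Sage code:

```python
# Toy illustration: the RHS of `BIG = myfunction(...)` is evaluated before
# the name BIG is rebound, so the old result is still alive while the new
# one is being built. WeakSet membership tracks which instances are alive.
import gc
import weakref

class Big:
    pass

tracker = weakref.WeakSet()

def myfunction():
    obj = Big()
    tracker.add(obj)
    return obj, len(tracker)  # number of Big instances alive right now

BIG, live_during_first = myfunction()   # only the new object exists
BIG, live_during_second = myfunction()  # old BIG still alive during call
BIG = None                              # drop the last reference
gc.collect()
```

On CPython this reports one live instance during the first call and two during the second, which is why inserting BIG = 0 between the calls changes the peak memory use.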

Having said all that... It doesn't sound right that running the function
once costs 50% of RAM, and running it twice (with the BIG = 0 in between)
costs 75%. However, there are certainly situations where that can happen.
As was mentioned, Sage caches some computations, and that can occasionally
lead to unwanted memory use. Additionally, when running this sort of short
test, it seems a good idea to manually invoke the python garbage collector
(import gc; gc.collect()) before conclusively declaring that there is a
memory leak.

The _best_ way to help (and get help) and to get attention, if there is
really a memory leak, is to write a short loop that looks something like

while 1:
    x = some_simple_function()
    gc.collect()
    print get_memory_usage()

and outputs an increasing sequence of numbers.
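A self-contained variant of that loop in plain Python 3, with the standard library's tracemalloc standing in for Sage's get_memory_usage() (my sketch; tracemalloc only sees Python-level allocations, so it will miss leaks inside C libraries like Linbox or PARI):

```python
import gc
import tracemalloc

def looks_leaky(fn, iterations=5, warmup=2, threshold=1024):
    """Call fn repeatedly and report whether traced Python allocations
    grow by more than `threshold` bytes on every post-warmup iteration."""
    tracemalloc.start()
    sizes = []
    for i in range(warmup + iterations):
        fn()
        gc.collect()
        current, _peak = tracemalloc.get_traced_memory()
        if i >= warmup:
            sizes.append(current)
    tracemalloc.stop()
    # a strictly growing sequence, iteration after iteration, suggests a leak
    return all(b - a > threshold for a, b in zip(sizes, sizes[1:]))

cache = []
leaky = lambda: cache.append([0] * 1000)   # keeps every result alive
fine = lambda: [0] * 1000                  # temporary, freed immediately
```

The threshold guards against the few bytes of bookkeeping the loop itself allocates on each iteration.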

Going from some complicated code to a simple loop like that may be an
arduous debugging task in itself, and is something I would consider a
valuable service to Sage if it really finds a bug. In the intermediate
regime, just sharing some code could be useful, if you are willing and
able. There are at least a few people (such as myself, during the
occasional periods when I am paying attention) with >4 GB of RAM and 10
minutes of CPU cycles to spare, who may be willing to help.

Finally (and this is the reason that I read through this thread and
replied), there was a change in the way that Sage manages PARI memory usage
(between 7.0 and 7.1, I think. See https://trac.sagemath.org/ticket/19883)
which probably affects a very small number of users, but affects them very
badly. (I know about this because it affects me.) If on your machine with
100 GB of ram, the output of 'cat /proc/sys/vm/overcommit_memory' is 2,
then it affects you. Alternatively, if overcommit_memory is 0, then it is
possible you are misreading the memory usage: the virtual memory usage will
be high, but not the actual memory usage. The problem will hopefully be
fixed by 7.4 (see https://trac.sagemath.org/ticket/21582), but the high
virtual memory usage confusion will probably persist. Of course, it is also
quite possible that you've found some other bad problem that popped up
between 7.0 and 7.1.
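To check which camp a machine is in, something like the following works (a sketch assuming Linux; the mode meanings are 0 = heuristic overcommit, 1 = always overcommit, 2 = strict accounting, which is the case affected by the PARI change):

```shell
# Map the overcommit mode to whether the PARI allocation change matters.
classify_overcommit() {
  case "$1" in
    2)   echo affected ;;     # strict accounting: big virtual reservations can fail
    0|1) echo likely-fine ;;  # high virtual usage is mostly cosmetic
    *)   echo unknown ;;
  esac
}

# On a live system:
classify_overcommit "$(cat /proc/sys/vm/overcommit_memory 2>/dev/null)"
```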

On Tue, Sep 27, 2016 at 9:44 PM, Denis  wrote:

>
> Tried but it didn't work out. MathCloud admins say they can't help. Tried
> also at SageCell but the calculation wouldn't end either way after several
> hours. Any ideas?
>
> Denis
>
> --
> You received this message because you are subscribed to the Google Groups
> "sage-devel" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sage-devel+unsubscr...@googlegroups.com.
> To post to this group, send email to sage-devel@googlegroups.com.
> Visit this group at https://groups.google.com/group/sage-devel.
> For more options, visit https://groups.google.com/d/optout.
>



Re: [sage-devel] linbox 64-bit charpoly

2016-09-27 Thread Jonathan Bober
On Tue, Sep 27, 2016 at 7:18 PM, 'Bill Hart' via sage-devel <
sage-devel@googlegroups.com> wrote:

> I'm pretty sure the charpoly routine in Flint is much more recent than 2
> years. Are you referring to a Sage implementation on top of Flint
> arithmetic or something?
>

It is just a problem with Sage. Sorry, I thought I was clear about that. I
assume that no one has been using the algorithm='flint' option in Sage in
the last two years, which makes sense, because most people aren't going to
bother changing the default.


> The only timing that I can find right at the moment had us about 5x faster
> than Sage. It's not in a released version of Flint though, just in master.
>

That sounds really nice. On my laptop with current Sage, it might be the
other way around. With Sage 7.3 on my laptop, with this particular matrix,
I get

sage: %time f = A.charpoly(algorithm='flint')
CPU times: user 1min 24s, sys: 24 ms, total: 1min 24s
Wall time: 1min 24s

sage: %time f = A.charpoly(algorithm='linbox')
CPU times: user 13.3 s, sys: 4 ms, total: 13.3 s
Wall time: 13.3 s

However, perhaps the average runtime with linbox is infinity. (Also, this
is an out-of-date Linbox.)

I think that Linbox may be "cheating" in a way that Flint is not. I'm
pretty sure both implementations work mod p (or p^n?) for a bunch of p and
reconstruct. From my reading of the Flint source code (actually, I didn't
check the version in Sage) and comments from Clement Pernet, I think that
Flint uses an explicit Hadamard bound to determine how many primes to use,
while Linbox just waits for the CRT'd polynomial to stabilize for a few
primes. I have no idea how much of a difference that makes in this case.
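To illustrate the difference between the two strategies without pretending to reproduce either library, here is a sketch of my own using an integer determinant instead of a charpoly: one function stops when the CRT lift stabilizes (the Linbox-style heuristic), and the Hadamard bound shows what the deterministic, Flint-style approach would use to decide how many primes are enough:

```python
from math import isqrt

def det_mod(M, p):
    """Determinant of an integer matrix over GF(p), by Gaussian elimination."""
    n = len(M)
    A = [[x % p for x in row] for row in M]
    det = 1
    for i in range(n):
        piv = next((r for r in range(i, n) if A[r][i]), None)
        if piv is None:
            return 0
        if piv != i:
            A[i], A[piv] = A[piv], A[i]
            det = -det % p
        det = det * A[i][i] % p
        inv = pow(A[i][i], p - 2, p)  # Fermat inverse; p must be prime
        for r in range(i + 1, n):
            f = A[r][i] * inv % p
            for c in range(i, n):
                A[r][c] = (A[r][c] - f * A[i][c]) % p
    return det

def crt(r1, m1, r2, m2):
    """Combine x = r1 (mod m1) and x = r2 (mod m2); m1, m2 coprime."""
    t = (r2 - r1) * pow(m1, -1, m2) % m2
    return r1 + m1 * t, m1 * m2

def hadamard_bound(M):
    """|det M| <= product of row 2-norms (rounded up per row)."""
    b = 1
    for row in M:
        b *= isqrt(sum(x * x for x in row)) + 1
    return b

def det_by_stabilization(M, primes, stable=2):
    """Early termination: stop once the symmetric CRT lift of the
    determinant is unchanged for `stable` consecutive primes."""
    r, m = 0, 1
    last, run = None, 0
    for p in primes:
        r, m = crt(r, m, det_mod(M, p), p)
        cand = r - m if r > m // 2 else r  # symmetric lift, allows negatives
        run = run + 1 if cand == last else 1
        last = cand
        if run >= stable:
            return cand
    return last
```

For the stabilization variant, an unlucky run of primes can in principle yield an early wrong answer, whereas the bound-based approach keeps taking primes until their product provably exceeds twice the Hadamard bound; that gap is the kind of "cheating" I mean.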


> Bill.
>
> On Tuesday, 27 September 2016 05:49:47 UTC+2, Jonathan Bober wrote:
>>
>> On Tue, Sep 27, 2016 at 4:18 AM, William Stein <wst...@gmail.com> wrote:
>>
>>> On Mon, Sep 26, 2016 at 6:55 PM, Jonathan Bober <jwb...@gmail.com>
>>> wrote:
>>> > On Mon, Sep 26, 2016 at 11:52 PM, William Stein <wst...@gmail.com>
>>> wrote:
>>>
>>> >>
>>> >> On Mon, Sep 26, 2016 at 3:27 PM, Jonathan Bober <jwb...@gmail.com>
>>> wrote:
>>> >> > In the matrix_integer_dense charpoly() function, there is a note in
>>> the
>>> >> > docstring which says "Linbox charpoly disabled on 64-bit machines,
>>> since
>>> >> > it
>>> >> > hangs in many cases."
>>> >> >
>>> >> > As far as I can tell, that is not true, in the sense that (1) I have
>>> >> > 64-bit
>>> >> > machines, and Linbox charpoly is not disabled, (2)
>>> >> > charpoly(algorithm='flint') is so horribly broken that if it were
>>> ever
>>> >> > used
>>> >> > it should be quickly noticed that it is broken, and (3) I can't see
>>> >> > anywhere
>>> >> > where it is actually disabled.
>>> >> >
>>> >> > So I actually just submitted a patch which removes this note while
>>> >> > fixing
>>> >> > point (2). (Trac #21596).
>>> >> >
>>> >> > However...
>>> >> >
>>> >> > In some testing I'm noticing problems with charpoly(), so I'm
>>> wondering
>>> >> > where that message came from, and who knows something about it.
>>> >>
>>> >> Do you know about "git blame", or the "blame" button when viewing any
>>> >> file here: https://github.com/sagemath/sage/tree/master/src
>>> >
>>> >
>>> > Ah, yes. Of course I know about that. And it was you!
>>> >
>>> > You added that message here:
>>>
>>> Dang... I had a bad feeling that would be the conclusion :-)
>>>
>>
>> Well, I'm sure you've done one or two things in the meantime that will
>> allow me to forgive this one oversight.
>>
>>
>>> In my defense, Linbox/FLINT have themselves changed a lot over the
>>> years...  We added Linbox in 2007, I think.
>>>
>>>
>> Yes. As I said, this comment, and the design change, is ancient. In some
>> limited testing, linbox tends to be faster than flint, but has very high
>> variance in the timings. (I haven't actually checked flint much.) Right now
>> I'm running the following code on 64 cores, which should test linbox:
>>
>> import time
>>
>> @parallel
>> def test(n):
>>     start = time.clock()
>>     f = B.charpoly()
>>     end = time.clock()
>>

Re: [sage-devel] linbox 64-bit charpoly

2016-09-26 Thread Jonathan Bober
On Tue, Sep 27, 2016 at 4:18 AM, William Stein <wst...@gmail.com> wrote:

> On Mon, Sep 26, 2016 at 6:55 PM, Jonathan Bober <jwbo...@gmail.com> wrote:
> > On Mon, Sep 26, 2016 at 11:52 PM, William Stein <wst...@gmail.com>
> wrote:
> >>
> >> On Mon, Sep 26, 2016 at 3:27 PM, Jonathan Bober <jwbo...@gmail.com>
> wrote:
> >> > In the matrix_integer_dense charpoly() function, there is a note in
> the
> >> > docstring which says "Linbox charpoly disabled on 64-bit machines,
> since
> >> > it
> >> > hangs in many cases."
> >> >
> >> > As far as I can tell, that is not true, in the sense that (1) I have
> >> > 64-bit
> >> > machines, and Linbox charpoly is not disabled, (2)
> >> > charpoly(algorithm='flint') is so horribly broken that if it were ever
> >> > used
> >> > it should be quickly noticed that it is broken, and (3) I can't see
> >> > anywhere
> >> > where it is actually disabled.
> >> >
> >> > So I actually just submitted a patch which removes this note while
> >> > fixing
> >> > point (2). (Trac #21596).
> >> >
> >> > However...
> >> >
> >> > In some testing I'm noticing problems with charpoly(), so I'm
> wondering
> >> > where that message came from, and who knows something about it.
> >>
> >> Do you know about "git blame", or the "blame" button when viewing any
> >> file here: https://github.com/sagemath/sage/tree/master/src
> >
> >
> > Ah, yes. Of course I know about that. And it was you!
> >
> > You added that message here:
>
> Dang... I had a bad feeling that would be the conclusion :-)


Well, I'm sure you've done one or two things in the meantime that will
allow me to forgive this one oversight.


> In my defense, Linbox/FLINT have themselves changed a lot over the
> years...  We added Linbox in 2007, I think.
>
>
Yes. As I said, this comment, and the design change, is ancient. In some
limited testing, linbox tends to be faster than flint, but has very high
variance in the timings. (I haven't actually checked flint much.) Right now
I'm running the following code on 64 cores, which should test linbox:

import time

@parallel
def test(n):
    start = time.clock()
    f = B.charpoly()
    end = time.clock()
    runtime = end - start
    if f != g:
        print n, 'ohno'
        return runtime, 'ohno'
    else:
        return runtime, 'ok'

A = load('hecke_matrix')
A._clear_cache()
B, denom = A._clear_denom()
g = B.charpoly()
B._clear_cache()

import sys

for result in test(range(10)):
    print result[0][0][0], ' '.join([str(x) for x in result[1]])
    sys.stdout.flush()

where the file hecke_matrix was produced by

sage: M = ModularSymbols(3633, 2, -1)
sage: S = M.cuspidal_subspace().new_subspace()
sage: H = S.hecke_matrix(2)
sage: H.save('hecke_matrix')

and the results are interesting:

jb12407@lmfdb5:~/sage-bug$ sort -n -k 2 test_output3 | head
30 27.98 ok
64 28.0 ok
2762 28.02 ok
2790 28.02 ok
3066 28.02 ok
3495 28.03 ok
3540 28.03 ok
292 28.04 ok
437 28.04 ok
941 28.04 ok

jb12407@lmfdb5:~/sage-bug$ sort -n -k 2 test_output3 | tail
817 2426.04 ok
1487 2466.3 ok
1440 2686.43 ok
459 2745.74 ok
776 2994.01 ok
912 3166.9 ok
56 3189.98 ok
546 3278.22 ok
1008 3322.74 ok
881 3392.73 ok

jb12407@lmfdb5:~/sage-bug$ python analyze_output.py test_output3
average time: 53.9404572616
unfinished: [490, 523, 1009, 1132, 1274, 1319, 1589, 1726, 1955, 2019,
2283, 2418, 2500, 2598, 2826, 2979, 2982, 3030, 3057, 3112, 3166, 3190,
3199, 3210, 3273, 3310, 3358, 3401, 3407, 3434, 3481, 3487, 3534, 3546,
3593, 3594, 3681, 3685, 3695, 3748, 3782, 3812, 3858, 3864, 3887]

There hasn't yet been an ohno, but on a similar run of 5000 tests computing
A.charpoly() instead of B.charpoly(), I have 1 ohno and 4 still running
after 5 hours. (So I'm expecting an error in the morning...)

I think that maybe I was getting a higher error rate in Sage 7.3. The
current beta is using a newer linbox, so maybe it fixed something, but
maybe it isn't quite fixed.

Maybe I should use a small matrix to run more tests more quickly, but this
came from a "real world" example.


> --
> William (http://wstein.org)
>



Re: [sage-devel] linbox 64-bit charpoly

2016-09-26 Thread Jonathan Bober
On Mon, Sep 26, 2016 at 11:52 PM, William Stein <wst...@gmail.com> wrote:

> On Mon, Sep 26, 2016 at 3:27 PM, Jonathan Bober <jwbo...@gmail.com> wrote:
> > In the matrix_integer_dense charpoly() function, there is a note in the
> > docstring which says "Linbox charpoly disabled on 64-bit machines, since
> it
> > hangs in many cases."
> >
> > As far as I can tell, that is not true, in the sense that (1) I have
> 64-bit
> > machines, and Linbox charpoly is not disabled, (2)
> > charpoly(algorithm='flint') is so horribly broken that if it were ever
> used
> > it should be quickly noticed that it is broken, and (3) I can't see
> anywhere
> > where it is actually disabled.
> >
> > So I actually just submitted a patch which removes this note while fixing
> > point (2). (Trac #21596).
> >
> > However...
> >
> > In some testing I'm noticing problems with charpoly(), so I'm wondering
> > where that message came from, and who knows something about it.
>
> Do you know about "git blame", or the "blame" button when viewing any
> file here: https://github.com/sagemath/sage/tree/master/src


Ah, yes. Of course I know about that. And it was you!

You added that message here:
https://github.com/sagemath/sage/commit/ce8c59b53cb46c338ca89cebc50e26ff0e0643cc
and didn't remove it here:
https://github.com/sagemath/sage/commit/7030ad944f6023825fbd4a30e1c800d18a2c0b4c

(though it isn't completely clear to me that the second commit really comes
after the first, since dates might not tell the full story.)

Anyway, it seems likely that this comment is ancient history.

Using flint to compute the characteristic polynomial has only been broken
for 2 years (though not because of brokenness in flint itself). I'll have
to keep testing to see if linbox is broken.

>
>
> >
> > The problems seem the likely cause of #21579, though I haven't actually
> been
> > able to conclusively blame that on linbox yet. I'm also not sure I've
> seen
> > linbox's charpoly() hang, exactly, but I do see very erratic behavior in
> how
> > long it takes: on a fixed matrix I have it typically takes 30 seconds,
> but
> > I've also seen it return the correct answer after 50 minutes.
> >
> > (I've also got the wrong answer sometimes, but there are some conversions
> > going on, and I've so far only seen the wrong answer for rational
> matrices,
> > which is why I've not yet blamed linbox, though I am certainly leaning
> > towards blaming it.)
> >
> > Note: I'm currently testing on 7.4.beta6, so this is after the recent
> linbox
> > upgrade. But I was also having some problems before that. It is possible
> > that the recent upgrade made errors less likely, though.
> >
>
>
>
> --
> William (http://wstein.org)
>



[sage-devel] linbox 64-bit charpoly

2016-09-26 Thread Jonathan Bober
In the matrix_integer_dense charpoly() function, there is a note in the
docstring which says "Linbox charpoly disabled on 64-bit machines, since it
hangs in many cases."

As far as I can tell, that is not true, in the sense that (1) I have 64-bit
machines, and Linbox charpoly is not disabled, (2)
charpoly(algorithm='flint') is so horribly broken that if it were ever used
it should be quickly noticed that it is broken, and (3) I can't see
anywhere where it is actually disabled.

So I actually just submitted a patch which removes this note while fixing
point (2). (Trac #21596).

However...

In some testing I'm noticing problems with charpoly(), so I'm wondering
where that message came from, and who knows something about it.

The problems seem the likely cause of #21579, though I haven't actually
been able to conclusively blame that on linbox yet. I'm also not sure I've
seen linbox's charpoly() hang, exactly, but I do see very erratic behavior
in how long it takes: on a fixed matrix I have, it typically takes 30
seconds, but I've also seen it return the correct answer after 50 minutes.

(I've also got the wrong answer sometimes, but there are some conversions
going on, and I've so far only seen the wrong answer for rational matrices,
which is why I've not yet blamed linbox, though I am certainly leaning
towards blaming it.)

Note: I'm currently testing on 7.4.beta6, so this is after the recent
linbox upgrade. But I was also having some problems before that. It is
possible that the recent upgrade made errors less likely, though.



Re: [sage-devel] Re: openblas segfault?

2016-09-26 Thread Jonathan Bober
On Mon, Sep 26, 2016 at 10:10 PM, Jean-Pierre Flori 
wrote:

> I suspect that perhaps the copy I have that works is working because I
>> built it as sage 7.3 at some point with SAGE_ATLAS_LIB set and then rebuilt
>> it on the develop branch, which didn't get rid of the atlas symlinks that
>> were already setup. So maybe it isn't actually using openBLAS.
>>
> SAGE_ATLAS_LIB is just used for ATLAS, not OpenBlas.
>

1. So does that mean that on a clean build of the current development
branch (or a recent enough beta) SAGE_ATLAS_LIB has, by default, no effect?

2. Either way, I suspect strange things could still happen if I build Sage
7.3 (with SAGE_ATLAS_LIB set), then do a 'git checkout develop' and build
again without first doing a distclean.



Re: [sage-devel] Re: openblas segfault?

2016-09-26 Thread Jonathan Bober
I suspect that perhaps the copy I have that works is working because I
built it as sage 7.3 at some point with SAGE_ATLAS_LIB set and then rebuilt
it on the develop branch, which didn't get rid of the atlas symlinks that
were already set up. So maybe it isn't actually using OpenBLAS.

On Mon, Sep 26, 2016 at 7:11 PM, Jonathan Bober <jwbo...@gmail.com> wrote:

> I've rebuilt again from scratch, twice. Once after clearing all my
> possibly questionable environment variables (except for LD_LIBRARY_PATH,
> since that is needed for gcc-6.1.0, as set by module add gcc-6.1.0 for me).
> The second time I did "CFLAGS=-mno-xop make". I can see that flag took
> effect by examining the OpenBLAS build log, for example.
>
> Question: Does SAGE_ATLAS_LIB have any effect on the develop branch? I
> don't know what I did differently to get a build that doesn't segfault.
>
> On Mon, Sep 26, 2016 at 3:29 PM, 'Martin R' via sage-devel <
> sage-devel@googlegroups.com> wrote:
>
>> I forgot:
>>
>> martin@Martin-Laptop:~/sage-develop$ lscpu
>> Architecture:  x86_64
>> CPU op-mode(s):32-bit, 64-bit
>> Byte Order:Little Endian
>> CPU(s):4
>> On-line CPU(s) list:   0-3
>> Thread(s) per core:     1
>> Core(s) per socket:     4
>> Socket(s):              1
>> NUMA node(s):           1
>> Vendor ID:              GenuineIntel
>> CPU family:             6
>> Model:                  55
>> Model name:Intel(R) Celeron(R) CPU  N2940  @ 1.83GHz
>> Stepping:  8
>> CPU MHz:   535.104
>> CPU max MHz:   2249,1001
>> CPU min MHz:   499,8000
>> BogoMIPS:  3660.80
>> Virtualization:         VT-x
>> L1d Cache: 24K
>> L1i Cache: 32K
>> L2 Cache:  1024K
>> NUMA node0 CPU(s): 0-3
>>
>> Am Montag, 26. September 2016 16:28:16 UTC+2 schrieb Martin R:
>>>
>>> On my computer, 7.4.beta6 doesn't seem to compile openblas successfully
>>> either.  After make distclean and make I get an error (log attached).
>>>
>>> real 135m22.614s
>>> user 407m51.656s
>>> sys 21m22.276s
>>> ***
>>> Error building Sage.
>>>
>>> The following package(s) may have failed to build (not necessarily
>>> during this run of 'make all'):
>>>
>>> * package: openblas-0.2.15
>>>   log file: /home/martin/sage-develop/logs/pkgs/openblas-0.2.15.log
>>   build directory: /home/martin/sage-develop/local/var/tmp/sage/build/openblas-0.2.15
>>>
>>>
>
>



Re: [sage-devel] Re: openblas segfault?

2016-09-26 Thread Jonathan Bober
I've rebuilt again from scratch, twice. Once after clearing all my possibly
questionable environment variables (except for LD_LIBRARY_PATH, since that
is needed for gcc-6.1.0, as set by module add gcc-6.1.0 for me). The second
time I did "CFLAGS=-mno-xop make". I can see that flag took effect by
examining the OpenBLAS build log, for example.

Question: Does SAGE_ATLAS_LIB have any effect on the develop branch? I
don't know what I did differently to get a build that doesn't segfault.

On Mon, Sep 26, 2016 at 3:29 PM, 'Martin R' via sage-devel <
sage-devel@googlegroups.com> wrote:

> I forgot:
>
> martin@Martin-Laptop:~/sage-develop$ lscpu
> Architecture:  x86_64
> CPU op-mode(s):32-bit, 64-bit
> Byte Order:Little Endian
> CPU(s):4
> On-line CPU(s) list:   0-3
> Thread(s) per core:     1
> Core(s) per socket:     4
> Socket(s):              1
> NUMA node(s):           1
> Vendor ID:              GenuineIntel
> CPU family:             6
> Model:                  55
> Model name:Intel(R) Celeron(R) CPU  N2940  @ 1.83GHz
> Stepping:  8
> CPU MHz:   535.104
> CPU max MHz:   2249,1001
> CPU min MHz:   499,8000
> BogoMIPS:  3660.80
> Virtualization:         VT-x
> L1d Cache: 24K
> L1i Cache: 32K
> L2 Cache:  1024K
> NUMA node0 CPU(s): 0-3
>
> Am Montag, 26. September 2016 16:28:16 UTC+2 schrieb Martin R:
>>
>> On my computer, 7.4.beta6 doesn't seem to compile openblas successfully
>> either.  After make distclean and make I get an error (log attached).
>>
>> real 135m22.614s
>> user 407m51.656s
>> sys 21m22.276s
>> ***
>> Error building Sage.
>>
>> The following package(s) may have failed to build (not necessarily
>> during this run of 'make all'):
>>
>> * package: openblas-0.2.15
>>   log file: /home/martin/sage-develop/logs/pkgs/openblas-0.2.15.log
>>   build directory: /home/martin/sage-develop/local/var/tmp/sage/build/openblas-0.2.15
>>
>>



Re: [sage-devel] Re: openblas segfault?

2016-09-26 Thread Jonathan Bober
On Mon, Sep 26, 2016 at 1:50 PM, leif <not.rea...@online.de> wrote:

> Jean-Pierre Flori wrote:
> >
> >
> > On Monday, September 26, 2016 at 11:47:00 AM UTC+2, Jonathan Bober wrote:
> >
> > On Mon, Sep 26, 2016 at 9:44 AM, Dima Pasechnik <dim...@gmail.com
> > > wrote:
> >
> >
> >
> > On Monday, September 26, 2016 at 2:22:51 AM UTC, Jonathan Bober
> > wrote:
> >
> > I recompiled with gcc 6.1.0, and get the same segfault. I
> > did a ./sage -i gdb to get a better crash report, which is
> > attached. I don't know if it is useful. Also, it mentions
> > some gcc 5.1.0 paths, which seems odd. I don't know if that
> > indicates that something is broken on my end.
>
> If in doubt, try building with '-mno-xop' in CFLAGS.  AFAICT, *every*
> GCC version supporting Bulldozer+ is broken w.r.t. mixing AVX and XOP
> instructions (at higher optimization levels).  (I haven't tried 6.x yet
> though IIRC; GMP-ECM 6.4 built with '-march=native -O3' was an example
> exposing this in the past, 7.x no longer does.)
>
>
I can just set the CFLAGS variable, and Sage will use it (or add to it)?

Meanwhile, rebuilding produced the error again. By rebuilding from scratch,
I really mean the following set of commands (which has some output and
irrelevant commands omitted, but is otherwise an actual log of what I typed
in, and not just a reproduction from my faulty memory).

jb12407@lmfdb5:~/tmp$ rm -rf sage
jb12407@lmfdb5:~/tmp$ git clone g...@github.com:sagemath/sage.git
jb12407@lmfdb5:~/tmp$ cd sage
jb12407@lmfdb5:~/tmp/sage$ gcc -v
Using built-in specs.
COLLECT_GCC=/opt/gcc-6.1.0/bin/gcc
COLLECT_LTO_WRAPPER=/opt/gcc-6.1.0/libexec/gcc/x86_64-pc-linux-gnu/6.1.0/lto-wrapper
Target: x86_64-pc-linux-gnu
Configured with: ../source-links/configure --prefix=/opt/gcc-6.1.0
--enable-languages=c,c++,fortran
Thread model: posix
gcc version 6.1.0 (GCC)
jb12407@lmfdb5:~/tmp/sage$ git checkout develop
jb12407@lmfdb5:~/tmp/sage$ export MAKE='make -j40'
jb12407@lmfdb5:~/tmp/sage$ unset SAGE_ATLAS_LIB
jb12407@lmfdb5:~/tmp/sage$ make

(This terminates in an error building the documentation because of memory
overcommit issues, but it builds everything else fine. That's a nice side
effect of the overcommit problems, IMHO ;)

Finally...

jb12407@lmfdb5:~/tmp/sage$ ./sage

┌────────────────────────────────────────────────────────────┐
│ SageMath version 7.4.beta6, Release Date: 2016-09-24       │
│ Type "notebook()" for the browser-based notebook interface.│
│ Type "help()" for help.                                    │
└────────────────────────────────────────────────────────────┘
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Warning: this is a prerelease version, and it may be unstable. ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
This looks like the first time you are running Sage.
Updating various hardcoded paths...
(Please wait at most a few minutes.)
DO NOT INTERRUPT THIS.
Done updating paths.
sage: m = random_matrix(RDF,500)
sage: e = m.eigenvalues()
BOOM!



Re: [sage-devel] Re: openblas segfault?

2016-09-26 Thread Jonathan Bober
I now seem to have built a version that doesn't segfault on this, without
intentionally changing anything. So I don't know what is going on. I may
try another build from scratch, just for the sake of my sanity.

On Mon, Sep 26, 2016 at 11:09 AM, Jean-Pierre Flori <jpfl...@gmail.com>
wrote:

>
>
> On Monday, September 26, 2016 at 11:47:00 AM UTC+2, Jonathan Bober wrote:
>>
>> On Mon, Sep 26, 2016 at 9:44 AM, Dima Pasechnik <dim...@gmail.com> wrote:
>>
>>>
>>>
>>> On Monday, September 26, 2016 at 2:22:51 AM UTC, Jonathan Bober wrote:
>>>>
>>>> I recompiled with gcc 6.1.0, and get the same segfault. I did a ./sage
>>>> -i gdb to get a better crash report, which is attached. I don't know if it
>>>> is useful. Also, it mentions some gcc 5.1.0 paths, which seems odd. I don't
>>>> know if that indicates that something is broken on my end.
>>>>
>>>
>>> I am mostly guessing, but mentioning gcc 5.1.0 in the log does sound to
>>> me as if your toolchain might be broken after update to gcc 6.1 (perhaps
>>> it's debugging versions of libs...)
>>>
>>
>> No, I think I was confused and used the wrong installation of sage to
>> produce that crash report. Maybe I typed sage instead of ./sage. The
>> correct crash report is attached now.
>>
> The backtrace does not say much to me...
> Maybe you could try to update openblas? our version is already outdated.
> And hopefully this might have been fixed in a newer version (though I did
> not find anything looking very related after a few google searches).
> Just download the latest openblas  tarball into upstream, edit
> build/pkgs/openblas/package-version.txt to reflect the version change and
> run sage --package fix-checksum openblas to update the checksum.
>



Re: [sage-devel] Re: openblas segfault?

2016-09-26 Thread Jonathan Bober
On Mon, Sep 26, 2016 at 9:44 AM, Dima Pasechnik <dimp...@gmail.com> wrote:

>
>
> On Monday, September 26, 2016 at 2:22:51 AM UTC, Jonathan Bober wrote:
>>
>> I recompiled with gcc 6.1.0, and get the same segfault. I did a ./sage -i
>> gdb to get a better crash report, which is attached. I don't know if it is
>> useful. Also, it mentions some gcc 5.1.0 paths, which seems odd. I don't
>> know if that indicates that something is broken on my end.
>>
>
> I am mostly guessing, but mentioning gcc 5.1.0 in the log does sound to me
> as if your toolchain might be broken after update to gcc 6.1 (perhaps it's
> debugging versions of libs...)
>

No, I think I was confused and used the wrong installation of sage to
produce that crash report. Maybe I typed sage instead of ./sage. The
correct crash report is attached now.

GNU gdb (GDB) 7.8
Copyright (C) 2014 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-unknown-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word".
[New LWP 42725]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
To enable execution of this file add
	add-auto-load-safe-path /opt/gcc-6.1.0/lib64/libstdc++.so.6.0.22-gdb.py
line to your configuration file "/home/jb12407/.gdbinit".
To completely disable this security protection add
	set auto-load safe-path /
line to your configuration file "/home/jb12407/.gdbinit".
For more information about this security protection see the
"Auto-loading safe path" section in the GDB manual.  E.g., run from the shell:
	info "(gdb)Auto-loading safe path"
0x00361e20f37d in waitpid () from /lib64/libpthread.so.0

Stack backtrace
---
No symbol table info available.
#1  0x7f200ce23ef8 in print_enhanced_backtrace ()
at build/src/cysignals/implementation.c:401
parent_pid = 42675
pid = 
#2  0x7f200ce246ca in sigdie (sig=sig@entry=11, 
s=s@entry=0x7f200ce2d420 "Unhandled SIGSEGV: A segmentation fault occurred.") at build/src/cysignals/implementation.c:420
No locals.
#3  0x7f200ce27267 in cysigs_signal_handler (sig=11)
at build/src/cysignals/implementation.c:213
inside = 
#4  
No symbol table info available.
#5  0x7f2002133cb8 in daxpy_k ()
   from /data/home/jb12407/tmp/sage/local/lib/libopenblas.so.0
No symbol table info available.
#6  0x7f2001f3f366 in trmv_kernel ()
   from /data/home/jb12407/tmp/sage/local/lib/libopenblas.so.0
No symbol table info available.
#7  0x7f20020eee89 in exec_blas ()
   from /data/home/jb12407/tmp/sage/local/lib/libopenblas.so.0
No symbol table info available.
#8  0x7f2001f3f6e8 in dtrmv_thread_NLU ()
   from /data/home/jb12407/tmp/sage/local/lib/libopenblas.so.0
No symbol table info available.
#9  0x7f2001efe693 in dtrmv_ ()
   from /data/home/jb12407/tmp/sage/local/lib/libopenblas.so.0
No symbol table info available.
#10 0x7f2002382c45 in dlahr2_ ()
   from /data/home/jb12407/tmp/sage/local/lib/libopenblas.so.0
No symbol table info available.
#11 0x7f200234a5d8 in dgehrd_ ()
   from /data/home/jb12407/tmp/sage/local/lib/libopenblas.so.0
No symbol table info available.
#12 0x7f2002344ecb in dgeev_ ()
   from /data/home/jb12407/tmp/sage/local/lib/libopenblas.so.0
No symbol table info available.
#13 0x7f19581a4b27 in f2py_rout__flapack_dgeev (capi_self=, 
capi_args=, capi_keywds=, 
f2py_func=0x7f2002344a70 )
at build/src.linux-x86_64-2.7/build/src.linux-x86_64-2.7/scipy/linalg/_flapackmodule.c:24535
capi_buildvalue = 0x0
f2py_success = 1
compute_vl = 0
compute_vl_capi = 0x18e6170
compute_vr = 0
compute_vr_capi = 0x18e6170
n = 500
a = 
a_Dims = {500, 500}
capi_a_tmp = 0x7f1f7274a490
capi_a_intent = 
capi_ov

Re: [sage-devel] Re: openblas segfault?

2016-09-25 Thread Jonathan Bober
On Mon, Sep 26, 2016 at 2:00 AM, Dima Pasechnik  wrote:

surely one can mess everything up with env. vars :-)
>

Yes, of course, I should have been more specific. I mean 'standard'
environment variables (which users like me might use in wrong or 'hackish'
ways).  e.g., something like

module add gcc-6.1.0-x86_64

export PATH=$HOME/bin:/data/local/bin:/opt/gcc-6.1.0/bin:$PATH
export LD_LIBRARY_PATH=$HOME/lib:/data/local/lib:$LD_LIBRARY_PATH
export LIBRARY_PATH=$HOME/lib:/data/local/lib
export C_INCLUDE_PATH=$HOME/include:/data/local/include
export CPLUS_INCLUDE_PATH=$HOME/include:/data/local/include
export LD_RUN_PATH=$HOME/lib:/data/local/lib
export SAGE_ATLAS_LIB=/usr/lib64/atlas-sse3/
export STATIC_LIB_DIR=/data/local/lib/

(Actually, I unset SAGE_ATLAS_LIB before the latest build, since the
current development branch has switched to OpenBLAS. (I think I want to say
hooray about that, but will at least wait until it doesn't segfault.))

How much attention (if any) does Sage pay to these environment variables?



Re: [sage-devel] Re: openblas segfault?

2016-09-25 Thread Jonathan Bober
I recompiled with gcc 6.1.0, and get the same segfault. I did a ./sage -i
gdb to get a better crash report, which is attached. I don't know if it is
useful. Also, it mentions some gcc 5.1.0 paths, which seems odd. I don't
know if that indicates that something is broken on my end.

On Mon, Sep 26, 2016 at 2:06 AM, Dima Pasechnik <dimp...@gmail.com> wrote:

>
>
> On Sunday, September 25, 2016 at 10:48:51 PM UTC, Jonathan Bober wrote:
>>
>> That's a good point. It looks like 5.1.0. I will try rebuilding from
>> scratch with SAGE_INSTALL_GCC=yes, since that's easier than actually trying
>> to figure out what's going on.
>>
>
> it might well be that 5.1 cannot build Sage's gcc (4.9.3), as it's too
> new...
> So you'd need to downgrade system gcc if you really want to use Sage's gcc.
> I'd rather install gcc 5.4 or 6.1 on the system.
>
>
>
>
>> (Also, it is past time for me to update gcc, since it looks like I can
>> easily have 6.1.0 on this system.) I will report back in 90 minutes, unless
>> I don't, in which case I'll report back in 10 to 24 hours.
>>
>> On Sun, Sep 25, 2016 at 11:31 PM, Dima Pasechnik <dim...@gmail.com>
>> wrote:
>>
>>> Most probably gcc version used to build Sage is also quite important...
>>>
>>>
>>> On Sunday, September 25, 2016 at 10:16:23 PM UTC, Jonathan Bober wrote:
>>>
>>>> jb12407@lmfdb5:~$ lscpu
>>>> Architecture:  x86_64
>>>> CPU op-mode(s):32-bit, 64-bit
>>>> Byte Order:Little Endian
>>>> CPU(s):64
>>>> On-line CPU(s) list:   0-63
>>>> Thread(s) per core:2
>>>> Core(s) per socket:8
>>>> Socket(s): 4
>>>> NUMA node(s):  8
>>>> Vendor ID: AuthenticAMD
>>>> CPU family:21
>>>> Model: 2
>>>> Model name:AMD Opteron(tm) Processor 6380
>>>> Stepping:  0
>>>> CPU MHz:   2500.065
>>>> BogoMIPS:  4999.42
>>>> Virtualization:AMD-V
>>>> L1d cache: 16K
>>>> L1i cache: 64K
>>>> L2 cache:  2048K
>>>> L3 cache:  6144K
>>>> NUMA node0 CPU(s): 0-7
>>>> NUMA node1 CPU(s): 8-15
>>>> NUMA node2 CPU(s): 16-23
>>>> NUMA node3 CPU(s): 24-31
>>>> NUMA node4 CPU(s): 32-39
>>>> NUMA node5 CPU(s): 40-47
>>>> NUMA node6 CPU(s): 48-55
>>>> NUMA node7 CPU(s): 56-63
>>>>
>>>> (Old software...)
>>>>
>>>> jb12407@lmfdb5:~$ lsb_release -a
>>>> LSB Version: :base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-
>>>> noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-
>>>> amd64:printing-4.0-noarch
>>>> Distributor ID: Scientific
>>>> Description: Scientific Linux release 6.8 (Carbon)
>>>> Release: 6.8
>>>> Codename: Carbon
>>>>
>>>> jb12407@lmfdb5:~$ uname -r
>>>> 2.6.32-573.3.1.el6.x86_64
>>>>
>>>> On Sun, Sep 25, 2016 at 11:03 PM, Dima Pasechnik <dim...@gmail.com>
>>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Sunday, September 25, 2016 at 9:33:42 PM UTC, Jonathan Bober wrote:
>>>>>>
>>>>>> I'm getting a segfault on the current development branch of sage (see
> below). Is this a known issue and/or affecting anyone else?
>>>>>>
>>>>>> This does not happen for me in Sage 7.3, although it does happen if I
>>>>>> try to go "back in time" with a 'git checkout 7.3' and then rebuild. So
>>>>>> just to be certain I built a few copies from scratch.
>>>>>>
>>>>>> This is from a doctest in src/doc/*/a_tour_of_sage/index.rst, easily
>>>>>> reproducible. (Notice the * there. I guess Sage separately tests the
>>>>>> documentation in each language...)
>>>>>>
>>>>>> sage: m = random_matrix(RDF,500)
>>>>>> sage: m.eigenvalues()
>>>>>> 
>>>>>> 
>>>>>> /data/home/jb12407/sources/sage/local/lib/python2.7/site-pac
>>>>>> kages/cysignals/signals.so(+0x4635)[0x7f6d34da4635]
>>>>>> /data/

Re: [sage-devel] Re: [sagemath-admins] trac not responding

2016-09-25 Thread Jonathan Bober
On Mon, Sep 26, 2016 at 1:54 AM, Dima Pasechnik <dimp...@gmail.com> wrote:

>
>
> On Monday, September 26, 2016 at 12:40:23 AM UTC, Jonathan Bober wrote:
>>
>> Is git still down, or is the problem just that I don't know what I am
>> doing?
>>
>> jb12407@lmfdb5:/data/local/sage/sage-7.3$ git trac checkout 21596
>>
>
> Make sure you have the latest `git trac` installed.
>

I haven't tried to properly contribute a patch to Sage in years. Hence, I
installed git trac today (or was it yesterday?). (And also I don't quite
know what I am doing.)


> There is no branch on #21596, thus nothing to checkout.
> In this case `git trac` attempts to create a new branch on a server in
> some way, which fails
> for some reason.
>

That's an indication that I don't know what I am doing, but the command
actually fails on a git fetch, and it also failed earlier on a different
ticket that did have a branch. Also,...

jb12407@lmfdb5:/data/local/sage/sage-7.3$ git remote -v
github  g...@github.com:sagemath/sage.git (fetch)
github  g...@github.com:sagemath/sage.git (push)
trac    git://trac.sagemath.org/sage.git (fetch)
trac    g...@trac.sagemath.org:sage.git (push)

jb12407@lmfdb5:/data/local/sage/sage-7.3$ git fetch trac
trac.sagemath.org[0: 104.197.143.230]: errno=Connection refused
fatal: unable to connect a socket (Connection refused)

I think that was something that did work for me earlier today, or maybe
yesterday. Meanwhile,


jb12407@lmfdb5:/data/local/sage/sage-7.3$ ssh g...@trac.sagemath.org info
The authenticity of host 'trac.sagemath.org (104.197.143.230)' can't be
established.
RSA key fingerprint is 4c:99:ad:ce:59:f4:82:bc:28:fa:1b:7a:47:d9:1b:74.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'trac.sagemath.org,104.197.143.230' (RSA) to the
list of known hosts.
hello bober, this is git@trac running gitolite3 3.5.3.1-2 (Debian) on git
1.9.1

     R W    sage

Oh, maybe I need to ssh first to get the host key? Nope, still doesn't
work...

jb12407@lmfdb5:/data/local/sage/sage-7.3$ git fetch trac
trac.sagemath.org[0: 104.197.143.230]: errno=Connection refused
fatal: unable to connect a socket (Connection refused)





Re: [sage-devel] Re: [sagemath-admins] trac not responding

2016-09-25 Thread Jonathan Bober
Is git still down, or is the problem just that I don't know what I am
doing?

jb12407@lmfdb5:/data/local/sage/sage-7.3$ git trac checkout 21596
Loading ticket #21596...
Newly created local branch:
t/21596/matrix_charpoly_algorithm__flint___destroys_the_polynomial_ring_generator
Traceback (most recent call last):
  File "/home/jb12407/bin/git-trac", line 18, in 
cmdline.launch()
  File "/data/home/jb12407/git-trac-command/git_trac/cmdline.py", line 215,
in launch
app.checkout(args.ticket_or_branch, args.branch_name)
  File "/data/home/jb12407/git-trac-command/git_trac/app.py", line 116, in
checkout
self._checkout_ticket(int(ticket_or_branch), branch_name)
  File "/data/home/jb12407/git-trac-command/git_trac/app.py", line 134, in
_checkout_ticket
self.repo.create(local)
  File "/data/home/jb12407/git-trac-command/git_trac/git_repository.py",
line 145, in create
self.git.fetch('trac', starting_branch)
  File "/data/home/jb12407/git-trac-command/git_trac/git_interface.py",
line 341, in meth
return self.execute(git_cmd, *args, **kwds)
  File "/data/home/jb12407/git-trac-command/git_trac/git_interface.py",
line 328, in execute
popen_stderr=subprocess.PIPE)
  File "/data/home/jb12407/git-trac-command/git_trac/git_interface.py",
line 263, in _run
raise GitError(result)
git_trac.git_error.GitError: git returned with non-zero exit code (128)
when executing "git fetch trac develop"
STDERR: trac.sagemath.org[0: 104.197.143.230]: errno=Connection refused
STDERR: fatal: unable to connect a socket (Connection refused)


On Sun, Sep 25, 2016 at 5:03 PM, Volker Braun  wrote:

> The buildbot is offline, too... http://build.sagedev.org
>
> There shouldn't be a connection, but it's certainly an odd coincidence
>
>
>
>
> On Sunday, September 25, 2016 at 5:22:19 PM UTC+2, William wrote:
>>
>> On Sun, Sep 25, 2016 at 8:13 AM, Dima Pasechnik 
>> wrote:
>> >
>> > I almost cannot use either web interface, or git server.
>> > (however I can ssh to the host, although it is slow...)
>> > I see a lot of apache activity...
>> >
>> > Does anyone do anything heavy?
>>
>>
>> I doubt this is good:
>>
>> wstein@trac:~$ ps ax|grep git-upload-pack|wc -l
>> 71
>>
>> and
>>
>> wstein@trac:~$ ps ax |grep "git clone" |wc -l
>> 33
>>
>> It's yet again the situation where so many git operations are being
>> performed that none of them finish, things time out, and it tries them
>> all again at once, or something.  This work needs to be rewritten to
>> use a lock or queue or something...  or just make the entire machine
>> way more powerful (spend more money).
>>
>> I'm now going to try to do a lot of aggressive killing of git
>> processes, restarting of apache, etc.
>>
>> William
>>
>> >
>> >
>> > --
>> >
>> > ---
>> > You received this message because you are subscribed to the Google
>> Groups "sagemath-admins" group.
>> > To unsubscribe from this group and stop receiving emails from it, send
>> an email to sagemath-admi...@googlegroups.com.
>> > For more options, visit https://groups.google.com/d/optout.
>>
>>
>>
>>
>> --
>> William (http://wstein.org)
>>
>



Re: [sage-devel] Re: openblas segfault?

2016-09-25 Thread Jonathan Bober
Unfortunately, that didn't work correctly. Running

SAGE_INSTALL_GCC=yes make

seems to do something wrong to gmp. e.g., while building GMP-ECM...

checking if gmp.h version and libgmp version are the same... (6.1.0/5.1.3)
no
configure: error: 'gmp.h' and 'libgmp' have different versions, you have to
reinstall GMP properly.
Error configuring GMP-ECM.

Is it possible that some environment variable I've set has messed something
up?

Also, I ran make distclean instead of really starting from scratch. Is that
relevant?

On Sun, Sep 25, 2016 at 11:48 PM, Jonathan Bober <jwbo...@gmail.com> wrote:

> That's a good point. It looks like 5.1.0. I will try rebuilding from
> scratch with SAGE_INSTALL_GCC=yes, since that's easier than actually trying
> to figure out what's going on. (Also, it is past time for me to update gcc,
> since it looks like I can easily have 6.1.0 on this system.) I will report
> back in 90 minutes, unless I don't, in which case I'll report back in 10 to
> 24 hours.
>
> On Sun, Sep 25, 2016 at 11:31 PM, Dima Pasechnik <dimp...@gmail.com>
> wrote:
>
>> Most probably gcc version used to build Sage is also quite important...
>>
>>
>> On Sunday, September 25, 2016 at 10:16:23 PM UTC, Jonathan Bober wrote:
>>
>>> jb12407@lmfdb5:~$ lscpu
>>> Architecture:  x86_64
>>> CPU op-mode(s):32-bit, 64-bit
>>> Byte Order:Little Endian
>>> CPU(s):64
>>> On-line CPU(s) list:   0-63
>>> Thread(s) per core:2
>>> Core(s) per socket:8
>>> Socket(s): 4
>>> NUMA node(s):  8
>>> Vendor ID: AuthenticAMD
>>> CPU family:21
>>> Model: 2
>>> Model name:AMD Opteron(tm) Processor 6380
>>> Stepping:  0
>>> CPU MHz:   2500.065
>>> BogoMIPS:  4999.42
>>> Virtualization:AMD-V
>>> L1d cache: 16K
>>> L1i cache: 64K
>>> L2 cache:  2048K
>>> L3 cache:  6144K
>>> NUMA node0 CPU(s): 0-7
>>> NUMA node1 CPU(s): 8-15
>>> NUMA node2 CPU(s): 16-23
>>> NUMA node3 CPU(s): 24-31
>>> NUMA node4 CPU(s): 32-39
>>> NUMA node5 CPU(s): 40-47
>>> NUMA node6 CPU(s): 48-55
>>> NUMA node7 CPU(s): 56-63
>>>
>>> (Old software...)
>>>
>>> jb12407@lmfdb5:~$ lsb_release -a
>>> LSB Version: :base-4.0-amd64:base-4.0-noarc
>>> h:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics
>>> -4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
>>> Distributor ID: Scientific
>>> Description: Scientific Linux release 6.8 (Carbon)
>>> Release: 6.8
>>> Codename: Carbon
>>>
>>> jb12407@lmfdb5:~$ uname -r
>>> 2.6.32-573.3.1.el6.x86_64
>>>
>>> On Sun, Sep 25, 2016 at 11:03 PM, Dima Pasechnik <dim...@gmail.com>
>>> wrote:
>>>
>>>>
>>>>
>>>> On Sunday, September 25, 2016 at 9:33:42 PM UTC, Jonathan Bober wrote:
>>>>>
>>>>> I'm getting a segfault on the current development branch of sage (see
>>>>> below). Is this a known issue and/or affecting anyone else?
>>>>>
>>>>> This does not happen for me in Sage 7.3, although it does happen if I
>>>>> try to go "back in time" with a 'git checkout 7.3' and then rebuild. So
>>>>> just to be certain I built a few copies from scratch.
>>>>>
>>>>> This is from a doctest in src/doc/*/a_tour_of_sage/index.rst, easily
>>>>> reproducible. (Notice the * there. I guess Sage separately tests the
>>>>> documentation in each language...)
>>>>>
>>>>> sage: m = random_matrix(RDF,500)
>>>>> sage: m.eigenvalues()
>>>>> 
>>>>> 
>>>>> /data/home/jb12407/sources/sage/local/lib/python2.7/site-pac
>>>>> kages/cysignals/signals.so(+0x4635)[0x7f6d34da4635]
>>>>> /data/home/jb12407/sources/sage/local/lib/python2.7/site-pac
>>>>> kages/cysignals/signals.so(+0x4685)[0x7f6d34da4685]
>>>>> /data/home/jb12407/sources/sage/local/lib/python2.7/site-pac
>>>>> kages/cysignals/signals.so(+0x7107)[0x7f6d34da7107]
>>>>> /lib64/libpthread.so.0[0x30f700f7e0]
>>>>> /data/home/jb12407/sources/sage/local/lib/libopenblas.so.0(d
>>>>> axpy_k+0x48)[0x7f

Re: [sage-devel] Re: openblas segfault?

2016-09-25 Thread Jonathan Bober
That's a good point. It looks like 5.1.0. I will try rebuilding from
scratch with SAGE_INSTALL_GCC=yes, since that's easier than actually trying
to figure out what's going on. (Also, it is past time for me to update gcc,
since it looks like I can easily have 6.1.0 on this system.) I will report
back in 90 minutes, unless I don't, in which case I'll report back in 10 to
24 hours.

On Sun, Sep 25, 2016 at 11:31 PM, Dima Pasechnik <dimp...@gmail.com> wrote:

> Most probably gcc version used to build Sage is also quite important...
>
>
> On Sunday, September 25, 2016 at 10:16:23 PM UTC, Jonathan Bober wrote:
>
>> jb12407@lmfdb5:~$ lscpu
>> Architecture:  x86_64
>> CPU op-mode(s):32-bit, 64-bit
>> Byte Order:Little Endian
>> CPU(s):64
>> On-line CPU(s) list:   0-63
>> Thread(s) per core:2
>> Core(s) per socket:8
>> Socket(s): 4
>> NUMA node(s):  8
>> Vendor ID: AuthenticAMD
>> CPU family:21
>> Model: 2
>> Model name:AMD Opteron(tm) Processor 6380
>> Stepping:  0
>> CPU MHz:   2500.065
>> BogoMIPS:  4999.42
>> Virtualization:AMD-V
>> L1d cache: 16K
>> L1i cache: 64K
>> L2 cache:  2048K
>> L3 cache:  6144K
>> NUMA node0 CPU(s): 0-7
>> NUMA node1 CPU(s): 8-15
>> NUMA node2 CPU(s): 16-23
>> NUMA node3 CPU(s): 24-31
>> NUMA node4 CPU(s): 32-39
>> NUMA node5 CPU(s): 40-47
>> NUMA node6 CPU(s): 48-55
>> NUMA node7 CPU(s): 56-63
>>
>> (Old software...)
>>
>> jb12407@lmfdb5:~$ lsb_release -a
>> LSB Version: :base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-
>> noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-
>> amd64:printing-4.0-noarch
>> Distributor ID: Scientific
>> Description: Scientific Linux release 6.8 (Carbon)
>> Release: 6.8
>> Codename: Carbon
>>
>> jb12407@lmfdb5:~$ uname -r
>> 2.6.32-573.3.1.el6.x86_64
>>
>> On Sun, Sep 25, 2016 at 11:03 PM, Dima Pasechnik <dim...@gmail.com>
>> wrote:
>>
>>>
>>>
>>> On Sunday, September 25, 2016 at 9:33:42 PM UTC, Jonathan Bober wrote:
>>>>
>>>> I'm getting a segfault on the current development branch of sage (see
>>>> below). Is this a known issue and/or affecting anyone else?
>>>>
>>>> This does not happen for me in Sage 7.3, although it does happen if I
>>>> try to go "back in time" with a 'git checkout 7.3' and then rebuild. So
>>>> just to be certain I built a few copies from scratch.
>>>>
>>>> This is from a doctest in src/doc/*/a_tour_of_sage/index.rst, easily
>>>> reproducible. (Notice the * there. I guess Sage separately tests the
>>>> documentation in each language...)
>>>>
>>>> sage: m = random_matrix(RDF,500)
>>>> sage: m.eigenvalues()
>>>> 
>>>> 
>>>> /data/home/jb12407/sources/sage/local/lib/python2.7/site-pac
>>>> kages/cysignals/signals.so(+0x4635)[0x7f6d34da4635]
>>>> /data/home/jb12407/sources/sage/local/lib/python2.7/site-pac
>>>> kages/cysignals/signals.so(+0x4685)[0x7f6d34da4685]
>>>> /data/home/jb12407/sources/sage/local/lib/python2.7/site-pac
>>>> kages/cysignals/signals.so(+0x7107)[0x7f6d34da7107]
>>>> /lib64/libpthread.so.0[0x30f700f7e0]
>>>> /data/home/jb12407/sources/sage/local/lib/libopenblas.so.0(
>>>> daxpy_k+0x48)[0x7f6d29e9eab8]
>>>> /data/home/jb12407/sources/sage/local/lib/libopenblas.so.0(+
>>>> 0xc9c3e)[0x7f6d29ca9c3e]
>>>> /data/home/jb12407/sources/sage/local/lib/libopenblas.so.0(
>>>> exec_blas+0xd9)[0x7f6d29e590f9]
>>>> /data/home/jb12407/sources/sage/local/lib/libopenblas.so.0(
>>>> dtrmv_thread_NLU+0x204)[0x7f6d29ca9f34]
>>>> /data/home/jb12407/sources/sage/local/lib/libopenblas.so.0(
>>>> dtrmv_+0x155)[0x7f6d29c67b75]
>>>> /data/home/jb12407/sources/sage/local/lib/libopenblas.so.0(
>>>> dlahr2_+0x451)[0x7f6d2a0eecb1]
>>>> /data/home/jb12407/sources/sage/local/lib/libopenblas.so.0(
>>>> dgehrd_+0x47d)[0x7f6d2a0b6c8d]
>>>> /data/home/jb12407/sources/sage/local/lib/libopenblas.so.0(
>>>> dgeev_+0x430)[0x7f6d2a0b1470]
>>>> /data/home/jb12407/sources/sage/local/lib/pytho

Re: [sage-devel] Re: openblas segfault?

2016-09-25 Thread Jonathan Bober
jb12407@lmfdb5:~$ lscpu
Architecture:  x86_64
CPU op-mode(s):32-bit, 64-bit
Byte Order:Little Endian
CPU(s):64
On-line CPU(s) list:   0-63
Thread(s) per core:2
Core(s) per socket:8
Socket(s): 4
NUMA node(s):  8
Vendor ID: AuthenticAMD
CPU family:21
Model: 2
Model name:AMD Opteron(tm) Processor 6380
Stepping:  0
CPU MHz:   2500.065
BogoMIPS:  4999.42
Virtualization:AMD-V
L1d cache: 16K
L1i cache: 64K
L2 cache:  2048K
L3 cache:  6144K
NUMA node0 CPU(s): 0-7
NUMA node1 CPU(s): 8-15
NUMA node2 CPU(s): 16-23
NUMA node3 CPU(s): 24-31
NUMA node4 CPU(s): 32-39
NUMA node5 CPU(s): 40-47
NUMA node6 CPU(s): 48-55
NUMA node7 CPU(s): 56-63

(Old software...)

jb12407@lmfdb5:~$ lsb_release -a
LSB Version:
:base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
Distributor ID: Scientific
Description: Scientific Linux release 6.8 (Carbon)
Release: 6.8
Codename: Carbon

jb12407@lmfdb5:~$ uname -r
2.6.32-573.3.1.el6.x86_64

On Sun, Sep 25, 2016 at 11:03 PM, Dima Pasechnik <dimp...@gmail.com> wrote:

>
>
> On Sunday, September 25, 2016 at 9:33:42 PM UTC, Jonathan Bober wrote:
>>
>> I'm getting a segfault on the current development branch of sage (see
>> below). Is this a known issue and/or affecting anyone else?
>>
>> This does not happen for me in Sage 7.3, although it does happen if I try
>> to go "back in time" with a 'git checkout 7.3' and then rebuild. So just to
>> be certain I built a few copies from scratch.
>>
>> This is from a doctest in src/doc/*/a_tour_of_sage/index.rst, easily
>> reproducible. (Notice the * there. I guess Sage separately tests the
>> documentation in each language...)
>>
>> sage: m = random_matrix(RDF,500)
>> sage: m.eigenvalues()
>> 
>> /data/home/jb12407/sources/sage/local/lib/python2.7/site-pac
>> kages/cysignals/signals.so(+0x4635)[0x7f6d34da4635]
>> /data/home/jb12407/sources/sage/local/lib/python2.7/site-pac
>> kages/cysignals/signals.so(+0x4685)[0x7f6d34da4685]
>> /data/home/jb12407/sources/sage/local/lib/python2.7/site-pac
>> kages/cysignals/signals.so(+0x7107)[0x7f6d34da7107]
>> /lib64/libpthread.so.0[0x30f700f7e0]
>> /data/home/jb12407/sources/sage/local/lib/libopenblas.so.0(
>> daxpy_k+0x48)[0x7f6d29e9eab8]
>> /data/home/jb12407/sources/sage/local/lib/libopenblas.so.0(+
>> 0xc9c3e)[0x7f6d29ca9c3e]
>> /data/home/jb12407/sources/sage/local/lib/libopenblas.so.0(
>> exec_blas+0xd9)[0x7f6d29e590f9]
>> /data/home/jb12407/sources/sage/local/lib/libopenblas.so.0(
>> dtrmv_thread_NLU+0x204)[0x7f6d29ca9f34]
>> /data/home/jb12407/sources/sage/local/lib/libopenblas.so.0(
>> dtrmv_+0x155)[0x7f6d29c67b75]
>> /data/home/jb12407/sources/sage/local/lib/libopenblas.so.0(
>> dlahr2_+0x451)[0x7f6d2a0eecb1]
>> /data/home/jb12407/sources/sage/local/lib/libopenblas.so.0(
>> dgehrd_+0x47d)[0x7f6d2a0b6c8d]
>> /data/home/jb12407/sources/sage/local/lib/libopenblas.so.0(
>> dgeev_+0x430)[0x7f6d2a0b1470]
>> /data/home/jb12407/sources/sage/local/lib/python2.7/site-pac
>> kages/scipy/linalg/_flapack.so(+0x2cde7)[0x7f66808b4de7]
>> /data/home/jb12407/sources/sage/local/lib/libpython2.7.so.1.
>> 0(PyObject_Call+0x43)[0x7f6d4244aed3]
>> /data/home/jb12407/sources/sage/local/lib/libpython2.7.so.1.
>> 0(PyEval_EvalFrameEx+0x388a)[0x7f6d424fd5ba]
>> /data/home/jb12407/sources/sage/local/lib/libpython2.7.so.1.
>> 0(PyEval_EvalCodeEx+0x81c)[0x7f6d4250066c]
>> /data/home/jb12407/sources/sage/local/lib/libpython2.7.so.1.
>> 0(PyEval_EvalFrameEx+0x5654)[0x7f6d424ff384]
>> /data/home/jb12407/sources/sage/local/lib/libpython2.7.so.1.
>> 0(PyEval_EvalCodeEx+0x81c)[0x7f6d4250066c]
>> /data/home/jb12407/sources/sage/local/lib/libpython2.7.so.1.
>> 0(+0x839fc)[0x7f6d4247b9fc]
>> /data/home/jb12407/sources/sage/local/lib/python2.7/site-pac
>> kages/sage/matrix/matrix_double_dense.so(+0x9949)[0x7f6682759949]
>> /data/home/jb12407/sources/sage/local/lib/python2.7/site-pac
>> kages/sage/matrix/matrix_double_dense.so(+0x5161b)[0x7f66827a161b]
>> /data/home/jb12407/sources/sage/local/lib/libpython2.7.so.1.
>> 0(PyEval_EvalFrameEx+0x5331)[0x7f6d424ff061]
>> /data/home/jb12407/sources/sage/local/lib/libpython2.7.so.1.
>> 0(PyEval_EvalCodeEx+0x81c)[0x7f6d4250066c]
>> /data/home/jb12407/sources/sage/local/lib/libpython2.7.so.1.
>> 0(PyEval_EvalCode+0x1

[sage-devel] openblas segfault?

2016-09-25 Thread Jonathan Bober
I'm getting a segfault on the current development branch of sage (see
below). Is this a known issue and/or affecting anyone else?

This does not happen for me in Sage 7.3, although it does happen if I try
to go "back in time" with a 'git checkout 7.3' and then rebuild. So just to
be certain I built a few copies from scratch.

This is from a doctest in src/doc/*/a_tour_of_sage/index.rst, easily
reproducible. (Notice the * there. I guess Sage separately tests the
documentation in each language...)

sage: m = random_matrix(RDF,500)
sage: m.eigenvalues()

/data/home/jb12407/sources/sage/local/lib/python2.7/site-packages/cysignals/signals.so(+0x4635)[0x7f6d34da4635]
/data/home/jb12407/sources/sage/local/lib/python2.7/site-packages/cysignals/signals.so(+0x4685)[0x7f6d34da4685]
/data/home/jb12407/sources/sage/local/lib/python2.7/site-packages/cysignals/signals.so(+0x7107)[0x7f6d34da7107]
/lib64/libpthread.so.0[0x30f700f7e0]
/data/home/jb12407/sources/sage/local/lib/libopenblas.so.0(daxpy_k+0x48)[0x7f6d29e9eab8]
/data/home/jb12407/sources/sage/local/lib/libopenblas.so.0(+0xc9c3e)[0x7f6d29ca9c3e]
/data/home/jb12407/sources/sage/local/lib/libopenblas.so.0(exec_blas+0xd9)[0x7f6d29e590f9]
/data/home/jb12407/sources/sage/local/lib/libopenblas.so.0(dtrmv_thread_NLU+0x204)[0x7f6d29ca9f34]
/data/home/jb12407/sources/sage/local/lib/libopenblas.so.0(dtrmv_+0x155)[0x7f6d29c67b75]
/data/home/jb12407/sources/sage/local/lib/libopenblas.so.0(dlahr2_+0x451)[0x7f6d2a0eecb1]
/data/home/jb12407/sources/sage/local/lib/libopenblas.so.0(dgehrd_+0x47d)[0x7f6d2a0b6c8d]
/data/home/jb12407/sources/sage/local/lib/libopenblas.so.0(dgeev_+0x430)[0x7f6d2a0b1470]
/data/home/jb12407/sources/sage/local/lib/python2.7/site-packages/scipy/linalg/_flapack.so(+0x2cde7)[0x7f66808b4de7]
/data/home/jb12407/sources/sage/local/lib/libpython2.7.so.1.0(PyObject_Call+0x43)[0x7f6d4244aed3]
/data/home/jb12407/sources/sage/local/lib/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x388a)[0x7f6d424fd5ba]
/data/home/jb12407/sources/sage/local/lib/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x81c)[0x7f6d4250066c]
/data/home/jb12407/sources/sage/local/lib/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x5654)[0x7f6d424ff384]
/data/home/jb12407/sources/sage/local/lib/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x81c)[0x7f6d4250066c]
/data/home/jb12407/sources/sage/local/lib/libpython2.7.so.1.0(+0x839fc)[0x7f6d4247b9fc]
/data/home/jb12407/sources/sage/local/lib/python2.7/site-packages/sage/matrix/matrix_double_dense.so(+0x9949)[0x7f6682759949]
/data/home/jb12407/sources/sage/local/lib/python2.7/site-packages/sage/matrix/matrix_double_dense.so(+0x5161b)[0x7f66827a161b]
/data/home/jb12407/sources/sage/local/lib/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x5331)[0x7f6d424ff061]
/data/home/jb12407/sources/sage/local/lib/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x81c)[0x7f6d4250066c]
/data/home/jb12407/sources/sage/local/lib/libpython2.7.so.1.0(PyEval_EvalCode+0x19)[0x7f6d42500789]
/data/home/jb12407/sources/sage/local/lib/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x509e)[0x7f6d424fedce]
/data/home/jb12407/sources/sage/local/lib/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x81c)[0x7f6d4250066c]
/data/home/jb12407/sources/sage/local/lib/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x5654)[0x7f6d424ff384]
/data/home/jb12407/sources/sage/local/lib/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x81c)[0x7f6d4250066c]
/data/home/jb12407/sources/sage/local/lib/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x5654)[0x7f6d424ff384]
/data/home/jb12407/sources/sage/local/lib/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x81c)[0x7f6d4250066c]
/data/home/jb12407/sources/sage/local/lib/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x5654)[0x7f6d424ff384]
/data/home/jb12407/sources/sage/local/lib/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x81c)[0x7f6d4250066c]
/data/home/jb12407/sources/sage/local/lib/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x5654)[0x7f6d424ff384]
/data/home/jb12407/sources/sage/local/lib/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x81c)[0x7f6d4250066c]
/data/home/jb12407/sources/sage/local/lib/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x5654)[0x7f6d424ff384]
/data/home/jb12407/sources/sage/local/lib/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x81c)[0x7f6d4250066c]
/data/home/jb12407/sources/sage/local/lib/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x5654)[0x7f6d424ff384]
/data/home/jb12407/sources/sage/local/lib/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x81c)[0x7f6d4250066c]
/data/home/jb12407/sources/sage/local/lib/libpython2.7.so.1.0(PyEval_EvalCode+0x19)[0x7f6d42500789]
/data/home/jb12407/sources/sage/local/lib/libpython2.7.so.1.0(PyRun_FileExFlags+0x8a)[0x7f6d42523e5a]
/data/home/jb12407/sources/sage/local/lib/libpython2.7.so.1.0(PyRun_SimpleFileExFlags+0xd7)[0x7f6d42525207]
/data/home/jb12407/sources/sage/local/lib/libpython2.7.so.1.0(Py_Main+0xc3e)[0x7f6d4253b70e]
/lib64/libc.so.6(__libc_start_main+0xfd)[0x30f6c1ed1d]
python[0x4006f1]

Re: [sage-devel] Re: [sage-support] huge virtual memory size when launching 7.3

2016-09-21 Thread Jonathan Bober
On Wed, Sep 21, 2016 at 4:41 PM, Jeroen Demeyer wrote:

I tested this with a small stand-alone C program: the flags
> PROT_NONE | MAP_PRIVATE | MAP_ANONYMOUS
> allow allocating huge amounts of virtual memory, even with overcommit=2.


I also tried this, and I found that the memory does count against the
commit limit once mprotect() is called with PROT_READ | PROT_WRITE. With
PROT_NONE, I tried allocating 100 TB in each of 200 different processes at
the same time, and I couldn't notice any effect of this in /proc/meminfo.
I haven't actually found this behavior documented anywhere, and it would
be good to know whether it works similarly on OS X.

There is still a small cost: the memory does count against the process as
far as ulimit -v is concerned. But that doesn't seem like much of an issue
to me.

-- 
You received this message because you are subscribed to the Google Groups 
"sage-devel" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sage-devel+unsubscr...@googlegroups.com.
To post to this group, send email to sage-devel@googlegroups.com.
Visit this group at https://groups.google.com/group/sage-devel.
For more options, visit https://groups.google.com/d/optout.


[sage-devel] Re: [sage-support] huge virtual memory size when launching 7.3

2016-09-21 Thread Jonathan Bober
(I've switched from sage-support to sage-devel.)

I can test and review. I think I know what to do and I would have just
tried to implement this myself, except that it would take me a while to
figure out how to make the change in such a way that it fits into the Sage
build process.

I spent some time trying to understand the issue and what this PARI stack
is all about. What I think I understand is something like:

- PARI uses its own internal stack for quick memory allocation.
- At the "top level" -- using PARI as a C library, or through the GP
interpreter -- the stack is empty between function calls, so it can be
resized.
- While in use, though, the stack can't really be resized, because
references to memory allocated on the stack don't use the stack pointer,
and resizing might require moving the stack.
- If the stack runs out of space, PARI throws an error; once upon a
time Sage would notice this error, increase the stack size, and retry.
- Now Sage instead just sets the stack size to be really big, which is
probably fine in general because modern systems generally don't
allocate memory until it is actually touched.

(Is that all correct?)

Maybe this change in Sage corresponds to a change in the way that PARI
manages its stack. But from reading the PARI source code I can't see
that PARI manages the stack in any way distinguishable from just calling
malloc() to allocate it and then using it until it is full. The use of
MAP_NORESERVE is probably intended to have some effect, but on Linux
MAP_NORESERVE appears to do nothing. Meanwhile, the paristack_resize()
function just does some arithmetic and doesn't touch the allocated stack
space at all. Luckily, paristack_resize() does exist and probably gets
called in the right places even though it doesn't really do anything, so
a call to mprotect() can be added there, while the mmap() call can use
PROT_NONE. Those may be the only spots where the PARI source code needs
to be changed.

(I'm not completely sure I'm correct about all of that.)

I'm probably going to try to modify a clean copy of PARI to do this, or
just write some completely separate test code to check that an mmap call
with PROT_NONE will work like we think it will work.

On Tue, Sep 20, 2016 at 12:02 PM, Jeroen Demeyer <jdeme...@cage.ugent.be>
wrote:

> On 2016-09-20 12:54, Jonathan Bober wrote:
>
>>  From reading what you've sent, I guess that what you have in mind is
>> calling mmap with PROT_NONE and then calling mprotect() to change that
>> to read/write whenever growing the stack? That seems like it might be a
>> reasonable thing to do (though I'm only basing that on spending a few
>> minutes reading what you sent, not from any actual knowledge that I had
>> before).
>>
>
> Yes, that is my idea.
>
> I'm willing to devote some time (not today) to figuring out what the
>> right thing to do is (maybe the above is already the right thing) /
>> implementing this / reviewing this.
>>
>
> I don't mind implementing it. What I *do* mind is that I implement it and
> that the patch rots away on Sage Trac in needs_review state (highly
> specialized patches like these have a higher chance of that). That's why I
> asked for a commitment to review.
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"sage-devel" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sage-devel+unsubscr...@googlegroups.com.
To post to this group, send email to sage-devel@googlegroups.com.
Visit this group at https://groups.google.com/group/sage-devel.
For more options, visit https://groups.google.com/d/optout.


[sage-devel] introspection is slow

2012-05-29 Thread Jonathan Bober
see http://trac.sagemath.org/sage_trac/ticket/13057

This seems to be a bad regression. On reasonable machines it can take
5 seconds to get the docstring of an object by using '?'. Does anyone
have any idea what happened? I don't see any changes to
sage/misc/sagedoc.py or sage/misc/sageinspect.py in the last few
months that seem like they would cause this.

-- 
To post to this group, send an email to sage-devel@googlegroups.com
To unsubscribe from this group, send an email to 
sage-devel+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/sage-devel
URL: http://www.sagemath.org


Re: [sage-devel] Re: Upgrading trac

2012-03-21 Thread Jonathan Bober
Perhaps you just had a flaky connection, and the fact that the same
file kept failing was just a coincidence?

There are some errors in the trac log like

[...]
  File "/usr/lib/python2.5/cgi.py", line 691, in read_lines
    self.read_lines_to_outerboundary()
  File "/usr/lib/python2.5/cgi.py", line 719, in read_lines_to_outerboundary
    line = self.fp.readline(116)
IOError: request data read error

which I think would happen if your connection was terminated while you
were uploading the file. Otherwise, this sounds like a strange bug in
trac.

On Wed, Mar 21, 2012 at 11:54 AM, David Roe r...@math.harvard.edu wrote:
 I have no idea what's going on, but I succeeded in uploading it.  It's now
 attached to 12717.
 David


 On Wed, Mar 21, 2012 at 11:23, Florent Hivert florent.hiv...@lri.fr wrote:

      Hi David,

   I don't know exactly what happened, but the folder
   /var/trac/sage_trac/attachments/ticket/12717 didn't exist even though
   the
   ticket existed.  I tried attaching a file to the ticket, succeeded,
   and
   now the folder exists.  Try again?

 Thanks for your help.

  I tried Firefox, Konqueror and Opera. None of them seems to work. Any
  Idea ?

 This is very strange ! I was successful in adding a random patch but trac
 seems to refuse the one patch I want to add. The offending patch is at
 [0], if
 someone has an idea of what is happening.

 Cheers,

 Florent

 [0]
 http://combinat.sagemath.org/patches/file/084e2b6120ff/trac_12717-latex_builtin_constants-fh.patch




Re: [sage-devel] Re: Sage requires SSL

2012-03-20 Thread Jonathan Bober
On Tue, Mar 20, 2012 at 10:28 AM, Keshav Kini keshav.k...@gmail.com wrote:
 I guess Purple Sage will have to switch to git when Sage does, since
 their code bases are deeply connected. lmfdb, which I hadn't heard of
 before, looks to be a separate codebase (like sagenb is). Is it
 switching to git too? If not, then do we still need to ship Mercurial in
 Sage to allow people to hack on lmfdb, or will they need to start using
 systemwide versions of Mercurial?

 -Keshav

Actually, the code base of psage is not deeply connected to sage. It
really is just a python library that more or less depends on all of
sage being present. (Or at least a lot of sage.) It could probably
switch to git without much of an issue. As for the LMFDB, I haven't
really thought about it at all. I would like it if it used git, but
I'm not going to push for people to switch without much reason.

I don't think Sage _needs_ to ship mercurial if it switches to git,
but it would be nice for me, and probably others, if it did. As I've
noted before, mercurial seems to be a PITA to build from source when I
don't have root access, and the last time I tried to do it I gave up
and decided to use Sage's mercurial. (git was easy to build, though.)

In this (still hypothetical) situation, since most of the work has
already been done, mercurial could probably also just stick around as
an optional package, at least until it breaks from lack of
maintenance.



Re: [sage-devel] Re: Upgrading trac

2012-03-20 Thread Jonathan Bober
On Tue, Mar 20, 2012 at 3:58 AM, leif not.rea...@online.de wrote:
 On 20 Mrz., 10:45, P Purkayastha ppu...@gmail.com wrote:
 On Tuesday, March 20, 2012 4:43:41 PM UTC+8, Jeroen Demeyer wrote:

  On 2012-03-20 09:17, David Roe wrote:
   What link is broken?  I'm finding the link to

 http://trac.sagemath.org/sage_trac/raw-attachment/ticket/8109/trac_81...
   from the attachment page for example...
  I'm not talking about the attachment page, but the Attachments section
  (below the ticket description and above the Change History):
 http://trac.sagemath.org/sage_trac/ticket/8109#no1

  Those links used to have a raw-attachment link, but not anymore.  In
  fact, I suspect this has changed only very recently (like the last 24
  hours or so).

 Like I posted 
 here:https://groups.google.com/d/msg/sage-devel/gMYp3syruYQ/lchDQIAv7SkJ those
 are just missing an img line in the html code.

 Exactly.  And I can also just show you the difference between the
 generated HTML code:

 Old style*:

 <a href="http://trac.sagemath.org/sage_trac/raw-attachment/ticket/2999/pbori-custom_py.patch" title="Download" class="trac-rawlink"><img src="trac_2999-oldstyle-files/download.png" alt="Download"/></a>

 New style/after upgrading:

 <a href="http://trac.sagemath.org/sage_trac/raw-attachment/ticket/2999/pbori-custom_py.patch" class="trac-rawlink" title="Download"></a>


 So the raw links (i.e. their anchors) are indeed there, there's just
 no icon you could click on. :-)


 -leif


Thanks for figuring this out. I just fixed it.



Re: [sage-devel] Re: Upgrading trac

2012-03-20 Thread Jonathan Bober
Whatever is going on, it has to do with trac, and not the database, it
seems. apache runs at 100% for a while before returning a response
whenever that page is loaded. I wonder if perhaps the issue is with
one of the plugins.

On Tue, Mar 20, 2012 at 4:40 PM, Maarten Derickx
m.derickx.stud...@gmail.com wrote:


 Le mercredi 21 mars 2012 00:22:10 UTC+1, David Roe a écrit :

 I agree that it's a problem, but I don't know what caused the change.
 David


 I don't know either, but the sage logs tell me what your ip address is :P. I
 was looking through the logs to see what happened when I requested the page and
 then saw someone else look at it. The apache logs don't show anything weird
 except that they confirm it is slow, i.e. two requests that I made only
 seconds after each other were quite far apart in the logs. For example:

 194.171.106.2 - - [20/Mar/2012:16:20:41 -0700] GET /sage_trac/admin
 HTTP/1.1 200 2160
 insert a lot of entrys here
 128.208.160.197 - - [20/Mar/2012:16:24:09 -0700] GET /sage_trac/ticket/135
 HTTP/1.1 200 12330
 194.171.106.2 - - [20/Mar/2012:16:20:43 -0700] GET
 /sage_trac/admin/accounts/users HTTP/1.1 200 33771

 This shows that a request that was made more than 3 minutes after mine got
 delivered to the client first!

 I tried to find other trac-related log files but could find nothing in
 /var/log.

 On Tue, Mar 20, 2012 at 16:15, Maarten Derickx
 m.derickx.stud...@gmail.com wrote:



 Le mardi 20 mars 2012 12:33:27 UTC+1, Maarten Derickx a écrit :

 I noticed that the loading
 of http://trac.sagemath.org/sage_trac/admin/accounts/users now takes
  significantly longer. It used to take less than a second, but when I now go
  there to create a new user it takes at least 5 seconds (if not more). The
  stupid thing is that after having created a new account, I can read and
  delete the account-creation confirmation mail sent out by trac before the
  webpage is loaded again.


  Ok, so the 5 seconds in the previous mail was a huge understatement: when
  it didn't load fast the previous time, I did some other stuff and came back
  later to see it was loaded. I now timed it and it took 2 minutes and 35
  seconds. This is really bad, because if I now have to create a new account I
  have to wait that long before I can start the creation. And after the
  creation I again have to wait that long to see if it went ok.




 Le dimanche 18 mars 2012 17:35:01 UTC+1, David Roe a écrit :

 Hi everyone,
 We're looking at upgrading trac from 0.11.5 to 0.12.3 (latest stable
 version).  There will probably be some downtime later today (hopefully 
 less
 than 10 minutes) to switch over to the new version.

 Let us know if today is a bad day for some reason, and also if
 something is misbehaving after the switch.
 David




Re: [sage-devel] Re: Upgrading trac

2012-03-19 Thread Jonathan Bober
I just made a few more changes to the trac/apache configuration.
Apache now serves up (some) trac static files directly, instead of
having trac do it. This should give better performance, but I don't
know whether or not it will actually be anything noticeable.

I also decreased the maximum number of processes/threads that trac runs,
which should mean fewer database connections and, I hope, make the
connection-limit error go away.

On Mon, Mar 19, 2012 at 5:30 AM, Hugh Thomas hugh.ross.tho...@gmail.com wrote:

 Thanks, David and Jeroen.  I haven't used trac that much, and I guess I just
 hadn't encountered that behaviour before, so I thought it might be related.


 cheers,

 Hugh




Re: [sage-devel] Re: Releasing more often?

2012-03-07 Thread Jonathan Bober
I think I remembered the problems wrong. I think maybe it isn't an issue
with the version of VirtualBox, but an issue with the kernel that mod.math
is running. Performance with multiple cores seems to be much worse than
single core performance. An example I just tried:

with a single core in the VM, time to compile gnutls:

real 7m46.024s
user 1m23.073s
sys 4m48.342s

with 4 cores in the VM, time to compile gnutls (using a single process):

real 13m20.065s
user 4m50.114s
sys 15m43.147s

(mod.math is heavily loaded right now, so maybe this isn't a completely
conclusive test, but I think it is reliable.) I think that things are
actually much worse than it seems with the above timings. Multicore
VirtualBox seems to be basically unusable on mod.math. (I've tried this
before, and had a similar experience.)

It is hard for me to find reliable information from google, given the age
of the OS, but there is some reference to problems here:
https://www.virtualbox.org/ticket/5501. On the bug discussion, there's
basically no identification of the problem other than that it goes away with a
newer kernel, though.


On Tue, Mar 6, 2012 at 12:20 PM, William Stein wst...@gmail.com wrote:

 On Tue, Mar 6, 2012 at 12:11 PM, Jonathan Bober jwbo...@gmail.com wrote:
  I am going to be building a VM on sage.math for other purposes soon, so
 I'll
  try to make one for this as well. I suppose we would want something like
  Ubuntu 11.10 32-bit desktop edition?
 
  (Unfortunately, virtualbox on the *.math machines is really old, or at
 least
  it was last time I checked, so using more than one cpu, for example,
 doesn't
  work. It might be nice if it could be upgraded somehow, but I imagine
 that
  might be rather difficult without upgrading the OS.)

 Use mod.math instead of boxen.math.  On mod.math there is a newer
 virtualbox.

 
 
  On Tue, Mar 6, 2012 at 6:23 AM, Jeroen Demeyer jdeme...@cage.ugent.be
  wrote:
 
  On 2012-03-06 15:17, Georg S. Weber wrote:
   I would expect that a Linux i386 machine
   is mainly tested as a virtualized image (with a special 32bit setup,
   of course, maybe even using SAGE_ATLAS and such).
  This has been suggested regularly, it just needs to be done.  If you are
  able to do this, please do it!
 



 --
 William Stein
 Professor of Mathematics
 University of Washington
 http://wstein.org





Re: [sage-devel] Re: Releasing more often?

2012-03-06 Thread Jonathan Bober
I am going to be building a VM on sage.math for other purposes soon, so
I'll try to make one for this as well. I suppose we would want something
like Ubuntu 11.10 32-bit desktop edition?

(Unfortunately, virtualbox on the *.math machines is really old, or at
least it was last time I checked, so using more than one cpu, for example,
doesn't work. It might be nice if it could be upgraded somehow, but I
imagine that might be rather difficult without upgrading the OS.)

On Tue, Mar 6, 2012 at 6:23 AM, Jeroen Demeyer jdeme...@cage.ugent.be wrote:

 On 2012-03-06 15:17, Georg S. Weber wrote:
  I would expect that a Linux i386 machine
  is mainly tested as a virtualized image (with a special 32bit setup,
  of course, maybe even using SAGE_ATLAS and such).
 This has been suggested regularly, it just needs to be done.  If you are
 able to do this, please do it!





Re: [sage-devel] Re: Group collaboration on a branch of Sage

2012-02-29 Thread Jonathan Bober
On Wed, Feb 29, 2012 at 10:12 AM, William Stein wst...@gmail.com wrote:


 It's difficult for me because they are patch in hg format, but I can't
 export a patch out of git in hg format.  It's possible to import the
 code in from the trac ticket, but then if I make changes I have to
 export them as a git diff, then import them into another hg repo, then
 export them again.As far as I can tell -- no easy way to referee a
 trac ticket right now using git.   I bet it would be possible for a
 git wizard to setup some scripts to make this easy.I think doing
 so is an important first step toward moving Sage developers to git.


One option might be to try to use the mercurial plugin hggit. I tried it
for about 5 minutes one day to migrate from a mercurial repository to a git
repository, and I never quite got it to work right for what I wanted, but
in principle you should be able to use it to pull from (or push to) a git
repository into (or from) a mercurial repository. So if you keep two
repositories, this should at least get rid of (or automate) the
intermediate step of exporting changes from git and then importing the
changes into mercurial. With some more effort to figure out how to make it
work right, it might make other things easier as well.



[sage-devel] integration bug, segfault

2012-02-21 Thread Jonathan Bober
I didn't actually expect the following to work very well, but I definitely
did not expect the output that I did get:

sage: def IC9E(K, j, a, b, epsilon):
: g(t) = 2*pi*i * (a - i*a + 2*b*K + i*2*b*K)*(1+i)*t - (1+i)^2 * 2*i*b*t^2
: f(t) = t^j * exp(g(t))
: return integral(f, (t, 0, infinity))
sage: IC9E(112, 0, 0.00084696102822690023, 0.044171315635412135, exp(-20))
# suddenly I get a lot of advertisements for a popular question/answer
website...
[...]
;;;
;;; Binding stack overflow.
;;; Jumping to the outermost toplevel prompt
;;;
[...]
;;;
;;; Stack overflow.
;;; Jumping to the outermost toplevel prompt
;;;
[...]
/home/bober/sage-5.0.beta2/spkg/bin/sage: line 304:  3765 Segmentation
fault  sage-ipython $@ -i

The [...] omits a lot of output (from maxima?). The above was on 5.0.beta2,
but it happens at sagenb.org as well.

I haven't opened a trac ticket because I would prefer to put something
more specific than the subject of this email. I guess that there is some
sort of bug in maxima, maybe, but also maybe a bug in Sage, since (maybe)
Sage shouldn't crash when the maxima subprocess crashes.



[sage-devel] weird behavior of sage in terminal

2012-02-10 Thread Jonathan Bober
By sage in terminal I mean not the notebook.

In recent builds (maybe mostly in the sage-5.0-beta series, but maybe 4.8,
and maybe older -- I know that's not too helpful) I've occasionally noticed
weird things working with sage from the command line. Sometimes tab
completion has just completely or mostly stopped working until I quit and
restart, and now after hitting CTRL-c I have a terminal where the behavior
of pressing up has just changed.

Normally if I start typing 'G = ', and then hit the up arrow, it will take
me to the previous spot where I started a line this way. (I used to _hate_
this behavior, and now I can't live without it.) Now, after hitting CTRL-c
in the middle of typing a command (to just erase the whole thing), I find
the behavior has changed, so that when I hit up it just takes me to the
previous line.

Anyone have any idea what's going on? It's not a big deal, but it seems to
be a strange bug, and annoys me sometimes.



Re: [sage-devel] weird behavior of sage in terminal

2012-02-10 Thread Jonathan Bober
On Fri, Feb 10, 2012 at 10:58 AM, Justin C. Walker jus...@mac.com wrote:


 On 10 Feb, 2012, at 02:06 AM, Jonathan Bober wrote:

  By sage in terminal I mean not the notebook.
 
  In recent builds (maybe mostly in the sage-5.0-beta series, but maybe
 4.8, and maybe older -- I know that's not too helpful) I've occasionally
 noticed weird things working with sage from the command line. Sometimes tab
 completion has just completely or mostly stopped working until I quit and
 restart, and now after hitting CTRL-c I have a terminal where the behavior
 of pressing up has just changed.
 
  Normally if I start typing 'G = ', and then hit the up arrow, it will
 take me to the previous spot where I started a line this way. (I used to
 _hate_ this behavior, and now I can't live without it.) Now, after hitting
 CTRL-c in the middle of typing a command (to just erase the whole thing), I
 find the behavior has changed, so that when I hit up it just takes me to
 the previous line.
 
  Anyone have any idea what's going on? It's not a big deal, but it seems
 to be a strange bug, and annoys me sometimes.

 Normally, this kind of problem is *very* connected to the OS, and terminal
 emulator, that you are using.  The problems you mention deal with
 curses/ncurses library behavior and TERMCAP/TERMINFO databases; and to
 some extent, with interrupt handling.  To some extent, the problems arise
 from confused state in the emulator, which can arise either from a
 mismatch between kernel and emulator state, or from bugs in either the
 emulator or terminal description.  Sometimes, changing the value of the
 TERM environment variable can ease the problems (but that takes a lot
 experimentation and patience).

 On Mac OS X/Terminal, you should be able to recover normal behavior by
 doing a full terminal reset (Shell - Send Hard Reset).  Other
 systems/emulators should have similar functionality.

 What system(s) are you using?

 Justin


Yes, I should have mentioned, of course, that this is on Ubuntu 11.10, 64
bit. I'm using gnome-terminal, and TERM is set to xterm. I didn't think of
reset before --- next time something weird happens I'll type '!reset' into
Sage, if that is what you mean, and see if that restores things. I can't
seem to replicate the behavior right now, though; it really happens only
occasionally.



Re: [sage-devel] Re: [ARM] The failed numerical tests only show the tests are bad!

2012-02-08 Thread Jonathan Bober
On Tue, Feb 7, 2012 at 7:19 PM, Julien Puydt julien.pu...@laposte.net wrote:

 Le Tue, 7 Feb 2012 20:23:08 -0800 (PST),
 Jason jason.harri...@gmail.com a écrit :
  One benefit of programs like matlab and mathematica is that not only
  do they bring together many different functions with a common syntax,
  but that they (presumably) have standardized precision and accuracy
  control. So coming up with a standard in this area is important. I
  think such a standard could even be used by developers of algorithms
  who have nothing to do with sage as at least a guideline so they know
  if their function is sage compatible or not. The standard doesn't
  have to be a single hard limit, but whatever the experts come up with.
  It could be as little as required documentation to as much as explicit
  benchmarks or classifications. Maybe functions that implement the
  standard could be a special type, so if you use only those functions
  there is some type of guarantee about the overall calculation that can
  be made.. Just some random thoughts here.

 As far as I know, the only way to make this work is with interval
 arithmetic, as it's pretty easy to see that the precision of a
 computation cannot be constant [1].

 Snark on #sagemath

 [1] http://en.wikipedia.org/wiki/Interval_arithmetic


No, it is quite possible to make it work, just not that easy. Note that
certain numerical computations in Sage _will_ be the same on any platform
where Sage is working properly (modulo bugs, of course). For example,
RealField uses mpfr as a backend, so computations with RealField are very
well defined and results will be the same on any platform. (At least, they
should be the same, and if they are not it is definitely a bad bug.) Also,
when you are working at a high level, the default is for all floating point
computations to go through RealField.

If ARM and x86 were giving different answers for RR(6).gamma(), or
RR(anything).gamma(), that would be a serious bug.

The issue is in trying to ensure consistency of lower level computations,
like operations on python floats, which map to machine doubles. This would
be doable in principle, but more difficult. At the very least it would
require elimination of any calls to libc math functions (which could
perhaps be somewhat implemented by writing our own libm and always linking
to it). There are also more subtle issues that come up, like compiler
inconsistencies and 64-bit vs 80-bit floating point.
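For what it's worth, CPython's math module already illustrates the "write our own libm" approach for one function: since Python 2.7 it ships its own gamma implementation rather than calling libc's tgamma, so (modulo bugs) the result below should not depend on the platform's libm. A minimal sketch:

```python
import math

# CPython implements math.gamma inside the interpreter (it does not call
# libc's tgamma), so this value should be platform-independent; small
# integer arguments are handled exactly.
value = math.gamma(6.0)
print(value)  # 120.0
```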



Re: [sage-devel] [ARM] The failed numerical tests only show the tests are bad!

2012-02-07 Thread Jonathan Bober
See http://trac.sagemath.org/sage_trac/ticket/12449

I made a patch to change the way that sage evaluates symbolic functions for
basic python types, and at the same time changed RDF to just use
math.gamma() instead of gsl's gamma function.

(Note: math.gamma() should be available in sage-5.0 (python 2.7.2), but I
don't think it is in 4.8.)



Re: [sage-devel] [ARM] The failed numerical tests only show the tests are bad!

2012-02-06 Thread Jonathan Bober
On Mon, Feb 6, 2012 at 3:05 PM, Dr. David Kirkby david.kir...@onetel.net wrote:

 On 02/ 5/12 10:16 PM, Jonathan Bober wrote:

  Never mind all that: the gsl implementation is not very good at all,
 whereas the libc implementation on my machine seems quite good. Old
 (libc):


 If that's the case, why not report the fact to the appropiate mailing list
 - bug-gsl at gnu.org?

 dave


Well, I just sort of assume that the gsl developers have some idea how
accurate their gamma function is and perhaps they consider their
implementation just fine. It might not be a bug --- it might just be a
design decision. Instead of "not very good" I should have said "not as
accurate as eglibc".



Re: [sage-devel] [ARM] The failed numerical tests only show the tests are bad!

2012-02-05 Thread Jonathan Bober
See http://trac.sagemath.org/sage_trac/ticket/12449

I'm in the middle of [hopefully] fixing this by calling the gsl gamma
function. While I'm at it, I'll also make the evaluation on basic types
much faster, as it shouldn't go through Ginac. (Actually, I've already
mostly written a fix. I opened the ticket and wrote this email while
running 'make ptestlong'.)

Have you actually checked the gsl implementation on ARM? For me it at least
satisfies

sage: [ZZ(sage.gsl.special_functions.gamma(n)) == n.gamma() for n in
srange(1, 23)] == [True] * 22
True

(Unfortunately, I just wrote that sage.gsl.special_functions.gamma(). There
is no high level interface for you to immediately check if it gives the
right answer.)

-

Never mind all that: the gsl implementation is not very good at all,
whereas the libc implementation on my machine seems quite good. Old (libc):

sage: gamma(1.23).str(base=2)
'0.111010010010011100111010001011001010110010111'
sage: RR(gamma(1.23r)).str(base=2)
'0.111010010010011100111010001011001010110010111'

gsl:

sage: RR(gamma(1.23r)).str(base=2)
'0.11101001001001110011101011100010110111001'

Look at all the wrong digits! In my testing, gsl_gamma() has a typical
error of around 1e-8 on random input between 1 and 2, whereas tgammal() rounded
to double precision has a typical error of 0 (compared to the correctly
rounded value from mpfr).

What do you get on ARM if you do something like

sage: for n in range(100):
: x = RR.random_element(1, 2)
: print abs(RR(gamma(float(x))) - x.gamma())
:

?
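A stdlib-only analogue of the loop above can be run without Sage or mpfr. Since there is no correctly rounded reference available here, this sketch swaps in a different comparison: the exp(lgamma(x)) composition (the weak fallback discussed elsewhere in this thread) against math.gamma(x).

```python
import math
import random

# Illustrative sketch, not the original test: compare exp(lgamma(x))
# against math.gamma(x) on random inputs in (1, 2), where gamma stays
# close to 1, and record the worst absolute difference seen.
worst = 0.0
for _ in range(100):
    x = random.uniform(1.0, 2.0)
    worst = max(worst, abs(math.exp(math.lgamma(x)) - math.gamma(x)))
print(worst)
```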



Re: [sage-devel] [ARM] The failed numerical tests only show the tests are bad!

2012-02-05 Thread Jonathan Bober
I think we may be overlooking a very reasonable option. Python already has
a gamma function in the math module! It is a separate implementation that
does not depend on libc, and it gives reasonable results (though perhaps
not as good as eglibc tgammal() on x86):

sage: max(( abs( RR(math.gamma(float(x))) - x.gamma())/x.gamma().ulp() for
x in (RR.random_element(-171, 171) for _ in xrange(10)) ))

6.00
sage: mean([ abs( RR(math.gamma(float(x))) - x.gamma())/x.gamma().ulp() for
x in (RR.random_element(-171, 171) for _ in xrange(10)) ])
0.8679600
sage: median([ abs( RR(math.gamma(float(x))) - x.gamma())/x.gamma().ulp()
for x in (RR.random_element(-171, 171) for _ in xrange(10)) ])
1.00
sage: [ZZ(math.gamma(float(n))) == n.gamma() for n in srange(1, 23)]
== [True] * 22
True

The source code does say:

   In extensive but non-exhaustive
   random tests, this function proved accurate to within <= 10 ulps across the
   entire float domain.  Note that accuracy may depend on the quality of the
   system math functions, the pow function in particular.

So if the accuracy of pow() in eglibc relies on long doubles, then there
may be a problem, but maybe it will work well there.
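The integer spot-check from the session above can be reproduced with the standard library alone (a sketch; CPython treats small integer arguments to math.gamma specially, so the equalities should be exact):

```python
import math

# Mirror of the srange(1, 23) test above: for n = 1..22, math.gamma(n)
# should equal (n-1)! exactly, since each of these factorials fits in a
# 53-bit double.
ok = all(math.gamma(float(n)) == math.factorial(n - 1) for n in range(1, 23))
print(ok)  # True
```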

On Sun, Feb 5, 2012 at 2:16 PM, Jonathan Bober jwbo...@gmail.com wrote:

 See http://trac.sagemath.org/sage_trac/ticket/12449

 I'm in the middle of [hopefully] fixing this by calling the gsl gamma
 function. While I'm at it, I'll also make the evaluation on basic types
 much faster, as it shouldn't go through Ginac. (Actually, I've already
 mostly written a fix. I opened the ticket and wrote this email while
 running 'make ptestlong'.)

 Have you actually checked the gsl implementation on ARM? For me it at
 least satisfies

 sage: [ZZ(sage.gsl.special_functions.gamma(n)) == n.gamma() for n in
 srange(1, 23)] == [True] * 22
 True

 (Unfortunately, I just wrote that sage.gsl.special_functions.gamma().
 There is no high level interface for you to immediately check if it gives
 the right answer.)

 -

 Never mind all that: the gsl implementation is not very good at all,
 whereas the libc implementation on my machine seems quite good. Old (libc):

 sage: gamma(1.23).str(base=2)
 '0.111010010010011100111010001011001010110010111'
 sage: RR(gamma(1.23r)).str(base=2)
 '0.111010010010011100111010001011001010110010111'

 gsl:

 sage: RR(gamma(1.23r)).str(base=2)
 '0.11101001001001110011101011100010110111001'

 Look at all the wrong digits! In my testing, gsl_gamma() has a typical
 error of around 1e-8 on random input between 1 and 2, whereas tgammal() rounded
 to double precision has a typical error of 0 (compared to the correctly
 rounded value from mpfr).

 What do you get on ARM if you do something like

 sage: for n in range(100):
 : x = RR.random_element(1, 2)
 : print abs(RR(gamma(float(x))) - x.gamma())
 :

 ?




Re: [sage-devel] Re: [ARM] The failed numerical tests only show the tests are bad!

2012-02-04 Thread Jonathan Bober
On Sat, Feb 4, 2012 at 8:13 AM, rjf fate...@gmail.com wrote:



  If there is a reasonable
  implementation that can guarantee this behavior with no loss in speed and
  no other significant trade-offs, then library designers should use it,
 but
  I don't think that it is such a simple issue.

 I think you miss the point that bad answers are harmful in ways that
 you
 cannot anticipate easily. Speed is not job one.


No, I am not missing that point at all. I am simply trying to point out
that there are many issues to consider. Suppose there are two
implementations which compute some function f. If the first has the
property that it is always correct to within 5 ulp, and the second has the
property that it is always correct to within 0.5 ulp, but the second takes 5
times longer to run than the first, then there is no single answer to the
question of which is better. It depends entirely on the desired usage.

If you are writing a higher level function that uses f() as input, then you
can make a choice of which behavior you prefer, but if you are writing an
OS-level library that is going to be the default for anyone who doesn't make a
choice, then it is not clear which implementation you should choose.

 [...]

  As far as I know, the standards do not
  specify that transcendental functions need to be computed with the same
  accuracy that one expects from basic arithmetic operations, because it is
  essentially unknown how much memory/time would be needed for this in all
  cases.

 search for table maker's dilemma.


That is, of course, exactly what I was referring to. I may have exaggerated
a bit, and perhaps 1 ulp is obtainable even when .5 ulp might not be.

Just taking your statement at face value, do you really think that it
 is OK for
 2 people using Sage on 2 different computers should be able to run the
 exact same
 program and get 2 different answers, like  this computation shows
 there is water on Mars
 and nope


In some cases, yes, I think that would be OK. Your example is extreme, but
if somehow the consistency would  come down to the evaluation of
a transcendental function on a primitive type that maps directly to a
hardware type being the same on both platforms, then that program is
probably doing something wrong. Some things should be the same on all
platforms, like arithmetic with elements of the RealField. Some things
might vary, like arithmetic with python floats.


 Would you also
 accept a system in which 2+2 is unreliable?


Probably not, as that is much different. But on some machines, e.g. x86
with extended precision enabled in the FPU, a JIT compiler might change the
results of a function that uses floating point arithmetic from one function
call to the next, as compilation might change when intermediate results get
rounded. And with gcc, results of floating point arithmetic can change
depending on optimization, even without -funsafe-math-optimizations. I view
these things as acceptable, and if I am writing code that depends on
consistency I take them into account.

Anyway, this aspect of the conversation has probably gone on too long.
Maybe I'll have to sign up for sage-flame if I feel like continuing it.



Re: [sage-devel] Re: [ARM] The failed numerical tests only show the tests are bad!

2012-02-04 Thread Jonathan Bober
On Sat, Feb 4, 2012 at 2:11 PM, Robert Bradshaw 
rober...@math.washington.edu wrote:


 Note that what is broken[1] here is ARM's libc, if one types
 gamma(6.0) one gets 120. on all systems. It's a
 question about gamma(float(6.0)) which is explicitly requesting the
 (presumably faster but potentially less accurate) system
 implementation. The question is should we

 1) Weaken this doctest to support this broken platform.
 2) Provide a workaround for this platform (e.g. using gsl or similar)
 to avoid the broken implementation of gamma.
 3) Mark this test as broken (not tested?) on ARM, and ARM only.

 I think options 2  3 are far superior to 1.

 - Robert


Despite everything else I've said, I really don't have a strong opinion on
which option is the best, but I do feel that 2 and 3 are superior to 1.

Actually, given the current state of things, the whole discussion of
whether gamma(6.0r) should be possibly faster but less accurate than
gamma(6.0) is somewhat funny, because it is both slower and less accurate.

sage: timeit('gamma(6.0r)')
625 loops, best of 3: 15.9 µs per loop
sage: timeit('gamma(6.0)')
625 loops, best of 3: 11.7 µs per loop
sage: timeit('gamma(6)')
625 loops, best of 3: 4.85 µs per loop
sage: timeit('RR(6).gamma()')
625 loops, best of 3: 6.53 µs per loop
sage: timeit('float(gamma(6.0))')
625 loops, best of 3: 11.9 µs per loop

So an implementation which quickly converted the float to
a sage.rings.real_mpfr.RealNumber and used that would be better than the
current implementation. (As far as I can tell, gamma(float(6)) takes so
long because sage tries some things which don't work and then it falls back
on creating a pynac object, which it then evaluates. Ginac then does some
things to decide that it does not know how to directly evaluate it, and it
turns the float back into a python object and calls some function which
calls some function which eventually calls tgammal().)
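A hypothetical sketch of such a fast path (the function name here is made up; the point is just to short-circuit the symbolic round trip for plain machine types):

```python
import math

def gamma_fast(x):
    # Hypothetical dispatch: route plain Python ints/floats straight to
    # math.gamma instead of building and evaluating a symbolic expression.
    if isinstance(x, (int, float)):
        return math.gamma(x)
    # ...otherwise fall back to the symbolic machinery (not sketched here).
    raise TypeError("unsupported type for this sketch: %r" % type(x))

print(gamma_fast(6.0))  # 120.0
```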



Re: [sage-devel] [ARM] The failed numerical tests only show the tests are bad!

2012-02-03 Thread Jonathan Bober
On Thu, Feb 2, 2012 at 1:16 PM, Julien Puydt julien.pu...@laposte.net wrote:


 Well, if I don't err, $10^{17}$ has 18 decimal digits, which is more than
 the 15.95... that fit in 53 binary digits.


It is not that simple. 15.95 digits is more like a guideline. At issue is
whether the number in question can be exactly represented in double
precision, and in this case it can be. You can check this (ignoring
possible issues with the size of the exponent, which don't occur here) with:

sage: ZZ(RealField(53)(1.7e17)) == 17*10^16

In fact, 42 bits suffice, but not 41:

sage: ZZ(RealField(42)(1.7e17)) == 17*10^16
True
sage: ZZ(RealField(41)(1.7e17)) == 17*10^16
False

Instead, you could just look at it in base two

sage: (17 * 10^16).str(base=2)
'100101101101111001011010010001'

and count the length of the string not including the trailing zeros:

sage: len((17 * 10^16).str(base=2).strip('0'))
42
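The same count can be done in plain Python (a sketch of the argument above):

```python
# 17 * 10**16 has 16 trailing zero bits (one per factor of 2 in 10**16),
# so its significant width is bit_length - 16 = 42 bits, and it therefore
# fits exactly in a 53-bit double.
n = 17 * 10**16
trailing = (n & -n).bit_length() - 1   # number of trailing zero bits
significant = n.bit_length() - trailing
print(significant)        # 42
print(float(n) == n)      # True: exact in double precision
```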

So, in this case, I would say that there is either a bug in maxima or a bug
in the sage code that converts from maxima. I've been trying to guess what
that bug would be, but I don't have a guess right now.



Re: [sage-devel] Re: sage-5.0.x and OS X 10.7 Lion

2012-02-03 Thread Jonathan Bober
On Tue, Jan 31, 2012 at 3:52 PM, Dr. David Kirkby
david.kir...@onetel.net wrote:


 But it makes the code unportable. What hope do we have with the Sun/Oracle
 compiler if idiots use non-standard C? What hope do we have if we try to
 build on Windows at some point in the future using a native compiler? All
 these GNU extensions are a headache for everyone except the linux community.

 If there was not so much poor code in Sage, building with the Sun compiler
 would be possible, but it rejects much of the code.


This is a long reply, so a quick summary: If there was not so much poor
code in Sage, building with the Sun compiler would be possible, but Sage
would be much less useful.

This is a rather late response, but as I don't see any responses that
complain about your viewpoint, it seems appropriate. I don't want anyone who
sees your email to think that your views are representative of the majority
opinion, because I think that they are both wrong and harmful.

Whether or not someone writes standard C has little to do with whether or
not they are an idiot, and in many (most?) cases it doesn't even have much
to do with whether or not they are ignorant (which is much different). As
far as the example given above goes, I would say that, though I do not know
him personally, I am pretty sure that Michael Stoll (who wrote ratpoints)
is not an idiot. (The situation is probably quite the opposite.)

In general, person X might use nonstandard GNU extension Y for many reasons,
including:

(1) X has more important things to worry about than what is or isn't
standard C/C++.

(2) gcc is (for better or for worse) a de facto standard.

(3) Y happens to be very useful, and perhaps makes the code much faster

(4) X knows that Y is not a standard, but he/she/they is/are only
interested in running his/her/their code for his/her/their own purposes, and
doesn't much care whether or not it works for Sage, or on Solaris, or on
Windows.

That is (partially) why I think your viewpoint is wrong. Another nice
example is smalljac. It would be very nice if this were included in Sage,
but it won't be for the moment, largely because

"This code is only supported on 64-bit Intel/AMD platforms
(life is too short to spend it worrying about overflowing a 32-bit
integer)."

This has nothing to do with GNU extensions, but you can see quite clearly
that Drew Sutherland is well aware of some shortcomings of his code and
that he has no intention of fixing them. That puts the onus to fix the code
on Sage developers, if they want it to be included, the same way that Sage
developers should be willing to fix ratpoints if they want it to compile
with clang (or Sun's compiler, or MSVC).
For another example: I recently tried to compile some of my own code using
clang++ and discovered that I am not allowed to do

void f(int j) {
complex<double> x[j];
[...]
}

even though g++ accepts that. ( See
http://clang.llvm.org/compatibility.html#vla ) Now, I have a few options. I
could allocate everything on the heap explicitly; I could use
vector<complex<double>>; or I could specify a maximum size for j. All of
these options are bad for me, because I care very much about speed, and
I know that j is small, and 99% of the time it will be less than 20, but I
don't want to always allocate space for j = 100 because this is very rare.
So
I have to consciously decide what to do, and if I decide to stick with the
nonstandard gcc extension it is not because I am an idiot but because it is
probably the fastest/easiest option on what is likely to be the only
compiler I ever care about.

Continuing: I might find a 5% improvement in my code through proper use of
__builtin_expect on gcc. I know this is not portable, but should I really
take the time to figure out how to write a proper macro that will work for
one million different compilers, when I am only going to use one? If I
don't, it has nothing to do with idiocy.

As for why your viewpoint might be harmful: I have heard anecdotes of people
not wanting to release their code because it was ugly, or nonstandard, or
difficult to use, etc. As long as the response that they are going to
receive is along the lines of the above, that viewpoint is valid, even if
the response should be "Yes, this is ugly, but it is still awesome."

Large (most?) parts of Sage have simply come from third-party code written
by people who never had any intention of having anything to do with Sage.
Such people should be encouraged to release their code, not called idiots
when they do release it.

(This was a rather long off-topic response, and I'm not going to respond
again.
Or at least not in this thread.)



Re: [sage-devel] [ARM] The failed numerical tests only show the tests are bad!

2012-02-02 Thread Jonathan Bober
On Thu, Feb 2, 2012 at 11:23 AM, Julien Puydt julien.pu...@laposte.net wrote:


 Let us consider :
 #include <math.h>
 #include <stdio.h>

 int
 main ()
 {
  long double x;
  x=6.0;
  printf ("%.20Lf\n", tgammal(x));
  x=10.0;
  printf ("%.20Lf\n", tgammal(x));
  return 0;
 }

 On an x86_64 box, I get :
 $ ./test
 119.8612
 362880.22737368

 and on the ARM box, I get:
 $ ./test
 119.97157829
 362880.046566128731


Interesting. You can also replicate these results (double and long double)
using Sage:

sage: exp(RealField(64)(log(120))).str(truncate=False)
'119.86'
sage: exp(RealField(53)(log(120))).str(truncate=False)
'119.97'

Anyway, the first value doesn't fit in a double, so when it gets converted
to a double it has to be rounded. It turns out that 120.0 is the closest
double to 119.86, so that is why sage ends up getting the
answer exactly correct:

sage: RR(exp(RealField(64)(log(120)))).str(truncate=False)
'120.00'


  (The maxima conversion is a different issue. There the result _is_
 wrong. 1.7e17 is exact in double precision, and the conversion should
 not change that.)


 Well, that problem goes away with # tol rel, hence the situation isn't
 that clear-cut.


Can you think of a reason that the answer should change? Does maxima use
less than 53 bits of precision ever?



Re: [sage-devel] Re: [ARM] The failed numerical tests only show the tests are bad!

2012-02-02 Thread Jonathan Bober
On Thu, Feb 2, 2012 at 3:05 PM, rjf fate...@gmail.com wrote:


 I don't know about arithmetic on ARM specifically, but there is
 something
 wrong with a gamma() function that fails to return an integer (perhaps
 in float format) when it is
 given an integer argument (perhaps in float format), and the answer is
 exactly representable
 as an integer in float format.


I'm not sure that I agree. It would certainly be nice if gamma() worked
this way, but when you consider implementation issues, do you really want
to treat integer arguments in a special manner? If there is a reasonable
implementation that can guarantee this behavior with no loss in speed and
no other significant trade-offs, then library designers should use it, but
I don't think that it is such a simple issue.


 It would also be nice if any floating-point tests that you ran with
 Sage
 either (a) determined IEEE 754 standard compatibility and
 tested numerical routines subject to that standard, or
 (b) determined machine parameters (e.g. machine epsilon etc.)
 and the tests were written so as to take variations into
 account.

 RJF


Are there IEEE 754 standards for the accuracy of special functions? I tried
finding some, but I could not. As far as I know, the standards do not
specify that transcendental functions need to be computed with the same
accuracy that one expects from basic arithmetic operations, because it is
essentially unknown how much memory/time would be needed for this in all
cases.

In Sage, RealField uses mpfr as its backend, and hence its special
functions should always be computed accurately with correct rounding (and
thus by definition its results should be independent of the system sage is
running on). However, python floats map to hardware types and as such, I
think that it is perfectly reasonable if certain operations with floats
depend on the underlying hardware, the operating system, the specific
libraries that are present, and perhaps the phase of the moon. (And I think
that computing the gamma function is probably one of these cases. Yes, in
case you are wondering, I would probably consider it acceptable if
gamma(float(6)) did not even always give the same answer every time it is
run in the same sage session.)



Re: [sage-devel] [ARM] The failed numerical tests only show the tests are bad!

2012-02-01 Thread Jonathan Bober
I've just been looking at this trying to figure out what was going on and I
was just going to say exactly the same thing.

I don't really know anything about the whole glibc vs eglibc thing, but I
bet the implementation is the same as
glibc-2.14.1/sysdeps/ieee754/dbl-64/e_gamma_r.c:

double
__ieee754_gamma_r (double x, int *signgamp)
{
  /* We don't have a real gamma implementation now.  We'll use lgamma
 and the exp function.  But due to the required boundary
 conditions we must check some values separately.  */
[...]

And sure enough:

sage: exp(RR(log(120))).str(truncate=False)
'119.97'
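The loss is easy to reproduce without Sage by composing exp with lgamma in double precision (a sketch of the fallback's structure, not glibc's exact code path):

```python
import math

# Structure of that glibc fallback: gamma(x) computed as exp(lgamma(x)).
# lgamma(6.0) is log(120) rounded to 53 bits, so the composed result can
# miss the exactly representable answer 120 by a few ulp.
approx = math.exp(math.lgamma(6.0))
print(approx, abs(approx - 120.0))
```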

I'm not completely convinced that the results are wrong. They are certainly
less precise than the correct answers, and in this case the correct answers
can be represented exactly in double precision, but I would not be
surprised if on x86_64 there are still cases where the returned answer is
not as exact as possible. And as far as I can tell, the underlying tgamma()
works as advertised, in the sense that I cannot find any description at all
of how accurate its answers are supposed to be, so within 2 ulp seems
possibly reasonable to me.

(The maxima conversion is a different issue. There the result _is_ wrong.
1.7e17 is exact in double precision, and the conversion should not change
that.)

On Wed, Feb 1, 2012 at 8:04 PM, Dima Pasechnik dimp...@gmail.com wrote:



 On Thursday, 2 February 2012 06:24:18 UTC+8, Robert Bradshaw wrote:

 On Wed, Feb 1, 2012 at 4:19 AM, Julien Puydt  wrote:
  = Forewords =
 
  I investigated the numerical issues on my ARM build, and after much poking
  around and searching, I found that I was chasing the dahu: the tests were
  wrong, and the results were good.

 No, the tests are fine, the results are clearly wrong.

  Let's consider the numerical failures one by one :

 The following comments apply to all 4 tests.

  = 1/4 =
  File /home/jpuydt/sage-4.8/devel/sage-main/sage/functions/other.py,
 line
  511:
 sage: gamma1(float(6))
  Expected:
 120.0
  Got:
 119.97
 
  Let's see how bad this is :
  sage: res=gamma(float(6))
  sage: res
  119.97
  sage: n(res,prec=57)
  120.0
  sage: n(res,prec=58)
  119.97

 sage: a = float(120) - 2^-46; a, a == 120
 (119.99, False)
 sage: n(a, prec=57)
 120.0
 sage: n(a, prec=57) == 120
 False
 sage: n(a, prec=57).str(truncate=False)
 '119.9858'

 The string representation of elements of RR truncates the last couple
 of digits to avoid confusion, but floats do not do the same. Changing
 tests like this to have n(..., prec=...) and relying on string
 formatting only confuses the matter (as well as making things ugly).

 It looks like ARM's libc returns incorrect (2 ulp off) values for
 gamma(n) for small n (at least). This should be fixed, not hidden, or
 at least marked as a known bug on ARM (only). IMHO this error is a
 much bigger issue than the noise due to a numerical integration
 arising from double-rounding in moving floats in and out of 80-bit
 registers and other low-level details. That's what the tolerance
 makers should be used for.

  = CONCLUSION =
  A double precision floating point number is supposed to have 53 digits,
  according to the norm 
  (http://en.wikipedia.org/wiki/IEEE_754-2008),
 and the
  results are correct from that point of view.

 No, their last (two) binary digits are wrong. If the test was


 further digging shows that it's the implementation of gammal() in the
 platform's libc (eglibc is used)
 is to blame; they do expl(lgammal()), leading to loss of precision, as
 platform's long double is only 8 bytes, and it's simply not
 possible to stuff enough precision in log(gamma) if you only have 8 bytes!

 Dima


 sage: gamma(float(6)) == 120
 True

 It would fail just as well.

  So the tests should be modified not to depend on the specific
 implementation
  : they're currently testing equality of floats!
 
  I would provide a patch for the tests so they use n(..., prec=53), but
 I'm
  hitting a problem in one of the cases ; see the mail I sent yesterday
 for
  more about that.

 See above.

 - Robert





[sage-devel] Re: no space left under /tmp on sage.math

2012-01-31 Thread Jonathan Bober
"Instantly" makes sense. [Maybe everyone knows this.] If a process creates
a file, and that file is deleted (either by the original process or by some
other process), that file still exists and can be written to by that
process until it is closed, and it takes up space on the file system. (I'm
not sure if any other process would be able to access the file, but you
certainly wouldn't see it with ls.) You can see the effect by running
something like
#--
import os
import subprocess
import time

subprocess.call('df')
outfile = open('space', 'w')
for n in xrange(10**7):  # loop count chosen so the file is large enough to show up in df
    outfile.write("blah")
subprocess.call('df')
os.remove('space')
time.sleep(1)
subprocess.call('df')
outfile.close()
time.sleep(1)
subprocess.call('df')
#--

Perhaps neither call to time.sleep() should be necessary, but the
second can change the output on my system, which probably comes down to
some multiprocess race conditions, and the first is there to make sure that
the OS has actually had time to delete the file. Also, you could actually
move the os.remove() to right after the open(). (I think some parts of this
might work differently on Windows, though.)

The output I get is:

bober@pafnuty:~/test$ python blah.py | grep sda
/dev/sda5  1179091124 726301120 392895580  65% /    (before writing)
/dev/sda5  1179091124 726692892 392503808  65% /    (after writing, before deleting)
/dev/sda5  1179091124 726692892 392503808  65% /    (after deleting, before closing)
/dev/sda5  1179091124 726301116 392895584  65% /    (after closing)

So what is happening is that something, either in the doctesting system
itself, or in some part of sage that is tested, is creating and deleting
some files, but then the processes using those files are hanging, and the
files are never being closed, so the space is never being freed by the OS.
This would happen if, for example, the doctesting system used
tempfile.TemporaryFile() to create files which hold the output of child
processes, and then never closed those files, and then it hung at some
point. Or it could happen if a doctest did a similar thing, and then hung
and was never killed. (I'm not completely sure what will happen if a child
process creates a temporary file and then is killed, but I'm pretty sure
that the space will immediately be freed.)
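The underlying POSIX behavior — an unlinked file still occupies space while any descriptor stays open — can be checked directly (a sketch; the st_nlink detail is Linux/POSIX-specific):

```python
import os
import tempfile

# Create a file, unlink it, then keep writing through the open descriptor.
# The directory entry is gone (st_nlink == 0), but the data still occupies
# disk space until the descriptor is closed.
d = tempfile.mkdtemp()
path = os.path.join(d, "space")
f = open(path, "w")
os.remove(path)                 # unlink while the file is still open
f.write("x" * 4096)
f.flush()
st = os.fstat(f.fileno())
print(st.st_nlink, st.st_size)  # 0 4096
f.close()
os.rmdir(d)
```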

I don't think this is from core dumps, as suggested on sage-devel, unless
they were specifically enabled, since they are disabled by default on
sage.math. Also, I don't think they would have this kind of behavior.

Anyway, I'm ccing sage-devel, since that is probably where any further
discussion of patchbot problems belongs.

On Tue, Jan 31, 2012 at 1:21 AM, Jeroen Demeyer jdeme...@cage.ugent.be wrote:

 On 2012-01-31 07:06, William Stein wrote:
  then manually kill -9'd all of the robertwb jobs there, and
  instantly got all of /tmp back.
 You literally mean instantly (as opposed to after some minutes/hours)?


-- 
To post to this group, send an email to sage-devel@googlegroups.com
To unsubscribe from this group, send an email to 
sage-devel+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/sage-devel
URL: http://www.sagemath.org


Re: [sage-devel] Re: VirtualBox

2012-01-10 Thread Jonathan Bober
On Tue, Jan 10, 2012 at 7:17 AM, William Stein wst...@gmail.com wrote:


 On Jan 10, 2012 12:50 AM, Dima Pasechnik dimp...@gmail.com wrote:
 
 
 
  On Tuesday, 10 January 2012 03:06:14 UTC+8, William wrote:
 
   coLinux looks promising. What does stop one from putting Sage on it
   presently?
  
   Dima
 
  Nothing.  I've done it before several times.  I was hoping with my
  email to encourage you (meaning anybody reading this!) to try it.
 
 
  on a 64-bit Windows 7 I have, I cannot install coLinux:
  it aborts with a message saying it cannot run on a 64-bit system.
  (and this is also according to Q27 in http://colinux.wikia.com/wiki/FAQ)

 Thanks, that is very useful to know, as 64-bit is finally becoming common
 for Windows, I think.


I'm pretty sure that most computers sold to the general public today will
have 64-bit Windows installed, and this has probably been true for a few
years. (Just browse the Best Buy website and you will see that most
standard laptops have at least 4 GB of RAM and come with Windows 7 Home
Premium 64-bit.) I suspect that a new computer will only have 32-bit
Windows installed either because it is a cheap netbook that comes with
Windows 7 Starter or because someone has specifically requested
something 32-bit for legacy compatibility reasons.

I've been puzzled for a while about why Ubuntu encourages the 32-bit
version as the recommended download. There used to be some problems with
Flash on 64-bit systems, but I think those problems are pretty much taken
care of now, and I haven't come across any other software that I want to
run that has problems with 64-bit Linux. (Not that I really _want_ to run
Flash.)

So I think that there should be a 64 bit version, and we should encourage
anyone who can use a 64 bit version to use the 64 bit version, and we
should look forward to the day, possibly only a few years away, when Sage
can drop support for 32 bit operating systems. (Though maybe this day is
more than a few years away, since ARM is lagging behind in adopting 64 bit,
and more and more people may be interested in running Sage on ARM as time
goes on.)



Re: [sage-devel] importing mpmath in 'sage -python' imports everything?

2011-12-13 Thread Jonathan Bober
That doesn't seem to work. (I haven't tried to figure out what the problem
is, and I think I'm going to sleep soon instead of trying to figure it out
right now.)

I think that it doesn't do the imports, but then later it tries to use them.

bober@sage:~$ MPMATH_NOSAGE=1 sage -python -c 'import mpmath'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/local/sage/local/lib/python2.6/site-packages/mpmath/__init__.py", line 6, in <module>
    from .ctx_mp import MPContext
  File "/usr/local/sage/local/lib/python2.6/site-packages/mpmath/ctx_mp.py", line 48, in <module>
    from sage.libs.mpmath.ext_main import Context as BaseMPContext
  File "integer.pxd", line 9, in init sage.libs.mpmath.ext_main (sage/libs/mpmath/ext_main.c:25047)
  File "integer.pyx", line 170, in init sage.rings.integer (sage/rings/integer.c:35083)
  File "/usr/local/sage/local/lib/python2.6/site-packages/sage/rings/infinity.py", line 200, in <module>
    import sage.rings.rational
  File "fast_arith.pxd", line 3, in init sage.rings.rational (sage/rings/rational.c:25448)
  File "fast_arith.pyx", line 51, in init sage.rings.fast_arith (sage/rings/fast_arith.c:7557)
  File "integer_ring.pyx", line 69, in init sage.rings.integer_ring (sage/rings/integer_ring.c:11070)
  File "/usr/local/sage/local/lib/python2.6/site-packages/sage/structure/factorization.py", line 189, in <module>
    from sage.misc.all import prod
  File "/usr/local/sage/local/lib/python2.6/site-packages/sage/misc/all.py", line 81, in <module>
    from functional import (additive_order,
  File "/usr/local/sage/local/lib/python2.6/site-packages/sage/misc/functional.py", line 36, in <module>
    from sage.rings.complex_double import CDF
  File "complex_double.pyx", line 88, in init sage.rings.complex_double (sage/rings/complex_double.c:14778)
  File "real_mpfr.pxd", line 15, in init sage.rings.complex_number (sage/rings/complex_number.c:16186)
  File "real_mpfr.pyx", line 1, in init sage.rings.real_mpfr (sage/rings/real_mpfr.c:29953)
  File "utils.pyx", line 11, in init sage.libs.mpmath.utils (sage/libs/mpmath/utils.c:5951)
  File "/usr/local/sage/local/lib/python2.6/site-packages/sage/all.py", line 72, in <module>
    from sage.libs.all import *
  File "/usr/local/sage/local/lib/python2.6/site-packages/sage/libs/all.py", line 1, in <module>
    import sage.libs.ntl.all as ntl
  File "/usr/local/sage/local/lib/python2.6/site-packages/sage/libs/ntl/all.py", line 26, in <module>
    from sage.libs.ntl.ntl_ZZ import (
  File "ntl_ZZ.pyx", line 28, in init sage.libs.ntl.ntl_ZZ (sage/libs/ntl/ntl_ZZ.cpp:6238)
  File "integer_ring.pyx", line 1063, in sage.rings.integer_ring.IntegerRing (sage/rings/integer_ring.c:9618)
NameError: ZZ


On Tue, Dec 13, 2011 at 2:35 AM, Fredrik Johansson 
fredrik.johans...@gmail.com wrote:

 On Tue, Dec 13, 2011 at 8:27 AM, Jonathan Bober jwbo...@gmail.com wrote:
  Does anyone happen to know why this happens? I have a feeling it is
 going to
  annoy me sometime soon.

 You can prevent mpmath from trying to import Sage altogether by
 setting the MPMATH_NOSAGE environment variable. Of course, this makes
 the runtime a bit slower.

 Right now mpmath imports sage.all but only uses sage.Integer,
 sage.factorial, sage.fibonacci and sage.primes. I suppose the last
 three imports could be done lazily. But it definitely needs to import
 the Sage Integer type at startup or not at all.

 Fredrik





[sage-devel] importing mpmath in 'sage -python' imports everything?

2011-12-12 Thread Jonathan Bober
Does anyone happen to know why this happens? I have a feeling it is going
to annoy me sometime soon.

Look how long it takes to import mpmath:

$ time sage -python -c 'import mpmath; print mpmath.__version__'
0.17

real 0m0.809s
user 0m0.708s
sys 0m0.076s

compared to the time it takes to import the system mpmath:

$ time python -c 'import mpmath; print mpmath.__version__'
0.17

real 0m0.032s
user 0m0.020s
sys 0m0.008s

It seems a lot of things get imported when I import mpmath. Is there a
good reason for this? I ran this from an account with no home directory
(and no write access anywhere), and I get

mkdir: cannot create directory `/.sage': Permission denied
Traceback (most recent call last):
  File "/home/bober/www/platt_zeros/test.py", line 3, in <module>
    import mpmath
  File "/home/bober/sage/local/lib/python2.6/site-packages/mpmath/__init__.py", line 6, in <module>
    from .ctx_mp import MPContext
  File "/home/bober/sage/local/lib/python2.6/site-packages/mpmath/ctx_mp.py", line 48, in <module>
    from sage.libs.mpmath.ext_main import Context as BaseMPContext
  File "parent.pxd", line 14, in init sage.libs.mpmath.ext_main (sage/libs/mpmath/ext_main.c:24983)
  File "parent.pyx", line 75, in init sage.structure.parent (sage/structure/parent.c:21250)
  File "/home/bober/sage/local/lib/python2.6/site-packages/sage/categories/sets_cat.py", line 27, in <module>
    from sage.categories.algebra_functor import AlgebrasCategory
  File "/home/bober/sage/local/lib/python2.6/site-packages/sage/categories/algebra_functor.py", line 17, in <module>
    from sage.categories.category_types import Category_over_base_ring
  File "/home/bober/sage/local/lib/python2.6/site-packages/sage/categories/category_types.py", line 18, in <module>
    from sage.misc.latex import latex
  File "/home/bober/sage/local/lib/python2.6/site-packages/sage/misc/latex.py", line 39, in <module>
    from misc import tmp_dir, graphics_filename
  File "/home/bober/sage/local/lib/python2.6/site-packages/sage/misc/misc.py", line 97, in <module>
    os.makedirs(DOT_SAGE)
  File "/home/bober/sage/local/lib/python/os.py", line 157, in makedirs
    mkdir(name, mode)
OSError: [Errno 13] Permission denied: '/.sage/'



Re: [sage-devel] Re: Sage 4.7.2 make failure on Thinkpad T-40 with Oneiric Ubuntu - maxima fails to build

2011-12-04 Thread Jonathan Bober
On Sat, Dec 3, 2011 at 3:56 PM, Jonathan Bober jwbo...@gmail.com wrote:

 Would it be enough to do

 export SAGE_FAT_BINARY=yes
 export SAGE_BINARY_BUILD=yes
 export SAGE_ATLAS_ARCH=base

 before building?


I don't know if this will work, but I decided I may as well give it a try.
The resulting binary is currently being uploaded to

http://sage.math.washington.edu/home/bober/sage-4.7.2-oneiric-32bit-Bober-i686-Linux.tar.gz

It looks like my upload speed is nowhere near as good as my download speed,
so this probably won't finish uploading until around 4 a.m. Seattle time.
(40 minutes from now.) When it is finished the size should be 673120489
bytes. (I'm sending this email now because I will probably be asleep by
then.)

You could give that binary a try, and if we are lucky you won't get an
illegal instruction error. Of course, it would still be good to know why
sage won't build on your machine, even if this works.



Re: [sage-devel] Re: Sage 4.7.2 make failure on Thinkpad T-40 with Oneiric Ubuntu - maxima fails to build

2011-12-04 Thread Jonathan Bober
Yes, I had seen some reference to something like that. I think there is
also an option to set the cpuid. Unfortunately, these features do not seem
to be very well documented. For example, when I search google for
virtualbox synthcpu I get your mailing list post on the first page of
results, even though it was only sent 3 hours ago. However, the mailing
list post that you linked to contains:

By default, Guest OS sees the Host CPU features, and you will need to
limit this.
All the other hardware is fully abstract.

AFAIK This is possible with VirtualBox, but not documented. i.e. to make
VirtualBox always pass some kind of generic (old) CPU to guest, such as the
old Pentium II chip.

Of course, returning a restricted feature set when executing the cpuid
instruction might not be the same as raising an illegal instruction
exception when the guest OS tries to execute an instruction that the cpuid
says it shouldn't be able to execute, but it would be a start if I could
figure out how to do this. (And it isn't completely implausible to me that
a virtual machine might be able to raise an illegal instruction exception,
and thus more or less fully pretend to be an old cpu, but I also wouldn't
be surprised if it could not.)

If it doesn't take me more than a few minutes (of work -- I don't mind
waiting for compilation) maybe I'll try to do this using QEMU tomorrow. But
the compile might take a really long time.
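For reference, a hypothetical QEMU invocation along these lines; the CPU model, memory size, and disk image name are all placeholders, not tested:

```shell
# Force an old CPU model on the guest, so that newer instructions genuinely
# fault with SIGILL instead of merely being hidden from cpuid.
# (Image path and memory size are placeholders.)
qemu-system-i386 -cpu pentium2 -m 1024 -hda ubuntu-11.10-i386.img
```

Unlike restricting cpuid alone, full emulation of an old CPU model should actually raise illegal-instruction exceptions, which is the behavior being chased here.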

On Sat, Dec 3, 2011 at 11:54 PM, Emil Widmann emil.widm...@gmail.com wrote:

  It seems that properties of the host CPU are passed over to the VM
  guest as indicated here,
 https://forums.virtualbox.org/viewtopic.php?f=10t=33954

 Maybe there is a chance with VirtualBox: I found the following in the
 documentation of the VBoxManage command:
 VBoxManage --synthcpu on|off: This setting determines whether
 VirtualBox will expose a synthetic CPU to the guest to allow live
 migration between host systems that differ significantly.





Re: [sage-devel] Re: Sage 4.7.2 make failure on Thinkpad T-40 with Oneiric Ubuntu - maxima fails to build

2011-12-04 Thread Jonathan Bober
Well, here is Dale's install.log:

http://sage.math.washington.edu/home/bober/dale-amon-20111203-install.log

I don't know what's going wrong.

On Sun, Dec 4, 2011 at 4:34 AM, Dale Amon a...@vnl.com wrote:

 On Sun, Dec 04, 2011 at 03:26:23AM -0800, Jonathan Bober wrote:
  On Sat, Dec 3, 2011 at 3:56 PM, Jonathan Bober jwbo...@gmail.com
 wrote:
 
   Would it be enough to do
  
   export SAGE_FAT_BINARY=yes
   export SAGE_BINARY_BUILD=yes
   export SAGE_ATLAS_ARCH=base
  
   before building?
  
 
  I don't know if this will work, but I decided I may as well give it a
 try.
  The resulting binary is currently being uploaded to
 
 
 http://sage.math.washington.edu/home/bober/sage-4.7.2-oneiric-32bit-Bober-i686-Linux.tar.gz
 
  It looks like my upload speed is nowhere near as good as my download
 speed,
  so this probably won't finish uploading until around 4 a.m. Seattle time.
  (40 minutes from now.) When it is finished the size should be 673120489
  bytes. (I'm sending this email now because I will probably be asleep by
  then.)
 
  You could give that binary a try, and if we are lucky you won't get an
  illegal instruction error. Of course, it would still be good to know why
  sage won't build on your machine, even if this works.

 Will do. It's just after noon here in Belfast so I'm going
 out for some coffee and when I get back I will try installing
 it.

 My compile is still running and will probably not finish up until
 late afternoon my time, so we'll see if things come out better
 with that bugfix ecl spkg in the correct place.

 Seems likely I'll catch you on the flipside of the day, given
 the time over there. I expect to be working past midnight
 here so perhaps I will catch you again then.





Re: [sage-devel] Re: Sage 4.7.2 make failure on Thinkpad T-40 with Oneiric Ubuntu - maxima fails to build

2011-12-03 Thread Jonathan Bober
It's a shame this has gone unreplied to for so long. (Unless I missed a
reply in another thread. I may be confused by the various threads, which
is why I'm only replying on sage-devel.)

Are other people really having problems building on Ubuntu 11.10? I've had
no problems building sage 4.8.2 on 64-bit Ubuntu 11.10. Also, I just tried
on 32-bit 11.10 in a virtual machine, and sage built without any problems.
(Running make ptest now.)

From a fresh 32 bit Ubuntu 11.10 install, I had to 'apt-get install
gfortran m4 g++' to get the build started. Eventually it failed on building
python, I think, and after digging through the install log I found I also
needed to 'apt-get install dpkg-dev'. After that, everything went fine. The
only (possibly) special thing that I did was set SAGE_ATLAS_ARCH=fast.
(Also I set SAGE_PARALLEL_SPKG_BUILD='yes' and MAKE='make -j4'.)

Despite being 32-bit, the VM still sees a very modern processor, so it is
possible something could go wrong on old hardware. If someone advises
me on exactly what to do, I could try building a binary that should be
usable on old machines. Would it be enough to do

export SAGE_FAT_BINARY=yes
export SAGE_BINARY_BUILD=yes
export SAGE_ATLAS_ARCH=base

before building? Also, if anyone knows a way to make the cpu in the virtual
machine look like an old cpu, I could try doing that.


On Wed, Nov 30, 2011 at 5:36 AM, Dale Amon a...@vnl.com wrote:

 On Wed, Nov 30, 2011 at 04:33:28AM -0800, Dima Pasechnik wrote:
  after all, it seems you need more things to be installed: the autotools
  chain, i.e.
  autoconf and automake.
  I suppose it's autotools-dev package on oneiric.

 They appear to already be installed:

  *** Opt develautoconf 2.68-1ubunt 2.68-1ubunt automatic configure
 script builder
  *** Opt develautomake 1:1.11.1-1u 1:1.11.1-1u A tool for
 generating GNU Standards-compliant Makefiles
  *** Opt develautotools-de 20110511.1  20110511.1  Update
 infrastructure for config.{guess,sub} files

 The installed versions are:

  automake --version
 automake (GNU automake) 1.11.1
 Copyright (C) 2009 Free Software Foundation, Inc.
 License GPLv2+: GNU GPL version 2 or later 
 http://gnu.org/licenses/gpl-2.0.html
 This is free software: you are free to change and redistribute it.
 There is NO WARRANTY, to the extent permitted by law.

 Written by Tom Tromey tro...@redhat.com
   and Alexandre Duret-Lutz a...@gnu.org.

  autoconf --version
 autoconf (GNU Autoconf) 2.68
 Copyright (C) 2010 Free Software Foundation, Inc.
 License GPLv3+/Autoconf: GNU GPL version 3 or later
 http://gnu.org/licenses/gpl.html, 
 http://gnu.org/licenses/exceptions.html
 This is free software: you are free to change and redistribute it.
 There is NO WARRANTY, to the extent permitted by law.

 Written by David J. MacKenzie and Akim Demaille.






Re: [sage-devel] Re: Sage 4.7.2 make failure on Thinkpad T-40 with Oneiric Ubuntu - maxima fails to build

2011-12-03 Thread Jonathan Bober
On Sat, Dec 3, 2011 at 3:56 PM, Jonathan Bober jwbo...@gmail.com wrote:


 Are other people really having problems building on Ubuntu 11.10? I've had
 no problems building sage 4.8.2 on 64-bit Ubuntu 11.10. Also, I just tried
 on 32-bit 11.10 in a virtual machine, and sage built without any problems.
 (Running make ptest now.)


All tests passed!
Total time for all tests: 1428.0 seconds

(And, of course, I meant sage 4.7.2.)



Re: [sage-devel] Re: Sage 4.7.2 make failure on Thinkpad T-40 with Oneiric Ubuntu - maxima fails to build

2011-12-03 Thread Jonathan Bober
I'm not really sure. I'm hoping that someone else might have some more
advice. It would be easier to diagnose these failures if they could be
replicated.

Some other thoughts: More information about your machine might be useful,
like the output from 'cat /proc/cpuinfo', and the amount of ram + swap that
you have. Maybe the machine is running out of memory during the build and
giving a strange error? Maybe you're having actual hardware failures? (If
you try to build again, does it always fail in the same spot?)

It also might be useful for someone else to see your install.log, if you
send it to me I'll put it somewhere public if you don't mind, or maybe Dima
can do that. (The missing files don't appear to be part of the maxima spkg
before it is built, so if it is giving a missing file error, then maybe
there was some earlier error?)

(You might also check the md5 sum of your source download, and make sure
that nothing went wrong there.)
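Checking the download against the published checksum is a one-liner; the sage tarball name below is a placeholder, and the first line is just a known-answer sanity check that md5sum behaves as expected:

```shell
# Known-answer check: md5 of the 5-byte string "hello".
printf 'hello' | md5sum    # -> 5d41402abc4b2a76b9719d911017c592  -

# Then hash the actual download and compare with the published checksum
# by eye (filename is a placeholder):
# md5sum sage-4.7.2.tar
```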


On Sat, Dec 3, 2011 at 5:04 PM, Dale Amon a...@vnl.com wrote:

 On Sat, Dec 03, 2011 at 04:04:48PM -0800, Jonathan Bober wrote:
  On Sat, Dec 3, 2011 at 3:56 PM, Jonathan Bober jwbo...@gmail.com
 wrote:
   Are other people really having problems building on Ubuntu 11.10? I've
 had
   no problems building sage 4.8.2 on 64-bit Ubuntu 11.10. Also, I just
 tried
   on 32-bit 11.10 in a virtual machine, and sage built without any
 problems.
   (Running make ptest now.)
  
  All tests passed!
  Total time for all tests: 1428.0 seconds
 
  (And, of course, I meant sage 4.7.2.)

 Do you have a specific procedure you would like me to test
 here on the T40?






Re: [sage-devel] Re: or sage-5.0? (Re: [sage-release] Next release: sage-4.7.3 or sage-4.8?)

2011-11-08 Thread Jonathan Bober
On Tue, Nov 8, 2011 at 3:09 AM, Julien Puydt julien.pu...@laposte.net wrote:


  You'll find here http://clefagreg.dnsalias.org/ a distribution aimed at
 them. And in fact, several distributions, to be installed on a usb key and
 booted, precisely because the students may not have administrative rights
 on the machines they have access to. There used to be a sage variant, which
 is still mentioned, but I don't find it on that page. It was optional in
 part because of size issues : the key was already choke-full with
 math-oriented stuff... which sage duplicates!


USB-booting is nice, but my example takes place on a small super-computer
cluster in a different country, which I only access with ssh. Also, I'm
talking about actually _installing_ software, not just running it.

If there is another way to easily install software in my home directory, I
would like to know about it. I'm sure some sort of package management
system that supported it could be built. The problem is annoying enough
that some people started

http://fixingscientificsoftwaredistribution.blogspot.com/

(It doesn't look like that blog went very far.)



Re: [sage-devel] Re: or sage-5.0? (Re: [sage-release] Next release: sage-4.7.3 or sage-4.8?)

2011-11-07 Thread Jonathan Bober
On Sun, Nov 6, 2011 at 5:59 AM, Julien Puydt julien.pu...@laposte.net wrote:

 On 05/11/2011 21:24, Justin C. Walker wrote:

  There are so many different versions of each library and system (for
 Linux, in particular)
 that it's a practical impossibility to produce a package like Sage that
 will work on the systems currently supported.


 I would like to point out that quite a few distributions have some sort of
 rolling-release organisation, where some packages are updated within a huge
 set with complex deps. It works.

 And I'm not just discussing linux distributions : there are a few *BSD out
 there too, and I think there are distributions for Sun, OSX and win32 too.

 So I think it's definitely practical and possible to produce a package
 like sage that will work on the systems currently supported, and more.
 Because it has been done.

 Any of the many debian packages I have on my system has a list of deps.
 Each sage spkg has deps. Where is the difference? Why would it be
 impossible to apt-get install sagemath, yum install sagemath, emerge
 sagemath and have the right thing be done?

 I can't understand why you think sage is different.


I don't know enough about software packaging to know if sage is _really_
that different, but it is quite complicated and large, and it may
take a big coordinated effort to get sage into every different
distribution's package management. And sage needs to work on old releases.
Is it possible to update mpfr on Ubuntu 8.04 without adding an extra
repository?

But anyway, the main reason I'm writing is that I've seen these type of
comments many times, and I think you are missing that having everything
included is, to many people, one of the great features of sage. Sage might
be large, but it is completely trivial to download and compile, and it is
not much harder to start developing. And it is also completely trivial to
do this whether or not you have root access on the machine that you are
using. Here is a short story:

Not long ago I wanted to use mercurial on a machine where I have an
account, but no root access. I'm sure I could have asked the sysadmin to
install it, but it was probably not during business hours, and maybe it was
even on a weekend. Well, it should be easy enough to install it myself in
my home directory, I thought. Not so fast, though --- mercurial needed
some python source files in order to compile, and this machine doesn't have
the python development packages installed. So I suppose I can just download
those, but wait, it also needs... 30 minutes later, mercurial is still not
installed, and I'm getting frustrated, but then I realize that _sage_
includes mercurial. So I download sage, type 'make', and sit back and wait.

So I am asserting that the easiest way to install mercurial, if you don't
have root access, is to download and compile sage.

And a little later, when I realized that the system-installed mpfr is so
old that it doesn't have mpfr_mul_d(), my first reaction again was that I
should download and compile mpfr, but first I need gmp (or should I get
mpir instead, and then deal with compiling it in gmp compatibility mode?),
and gmp needs... Or, of course, I can just add
-I/home/bober/sage/local/include and -L/home/bober/sage/local/lib to my
makefiles.
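A hypothetical compile command to the same effect; the install path is the example from the message, the program name and libraries are placeholders:

```shell
# Point the toolchain at the Sage-bundled headers and libraries instead of
# the (too old) system-wide copies. The rpath keeps the binary finding the
# bundled shared libraries at run time. Paths and names are illustrative.
SAGE_LOCAL="$HOME/sage/local"
cc -I"$SAGE_LOCAL/include" -L"$SAGE_LOCAL/lib" \
   -Wl,-rpath,"$SAGE_LOCAL/lib" \
   myprog.c -lmpfr -lgmp -o myprog
```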

So I am asserting that the easiest way to install mpfr, if you don't have
root access, is to download and compile sage. And the same goes for just
about every other package included in sage.

For many of us, sage is not just a 'program'. It is distribution of lots of
the software that we want to use, and many people have done a wonderful job
of making it so that it is usually completely trivial to install, and also
to modify, even without root access. So if someone wanted to go do lots of
work just so that I can type 'apt-get install' instead of 'make', that
would be fine with me, but I think it would be missing the point; and if it
interfered with the current simple build process, it would be a step
backwards for me.



Re: [sage-devel] @parallel doesn't clean up temp directories?

2011-09-19 Thread Jonathan Bober
I just looked at a few machines where I run sage and found that one of them
has this problem. It is also the one where I've used @parallel the most.

I see at most one subdirectory per host in my .sage directory on *.math, so
there's not much of a problem there. I have use @parallel a fair amount
there, but not that much. On a different cluster that I use that has nfs
mounted home directories I see one hostname with 7789 subdirectories and
another with 4731 subdirectories. So whatever is going on, it doesn't seem
to be isolated.
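A cleanup along these lines can be done with find; the ~/.sage/temp layout is inferred from the messages in this thread, so adjust the path for your install:

```shell
# Remove the empty leftover @parallel temp directories depth-first:
# -depth deletes children before parents, and -empty ensures any directory
# with real data in it is left alone. Path is an assumption.
SAGE_TEMP=${SAGE_TEMP:-$HOME/.sage/temp}
find "$SAGE_TEMP" -depth -type d -empty -delete 2>/dev/null || true
```

On NFS this will still be slow for tens of thousands of directories, but it only removes directories that are genuinely empty.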

On Mon, Sep 19, 2011 at 12:26 PM, Volker Braun vbraun.n...@gmail.com wrote:

 On a big computation with many @parallel decorators I eventually get

 [Errno 31] Too many links: '/home/vbraun/.sage//temp/hostname/26457/'

 This is caused by trying to create 2^15 ~ 32k subdirectories, which is an
 ext2/ext3 limitation. Is there any reason why the @parallel forker creates
 its own temp directories but then does not clean them up? Each 
 subdirectory contains a single empty interface/ subdirectory.

 PS: This is on a NFS partition, just deleting 32k directories takes a
 long time.





Re: [sage-devel] Purpose of readlink and realpath in $SAGE_ROOT/sage

2011-08-24 Thread Jonathan Bober
On Thu, Aug 18, 2011 at 9:02 PM, Jeroen Demeyer jdeme...@cage.ugent.be wrote:

 Hello sage-devel,

 Can somebody explain the rationale for the following lines in
 $SAGE_ROOT/sage:
 if [ "$SAGE_ROOT" = "." ];  then
SAGE_ROOT=`readlink -n "$0" 2> /dev/null` || \
SAGE_ROOT=`realpath "$0" 2> /dev/null` || \
SAGE_ROOT="$0"

 I think the following would work equally well:
 if [ "$SAGE_ROOT" = "." ];  then
SAGE_ROOT="$0"



I assume those lines are supposed to try to determine the location of the
sage install when sage is not run from the directory where it is installed.
I think that they work when the sage executable is called directly, and your
proposal would probably work equally well in that case, but they seem broken
in other cases. I generally link sage-(release number) to ~/sage, and then I
link ~/bin/sage to ~/sage/sage, and I've always had to set SAGE_ROOT
explicitly. Are those lines supposed to be there so that I don't have to set
the path explicitly?

I think that in my case something like the following would work:

cd "${0%/*}"
SAGE_RELATIVE_PATH=`readlink -n "$0"`
cd "${SAGE_RELATIVE_PATH%/*}"
SAGE_ROOT=`pwd`
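A more robust version of that idea follows symlinks one hop at a time, a portable sketch (not the actual sage script) that avoids GNU `readlink -f`, which is not available everywhere:

```shell
# Canonicalize the directory containing a possibly-symlinked script.
# Follows a chain of symlinks one hop at a time; $0 handling is simplified.
target=$0
while [ -L "$target" ]; do
    dir=$(cd "$(dirname "$target")" && pwd)
    link=$(readlink "$target")
    case $link in
        /*) target=$link ;;          # absolute symlink target
        *)  target=$dir/$link ;;     # relative target: resolve against link dir
    esac
done
SAGE_ROOT=$(cd "$(dirname "$target")" && pwd -P)
echo "$SAGE_ROOT"
```

This handles both the ~/sage symlink and the ~/bin/sage symlink cases described above, since each hop is resolved against the directory of the link itself.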



 Note that this is not needed to get an absolute (rather than relative)
 path, since below there is
 # Make root absolute:
 cd $SAGE_ROOT
 SAGE_ROOT=`pwd`

 If we really want to resolve symbolic links, we could replace `pwd` by
 `pwd -P` (physical path).


 In any case, readlink -n certainly does NOT do what is intended and
 should be removed (#11707):
  - It only works when `$0` (the sage executable itself) is a symbolic
 link, it does not work when some other component in the path is a
 symbolic link.
  - If the sage executable is a symbolic link, then `readlink -n` returns
 the link itself, not the canonicalized name.  Example: if
 `/usr/local/sage-4.7.1/sage` is a symbolic link to `sagefoo`, then
 `SAGE_ROOT` would become `sagefoo` when `/usr/local/sage-4.7.1/sagefoo`
 is intended.
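
A small plain-Python sketch of the distinction Jeroen describes: `os.readlink`
returns the raw link target (possibly relative), while `os.path.realpath`
canonicalizes every component. The paths here are hypothetical, mirroring the
`sage-4.7.1/sage -> sagefoo` example above:

```python
import os
import tempfile

# Build .../sage-4.7.1/sagefoo and a symlink "sage" -> "sagefoo".
root = tempfile.mkdtemp()
inst = os.path.join(root, "sage-4.7.1")
os.mkdir(inst)
open(os.path.join(inst, "sagefoo"), "w").close()
link = os.path.join(inst, "sage")
os.symlink("sagefoo", link)

print(os.readlink(link))       # the raw target: 'sagefoo'
print(os.path.realpath(link))  # the canonical absolute path, ending in
                               # 'sage-4.7.1/sagefoo'
```

This is why `readlink -n` alone produces `sagefoo` for `SAGE_ROOT`: it never
prepends the directory of the link.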





[sage-devel] memory leak

2011-07-25 Thread Jonathan Bober
I just found the following memory leak:

def leak():
    K.<a> = NumberField(x^2 - x - 1)
    m = get_memory_usage()
    for n in xrange(10):
        E = EllipticCurve(K, [3,4,5,6,7])
        if n % 1000 == 0:
            print get_memory_usage() - m

sage: leak()
0.0
0.5
1.0
1.0
1.5
2.0
2.0
2.5
3.0
[...]

Is this the same as an already reported bug? It looks like it might be
related to http://trac.sagemath.org/sage_trac/ticket/715 or
http://trac.sagemath.org/sage_trac/ticket/5970 except that this example is
always constructing the exact same curve, and it isn't doing anything with
it. If it isn't the same as those leaks, does anyone have any idea what it
is?
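
The tickets mentioned above revolve around caches that hold strong references
to objects the user has long discarded. A toy plain-Python illustration (the
`Parent` class is hypothetical, not Sage's actual coercion machinery) of why a
strongly-keyed cache grows even when "the same" object is built repeatedly,
and how weak references avoid it:

```python
import weakref

class Parent(object):
    """Stand-in for a parent object such as a polynomial ring."""
    pass

strong_cache = {}
weak_cache = weakref.WeakKeyDictionary()

def construct(cache):
    # Each call creates a brand-new key object, the way repeated
    # EllipticCurve(K, ...) calls can create fresh intermediate parents.
    for _ in range(1000):
        key = Parent()
        cache[key] = "coercion data"

construct(strong_cache)   # these entries can never be collected: a leak
construct(weak_cache)     # these entries vanish once the keys are unreachable
```

With the strong cache, every iteration adds an entry that nothing can ever
remove; with the weak cache, entries disappear as soon as the key object is
garbage collected.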



Re: [sage-devel] Re: memory leak

2011-07-25 Thread Jonathan Bober
Thanks! 11468 fixed the leak. While I was at it I gave 11495 a positive
review also.

On Mon, Jul 25, 2011 at 1:19 AM, Jean-Pierre Flori <jpfl...@gmail.com> wrote:

 I tested your function on my Sage 4.7.alpha? install with 11468 (not
 14468...) and 11495 and I do not seem to get the leak:
 sage: leak()
 0.46875
 0.46875
 0.46875
 0.46875
 0.46875
 0.46875
 0.46875
 0.46875
 0.46875
 0.46875
 0.46875
 0.46875
 0.46875
 0.46875
 0.46875
 0.46875
 0.46875
 0.46875
 0.46875
 0.46875


 On 25 juil, 10:16, Jean-Pierre Flori jpfl...@gmail.com wrote:
  Ticket 11521 which seems to be nothing but 715 looks similar, but does
  not seem related.
 
  You could also try applying 14468 and 11495 (patches but no reviews
  yet, one is just a very old bug which has been unfortunately rolled
  back).
  They affect multi polynomial rings used by elliptic curves.
 
  Finally it might be worth having a look at 5949, but I would say it is
  not related.
 
  On 25 juil, 09:55, Jonathan Bober jwbo...@gmail.com wrote:
 
 
 
 
 
 
 
   I just found the following memory leak:
 
    def leak():
        K.<a> = NumberField(x^2 - x - 1)
        m = get_memory_usage()
        for n in xrange(10):
            E = EllipticCurve(K, [3,4,5,6,7])
            if n % 1000 == 0:
                print get_memory_usage() - m
 
   sage: leak()
   0.0
   0.5
   1.0
   1.0
   1.5
   2.0
   2.0
   2.5
   3.0
   [...]
 
   Is this the same as an already reported bug? It looks like it might be
   related to http://trac.sagemath.org/sage_trac/ticket/715 or
   http://trac.sagemath.o... that this example is
   always constructing the exact same curve, and it isn't doing anything with
   it. If it isn't the same as those leaks, does anyone have any idea what it
   is?





[sage-devel] hashing number field ideals

2011-07-18 Thread Jonathan Bober
Hi all. I think there's a problem with the following:

sage: K.<a> = NumberField(x^2 - x - 1)
sage: I = K.ideal(2 * a - 1)
sage: I2 = I.factor()[0][0]
# I is a prime ideal, so its only factor is itself:
sage: print I, I2, I == I2
Fractional ideal (2*a - 1) Fractional ideal (2*a - 1) True

# also, the ideals even print the same.

# But the two hashes are different:

sage: print I.__hash__(), I2.__hash__()
-7493989779944505313 -6341068275337658337

# The hash is computed using I.pari_hnf().hash(), so let's look at that:

sage: print I.pari_hnf(), I2.pari_hnf()
[5, 2; 0, 1] [5, 2; 0, 1]
sage: print I.pari_hnf() == I2.pari_hnf()
True
sage: print I.pari_hnf().__hash__(), I2.pari_hnf().__hash__()
-7493989779944505313 -6341068275337658337

This looks rather strange. I and I2 also have the same types, and same
parents.

I eventually figured out the likely cause:

sage: I.gens()
(2*a - 1,)
sage: I2.gens()
(5, a + 2)

So pari_hnf() is giving different answers for different bases, even though
those answers look the same. Also, I can work around this with

sage: I2 = K.ideal(I2.gens_reduced())

I'm not sure what the right fix for this is (assuming I'm not the only one
that thinks something is wrong). Maybe the hash needs to be computed from a
reduced generating set? I don't know if this will always work, but it does
work in the example (note that a is a unit, so multiplying by a does not
change the ideal)

sage: I3 = K.ideal((I * a).gens_reduced())

even though the resulting ideals look different when printed:

sage: I
Fractional ideal (2*a - 1)
sage: I3
Fractional ideal (a + 2)
sage: print I == I3, I.__hash__() == I3.__hash__()
True True
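
The invariant being violated is the usual Python one: objects that compare
equal must hash equal, or dictionaries and sets silently break. A toy sketch
(a hypothetical `Ideal` class over ZZ, not Sage's implementation) of the fix
suggested above — hash a canonical/reduced form instead of whatever generating
set the object happens to carry:

```python
from math import gcd
from functools import reduce

class Ideal:
    """Toy 'ideal' in ZZ, represented by an arbitrary generating set."""
    def __init__(self, *gens):
        self.gens = gens

    def _canonical(self):
        # The ZZ analogue of a reduced generating set / HNF: the gcd.
        return reduce(gcd, self.gens)

    def __eq__(self, other):
        return self._canonical() == other._canonical()

    # The buggy pattern would be hash(self.gens): equal ideals would then
    # hash differently whenever their generating sets differ, exactly as
    # with (2*a - 1) versus (5, a + 2) above.

    # The fix: hash the same canonical form that __eq__ compares.
    def __hash__(self):
        return hash(self._canonical())

I1, I2 = Ideal(6), Ideal(12, 18, 30)   # both generate the ideal (6)
```

Here `I1 == I2` and `hash(I1) == hash(I2)` hold together, so the two
representations collapse to one element in a set or dictionary.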



[sage-devel] memory leak

2010-10-27 Thread Jonathan Bober
Dear all,

There seems to be a memory leak in some code below, in at least versions
4.5 and 4.5.3. For example, if I call it with

sage: L = find_candidates_for_large_value(5000)

It prints something like:

current memory usage: 836.73046875
current memory usage: 836.73046875
current memory usage: 836.73046875
current memory usage: 836.98046875
current memory usage: 836.98046875
current memory usage: 837.23046875
current memory usage: 837.48046875
current memory usage: 837.73046875
current memory usage: 837.98046875
current memory usage: 838.2578125
current memory usage: 838.515625
current memory usage: 838.81640625
current memory usage: 839.06640625
current memory usage: 839.1953125
current memory usage: 839.453125
current memory usage: 839.6796875
current memory usage: 839.9375
current memory usage: 840.06640625
[...]


It is a little complicated, so we need a smaller example to figure out
what is causing the memory leak. Before I try making a smaller example,
I thought I would write here to see if anyone happens to know what is
leaking memory, or at least to see if anyone has a good guess.

Some things used are:

prime_range()
nth_prime()
symbolic arithmetic
fast_callable(domain=ComplexField(250))
ZZ.random_element()
RR.random_element()
copy(MatrixSpace(ZZ, n).zero())
A.LLL(), where A is a modification of the above copy(MatrixSpace(ZZ, n).zero())


import sys

def find_candidates_for_large_value(repeat=1):
    """
    Find values of t where zeta(1/2 + it) is expected to be big.
    """
    possible_t = []
    X = var('X')

    p_start = 20
    p_end = 40

    euler_product1 = 1
    euler_product2 = 1
    for p in prime_range(nth_prime(p_start) + 1):
        euler_product1 = euler_product1 / (1 - 1/p^(1/2 + i * X))

    euler_product2 = euler_product1

    for p in prime_range(nth_prime(p_start) + 1, nth_prime(p_end) + 1):
        euler_product2 = euler_product2 / (1 - 1/p^(1/2 + i * X))

    euler_product1 = fast_callable(euler_product1, domain=ComplexField(250),
                                   vars='X')
    euler_product2 = fast_callable(euler_product2, domain=ComplexField(250),
                                   vars='X')

    for l in xrange(repeat):
        n = ZZ.random_element(p_start, p_end)
        m = ZZ.random_element(120, 150)
        r = ZZ.random_element(40, 50)
        delta = RR.random_element(.9, .)

        last_prime = nth_prime(n)
        primes = prime_range(last_prime + 1)

        weights = [p^(-1/4) for p in primes]
        #print weights

        A = copy(MatrixSpace(ZZ, n+1).zero())
        for k 

Re: [sage-devel] Improvement of numerical integration routines

2010-09-23 Thread Jonathan Bober
On Thu, 2010-09-16 at 15:48 -0700, maldun wrote:
 Hi all!
 
 I have found some time to work on ticket #8321 again, and
 have added some numerical integration tests.

Hi Maldun,

I think that improvement of numerical integration would be excellent.
I've been somewhat disappointed with the numerical integration in sage
in the past. Probably most of the functionality is actually in sage or
in components of sage, but it isn't exposed too well. (Or else I have
always used the wrong functions, or haven't tried in a while.)

I have some more thoughts below.

 But I found some major problems, that should be discussed. I said
 already to Burcin that I will help out with this, but I think there
 are some things that need discussion before I start.
 
 As far as I have seen sage has the following behavior: calculate the
 exact integral if possible, numerically evaluate this, else call _evalf_.
 
 But this is not necessarily clever!
 An example:
 sage: integrate(sin(x^2),x,0,pi).n()
 ---
 TypeError Traceback (most recent call
 last)
 ...
 /home/maldun/sage/sage-4.5.2/local/lib/python2.6/site-packages/sage/
 rings/complex_number.so in
 sage.rings.complex_number.ComplexNumber.__float__ (sage/rings/
 complex_number.c:7200)()
 
 TypeError: Unable to convert -2.22144146907918 + 2.22144146907918*I to
 float; use abs() or real_part() as desired
 
 this error comes from erf trying to evaluate a complex number (which,
 by the way, should be possible because it's an analytic function...)
 But one can also run into numerical troubles! See an example which I
 posted some time ago:
 http://ask.sagemath.org/question/63/problems-with-symbolic-integration-and-then-numerical-evaluating
 Normally numerical integration is done completely numerically! For example
 Mathematica always distinguishes between Integrate and NIntegrate.
 N[Integrate[...]] is not the same as NIntegrate!
 

In sage it seems there are a few different ways to get a numeric
approximation to an integral. There are the top level functions integral
and numerical_integral (and their synonyms), and then symbolic
expressions have a method nintegrate. So if f is a symbolic expression,

1.) numerical_integral(f, 0, 1)
2.) integral(f, x, 0, 1).n() and
3.) f.nintegrate(x, 0, 1)

do 3 different things. Maybe there are also other top-level ways of
doing definite integration that don't require either an import statement or a
direct call to mpmath.

I think I would like 1 and 3 to do the exact same thing. I don't know if
there is a good reason for them to be different. (I do know that I might
often be interested in integrating things that aren't symbolic
expressions, though.) I think that the behavior of integral().n() that
you list above is reasonable, though. If it fails like in the first
example you have above, then that is probably a _different_ bug. And the
second example is a type of problem we will probably always have to deal
with. (I have a similar example that came up in the real world that I
might try to dig up because I think it will have the same bad behavior.)

A different behavior that I would find reasonable is for integral().n()
to _never_ do numeric integration. If integral() can't get an exact
answer, then integral().n() could just give an error and ask the user to
call numerical_integral() if that's what's wanted.

If integral().n() always did numeric integration, then it would of
course suffer from the problems below.

 Then on the other hand if we would just hand over things to libraries
 like pari and mpmath we get no hint if our results are trustworthy.
 Consider following simple example of an oscillating integral:
 
 sage: mpmath.call(mpmath.quad,lambda a:
 mpmath.exp(a)*mpmath.sin(1000*a),[0,pi])
 -1.458868938634994559655
 sage: integrate(sin(1000*x)*exp(x),x,0,pi)
 -1000/101*e^pi + 1000/101
 sage: integrate(sin(1000*x)*exp(x),x,0,pi).n()
 -0.0221406704921088
 
 Do you see the problems?! These are caused by the high oscillation,
 but we get no warning.
 If you use scipy you would get the following:
 
 sage: import scipy
 sage: import scipy.integrate
 sage: scipy.integrate.quad(lambda a: exp(a)*sin(1000*a),0,pi)
 Warning: The integral is probably divergent, or slowly convergent.
 (-0.1806104043408597, 0.092211010882734368)
 
 That's how it should be: The user has to be informed if the result
 could be wrong, and is then able to choose another method. But in the
 current state, bad bad things could happen...
 

Printed warnings like this can be useful, but they can also be annoying,
especially if the numeric integration routine is not called from
interactive code. And they might not be necessary in an example like
above where the error estimate returned is almost as big as the answer
returned. At the very least, if a warning might be printed to standard
output I would like a way to turn it off.
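
scipy's behaviour of returning a (value, error_estimate) pair and leaving the
warning policy to the caller can be sketched in plain Python with an adaptive
Simpson rule (an illustration of the idea, not Sage's, scipy's, or mpmath's
actual algorithm). On the oscillatory integrand above it recovers the correct
value together with a small, honest error estimate:

```python
import math

def adaptive_simpson(f, a, b, tol=1e-9, min_depth=10, max_depth=30):
    """Return (value, error_estimate) for the integral of f over [a, b]."""
    def simpson(fa, fm, fb, a, b):
        return (b - a) / 6.0 * (fa + 4.0 * fm + fb)

    def recurse(a, b, fa, fm, fb, whole, tol, min_depth, depth):
        m = 0.5 * (a + b)
        flm, frm = f(0.5 * (a + m)), f(0.5 * (m + b))
        left = simpson(fa, flm, fm, a, m)
        right = simpson(fm, frm, fb, m, b)
        err = (left + right - whole) / 15.0  # standard Richardson estimate
        # min_depth forces refinement even when err looks tiny: for
        # sin(1000*x) on [0, pi] the first few dyadic sample points all hit
        # zeros, so an unforced rule would return 0 with a zero estimate.
        if depth <= 0 or (min_depth <= 0 and abs(err) <= tol):
            return left + right + err, abs(err)
        lv, le = recurse(a, m, fa, flm, fm, left,
                         tol / 2, min_depth - 1, depth - 1)
        rv, re = recurse(m, b, fm, frm, fb, right,
                         tol / 2, min_depth - 1, depth - 1)
        return lv + rv, le + re

    fa, fm, fb = f(a), f(0.5 * (a + b)), f(b)
    whole = simpson(fa, fm, fb, a, b)
    return recurse(a, b, fa, fm, fb, whole, tol, min_depth, max_depth)

value, err = adaptive_simpson(lambda x: math.exp(x) * math.sin(1000 * x),
                              0.0, math.pi)
# Exact value: 1000*(1 - e^pi)/(1000^2 + 1)
```

A wrapper can then print a warning, raise, or stay silent depending on how
err compares to value, without the quadrature code itself deciding.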

mpmath's approach might be good, where it seems the error is 

Re: [sage-devel] Re: Improvement of numerical integration routines

2010-09-23 Thread Jonathan Bober
On Sun, 2010-09-19 at 08:12 -0700, rjf wrote:
 Any program that computes a quadrature by evaluating the integral at
 selected points cannot provide
 a rigorous error bound.  I don't know what a rigorous error estimate
 is, unless it is an
 estimate of what might be a rigorous bound.  And thus not rigorous at
 all, since the
 estimate might be wrong.

This is, of course, true, provided that no information is known about
the function except for its values on a set of measure 0. However, it is
certainly possible for a numeric integration routine to provide rigorous
error bounds provided that some additional constraints are satisfied.

For example, many functions have bounded variation, or have a bounded
derivative, or are strictly increasing. And I have found situations
where I would like to provide that information to an integration routine
and have it use it in order to provide me information about the error
bound. And it probably isn't unreasonable for an integration routine to
guess at that information, and to provide error bounds that are rigorous
assuming a set of not unreasonable hypotheses about the function it is
integrating.
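
To make the bounded-derivative case concrete: if f is Lipschitz on [a, b] with
constant L (for instance L = max |f'|), then on a subinterval of width h the
midpoint rule errs by at most ∫|f(x) - f(m)| dx <= L h^2/4, so n subintervals
give a rigorous total bound of L (b-a)^2 / (4n). A minimal plain-Python sketch
of an integrator that accepts that extra information:

```python
import math

def midpoint_with_bound(f, a, b, L, n=1000):
    """Midpoint rule plus a rigorous error bound, valid whenever f is
    Lipschitz on [a, b] with constant L (e.g. L = max |f'|)."""
    h = (b - a) / n
    value = h * sum(f(a + (k + 0.5) * h) for k in range(n))
    # Per subinterval: |integral - h*f(midpoint)| <= L*h^2/4, hence in total
    # at most n * L*h^2/4 = L*(b-a)^2/(4*n).
    bound = L * (b - a) ** 2 / (4.0 * n)
    return value, bound

value, bound = midpoint_with_bound(math.sin, 0.0, math.pi, L=1.0)
# The true value is 2, and the bound is a guarantee, not an estimate --
# provided the supplied L really does bound the variation of f.
```

The guarantee is only as good as the hypothesis, which is exactly the point
above: the routine cannot verify L, but it can be rigorous conditional on it.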

 
 A nasty function:f(x) :=  if member(x,evaluation_point_set) then 0
 else 1.
 integrate(f,a,b)  should be b-a.  The numerical integration program
 returns 0.
 

I am aware that this type of problem can actually occur in real world
examples, but I suspect that in the specific case of numeric integration
it can be somewhat alleviated by using random sampling to choose an
initial evaluation_point_set. Regardless, the existence of unsolvable
problems should not stop us from solving the solvable problems.

 There are SO many programs to do numerical quadrature, and some of
 them are
 quite good and free.  Some are not free (e.g. NIntegrate in
 Mathematica).
 The real issue is not writing the program, but figuring out which
 program to use and
 how to set up the parameters to make it work best.  I doubt that
 providing
 arbitrary-precision floats as a foundation will help solve all the
 issues, though
 a version of Gaussian quadrature or Clenshaw-Curtis using bigfloats
 would
 not be hard to do.  See http://www.cs.berkeley.edu/~fateman/papers/quad.pdf
 for program listings of a few quadrature routines using maxima and
 bigfloats.
 
 
  See what NIntegrate does, and think about
 whether you want to do that.
 
 RJF
 
 
 
 On Sep 17, 1:45 pm, Fredrik Johansson fredrik.johans...@gmail.com
 wrote:
  On Fri, Sep 17, 2010 at 12:48 AM, maldun dom...@gmx.net wrote:
   Do you see the problems?! These are caused by the high oscillation,
   but we get no warning.
   If you use scipy you would get the following:
 
  It is possible to get an error estimate back from mpmath, as follows:
 
  sage: mpmath.quad(lambda a: mpmath.exp(a)*mpmath.sin(1000*a),
  [0,mpmath.pi], error=True)
  (mpf('-0.71920642950258207'), mpf('1.0'))
  sage: mpmath.quad(lambda a: mpmath.exp(a)*mpmath.sin(1000*a),
  [0,mpmath.pi], maxdegree=15, error=True)
  (mpf('-0.022140670492108779'), mpf('1.0e-37'))
 
  Currently it doesn't seem to work with mpmath.call (because
  mpmath_to_sage doesn't know what to do with a tuple).
 
  The error estimates are also somewhat bogus (in both examples above --
  the first value has been capped to 1.0, if I remember correctly what
  the code does, and the second is an extrapolate of the quadrature
  error alone and clearly doesn't account for the arithmetic error). But
  it should be sufficient to correctly detect a problem and signal a
  warning in the Sage wrapper code in typical cases.
 
  I'm currently working on rewriting the integration code in mpmath to
  handle error estimates more rigorously and flexibly. This will work
  much better in a future version of mpmath.
 
  Fredrik
 




Re: [sage-devel] Trac #8276: Specialist Elliptic curve / Brandt Modules needed...

2010-02-16 Thread Jonathan Bober
At least some errors probably come from the file

sage/algebras/quatalg/quaternion_algebra_cython.pyx

and I think they should be easily fixable without any knowledge of what
is going on by just changing the code to make a copy of the zero matrix.

However, I really wonder whether this fix is necessary. I understand that
the bug pointed out is a bad bug, but I think that when I call
zero_matrix() I will usually quickly modify the returned matrix. Are
there many useful situations where it is important that M =
zero_matrix() return very fast and where we _aren't_ going to modify M?
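
The hazard behind #8276 — a cached object handed out by reference and then
mutated by a caller — is easy to reproduce in plain Python (toy code with a
hypothetical `zero_vector`, not Sage's matrix classes):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def zero_vector(n):
    # Cached for speed -- but every caller receives the SAME list object.
    return [0] * n

v = zero_vector(3)
v[0] = 99                    # one caller mutates the shared cached value...
corrupted = zero_vector(3)   # ...and every later caller now sees [99, 0, 0]

# The fix debated on the ticket: cache an immutable object and hand each
# caller who wants to modify it a fresh mutable copy.
@lru_cache(maxsize=None)
def _zero_tuple(n):
    return (0,) * n

def zero_vector_safe(n):
    return list(_zero_tuple(n))
```

This is also why the "just copy it" fix is cheap to apply mechanically: each
call site that mutates the result wraps it in a copy, as with
`copy(MatrixSpace(ZZ, n).zero())` above.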

On Wed, 2010-02-17 at 00:30 +0100, Florent Hivert wrote:
 Hi there,
 
 I'm close to solving #8276: Make the one(), identity_matrix() and zero_matrix()
 cached and immutable. Correcting MatrixSpace is easy, however there are a lot
 of places in sage where people create a matrix from the one or zero and modify
 it after that. I nearly corrected all these occurrences. However I'm stuck with
 those three files which are very complicated and depend on each other:
 sage/modular/quatalg/brandt.py
 sage/algebras/quatalg/quaternion_algebra.py
 sage/schemes/elliptic_curves/heegner.py
 There are a lot of errors but I can't find where they're coming from.  I really
 could use the help of a specialist. The patches are on
 http://trac.sagemath.org/sage_trac/ticket/8276
 Apply them in the following order:
 trac_8276-MatrixSpace_one-fh.patch
 trac_8276-fix_sagelib-fh.patch
 Thanks for any suggestion.
 
 Cheers,
 
 Florent
 




[sage-devel] -fPIC in PARI and SELinux

2010-01-19 Thread Jonathan Bober
The Sage README.txt contains the text:

On Linux, if you get this error message:

 restore segment prot after reloc: Permission denied 

the problem is probably related to SELinux.

I got this error on a machine on which I don't have root access, so I
couldn't try the suggested workarounds. Searching the Internet, I
decided that this might have something to do with the -fPIC compiler
option, so I changed the PARI spkg to make sure that PARI compiled with
-fPIC, and this fixed the problem. (I didn't save the original error
message, but it mentioned libpari-gmp.)

Should PARI always be compiled with -fPIC? (Should I really be asking
this question to PARI developers who decided not to use PIC?) I don't
know much about this, but apparently -fPIC might cause some slowdown on
some systems. It seems that just about everything else in Sage is
compiled to be position independent, though (at least, nothing else
causes this problem).



[sage-devel] log(x) is really slow when x is a float

2010-01-12 Thread Jonathan Bober
I just posted http://trac.sagemath.org/sage_trac/ticket/7918 but I
thought I would also send an email here to catch the attention of
whomever made the change that caused this slowdown. Hopefully this will
be easy to fix.

Somewhere between sage 4.1 and sage 4.3, log(x) got really slow when x
is a float.

Example:

sage: version()
'Sage Version 4.3, Release Date: 2009-12-24'
sage: x = float(5)
sage: x
5.0
sage: timeit('log(x)')
625 loops, best of 3: 362 µs per loop

sage: version()
'Sage Version 4.1, Release Date: 2009-07-09'
sage: x = float(5)
sage: timeit('log(x)')
625 loops, best of 3: 7.26 µs per loop

I notice that the functionality of log() has improved since 4.1, but I
hope that this kind of performance penalty is not necessary. For
comparison, I'll also point out the performance of pure python (even
though this might not be fair), which is an order of magnitude faster
than sage 4.1:

sage: import math
sage: log = math.log
sage: x = float(5)
sage: timeit('log(x)') 
625 loops, best of 3: 715 ns per loop
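
One common cure for this kind of regression is a type-dispatch fast path:
check for the common concrete types first and fall back to the slow generic
machinery only when needed. A plain-Python sketch of the pattern (the
`slow_symbolic_log` stand-in is hypothetical, not Sage's actual code path):

```python
import math

def slow_symbolic_log(x):
    # Stand-in for the generic route through symbolic/coercion machinery,
    # which is where the ~360 microseconds per call would be spent.
    return math.log(float(x))

def log(x):
    # Fast path: hand the common concrete types straight to libm.
    if type(x) is float or type(x) is int:
        return math.log(x)
    # Everything else takes the general (slow) route.
    return slow_symbolic_log(x)
```

The fast path costs one type check for floats and ints, so it can approach
the raw `math.log` timing quoted above while leaving symbolic inputs intact.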




[sage-devel] Re: new virtualbox image!

2009-10-29 Thread Jonathan Bober

This might not be too relevant to OS X, since I am running Ubuntu, but
I'll put it here for the record anyway, in case it is useful to someone:

At some point when the virtualbox image wasn't working for me I
recompiled the kernel module this way. I think that I did it after
getting a more informative error message from virtualbox that suggested
it as a solution.

I also had the problem that I had to unload the KVM kernel modules
before virtualbox would work. I think that I installed KVM sometime in
the past to try it out, but I don't actually use it. (Virtualbox
actually gave me an intelligent error message here.)

I originally thought that the final thing that I did to make the
virtualbox image go from not working to working was increase the video
ram, but I have since tried decreasing the video ram, and the image
still works.

Also, somewhere in between I created a different virtualbox image to
install Windows on, and I may have rebooted my computer once or twice.

Anyway, these are at least some of the various things that I did, and
some combination of these actions made the image work when it originally
did not. I wish I had been more systematic so that I knew exactly what I
did.

Another thought that occurs to me is: What does virtualbox do when you
ask it to use VT-X/AMD-V and your processor does not support it? Will it
cause an error, or will virtualbox just ignore the option?

On Wed, 2009-10-28 at 23:58 -0400, Bill Page wrote:
 William, Marshall,
 
 Have you tried recompiling the Virtualbox kernel headers?
 
   $ sudo /etc/init.d/vboxdrv setup
 
 It also reloads the kernel module.
 
 I suppose that reasonably you should not have to do this immediately
 after an install but perhaps the install misses something subtle about
 the configuration.
 
 Regards,
 Bill Page.
 
 On Wed, Oct 28, 2009 at 11:33 PM, William Stein wrote:
 
  On Mon, Oct 26, 2009 at 9:18 AM, mhampton wrote:
 
  I can't seem to get virtualbox 3.0.8 to work on my mac.  Things work
  fine on my windows machine, I did a small test by adding the new
  biopython optional package and that worked well.  I am curious if
 anyone else can get virtualbox to work on OS X 10.4, or if it's
  something funny about my machine.
 
  -Marshall
 
  I just had exactly the same problem as you with the same error message
  on my OS X 10.6 macbook air laptop.  I did the following:
 
1. Rebooted.  That didn't fix it.
2. Downloaded VirtualBox (3.0.10 now -- they release more often
  than Sage!), and reinstalled from scratch.  That didn't help.
3. Reinstalled 3.0.10 *again* from scratch... and everything works
  now.  Weird.
 
  When I googled for results about this error message during the last
  week, all I found were emails from you.
 
 
 




[sage-devel] Re: new virtualbox image!

2009-10-27 Thread Jonathan Bober

I've just been trying this on Ubuntu and had the same problem.
Increasing the video ram in the virtual machine from 4 to 16 MB fixed
it, and now it works. (At least, I'm pretty sure that was the change
that fixed it.)

On Fri, 2009-10-23 at 14:01 -0700, mhampton wrote:
 I haven't tried it on Windows yet, but I tried on my intel 10.4 mac
 and got an error:
 
 Unknown error creating VM (VERR_INTERNAL_ERROR)
 Result Code: NS_ERROR_FAILURE (0x80004005)
 Component: Console Interface: IConsole {0a51994b-cbc6-4686-94eb-
 d4e4023280e2}
 
 -Marshall
 
 On Oct 23, 2:32 pm, William Stein wst...@gmail.com wrote:
  Hi,
 
  Please try out
 
 http://sage.math.washington.edu/home/wstein/binaries/sage_vbox-4.1.2-...
 
  Let me know what happens.  It addresses several issues.  Now you
  should be able to install new packages, build Cython/GCC/G++ code,
  change/develop Sage, etc.
 
  And, amazingly, it is only 2MB bigger than the old sage vbox.
 
  Extra thanks to Bill Page for helping with this.
 
   -- William
 
  --
  William Stein
  Associate Professor of Mathematics
  University of Washingtonhttp://wstein.org
 




[sage-devel] Re: pari slowness

2008-06-09 Thread Jonathan Bober

On Mon, 2008-06-09 at 10:19 -0700, mabshoff wrote:
 
 [...]
 No clue. Can you actually compare the gp binary from Sage directly
 with the timings from your self-built binary to eliminate the problem
 that libPari is involved here? If the gp binary in Sage is slower by a
 factor of three compared to the one you build this sounds like a bug
 to me. But it could also be conversion overhead for example.

Definitely could be conversion overhead. On my machine (warning: I'm
still running the ridiculously old Sage 2.10.2) I get

sage: time y = pari(4).bernfrac()
CPU times: user 4.14 s, sys: 0.00 s, total: 4.14 s
Wall time: 4.15
sage: type(y)
<type 'sage.libs.pari.gen.gen'>
sage: time x = Rational(y)
CPU times: user 1.50 s, sys: 0.00 s, total: 1.50 s
Wall time: 1.50

It looks like the conversion from sage.libs.pari.gen.gen to
sage.rings.rational.Rational just converts y to a string and then parses
the resulting string, which is why this takes so long.
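
The cost of a string round trip is visible even without PARI: converting a
large integer to its decimal string and parsing it back each take time
roughly quadratic in the number of digits, whereas a direct limb-level copy
is essentially linear. A quick plain-Python illustration:

```python
import time

n = 3 ** 8000                 # an integer with a few thousand decimal digits

t0 = time.perf_counter()
s = str(n)                    # binary -> decimal string: quadratic-time
m = int(s)                    # decimal string -> binary: quadratic again
string_roundtrip = time.perf_counter() - t0

t0 = time.perf_counter()
m2 = n + 0                    # direct arithmetic touching each limb once
direct = time.perf_counter() - t0

# The round trip preserves the value, but the asymptotics are the problem:
# doubling the size of n roughly quadruples the string-conversion cost.
```

A conversion that copied the PARI mantissa limbs directly into the GMP value
would avoid both quadratic steps.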





[sage-devel] Re: Graph planarity

2008-05-26 Thread Jonathan Bober

Since no one else has responded, I'll go ahead and say that I don't know
anything about this. Maybe this email will be a reminder to someone who
does know. (Or maybe any questions have already been taken care of off
list?)

On Fri, 2008-05-23 at 17:12 -0700, William Stein wrote:
 On Fri, May 23, 2008 at 4:43 PM, William Stein [EMAIL PROTECTED] wrote:
  On Fri, May 23, 2008 at 4:02 PM, Mike Hansen [EMAIL PROTECTED] wrote:
 
  Actually, it is not released under GPL.  It's currently licensed under
  Apache 2.0, which is GPL compatible.  It had previously been under
  Boyer's personal license.
 
  The Apache 2.0 license is not compatible with GPLv2.  I believe during
  Sage Days 7, he released it (at least to Sage) under the GPL.
 
 
  I think apache 2.0 is not GPLv2 compatible, but Apache 2.0 *is* GPLv2+
  compatible.
 
 
 I just talked about this with Michael, and we really don't want Apache code
 linked into Sage, at least not for our Microsoft Windows port.
 
 Emily, Robert, and Jon Bober - could you guys get documentation that the
 code has been relicensed GPLv2 and change the headers, or if not please
 contact the author?
 
  -- William
 
  
 
 





[sage-devel] some sort of plotting problem

2008-01-27 Thread Jonathan Bober

Does anyone know what's going on in the following example? I can't seem
to reproduce this with a simple example. Basically, I create a few
'callable symbolic expressions', and then define a lambda function that
calls one of them. 

Ultimately, I have a function

Q = lambda x : RR(bound(15000, 15000^x))

which takes a float and returns a float. (Well, if I change 'RR' to
'float' in the above code, I get the same result.) So, as far as I can
tell, the plot() function should just see a function that takes a float
and returns a float, and it should just plot it without complaining.

Also, if I change my callable symbolic expressions to lambdas, this
still doesn't work. 

But, if I replace Q with

Q = lambda x : RR(sin(x))

then everything works fine.

I suspect that I might be doing something wrong, since I can't reproduce
this with something simpler (and I don't like sending 'what is wrong
with my code'-type emails to sage-devel instead of sage-support), but
the same thing was happening to me yesterday and I just tried rewriting
from scratch, and I don't know what's going on. Maybe there is some sort
of bug in sage.

Here's the code, and the error.

sage: M = var('M')
sage: B = .561459483
sage: C1(M) = 1/(4*pi^2) * (B/(log(M))) * (1 - 1/(2 * log(M)^2))^2
sage: C2(M) = 2/(M - 1)^(1/2) * (log(M)^2)/B * (1 - 1/(2 * log(M)))^(-1) + 
1/(M-1)
sage: bound(N,M) = N * C1(M) - N^2 * C2(M)
sage: Q = lambda x : RR(bound(15000, 15000^x))
sage: Q(10)
2.21828479338329
sage: P = plot(Q, (1, 10))
---
&lt;type 'exceptions.TypeError'&gt;  Traceback (most recent call last)

/home/bober/&lt;ipython console&gt; in &lt;module&gt;()

/home/bober/sage/local/lib/python2.5/site-packages/sage/plot/plot.py in 
__call__(self, funcs, *args, **kwds)
   2394 # if there is one extra arg, then it had better be a tuple
   2395 elif n == 1:
--> 2396 G = self._call(funcs, *args, **kwds)
   2397 elif n == 2:
   2398 # if ther eare two extra args, then pull them out and pass 
them as a tuple

/home/bober/sage/local/lib/python2.5/site-packages/sage/plot/plot.py in 
_call(self, funcs, xrange, parametric, polar, label, **kwds)
   2470 del options['plot_division']
   2471 while i < len(data) - 1:
--> 2472 if abs(data[i+1][1] - data[i][1]) > max_bend:
   2473 x = (data[i+1][0] + data[i][0])/2
   2474 try:

&lt;type 'exceptions.TypeError'&gt;: 'float' object is unsubscriptable
sage: 
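The final error in the traceback comes from indexing into a list entry that should be an (x, y) pair but is a bare float. A minimal reproduction of that failure mode, in plain Python (the real plot internals differ, and modern Python words the error "not subscriptable" rather than "unsubscriptable"):

```python
def max_jump(data):
    """Mimics the adaptive-refinement loop in the traceback above:
    each entry of `data` is expected to be an (x, y) pair."""
    return max(abs(data[i + 1][1] - data[i][1]) for i in range(len(data) - 1))

# With well-formed pairs this works:
assert max_jump([(0, 0.0), (1, 2.0), (2, 3.0)]) == 2.0

# But if a bad callable leaks a bare float into the list, indexing
# it raises the TypeError reported above:
try:
    max_jump([(0, 0.0), 1.5])
except TypeError:
    pass  # 'float' object is (un)subscriptable
```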







[sage-devel] Re: generator inconsistencies in finite fields

2008-01-23 Thread Jonathan Bober

I just realized a source of my confusion. The docstring that I quoted
was not actually wrong in the way that I thought it was, but was
apparently deceptive (to me). Perhaps some people are already aware of
this, but GF(5), GF(25), and GF(5^100) are all different types, and so
have different docstrings, some of which have errors.

sage: F = GF(5)
sage: K = GF(5^2,'x')
sage: L = GF(5^100,'x')

sage: type(F)
&lt;class 'sage.rings.finite_field.FiniteField_prime_modn'&gt;
sage: type(K)
&lt;type 'sage.rings.finite_field_givaro.FiniteField_givaro'&gt;
sage: type(L)
&lt;class 'sage.rings.finite_field_ext_pari.FiniteField_ext_pari'&gt;

I earlier quoted the docstring for
sage.rings.finite_field_givaro.FiniteField_givaro. Assuming that 'small'
means 'implemented using givaro', I guess (from Martin's email) that for
fields F implemented using givaro F.gen() is always a multiplicative
generator. But for prime fields constructed with GF(), F.gen() is always
1, and for large fields, F.gen() is 'just' a root of the defining
polynomial, whatever that happens to be.

In fact, if I force the construction of a givaro-implemented prime
field, then the generator is not 1.

sage: H = sage.rings.finite_field_givaro.FiniteField_givaro(5)
sage: H.gen()
2
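The behavior above (the givaro prime field's gen() is 2, a multiplicative generator, while 1 only generates the additive group) can be checked with a brute-force sketch in plain Python; is_multiplicative_generator is a hypothetical helper, not a Sage function:

```python
def is_multiplicative_generator(g, p):
    """True if g generates the multiplicative group of GF(p), i.e.
    the powers of g hit every nonzero residue mod p."""
    seen = set()
    x = 1
    for _ in range(p - 1):
        x = (x * g) % p
        seen.add(x)
    return len(seen) == p - 1

# 2 is the smallest multiplicative generator mod 5, matching H.gen()
# above; 1 generates only the additive group.
assert is_multiplicative_generator(2, 5)
assert not is_multiplicative_generator(1, 5)
```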

So the state of the docstrings is

sage: sage.rings.finite_field.FiniteField_prime_modn.gen?
[...]
Docstring:

Return generator of this finite field.
(Nothing wrong here I suppose, but I think that it is always going to
return 1.)


sage: sage.rings.finite_field_givaro.FiniteField_givaro.gen?
[...]
Docstring:

Return a generator of self. All elements x of self are
expressed as log_{self.gen()}(p) internally. If self is
a prime field this method returns 1.

(The sentence "If self is a prime field..." is wrong, but the first
sentence is correct.)

sage: sage.rings.finite_field_ext_pari.FiniteField_ext_pari.gen?
[...]
Docstring:

Return chosen generator of the finite field.  This generator
is a root of the defining polynomial of the finite field, and
is guaranteed to be a generator for the multiplicative group.  

(This is wrong because in this case the generator is not guaranteed to
be a generator for the multiplicative group.)

I think that the ultimate source of my confusion is that GF is not a
class, but is actually a function that returns an instance of any one of
at least 3 different classes. I feel that if it's somehow possible I
would much prefer if FiniteField was a type all its own, and if all
finite fields were of type FiniteField, and the difference in
implementations was completely invisible. (I'm pretty sure that there
are other instances of this type of thing in sage.)

At the very least, there should probably be some sort of uniformity in
the docstrings, but I'm not sure exactly how this should work.

(Additionally, I was just thinking that it might be nice to have a
standard SEE ALSO: section in the documentation. For example

SEE ALSO: multiplicative_generator()

The online documentation could even automatically generate links for
this.)

Anyway, that was quite a long email. Congratulations and apologies if
you read it all.

-bober


On Tue, 2008-01-22 at 00:16 +, Martin Albrecht wrote:
  In this case, the docstring needs to be corrected, because the statement
  that All elements x of self are expressed as log_{self.gen()}(p)
  internally is not true, right? (Extrapolating from this sentence and my
  two examples led me to make my previous statements.) Probably it is true
  that all elements are expressible as polynomials in x, and perhaps this
  is also the internal representation.
 
 Hi,
 
 It is not. For small extension fields of order < 2^16 the elements are 
 represented using Zech logs internally, so for these finite fields the 
 docstring "all elements x of self are expressed as log_{self.gen()}(p) 
 internally" is true.
 
 Martin
 
 





[sage-devel] Re: generator inconsistencies in finite fields

2008-01-21 Thread Jonathan Bober

Ok, I was wrong. I'm convinced that sage has the correct behavior, which
I think is:

GF(q).gen() returns an element x of GF(q) such that the smallest
subfield of GF(q) containing x is GF(q).

In this case, the docstring needs to be corrected, because the statement
that "All elements x of self are expressed as log_{self.gen()}(p)
internally" is not true, right? (Extrapolating from this sentence and my
two examples led me to make my previous statements.) Probably it is true
that all elements are expressible as polynomials in x, and perhaps this
is also the internal representation.


On Mon, 2008-01-21 at 00:32 -0800, David Kohel wrote:
 Hi,
 
 It is probably a bias of the choice of (additive) generators for
 finite field
 extensions which results in the primitive field element also being a
 generator for the multiplicative group (confusingly called a
 primitive
 element of the finite field).
 
 It is not possible to set GF(q).gen() to always be a generator for
 the
 multiplicative group, since it is not always computationally feasible
 to
 determine it.  In particular, it requires a factorization of q-1.  The
 finite
 field constructor for non-prime fields would then also have to be
 changed to ONLY use primitive elements (since GF(q).gen() must
 certainly return the generator with respect to the defining
 polynomial).
 This would conflict with the desire to allow user-chosen polynomials
 or polynomials of SAGE's choice which are selected for sparseness
 (hence speed) rather than as generator of the multiplicative group.
 Moreover, any such definition for convenience of the user would
 have to be turned off for q > [some arbitrary bound] and could not
 be reliable.  Thus I don't see how it would be possible to make the
 definition that GF(q).gen() is a generator for the multiplicative
 group.
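The point about needing a factorization of q-1 can be made concrete for prime fields: g generates GF(q)^* iff g^((q-1)/p) != 1 for every prime p dividing q-1. A sketch in plain Python (all helper names are illustrative):

```python
def prime_factors(n):
    """Trial-division factorization; fine for the small example below,
    but this is exactly the step that becomes infeasible for large q."""
    factors, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

def generates_multiplicative_group(g, q):
    """g generates GF(q)^* iff g^((q-1)/p) != 1 for every prime p
    dividing q - 1; hence the factorization of q - 1 is required."""
    return all(pow(g, (q - 1) // p, q) != 1 for p in prime_factors(q - 1))

# In GF(7): 3 is a multiplicative generator, but 2 is not (2^3 = 1 mod 7).
assert generates_multiplicative_group(3, 7)
assert not generates_multiplicative_group(2, 7)
```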
 
 For small finite fields, one could set GF(q).gen() to be such a
 generator.  This could either be convenient, or confuse users into
 thinking that this is more generally True.
 
 --David
 
 P.S. On the other hand, I find (additive) order confusing and there
 should
 be some checking of the index for FF.i below (i.e. give an error if
 i != 0).
 
 sage: FF.&lt;x&gt; = GF(7^100);
 sage: x.order()
 7
 sage: x
 x
 sage: x.multiplicative_order()
 323447650962475799134464776910021681085720319890462540093389533139169145963692806000
 sage: FF.0
 x
 sage: FF.1
 x
 sage: FF.2
 x
 sage: FF.100
 x
 sage: FF.101
 x
  
 
 





[sage-devel] generator inconsistencies in finite fields

2008-01-20 Thread Jonathan Bober

I don't like the behavior illustrated below. Briefly, my problem is that
GF(p).gen() gives a generator for the additive group of GF(5), while
GF(p^n).gen() gives a generator for the multiplicative group of GF(p^n)
(n > 1).

I would file this 'complaint' directly as a trac bug report, but the
documentation is somewhat clear that this is what is _supposed_ to
happen - i.e., it says

sage: F.gen?
Type:   builtin_function_or_method
Base Class: &lt;type 'builtin_function_or_method'&gt;
String Form:    &lt;built-in method gen of
sage.rings.finite_field_givaro.FiniteField_givaro object at 0x9c22324&gt;
Namespace:  Interactive
Docstring:

Return a generator of self. All elements x of self are
expressed as log_{self.gen()}(p) internally. If self is
a prime field this method returns 1.

[...]

I also know that there is multiplicative_generator() method which always
does the right thing, but I still don't like this inconsistency.

Anyway, perhaps I'll turn my 'complaint' into a trac ticket, but I won't
bother if others don't consider this to be a bug.

-bober

p.s. The complaint illustrated:

sage: F = GF(25,name='x')
sage: x = F.gen()
sage: x # x generates the multiplicative group
x
sage: [x^i for i in range(25)]

[1,
 x,
 x + 3,
 4*x + 3,
 2*x + 2,
 4*x + 1,
 2,
 2*x,
 2*x + 1,
 3*x + 1,
 4*x + 4,
 3*x + 2,
 4,
 4*x,
 4*x + 2,
 x + 2,
 3*x + 3,
 x + 4,
 3,
 3*x,
 3*x + 4,
 2*x + 4,
 x + 1,
 2*x + 3,
 1]
sage: K = GF(5, name='y')
sage: y = K.gen()
sage: y # y generates the additive group
1
sage: y * y
1







[sage-devel] Re: Approximating Ei using polynomials

2008-01-17 Thread Jonathan Bober

I don't know the answer to this for certain, but (assuming you mean eq.
5.1.53 on page 231 - http://www.math.sfu.ca/~cbm/aands/page_231.htm ) it
looks to me like this might just be a 5th-degree polynomial interpolation
(in which case, it's of course not something that you can compute to
arbitrary precision without computing E1 to arbitrary precision first).
In particular, the first few terms are just small perturbations of the
coefficients of the power series in 5.1.11 on page 229.

The sage docstring for exponential_integral_1 (which is ultimately
implemented in PARI) says:

REFERENCE: See page 262, Prop 5.6.12, of Cohen's book A Course
in Computational Algebraic Number Theory.

where Cohen suggests using equation 5.1.11 from AS for x small (he says
x <= 4) and the continued fraction expansion - equation 5.1.22 - for x
large.
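For the small-x route, the A&S 5.1.11 power series is E1(x) = -gamma - ln x + sum_{n>=1} (-1)^(n+1) x^n / (n * n!). A sketch in plain Python (illustrative only; PARI's real implementation works to arbitrary precision):

```python
import math

EULER_GAMMA = 0.5772156649015329

def exp_integral_e1(x, terms=60):
    """E1(x) via the power series A&S 5.1.11, suitable only for small
    x (Cohen suggests roughly x <= 4); for large x one would switch
    to the continued fraction 5.1.22 instead."""
    s = 0.0
    term = 1.0
    for n in range(1, terms + 1):
        term *= -x / n          # term = (-x)^n / n!
        s -= term / n           # accumulates (-1)^(n+1) x^n / (n * n!)
    return -EULER_GAMMA - math.log(x) + s

# E1(1) = 0.219383934...
assert abs(exp_integral_e1(1.0) - 0.21938393) < 1e-6
```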


On Thu, 2008-01-17 at 10:17 -0600, [EMAIL PROTECTED] wrote:
 The AS handbook lists polynomial coefficients for approximation of E1,
 the exponential integral. Does anyone know how these coefficients were
 derived? Is it a chebyshev polynomial? I want to dynamically compute
 these coefficients to the required precision.
 
 Tim
 
  
 
 





[sage-devel] Re: Fun Things to do with a Sage Notebook

2007-10-27 Thread Jonathan Bober


On Sat, 2007-10-27 at 17:03 -0500, William Stein wrote:

 I think the public free Sage notebook should be configured so that
 the sageXX accounts cannot open sockets to the outside world.  Period.
 If I knew how to configure this in < 30 minutes, I would have done it already.

I think that this should be very simple with iptables, provided that the
kernel has been compiled with the right options, and that the relevant
options are --uid-owner and --gid-owner. Unfortunately, I don't know any
more than that, though. (So what I really mean is that this might be
really simple for an iptables expert, or even for someone with a little
bit of iptables knowledge.)





[sage-devel] problem with showing plots from command line

2007-10-23 Thread Jonathan Bober

Hi folks.

I tried to send the following email a few hours ago (but I put the email
address in wrong) and I don't feel like rewriting it all. So I should
add to it now that I have upgraded sage (so I am running 2.8.8.1) and
the error still occurs.

Perhaps I should open a ticket for this, but perhaps I should also wait
to see if anyone else can duplicate this problem.

-

Hi all.

I apologize for not upgrading to the latest version of sage before
writing this email, but I didn't see any tickets about this since
2.8.7.2, so perhaps no one else has had this problem. (And I am
upgrading now, but if I don't send this email now, I might forget about
it for a few days).

Anyway, the following takes place on an Intel Core Duo (not Core 2, so
just 32 bits) newly upgraded to Ubuntu 7.10, and running sage 2.8.7.2. 

Briefly, plot().show() isn't working for me. After looking into the
problem a little bit, I modified the code for show() so that it would
print out the command line that it was using, along with any errors. I
get the following:

sage: f(x) = x*exp(-4*x)
sage: P = f.plot()
sage: P.show()
xdg-open /home/bober/.sage//temp/bober/11314//tmp_0.png 
sage: eog: symbol lookup error: /usr/lib/libxml2.so.2: undefined symbol:
gzopen64

So there is some sort of library problem. But if I execute the same
command from a plain shell:

[EMAIL PROTECTED]:~$  xdg-open /home/bober/.sage//temp/bober/11314//tmp_0.png

eog opens the file just fine. So it seems that sage is somehow screwing
up library search paths, or perhaps when eog is run from within sage it
is trying to use some libraries that sage provides and some libraries
that Ubuntu provides, and those libraries don't play nice together. Or
something like that is going on.

plot().show() also does not work anymore for me when running sage 2.8.2,
and it certainly used to before I upgraded Ubuntu, so it is probably
safe to assume that it is getting the same error, and that this error
was caused by the upgrade. (I haven't actually taken the time to verify
that eog gives the same library error when run from 2.8.2, however.)

This is probably beyond my ability to debug any further without spending
many hours, so hopefully someone knows how to fix this easily, or can at
least point me in the right direction.






[sage-devel] Re: problem with showing plots from command line

2007-10-23 Thread Jonathan Bober

 Could you give us the output from 'echo $LD_LIBRARY_PATH' and 'ldd
 xdg-open' with and without sourcing sage-env. If you add an 'echo
 $LD_LIBRARY_PATH' right before xdg-open is called in P.show()
 
 From the name of the symbol I would guess that it is a libz
 incompatibility. There was a patch to the launch code for firefox/
 iceweasel that malb made because of a similar issue. Maybe we need to
 reset LD_LIBRARY_PATH to the old value before we modify it in sage-env
 in case we launch external helper applications.

LD_LIBRARY_PATH (right before xdg-open is called) is

/home/bober/sage-2.8.7.2/local/lib/openmpi:/home/bober/sage-2.8.7.2/local/lib/:

It isn't actually set to anything normally (outside of sage-env),
however. ldd xdg-open won't tell anything (actually, it says that it
isn't a dynamic executable) because xdg-open just launches the preferred
application. However, when executed from within sage, ldd /usr/bin/eog
yields the following possibly offending lines. (The full output is at
the end of this email.)

libgnutls.so.13 => /home/bober/sage-2.8.7.2/local/lib/libgnutls.so.13 
(0xb6f0f000)
libfreetype.so.6 => /home/bober/sage-2.8.7.2/local/lib/libfreetype.so.6 
(0xb6e36000)
libz.so.1 => /home/bober/sage-2.8.7.2/local/lib/libz.so.1 (0xb6e2)
libpng12.so.0 => /home/bober/sage-2.8.7.2/local/lib/libpng12.so.0 
(0xb6df3000)
libgcrypt.so.11 => /home/bober/sage-2.8.7.2/local/lib/libgcrypt.so.11 
(0xb6d25000)
libgpg-error.so.0 => 
/home/bober/sage-2.8.7.2/local/lib/libgpg-error.so.0 (0xb6d21000)

I realize now that I don't have to go through all of this source editing
to find the problem:

sage: !eog
eog: symbol lookup error: /usr/lib/libxml2.so.2: undefined symbol: gzopen64

sage: !gimp
gimp: symbol lookup error: /usr/lib/libcairo.so.2: undefined symbol: 
FT_Library_SetLcdFilter

sage: !gvim
gvim: symbol lookup error: /usr/lib/libcairo.so.2: undefined symbol: 
FT_Library_SetLcdFilter

And the problem extends beyond library path errors into python path
errors:

sage: !glchess
Traceback (most recent call last):
  File "/usr/games/glchess", line 18, in &lt;module&gt;
from glchess.glchess import start_game
ImportError: No module named glchess.glchess


However, calling

os.system("LD_LIBRARY_PATH='' eog")
or
os.system("LD_LIBRARY_PATH='' gimp")

works just fine, so a solution might be to just unset LD_LIBRARY_PATH
before calling external applications. But there might be a better
solution, because I would like for !eog to work, for example. (Well,
that's not something that I ever use, but it should work anyway.)
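The workaround above (launching helpers with LD_LIBRARY_PATH cleared) can be sketched with the modern subprocess module rather than os.system; this is an illustration of the idea, not how Sage actually launches its viewers:

```python
import os
import subprocess
import sys

def launch_with_clean_libpath(cmd):
    """Run an external helper without Sage's LD_LIBRARY_PATH, so it
    links against the system libraries it was built for."""
    env = os.environ.copy()
    env.pop("LD_LIBRARY_PATH", None)
    return subprocess.run(cmd, env=env, capture_output=True, text=True)

# The child process no longer sees LD_LIBRARY_PATH:
result = launch_with_clean_libpath(
    [sys.executable, "-c",
     "import os; print('LD_LIBRARY_PATH' in os.environ)"])
assert result.stdout.strip() == "False"
```

Clearing only this one variable (rather than the whole environment) keeps the rest of the Sage environment available to the helper.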


output from ldd /usr/bin/eog, just in case there is something I missed:

linux-gate.so.1 =>  (0xe000)
/usr/lib/fglrx/libGL.so.1.2.xlibmesa (0xb7ee9000)
libpython2.5.so.1.0 => /usr/lib/libpython2.5.so.1.0 (0xb7d9a000)
libpthread.so.0 => /lib/tls/i686/cmov/libpthread.so.0 (0xb7d81000)
libglade-2.0.so.0 => /usr/lib/libglade-2.0.so.0 (0xb7d69000)
liblaunchpad-integration.so.0 => /usr/lib/liblaunchpad-integration.so.0 
(0xb7d65000)
libgnome-desktop-2.so.2 => /usr/lib/libgnome-desktop-2.so.2 (0xb7d5)
libgnomeui-2.so.0 => /usr/lib/libgnomeui-2.so.0 (0xb7cc4000)
libgnomevfs-2.so.0 => /usr/lib/libgnomevfs-2.so.0 (0xb7c6a000)
libgnome-2.so.0 => /usr/lib/libgnome-2.so.0 (0xb7c55000)
libart_lgpl_2.so.2 => /usr/lib/libart_lgpl_2.so.2 (0xb7c3f000)
libgconf-2.so.4 => /usr/lib/libgconf-2.so.4 (0xb7c0d000)
libgthread-2.0.so.0 => /usr/lib/libgthread-2.0.so.0 (0xb7c08000)
libgtk-x11-2.0.so.0 => /usr/lib/libgtk-x11-2.0.so.0 (0xb7883000)
libgdk-x11-2.0.so.0 => /usr/lib/libgdk-x11-2.0.so.0 (0xb77fb000)
libgdk_pixbuf-2.0.so.0 => /usr/lib/libgdk_pixbuf-2.0.so.0 (0xb77e3000)
libcairo.so.2 => /usr/lib/libcairo.so.2 (0xb776c000)
libX11.so.6 => /usr/lib/libX11.so.6 (0xb767b000)
libgmodule-2.0.so.0 => /usr/lib/libgmodule-2.0.so.0 (0xb7677000)
libexif.so.12 => /usr/lib/libexif.so.12 (0xb764d000)
liblcms.so.1 => /usr/lib/liblcms.so.1 (0xb761b000)
libm.so.6 => /lib/tls/i686/cmov/libm.so.6 (0xb75f6000)
libdbus-glib-1.so.2 => /usr/lib/libdbus-glib-1.so.2 (0xb75db000)
libgobject-2.0.so.0 => /usr/lib/libgobject-2.0.so.0 (0xb75a)
libglib-2.0.so.0 => /usr/lib/libglib-2.0.so.0 (0xb74e3000)
libjpeg.so.62 => /usr/lib/libjpeg.so.62 (0xb74c3000)
libc.so.6 => /lib/tls/i686/cmov/libc.so.6 (0xb7378000)
libxml2.so.2 => /usr/lib/libxml2.so.2 (0xb725a000)
libXext.so.6 => /usr/lib/libXext.so.6 (0xb724c000)
libXxf86vm.so.1 => /usr/lib/libXxf86vm.so.1 (0xb7247000)
libXdamage.so.1 => /usr/lib/libXdamage.so.1 (0xb7244000)
libXfixes.so.3 => /usr/lib/libXfixes.so.3 (0xb723f000)
libdl.so.2 => /lib/tls/i686/cmov/libdl.so.2 (0xb723a000)
libdrm.so.2 => /usr/lib/libdrm.so.2 (0xb723)
libutil.so.1 => /lib/tls/i686/cmov/libutil.so.1 (0xb722c000)
/lib/ld-linux.so.2 

[sage-devel] Re: number_of_partitions

2007-10-14 Thread Jonathan Bober

On Sun, 2007-10-14 at 15:13 -0700, William Stein wrote:
 On 10/14/07, Jonathan Bober [EMAIL PROTECTED] wrote:
 
  I'm not sure right now, but I'm thinking about it.
 
 OK, your new code on x86_64 gets essentially every single
 number_of_partitions(n) wrong for 242 <= n <= 2833, and
 seems right for everything else.
 

I think that the fix (for x86_64) should be to add the following code
around line 462, right at the beginning of the function
compute_current_precision().

// n = the number for which we are computing p(n)
// N = the number of terms that have been computed so far

// if N is 0, then we can't use the above formula (because we would be
// dividing by 0).
if(N == 0) return compute_initial_precision(n) + extra;

It is actually odd that I am getting the correct answer on my (x86_32)
laptop.

This might also affect powerpc, but should only matter for small input.
For large input, disabling long doubles might help, but I'm not sure.
(The previous version of this code used long doubles as well, but the
original version did not.)

Also, for small input this algorithm might not be the best option (and
my code certainly is not optimized for small input), so it might
actually be better to use Pari if the sage -> pari -> sage conversion
is fast enough.
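The proposed guard can be sketched in Python. All names and the fallback formula below are illustrative stand-ins for the real C++ (terms_computed plays the role of N); the point is only that the error-based precision formula divides by N, so N == 0 must be special-cased:

```python
import math

def compute_current_precision(n, terms_computed, extra=0,
                              initial_precision=None):
    """Sketch of the guard described above: when no terms have been
    computed yet (N == 0), the error formula below would divide by
    zero, so fall back to the initial precision estimate."""
    if terms_computed == 0:
        return (initial_precision or 53) + extra
    # Illustrative stand-in for the real error formula, which divides
    # by the number of terms computed so far:
    return max(1, int(math.log(n + 2) / terms_computed)) + extra

# The guard keeps the N == 0 call well-defined:
assert compute_current_precision(1000, 0, extra=10) == 63
assert compute_current_precision(1000, 5) >= 1
```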





[sage-devel] Re: number_of_partitions

2007-10-14 Thread Jonathan Bober

I'm not sure right now, but I'm thinking about it.

You could try setting

long_double_precision = double_precision

wherever it is initialized. (This is around line 140 somewhere.) If you
do this it will just skip the part of the computation where it uses long
doubles (For some reason I have a feeling that there might be something
funny about the long double type on PPC OS X - but I don't know what
that reason is, so I could be wrong.)

Similarly, you could play with setting qd_precision = dd_precision to
skip the part of the computation with quad_doubles, or set everything to
double_precision, so that it only uses either mpfr or standard doubles,
etc.

Another thing to try is using just a little bit more precision. See
around line 572, in the function compute_extra_precision(). You could
turn the 5 into, say, a 15 or a 30 to see what happens. If that works,
then you can try to experiment to see what the smallest number that
works is.

On Sun, 2007-10-14 at 14:40 -0600, William Stein wrote:
 Hi Jon,
 
 Your number_of_partitions code is in the current sage-2.8.7.rc1,
 and it works on all but one machine we tested in on.  Unfortunately
  -- JUST AS YOU SUSPECTED -- it doesn't work on PPC OS X.  It runs,
 but gives wrong answers.  Any ideas how to fix your code to still
 work on OS X PPC, even slowly?
 
 William
 






[sage-devel] Re: number_of_partitions

2007-10-12 Thread Jonathan Bober

I've been meaning to get around to pointing these out:

http://trac.sagemath.org/sage_trac/ticket/468

http://trac.sagemath.org/sage_trac/ticket/486

The first contains a patch to fix the problem where quad_double screws
up the fpu precision on x86 machines, and the second contains a patch to
significantly speed up number_of_partitions() (the second patch won't
work properly unless the first patch is applied, though.)

The main reason that I haven't really pointed the patches out is that I
haven't tried them on a recent version of sage, and they definitely need
to be tested on a few different machines before they are included in
sage, because I'm not sure if the code will work on all computers.

The speedup is quite significant, though, and any problems should be
minor.

(By the way, the next thing to do, of course, is to put in support
for bigger numbers. Right now it is limited to at most the size of an
int, and I'm not actually sure if it works correctly for large ints.
Maybe I should add a trac ticket for this, but, then again, is there any
chance that anyone would ever really care enough to let a computer spend
a week, or however long it would take, calculating the number of
partitions of, say, 10^100?)
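For cross-checking a fast implementation like the one discussed, Euler's pentagonal number recurrence gives exact values with no floating-point precision concerns. This is NOT the Rademacher-formula code in the patch; it is just a slow, exact reference:

```python
def number_of_partitions_list(n):
    """Exact p(0..n) via Euler's pentagonal number recurrence:
    p(m) = sum_k (-1)^(k+1) [p(m - k(3k-1)/2) + p(m - k(3k+1)/2)]."""
    p = [1] + [0] * n
    for m in range(1, n + 1):
        total, k = 0, 1
        while True:
            g1 = k * (3 * k - 1) // 2   # generalized pentagonal numbers
            g2 = k * (3 * k + 1) // 2
            if g1 > m:
                break
            sign = -1 if k % 2 == 0 else 1
            total += sign * p[m - g1]
            if g2 <= m:
                total += sign * p[m - g2]
            k += 1
        p[m] = total
    return p

# Classic values: p(5) = 7, p(100) = 190569292.
parts = number_of_partitions_list(100)
assert parts[5] == 7
assert parts[100] == 190569292
```

Python's arbitrary-precision integers make this exact for any n, though the O(n^1.5) work and O(n) memory make it impractical far before the huge inputs mentioned above.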

On Fri, 2007-10-12 at 09:14 -0700, William Stein wrote:
 Jon,
 
 Is there an update on your number_of_partitions code?
 
 I'm going to give a plenary talk at an AMS meeting tomorrow,
 and your fast number_of_partitions code will be one of
 my examples.  I had the feeling you were about to release
 a version that was faster than the one in Sage a month or
 two ago, but somehow it didn't happen.
 
 William
 





[sage-devel] object initialization in cython

2007-08-30 Thread Jonathan Bober

Hi. In integer.pyx I see a lot of code such as:

cdef Integer z
z = PY_NEW(Integer)
# [...] mpz_somethingorother(z.value, somethingorother)

Is it necessary to explicitly use PY_NEW like this?

I'd like to write code that casts a number to an Integer object. For
example, in writing a function Integer.number_of_digits() I'd like to
write

cdef Integer size
size = (Integer)(mpz_sizeinbase(self.value, base))

(One reason that I want to do it this way, other than simplicity, is
that I don't necessarily know what type a size_t is.)

Similarly, if x is an int and y and z are Integers, is it OK to write

z = Integer(x)**y

Anyway, this point isn't really why I want to write it this way. The
question is really, is there any reason I shouldn't write it this way?
Memory leaks? Efficiency?

Maybe the real question is: In Cython code, when do I need to take care
of memory management myself, and when can I count on the garbage
collector?





[sage-devel] Re: quaddouble in sage, and other floating point issues

2007-08-23 Thread Jonathan Bober

See ticket number 468.

http://trac.sagemath.org/sage_trac/ticket/468

I believe that this is fixed - at least the short easy part.

I'm running ./sage -t right now to make sure that no new problems have
been introduced. Someone should probably spend a few minutes looking at
the patch to make sure that it doesn't do anything stupid, though, as
this is my first real attempt at touching pyrex/sagex/cython code.


On Wed, 2007-08-15 at 15:10 -0700, William Stein wrote:
 On 8/15/07, Jonathan Bober [EMAIL PROTECTED] wrote:
 
  I am willing to rewrite the quad double wrapper for part (2), but it's
  possible that it will be at least a little while before it happens. It
  shouldn't be too difficult, though, so it might happen soon.
 
  Part (1) probably requires dealing with autotools stuff (unless it can
  be wrapped with distutils somehow) and I don't know how to do it right
  now. (But maybe it's a good time to learn.)
 
 Excellent.  Can you put something in the trac that describes the
 issues, solutions, and that you're going to work on it?
 
 http://www.sagemath.org:9002/sage_trac
 
 If you need a trac login email me.
 
  -- William
 
  
 
 





[sage-devel] Re: quaddouble in sage, and other floating point issues

2007-08-15 Thread Jonathan Bober

On Wed, 2007-08-15 at 09:13 -0700, William Stein wrote:
 On 8/14/07, William Stein [EMAIL PROTECTED] wrote:
  On 8/14/07, cwitty [EMAIL PROTECTED] wrote:
   On Aug 14, 12:59 am, Jonathan Bober [EMAIL PROTECTED] wrote:
This is exactly what NTL does in its quad float class. Just about every
function starts and ends with a macro to adjust the fpu, resulting in
around 7 extra assembly instructions. In the following code, the
overhead is quite significant - it takes around 21 seconds to execute on
my machine, but only about 4 seconds without the START_FIX and END_FIX.
Of course, this is not necessarily any sort of accurate test, but it
does indicate that this can be an expensive operation.
  
   Yes, changing the floating-point modes is very slow on many (all?) x86
   processors.  I believe it flushes the floating-point pipeline, which
   takes many clock cycles.
 
  OK, how about this plan:
 
  (1) On systems with sse2, we do the option 3a (which is If a
  processor supports sse2,
  then passing gcc -march=whatever -msse2 -mfpmath=sse (maybe the -march
  isn't needed) will cause gcc to use sse registers and instructions for
  doubles, and these have the proper precision.)
 
  (2) On systems without sse2 (old slow pentium 3's) we do the START_FIX
  and END_FIX.  These computers are very slow anyways, so let them suffer
  (and the suffering is *only* for code that uses quaddouble, which is very 
  little
  code anyways).
 

I am willing to rewrite the quad double wrapper for part (2), but it's
possible that it will be at least a little while before it happens. It
shouldn't be too difficult, though, so it might happen soon.

Part (1) probably requires dealing with autotools stuff (unless it can
be wrapped with distutils somehow) and I don't know how to do it right
now. (But maybe it's a good time to learn.)

 Since nobody objected, can somebody volunteer to implement this? :-)
 
 William
 
  
 
 


--~--~-~--~~~---~--~~
To post to this group, send email to sage-devel@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sage-devel
URLs: http://sage.scipy.org/sage/ and http://modular.math.washington.edu/sage/
-~--~~~~--~~--~--~---



[sage-devel] Re: quaddouble in sage, and other floating point issues

2007-08-13 Thread Jonathan Bober

On Mon, 2007-08-13 at 10:25 -0700, cwitty wrote:
 On Aug 11, 6:03 pm, Jonathan Bober [EMAIL PROTECTED] wrote:
  I have just noticed that using the C type long double from within sage
  doesn't work the way that I've expected it to.
 
  The issue is a little complicated, and other people on this list
  probably know more about it than I do, but, briefly, the problem stems
  from the fact that x86 fpu registers support extended precision (64 bit
  mantissa), while a C double is just double precision (53 bit mantissa).
  This causes unexpected results sometimes because floating point
  arithmetic will be done on the processor at extended precision, and then
stored in memory at double precision, which means that the value of a
  double could theoretically change at any time. This is very bad for
  packages like quaddouble.
 
  It is possible to set the fpu control word so that the fpu only uses
  double precision, which is one way to solve this problem. Apparently,
  this is done by default on Windows, but not in Linux (and I have no idea
  about OS X). One problem with this, however, at least on Linux, is that
  long doubles no longer have 64 bits of precision.
 
  It seems that, perhaps accidentally, sage is setting the fpu to use
  double precision.
 ...
  There are a few possible ways to fix this, I think.
 
  (1)One way is to rewrite the quaddouble wrapper. Then the fpu control
  word would need to be saved, set, and reset every time arithmetic was
  done on a quad double (not a good thing, probably).
 

I took a look at the NTL quad float package, and it actually does
exactly this. (NTL does this, not the sage wrapper.)

  (2)Another possibility is to decide that it is a good idea to set the
  fpu to just use double precision, and to make sure that no sage
  libraries ever expect extended precision. This might actually be the
  only really cross platform solution that isn't lots of work -- I'm not
  sure how long doubles will work on Windows or on OS X.
 
 Note that the transcendental functions (trigonometric, etc.) in libm
 on my Debian Linux box are much more accurate if the fpu is in
 extended precision mode than if the fpu is in double precision mode.
 (If you're using RDF floating-point numbers in SAGE, then SAGE will
 use the gsl_ routines instead of the libm routines.  I don't know what
 the effect of extended precision is on the gsl_ routines.  If you're
 using RealField floating-point numbers, then SAGE uses the mpfr
 routines, and the fpu is not used at all.)

  (3a)What I think might be the best idea, at least on Linux, is to change
  the compilation settings for quad double so that the fpu fix is not
  needed. There are two ways to do this: If a processor supports sse2,
  then passing gcc -march=whatever -msse2 -mfpmath=sse (maybe the -march
  isn't needed) will cause gcc to use sse registers and instructions for
  doubles, and these have the proper precision. In fact, gcc already does
  this by default for x86-64 cpus, so the quad double package doesn't even
  need the fpu fix on those architectures. Also, this has the added
  benefit of being faster.
 
 Yes, this certainly sounds like a good idea.  In fact, most of SAGE
 should be built this way whenever possible.  (This might mean having
 two different binary downloads for x86 Linux, though.)
 
 Do people use pre-SSE2 x86 processors to run SAGE?  (According to
 http://en.wikipedia.org/wiki/SSE2, Intel added SSE2 with the Pentium 4
 in 2001, and AMD added SSE2 with their 64-bit processors in 2003.)
 
  (3b)For processors that don't support sse2, gcc can be passed the
  -ffloat-store option, which fixes the problem by storing doubles in
  memory after every operation, ensuring that they are always correctly
  rounded to double precision. This slows things down a little bit, but
  would probably be much simpler than option (1).
 
 Unfortunately this doesn't work.  The float-store option rounds all
 numbers to double precision, avoiding the problem where a number
 invisibly changes in the middle of a computation; but it does not
 correctly round to double precision, so it is not usable for quad
 double.
 

Are you certain that it is not usable? The NTL quad float package falls
back on declaring all doubles volatile when it doesn't know how to
change the fpu control word, which should give the same behavior as the
-ffloat-store option. From the source comments:

The third way to fix the problem is to 'force' all intermediate
floating point results into memory.  This is not an 'ideal' fix,
since it is not fully equivalent to 53-bit precision (because of 
double rounding), but it works (although to be honest, I've never seen
a full proof of correctness in this case).

I just tried compiling quad double with the -ffloat-store and with the
fpu fix turned off, and it seems to be working. (The tests are still
running, but a bunch have passed already.)

  Personally, I think that I like options 3a and 3b. These would probably
  require rewriting

[sage-devel] Re: quaddouble in sage, and other floating point issues

2007-08-13 Thread Jonathan Bober


 I just tried compiling quad double with the -ffloat-store and with the
 fpu fix turned off, and it seems to be working. (The tests are still
 running, but a bunch have passed already.)

What I didn't realize when I wrote this email is that the tests should
only take a few seconds to run. Turning off the fpu fix and turning on
the -ffloat-store option causes one of the tests to run forever, or at
least for more than a few minutes (instead of  1 second).

So -ffloat-store doesn't work. On cpus without sse2, then, the
quaddouble wrapper needs to use the fpu fix, but it should probably be
rewritten so that it doesn't affect other things.





[sage-devel] quaddouble in sage, and other floating point issues

2007-08-11 Thread Jonathan Bober

I have just noticed that using the C type long double from within sage
doesn't work the way that I've expected it to. 

The issue is a little complicated, and other people on this list
probably know more about it than I do, but, briefly, the problem stems
from the fact that x86 fpu registers support extended precision (64 bit
mantissa), while a C double is just double precision (53 bit mantissa).
This causes unexpected results sometimes because floating point
arithmetic will be done on the processor at extended precision, and then
stored in memory at double precision, which means that the value of a
double could theoretically change at any time. This is very bad for
packages like quaddouble.

It is possible to set the fpu control word so that the fpu only uses
double precision, which is one way to solve this problem. Apparently,
this is done by default on Windows, but not in Linux (and I have no idea
about OS X). One problem with this, however, at least on Linux, is that
long doubles no longer have 64 bits of precision.

It seems that, perhaps accidentally, sage is setting the fpu to use
double precision.

sage/rings/real_rqdf.pyx contains the following code:

cdef class RealQuadDoubleField_class(Field):
    """
    Real Quad Double Field
    """

    def __init__(self):
        fpu_fix_start(&self.cwf)

    def __dealloc__(self):
        fpu_fix_end(&self.cwf)

    [etc]

__dealloc__() is never called until sage exits, however, since a global
instance of this class is created at startup. This means that all
computations with long doubles in the libraries that sage uses will not
have the expected precision, at least on Linux. (I noticed this while
working on the partition counting code -- the current version in sage
probably won't produce wrong answers because it already is using too
much precision, but this causes problems on the new version that I have
been working on.)

There are a few possible ways to fix this, I think. 

(1)One way is to rewrite the quaddouble wrapper. Then the fpu control
word would need to be saved, set, and reset every time arithmetic was
done on a quad double (not a good thing, probably).

(2)Another possibility is to decide that it is a good idea to set the
fpu to just use double precision, and to make sure that no sage
libraries ever expect extended precision. This might actually be the
only really cross platform solution that isn't lots of work -- I'm not
sure how long doubles will work on Windows or on OS X.

(3a)What I think might be the best idea, at least on Linux, is to change
the compilation settings for quad double so that the fpu fix is not
needed. There are two ways to do this: If a processor supports sse2,
then passing gcc -march=whatever -msse2 -mfpmath=sse (maybe the -march
isn't needed) will cause gcc to use sse registers and instructions for
doubles, and these have the proper precision. In fact, gcc already does
this by default for x86-64 cpus, so the quad double package doesn't even
need the fpu fix on those architectures. Also, this has the added
benefit of being faster.

(3b)For processors that don't support sse2, gcc can be passed the
-ffloat-store option, which fixes the problem by storing doubles in
memory after every operation, ensuring that they are always correctly
rounded to double precision. This slows things down a little bit, but
would probably be much simpler than option (1).

Personally, I think that I like options 3a and 3b. These would probably
require rewriting the configure script for quad double. I don't know how
to do that, but it probably isn't that hard.

Option 2 might be better, though, depending on how Windows (I know sage
doesn't work on Windows now, but it would be best not to make it any
harder to port to Windows in the future -- also, I'm not sure whether
this affects vmware) and OS X work. Hopefully other people on this list know
more about floating point arithmetic issues than I do.

Anyway, I mostly just wanted to point out the issue, since other people
likely know more about it than I do. I don't know how this email got to
be so long.

-Jon





[sage-devel] Re: docprofile (was: bugs, bugs, bugs)

2007-08-10 Thread Jonathan Bober

On Fri, 2007-08-10 at 12:52 -0700, William Stein wrote:

 Here's a crazy idea.  We decide on an extension to the doctest syntax
 that states that a doctest fails if it takes over a certain amount of time
 to run on a machine with at least a 2Ghz processor. For example,
 
 sage: 2 + 2   # takes at most 5s
 
 In doctesting, if the comment includes the phrase "takes at most [xxx]s",
 then local/bin/sage-doctest will run the doctest but also time it, and if it
 takes more than [xxx] seconds, it flags it as an error.
 
 To implement the above, we could replace the above line by the
 following Python equivalent lines (which gets run by Python's doctest):
 
  t = walltime()
  2 + 2
  if FASTCPU and walltime(t) > 5: assert False, "test xxx takes too long to run"
 
 What do you think?   This has the feel of a good but easy to use idea
 to me, which other
 people will poke full of holes, but with sufficient discussion might
 lead to something
 actually useful.
 
 William

This doesn't sound like a good idea to me, if for no other reason than
the fact that either FASTCPU will need to change over time, which will
require updating the time it is supposed to take for all the tests to
run, or the tests will have to be run on a specific machine to really
have any meaning.

Here is a high level description of another possible idea, ignoring most
implementation details for a moment. To test the speed of my SAGE
installation, I simply run a function benchmark(). This runs lots of
test code, and probably takes at least an hour to run, and returns a
Benchmark object, say, which contains lots of information about how long
various tests took to run.

Now if I want to compare the speed to a different SAGE installation, I
can load a Benchmark instance from disk, as in:

sage: b1 = benchmark()
sage: b2 = load('sage2.6.4-benchmark')
sage: b1
(Benchmark instance created with SAGE version:2.7.3; branch: sage-main;  
processor: something-or-other)
sage: b2
(Benchmark instance created with SAGE version:2.6.4; branch: sage-main;  
processor: something-or-other)
sage: b1.compare(b2)
--The following tests ran faster under version 2.7.3 ...
[some information about tests and timings.]
[...]
--The following tests ran faster under version 2.6.4 ...
[more info]

or maybe instead

sage: b1.compare(b2)
testname        time1   time2   difference
...             ...     ...     ...
...             ...     ...     ...
(etc)

An automated test could then be written to pick up on things that
significantly slow down between releases. For example, maybe when sage
-test is run, it can be supplied with timing data from a previous run,
and produce warnings if anything is slower that it used to be.





[sage-devel] Re: Quitting ignored worksheet vs. long computations

2007-08-10 Thread Jonathan Bober


 
  Obviously, this option should not exist on servers that serve a wide
  range of users.
  It's really for people doing big computations on their own set of
  computers.
 
 OK.

Just a thought - it might be even better to have this option available
or not on a per-user basis. In fact, though I haven't used the notebook
much, I imagine that it might be nice in general to have a way to assign
different privileges to different users. (For example, perhaps user was
ought to be able to run a computation on the public notebook on
sage.math that is going use 100% load on 8 processors for 3 weeks,
whereas user bober, if he exists, shouldn't be able to do that.)





[sage-devel] Re: ams notices opinion column

2007-08-09 Thread Jonathan Bober

I've been meaning to reply to this, since I've been specifically
mentioned. Sorry for the delay.

On Sun, 2007-08-05 at 11:49 -0700, William Stein wrote: 
 On 8/5/07, Justin C. Walker [EMAIL PROTECTED] wrote:
   On 8/4/07, Alec Mihailovs [EMAIL PROTECTED] wrote:
   From: Alec Mihailovs [EMAIL PROTECTED]
  
   It actually may be true (regarding to Mathematica). Their code
   may be very
   poorly commented (and AFAICT it is.)
  
   And the motivation for doing that is very simple. If you (a
   developer) do it
   that way, so that you are the only one who could understand the
   code - less
   chances that you get fired.
  
   Scary.  Unfortunately, you're quite right.
 

I also disagree with this, although not from much real job experience.
Probably, most code is poorly documented. This might just be because
documentation is often just an afterthought, and perhaps a boring
afterthought.

   I think Jon Bober's code for number of partitions is a very nice
   example
   of how open source is so much better.
 
  Jon's code is a very good example of what everyone wants from code.
  It certainly is not an example of open vs. closed source.
 
 But it is -- at least -- an example of open source code.   The corresponding
 closed source code is whatever implements that function in Mathematica,
 and I can't give it as an example to compare with, since I can't look at it.
 
 Only Jon can really answer this (Jon?) but I tend to
 suspect -- and maybe I am wrong -- that Jon wrote
 his code the way he did, because he knows he's posting it to
 potentially 100 people to look at every time he posts an update.
 There must also be something to the fact that he knows people
 will always have a chance to look at his code and inspect it, that
 increases the chances
 he'll document things.  Moreover, just perhaps, the fact that
 the open source PARI code to do the same thing was poorly documented and
 wrong (!) and was sort of embarrassing to PARI, might have also made
 him want to document things more.
 
 Or I could be making up nonsense.  Jon, why is your code so well
 documented?

I would like to say that I always write like that, but unfortunately
that probably isn't true. Anyway, while I may be influenced by the fact
that other people might look at the code, it's just a simple fact that
well documented code is more maintainable. For example, it has been
about a week since I last really looked at that code, and if it was
poorly documented, then by now I probably would have forgotten how it
worked.

The point about the pari code being poorly documented might also have
had some influence. For example, I didn't want someone else to have
to search for Rademacher's paper, or to try to figure out just how to
reduce precision properly, etc.




