Re: [sage-devel] linbox 64-bit charpoly

2016-10-03 Thread parisse
The probabilistic early termination does not take much time here: the 
charpoly stabilizes at about 85% of the primes required to reach the 
Hadamard bound. Testing with a few random matrices, I often get 
stabilization at about 80% (+/-10%); in this situation I think it's best to 
wait a little longer and return a deterministic answer. The situation is 
different for determinants in the generic case, where the last invariant 
factor is big. For the matrix of this thread, however, the last invariant 
factor is small, and certifying the determinant does not cost much, just as 
for the charpoly (I get the det in 1.7s and the charpoly in 3.1s with giac 
1.2.2-89 vs 0.9s and 6.8s for Sage 6.9). This matrix is also special in that 
it has a double eigenvalue, 155789435562191565856, so the minimal polynomial 
has degree 171 - 1 = 170.
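
To see the structural point on a toy example (a hypothetical 4x4 diagonal 
matrix, not the 171x171 matrix of this thread): when exactly one eigenvalue 
is repeated and the matrix is diagonalizable, the charpoly has degree n 
while the minimal polynomial drops to degree n - 1.

sage: A = diagonal_matrix(ZZ, [1, 1, 2, 3])   # one repeated eigenvalue
sage: A.charpoly().degree(), A.minpoly().degree()
(4, 3)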



Re: [sage-devel] linbox 64-bit charpoly

2016-09-28 Thread Clement Pernet

Hi,

Update: I think I finally found the bug that led some rare computations to hang forever: givaro's 
random iterator was seeded from the 6 digits of the current time's microseconds and could, with 
probability 10^-6, be seeded with 0; the congruential generator would then always output 0, causing 
the search for a non-zero Krylov vector to hang forever!
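
For illustration, here is a minimal plain-Python sketch (not givaro's actual 
generator or constants) of why a zero seed is fatal for a purely 
multiplicative congruential generator: every state is a multiple of the 
previous one, so a zero seed yields zeros forever and a loop waiting for a 
non-zero Krylov vector never returns.

def mcg(seed, a=48271, m=2**31 - 1):
    # x_{k+1} = a * x_k mod m; with x_0 = 0 every iterate stays 0
    state = seed
    while True:
        state = (a * state) % m
        yield state

g = mcg(0)                            # the one-in-a-million case: microsecond field == 0
print([next(g) for _ in range(5)])    # -> [0, 0, 0, 0, 0]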


This might also be a fix for https://trac.sagemath.org/ticket/15535.

Meanwhile I also found a corner case exposing a bug in the LUKrylov charpoly. I posted a patch to both 
givaro and fflas-ffpack in https://trac.sagemath.org/ticket/21579.


As for the choice of running early termination for charpoly over ZZ: we already had this 
discussion a very long time ago, and it used to be a deterministic call based on Hadamard's bound.

Unfortunately it has since been turned back to the early-termination version, 
which I agree is bad.

I'm cleaning up this code and will provide a boolean argument, proof, to enable/disable the early 
termination on request.


Clément

Le 28/09/2016 à 08:22, parisse a écrit :



Le mercredi 28 septembre 2016 03:13:11 UTC+2, Jonathan Bober a écrit :


Ah, yes, I'm wrong again, as the multimodular in Flint is pretty new. I 
didn't look at what Sage
has until now (flint 2.5.2, which looks like it uses a fairly simple 
O(n^4) algorithm). I had
previously looked at the source code of the version of flint that I've 
actually been using
myself, which is from June. As I now recall (after reading an email I sent 
in June) I'm using a
"non-released" version precisely for the nmod_mat_charpoly() function, 
which doesn't exist in
the most recent release (which I guess might be 2.5.2, but flintlib.org 
seems to be having problems at the moment).

I've actually done some fairly extensive real world semi-testing of 
nmod_mat_charpoly() in the
last few months (for almost the same reasons that have led me to 
investigate Sage/Linbox) but
not fmpz_mat_charpoly(). "semi" means that I haven't actually checked that 
the answers are
correct. I'm actually computing characteristic polynomials of integer 
matrices, but writing down
the integer matrices is too expensive, so I'm computing the polynomials 
more efficiently mod p
and then CRTing. Also, I'm doing exactly what I think Linbox does, in that 
I am just waiting for
the polynomials to stabilize. Step 2, when it eventually happens, will 
separately compute the
roots of these polynomials numerically, which will (heuristically) verify 
that they are correct.
(Step 3 might involve actually proving somehow that everything is correct, 
but I strongly fear
that it might involve confessing that everything is actually only 
"obviously" correct.) Once
step 2 happens, I'll either report some problems or let you know that 
everything went well.


I don't think computing roots numerically is a good idea, because the charpoly 
is ill-conditioned.
It would be interesting to compare outputs and timings with other packages, for 
example giac has its
own multi-modular charpoly implementation (multi-modular, with probabilistic 
answer if
proba_epsilon>0 or certified answer if proba_epsilon=0).






Re: [sage-devel] linbox 64-bit charpoly

2016-09-28 Thread parisse


Le mercredi 28 septembre 2016 03:13:11 UTC+2, Jonathan Bober a écrit :
>
>
> Ah, yes, I'm wrong again, as the multimodular in Flint is pretty new. I 
> didn't look at what Sage has until now (flint 2.5.2, which looks likes it 
> uses a fairly simple O(n^4) algorithm). I had previously looked at the 
> source code of the version of flint that I've actually been using myself, 
> which is from June. As I now recall (after reading an email I sent in June) 
> I'm using a "non-released" version precisely for the nmod_mat_charpoly() 
> function, which doesn't exist in the most recent release (which I guess 
> might be 2.5.2, but flintlib.org seems to be having problems at the 
> moment).
>
> I've actually done some fairly extensive real world semi-testing of 
> nmod_mat_charpoly() in the last few months (for almost the same reasons 
> that have lead me to investigate Sage/Linbox) but not fmpz_mat_charpoly(). 
> "semi" means that I haven't actually checked that the answers are correct. 
> I'm actually computing characteristic polynomials of integer matrices, but 
> writing down the integer matrices is too expensive, so I'm computing the 
> polynomials more efficiently mod p and then CRTing. Also, I'm doing exactly 
> what I think Linbox does, in that I am just waiting for the polynomials to 
> stabilize. Step 2, when it eventually happens, will separately compute the 
> roots of these polynomials numerically, which will (heuristically) verify 
> that they are correct. (Step 3 might involve actually proving somehow that 
> everything is correct, but I strongly fear that it might involve confessing 
> that everything is actually only "obviously" correct.) Once step 2 happens, 
> I'll either report some problems or let you know that everything went well.
>  
>
I don't think computing roots numerically is a good idea, because the 
charpoly is ill-conditioned. It would be interesting to compare outputs 
and timings with other packages; for example, giac has its own multi-modular 
charpoly implementation (with a probabilistic answer if proba_epsilon>0 or a 
certified answer if proba_epsilon=0).
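
A rough sketch of such a comparison from within Sage, via the pexpect 
interface to giac (pcar and proba_epsilon are giac's names for the 
characteristic polynomial and the allowed failure probability; the random 
100x100 matrix and the exact conversion calls below are assumptions and may 
need adjusting for a given Sage/giac version):

from sage.all import random_matrix, ZZ, cputime
from sage.interfaces.giac import giac

A = random_matrix(ZZ, 100, x=-2**60, y=2**60)

t = cputime(); f_linbox = A.charpoly(algorithm='linbox'); t_linbox = cputime(t)

giac.eval('proba_epsilon:=0')                    # ask giac for a certified (deterministic) answer
gA = giac(A)                                     # push the matrix to the giac process
t = cputime(); f_giac = giac('pcar(%s,x)' % gA.name()); t_giac = cputime(t)

print(t_linbox, t_giac)
# the two results can then be compared after conversion, e.g. via f_giac.sage()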
 



Re: [sage-devel] linbox 64-bit charpoly

2016-09-27 Thread Jonathan Bober
On Tue, Sep 27, 2016 at 8:34 PM, 'Bill Hart' via sage-devel <
sage-devel@googlegroups.com> wrote:

>
>
> On Tuesday, 27 September 2016 20:53:28 UTC+2, Jonathan Bober wrote:
>>
>> On Tue, Sep 27, 2016 at 7:18 PM, 'Bill Hart' via sage-devel <
>> sage-...@googlegroups.com> wrote:
>>
>>> I'm pretty sure the charpoly routine in Flint is much more recent that 2
>>> years. Are you referring to a Sage implementation on top of Flint
>>> arithmetic or something?
>>>
>>
>> It is just a problem with Sage.
>>
>
> Sure, I realised the problem was in Sage. I just wasn't sure if the
> algorithm itself is implemented in Flint or Sage.
>
>
>> Sorry, I thought I was clear about that. I assume that no one has been
>> using the algorithm='flint' option in Sage in the last two years, which
>> makes sense, because most people aren't going to bother changing the
>> default.
>>
>>
>>> The only timing that I can find right at the moment had us about 5x
>>> faster than Sage. It's not in a released version of Flint though, just in
>>> master.
>>>
>>
>> That sounds really nice. On my laptop with current Sage, it might be the
>> other way around. With Sage 7.3 on my laptop, with this particular matrix,
>> I get
>>
>
> Yes, Sage/Linbox was about 2.5x times faster than the old charpoly routine
> in Flint, I believe. The new one is quite recent and much quicker.
>
>
>> sage: %time f = A.charpoly(algorithm='flint')
>> CPU times: user 1min 24s, sys: 24 ms, total: 1min 24s
>> Wall time: 1min 24s
>>
>> sage: %time f = A.charpoly(algorithm='linbox')
>> CPU times: user 13.3 s, sys: 4 ms, total: 13.3 s
>> Wall time: 13.3 s
>>
>> However, perhaps the average runtime with linbox is infinity. (Also, this
>> in an out of date Linbox.)
>>
>> I think that Linbox may be "cheating" in a way that Flint is not. I'm
>> pretty sure both implementations work mod p (or p^n?) for a bunch of p and
>> reconstruct. From my reading of the Flint source code (actually, I didn't
>> check the version in Sage) and comments from Clement Pernet, I think that
>> Flint uses an explicit Hadamard bound to determine how many primes to use,
>> while Linbox just waits for the CRT'd polynomial to stabilize for a few
>> primes.
>>
>
> Ouch!
>
> Yes, in the new code we use an explicit proven bound. I can't quite recall
> all the details now, but I recall it is multimodular.
>
> I would give it a good amount of testing before trusting it. We've done
> quite a lot of serious testing of it and the test code is nontrivial, but
> some real world tests are much more likely to shake out any bugs, including
> the possibility I screwed up the implementation of the bound.
>

Ah, yes, I'm wrong again, as the multimodular in Flint is pretty new. I
didn't look at what Sage has until now (flint 2.5.2, which looks like it
uses a fairly simple O(n^4) algorithm). I had previously looked at the
source code of the version of flint that I've actually been using myself,
which is from June. As I now recall (after reading an email I sent in June)
I'm using a "non-released" version precisely for the nmod_mat_charpoly()
function, which doesn't exist in the most recent release (which I guess
might be 2.5.2, but flintlib.org seems to be having problems at the moment).

I've actually done some fairly extensive real world semi-testing of
nmod_mat_charpoly() in the last few months (for almost the same reasons
that have led me to investigate Sage/Linbox) but not fmpz_mat_charpoly().
"semi" means that I haven't actually checked that the answers are correct.
I'm actually computing characteristic polynomials of integer matrices, but
writing down the integer matrices is too expensive, so I'm computing the
polynomials more efficiently mod p and then CRTing. Also, I'm doing exactly
what I think Linbox does, in that I am just waiting for the polynomials to
stabilize. Step 2, when it eventually happens, will separately compute the
roots of these polynomials numerically, which will (heuristically) verify
that they are correct. (Step 3 might involve actually proving somehow that
everything is correct, but I strongly fear that it might involve confessing
that everything is actually only "obviously" correct.) Once step 2 happens,
I'll either report some problems or let you know that everything went well.
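
As an aside, here is a small plain-Python sketch (Python 3.8+) of that
"compute mod p, CRT, wait for stabilization" strategy. It is purely
illustrative: the mod-p step uses Faddeev-LeVerrier (which needs p > n) as a
stand-in for nmod_mat_charpoly(), and the termination rule is the naive
"unchanged for a few extra primes" test, not whatever Linbox actually does.

def charpoly_mod_p(A, p):
    # characteristic polynomial of A mod p via Faddeev-LeVerrier (divides by
    # 1..n, so it needs p > n); returns coefficients [c_0, ..., c_n], c_n = 1
    n = len(A)
    assert p > n
    c = [0] * (n + 1)
    c[n] = 1
    M = [[int(i == j) for j in range(n)] for i in range(n)]      # M_1 = identity
    for k in range(1, n + 1):
        AM = [[sum(A[i][l] * M[l][j] for l in range(n)) % p for j in range(n)]
              for i in range(n)]
        tr = sum(AM[i][i] for i in range(n)) % p
        c[n - k] = (-tr * pow(k, -1, p)) % p                     # c_{n-k} = -tr(A*M_k)/k
        M = AM
        for i in range(n):
            M[i][i] = (M[i][i] + c[n - k]) % p                   # M_{k+1} = A*M_k + c_{n-k}*I
    return c

def crt_step(old, old_mod, new, p):
    # combine residues mod old_mod and mod p into residues mod old_mod*p
    inv = pow(old_mod, -1, p)
    return ([(a + old_mod * (((b - a) * inv) % p)) % (old_mod * p)
             for a, b in zip(old, new)], old_mod * p)

def symmetric_lift(coeffs, mod):
    # balanced representatives in (-mod/2, mod/2], where the true coefficients land
    return [c - mod if 2 * c > mod else c for c in coeffs]

def charpoly_early_termination(A, primes, confirmations=3):
    coeffs, mod = charpoly_mod_p(A, primes[0]), primes[0]
    last, stable = None, 0
    for p in primes[1:]:
        coeffs, mod = crt_step(coeffs, mod, charpoly_mod_p(A, p), p)
        lifted = symmetric_lift(coeffs, mod)
        stable = stable + 1 if lifted == last else 0
        last = lifted
        if stable >= confirmations:   # unchanged for several extra primes: accept (probabilistic!)
            return lifted
    return symmetric_lift(coeffs, mod)   # used every prime: effectively deterministic

# e.g. charpoly_early_termination([[2, 1], [0, 3]], [101, 103, 107, 109, 113])
# returns [6, -5, 1], i.e. x^2 - 5x + 6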


>
>
>> I have no idea how much of a difference that makes in this case.
>>
>>
>>> Bill.
>>>
>>> On Tuesday, 27 September 2016 05:49:47 UTC+2, Jonathan Bober wrote:

 On Tue, Sep 27, 2016 at 4:18 AM, William Stein 
 wrote:

> On Mon, Sep 26, 2016 at 6:55 PM, Jonathan Bober 
> wrote:
> > On Mon, Sep 26, 2016 at 11:52 PM, William Stein 
> wrote:
>
> >>
> >> On Mon, Sep 26, 2016 at 3:27 PM, Jonathan Bober 
> wrote:
> >> > In the matrix_integer_dense charpoly() function, there is a note
> in the
> >> > docstring which says "Linbox charpoly disabled on 64-bit
> machines, since
> >> > it

Re: [sage-devel] linbox 64-bit charpoly

2016-09-27 Thread Jonathan Bober
On Tue, Sep 27, 2016 at 8:02 PM, William Stein  wrote:

> On Tue, Sep 27, 2016 at 11:53 AM, Jonathan Bober 
> wrote:
> > On Tue, Sep 27, 2016 at 7:18 PM, 'Bill Hart' via sage-devel
> >  wrote:
> >>
> >> I'm pretty sure the charpoly routine in Flint is much more recent that 2
> >> years. Are you referring to a Sage implementation on top of Flint
> arithmetic
> >> or something?
> >
> >
> > It is just a problem with Sage. Sorry, I thought I was clear about that.
> I
> > assume that no one has been using the algorithm='flint' option in Sage in
> > the last two years, which makes sense, because most people aren't going
> to
> > bother changing the default.
> >
> >>
> >> The only timing that I can find right at the moment had us about 5x
> faster
> >> than Sage. It's not in a released version of Flint though, just in
> master.
> >
> >
> > That sounds really nice. On my laptop with current Sage, it might be the
> > other way around. With Sage 7.3 on my laptop, with this particular
> matrix, I
> > get
> >
> > sage: %time f = A.charpoly(algorithm='flint')
> > CPU times: user 1min 24s, sys: 24 ms, total: 1min 24s
> > Wall time: 1min 24s
> >
> > sage: %time f = A.charpoly(algorithm='linbox')
> > CPU times: user 13.3 s, sys: 4 ms, total: 13.3 s
> > Wall time: 13.3 s
> >
> > However, perhaps the average runtime with linbox is infinity. (Also,
> this in
> > an out of date Linbox.)
> >
> > I think that Linbox may be "cheating" in a way that Flint is not. I'm
> pretty
> > sure both implementations work mod p (or p^n?) for a bunch of p and
> > reconstruct. From my reading of the Flint source code (actually, I didn't
> > check the version in Sage) and comments from Clement Pernet, I think that
> > Flint uses an explicit Hadamard bound to determine how many primes to
> use,
> > while Linbox just waits for the CRT'd polynomial to stabilize for a few
>
> If it is really doing this, then it should definitely not be the
> default algorithm for Sage, unless proof=False is explicitly
> specified.   Not good.
>
>
Yes, I've had the same thought, which is actually part of the reason I took
the time to write this. I hope that Clement, or someone else who knows,
will notice and confirm or deny. Also, eventually I will probably try to
read the Linbox source code. It is possible that I am wrong. (I guess that
there is some certification step, and that it is somewhat heuristic, but
maybe it is more definitive than that.)


> William
>
> > primes. I have no idea how much of a difference that makes in this case.
> >
> >>
> >> Bill.
> >>
> >> On Tuesday, 27 September 2016 05:49:47 UTC+2, Jonathan Bober wrote:
> >>>
> >>> On Tue, Sep 27, 2016 at 4:18 AM, William Stein 
> wrote:
> 
>  On Mon, Sep 26, 2016 at 6:55 PM, Jonathan Bober 
>  wrote:
>  > On Mon, Sep 26, 2016 at 11:52 PM, William Stein 
>  > wrote:
> 
>  >>
>  >> On Mon, Sep 26, 2016 at 3:27 PM, Jonathan Bober 
>  >> wrote:
>  >> > In the matrix_integer_dense charpoly() function, there is a note
> in
>  >> > the
>  >> > docstring which says "Linbox charpoly disabled on 64-bit
> machines,
>  >> > since
>  >> > it
>  >> > hangs in many cases."
>  >> >
>  >> > As far as I can tell, that is not true, in the sense that (1) I
>  >> > have
>  >> > 64-bit
>  >> > machines, and Linbox charpoly is not disabled, (2)
>  >> > charpoly(algorithm='flint') is so horribly broken that if it were
>  >> > ever
>  >> > used
>  >> > it should be quickly noticed that it is broken, and (3) I can't
> see
>  >> > anywhere
>  >> > where it is actually disabled.
>  >> >
>  >> > So I actually just submitted a patch which removes this note
> while
>  >> > fixing
>  >> > point (2). (Trac #21596).
>  >> >
>  >> > However...
>  >> >
>  >> > In some testing I'm noticing problems with charpoly(), so I'm
>  >> > wondering
>  >> > where that message came from, and who knows something about it.
>  >>
>  >> Do you know about "git blame", or the "blame" button when viewing
> any
>  >> file here: https://github.com/sagemath/sage/tree/master/src
>  >
>  >
>  > Ah, yes. Of course I know about that. And it was you!
>  >
>  > You added that message here:
> 
>  Dang... I had a bad feeling that would be the conclusion :-)
> >>>
> >>>
> >>> Well, I'm sure you've done one or two things in the meantime that will
> >>> allow me to forgive this one oversight.
> >>>
> 
>  In my defense, Linbox/FLINT have themselves changed a lot over the
>  years...  We added Linbox in 2007, I think.
> 
> >>>
> >>> Yes. As I said, this comment, and the design change, is ancient. In
> some
> >>> limiting testing, linbox tends to be faster than flint, but has very
> high
> >>> variance in the timings. (I haven't actually 

Re: [sage-devel] linbox 64-bit charpoly

2016-09-27 Thread 'Bill Hart' via sage-devel


On Tuesday, 27 September 2016 20:53:28 UTC+2, Jonathan Bober wrote:
>
> On Tue, Sep 27, 2016 at 7:18 PM, 'Bill Hart' via sage-devel <
> sage-...@googlegroups.com > wrote:
>
>> I'm pretty sure the charpoly routine in Flint is much more recent that 2 
>> years. Are you referring to a Sage implementation on top of Flint 
>> arithmetic or something?
>>
>
> It is just a problem with Sage.
>

Sure, I realised the problem was in Sage. I just wasn't sure if the 
algorithm itself is implemented in Flint or Sage.
 

> Sorry, I thought I was clear about that. I assume that no one has been 
> using the algorithm='flint' option in Sage in the last two years, which 
> makes sense, because most people aren't going to bother changing the 
> default.
>  
>
>> The only timing that I can find right at the moment had us about 5x 
>> faster than Sage. It's not in a released version of Flint though, just in 
>> master.
>>
>
> That sounds really nice. On my laptop with current Sage, it might be the 
> other way around. With Sage 7.3 on my laptop, with this particular matrix, 
> I get
>

Yes, Sage/Linbox was about 2.5x faster than the old charpoly routine 
in Flint, I believe. The new one is quite recent and much quicker.


> sage: %time f = A.charpoly(algorithm='flint')
> CPU times: user 1min 24s, sys: 24 ms, total: 1min 24s
> Wall time: 1min 24s
>
> sage: %time f = A.charpoly(algorithm='linbox')
> CPU times: user 13.3 s, sys: 4 ms, total: 13.3 s
> Wall time: 13.3 s
>
> However, perhaps the average runtime with linbox is infinity. (Also, this 
> in an out of date Linbox.)
>
> I think that Linbox may be "cheating" in a way that Flint is not. I'm 
> pretty sure both implementations work mod p (or p^n?) for a bunch of p and 
> reconstruct. From my reading of the Flint source code (actually, I didn't 
> check the version in Sage) and comments from Clement Pernet, I think that 
> Flint uses an explicit Hadamard bound to determine how many primes to use, 
> while Linbox just waits for the CRT'd polynomial to stabilize for a few 
> primes. 
>

Ouch!

Yes, in the new code we use an explicit proven bound. I can't quite recall 
all the details now, but I recall it is multimodular.

I would give it a good amount of testing before trusting it. We've done 
quite a lot of serious testing of it and the test code is nontrivial, but 
some real world tests are much more likely to shake out any bugs, including 
the possibility I screwed up the implementation of the bound.
 

> I have no idea how much of a difference that makes in this case.
>
>
>> Bill.
>>
>> On Tuesday, 27 September 2016 05:49:47 UTC+2, Jonathan Bober wrote:
>>>
>>> On Tue, Sep 27, 2016 at 4:18 AM, William Stein  wrote:
>>>
 On Mon, Sep 26, 2016 at 6:55 PM, Jonathan Bober  
 wrote:
 > On Mon, Sep 26, 2016 at 11:52 PM, William Stein  
 wrote:

 >>
 >> On Mon, Sep 26, 2016 at 3:27 PM, Jonathan Bober  
 wrote:
 >> > In the matrix_integer_dense charpoly() function, there is a note 
 in the
 >> > docstring which says "Linbox charpoly disabled on 64-bit machines, 
 since
 >> > it
 >> > hangs in many cases."
 >> >
 >> > As far as I can tell, that is not true, in the sense that (1) I 
 have
 >> > 64-bit
 >> > machines, and Linbox charpoly is not disabled, (2)
 >> > charpoly(algorithm='flint') is so horribly broken that if it were 
 ever
 >> > used
 >> > it should be quickly noticed that it is broken, and (3) I can't see
 >> > anywhere
 >> > where it is actually disabled.
 >> >
 >> > So I actually just submitted a patch which removes this note while
 >> > fixing
 >> > point (2). (Trac #21596).
 >> >
 >> > However...
 >> >
 >> > In some testing I'm noticing problems with charpoly(), so I'm 
 wondering
 >> > where that message came from, and who knows something about it.
 >>
 >> Do you know about "git blame", or the "blame" button when viewing any
 >> file here: https://github.com/sagemath/sage/tree/master/src
 >
 >
 > Ah, yes. Of course I know about that. And it was you!
 >
 > You added that message here:

 Dang... I had a bad feeling that would be the conclusion :-) 

>>>
>>> Well, I'm sure you've done one or two things in the meantime that will 
>>> allow me to forgive this one oversight.
>>>  
>>>
 In my defense, Linbox/FLINT have themselves changed a lot over the
 years...  We added Linbox in 2007, I think.


>>> Yes. As I said, this comment, and the design change, is ancient. In some 
>>> limiting testing, linbox tends to be faster than flint, but has very high 
>>> variance in the timings. (I haven't actually checked flint much.) Right now 
>>> I'm running the following code on 64 cores, which should test linbox:
>>>
>>> import time
>>>
>>> @parallel
>>> def test(n):
>>>     start = time.clock()
>>>     f = 

Re: [sage-devel] linbox 64-bit charpoly

2016-09-27 Thread William Stein
On Tue, Sep 27, 2016 at 11:53 AM, Jonathan Bober  wrote:
> On Tue, Sep 27, 2016 at 7:18 PM, 'Bill Hart' via sage-devel
>  wrote:
>>
>> I'm pretty sure the charpoly routine in Flint is much more recent that 2
>> years. Are you referring to a Sage implementation on top of Flint arithmetic
>> or something?
>
>
> It is just a problem with Sage. Sorry, I thought I was clear about that. I
> assume that no one has been using the algorithm='flint' option in Sage in
> the last two years, which makes sense, because most people aren't going to
> bother changing the default.
>
>>
>> The only timing that I can find right at the moment had us about 5x faster
>> than Sage. It's not in a released version of Flint though, just in master.
>
>
> That sounds really nice. On my laptop with current Sage, it might be the
> other way around. With Sage 7.3 on my laptop, with this particular matrix, I
> get
>
> sage: %time f = A.charpoly(algorithm='flint')
> CPU times: user 1min 24s, sys: 24 ms, total: 1min 24s
> Wall time: 1min 24s
>
> sage: %time f = A.charpoly(algorithm='linbox')
> CPU times: user 13.3 s, sys: 4 ms, total: 13.3 s
> Wall time: 13.3 s
>
> However, perhaps the average runtime with linbox is infinity. (Also, this in
> an out of date Linbox.)
>
> I think that Linbox may be "cheating" in a way that Flint is not. I'm pretty
> sure both implementations work mod p (or p^n?) for a bunch of p and
> reconstruct. From my reading of the Flint source code (actually, I didn't
> check the version in Sage) and comments from Clement Pernet, I think that
> Flint uses an explicit Hadamard bound to determine how many primes to use,
> while Linbox just waits for the CRT'd polynomial to stabilize for a few

If it is really doing this, then it should definitely not be the
default algorithm for Sage, unless proof=False is explicitly
specified.   Not good.

William

> primes. I have no idea how much of a difference that makes in this case.
>
>>
>> Bill.
>>
>> On Tuesday, 27 September 2016 05:49:47 UTC+2, Jonathan Bober wrote:
>>>
>>> On Tue, Sep 27, 2016 at 4:18 AM, William Stein  wrote:

 On Mon, Sep 26, 2016 at 6:55 PM, Jonathan Bober 
 wrote:
 > On Mon, Sep 26, 2016 at 11:52 PM, William Stein 
 > wrote:

 >>
 >> On Mon, Sep 26, 2016 at 3:27 PM, Jonathan Bober 
 >> wrote:
 >> > In the matrix_integer_dense charpoly() function, there is a note in
 >> > the
 >> > docstring which says "Linbox charpoly disabled on 64-bit machines,
 >> > since
 >> > it
 >> > hangs in many cases."
 >> >
 >> > As far as I can tell, that is not true, in the sense that (1) I
 >> > have
 >> > 64-bit
 >> > machines, and Linbox charpoly is not disabled, (2)
 >> > charpoly(algorithm='flint') is so horribly broken that if it were
 >> > ever
 >> > used
 >> > it should be quickly noticed that it is broken, and (3) I can't see
 >> > anywhere
 >> > where it is actually disabled.
 >> >
 >> > So I actually just submitted a patch which removes this note while
 >> > fixing
 >> > point (2). (Trac #21596).
 >> >
 >> > However...
 >> >
 >> > In some testing I'm noticing problems with charpoly(), so I'm
 >> > wondering
 >> > where that message came from, and who knows something about it.
 >>
 >> Do you know about "git blame", or the "blame" button when viewing any
 >> file here: https://github.com/sagemath/sage/tree/master/src
 >
 >
 > Ah, yes. Of course I know about that. And it was you!
 >
 > You added that message here:

 Dang... I had a bad feeling that would be the conclusion :-)
>>>
>>>
>>> Well, I'm sure you've done one or two things in the meantime that will
>>> allow me to forgive this one oversight.
>>>

 In my defense, Linbox/FLINT have themselves changed a lot over the
 years...  We added Linbox in 2007, I think.

>>>
>>> Yes. As I said, this comment, and the design change, is ancient. In some
>>> limiting testing, linbox tends to be faster than flint, but has very high
>>> variance in the timings. (I haven't actually checked flint much.) Right now
>>> I'm running the following code on 64 cores, which should test linbox:
>>>
>>> import time
>>>
>>> @parallel
>>> def test(n):
>>>     start = time.clock()
>>>     f = B.charpoly()
>>>     end = time.clock()
>>>     runtime = end - start
>>>     if f != g:
>>>         print n, 'ohno'
>>>         return runtime, 'ohno'
>>>     else:
>>>         return runtime, 'ok'
>>>
>>> A = load('hecke_matrix')
>>> A._clear_cache()
>>> B, denom = A._clear_denom()
>>> g = B.charpoly()
>>> B._clear_cache()
>>>
>>> import sys
>>>
>>> for result in test(range(10)):
>>>     print result[0][0][0], ' '.join([str(x) for x in result[1]])
>>>     sys.stdout.flush()
>>>
>>> where the file hecke_matrix was produced by
>>>
>>> 

Re: [sage-devel] linbox 64-bit charpoly

2016-09-27 Thread Jonathan Bober
On Tue, Sep 27, 2016 at 7:18 PM, 'Bill Hart' via sage-devel <
sage-devel@googlegroups.com> wrote:

> I'm pretty sure the charpoly routine in Flint is much more recent that 2
> years. Are you referring to a Sage implementation on top of Flint
> arithmetic or something?
>

It is just a problem with Sage. Sorry, I thought I was clear about that. I
assume that no one has been using the algorithm='flint' option in Sage in
the last two years, which makes sense, because most people aren't going to
bother changing the default.


> The only timing that I can find right at the moment had us about 5x faster
> than Sage. It's not in a released version of Flint though, just in master.
>

That sounds really nice. On my laptop with current Sage, it might be the
other way around. With Sage 7.3 on my laptop, with this particular matrix,
I get

sage: %time f = A.charpoly(algorithm='flint')
CPU times: user 1min 24s, sys: 24 ms, total: 1min 24s
Wall time: 1min 24s

sage: %time f = A.charpoly(algorithm='linbox')
CPU times: user 13.3 s, sys: 4 ms, total: 13.3 s
Wall time: 13.3 s

However, perhaps the average runtime with linbox is infinity. (Also, this
in an out of date Linbox.)

I think that Linbox may be "cheating" in a way that Flint is not. I'm
pretty sure both implementations work mod p (or p^n?) for a bunch of p and
reconstruct. From my reading of the Flint source code (actually, I didn't
check the version in Sage) and comments from Clement Pernet, I think that
Flint uses an explicit Hadamard bound to determine how many primes to use,
while Linbox just waits for the CRT'd polynomial to stabilize for a few
primes. I have no idea how much of a difference that makes in this case.
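
For a sense of what an explicit bound buys you, here is a rough plain-Python
sketch (Python 3.8+, my own crude bound, not FLINT's or Linbox's actual
formula): the degree n-k coefficient of the charpoly is, up to sign, a sum of
binomial(n, k) principal k x k minors, each bounded by Hadamard's inequality,
and the CRT modulus has to exceed twice the largest such bound before the
reconstruction is provably correct.

from math import comb, log2, ceil

def deterministic_prime_count(A, prime_bits=62):
    # number of prime_bits-bit primes whose product provably exceeds twice the
    # largest charpoly coefficient (working in log2 to avoid overflow); early
    # termination stops earlier than this, at the risk of a wrong answer
    n = len(A)
    log_norms = sorted((0.5 * log2(max(sum(x * x for x in row), 1)) for row in A),
                       reverse=True)
    worst_bits, acc = 0.0, 0.0
    for k in range(1, n + 1):
        acc += log_norms[k - 1]                        # product of the k largest row norms
        worst_bits = max(worst_bits, log2(comb(n, k)) + acc)
    return ceil((worst_bits + 2) / prime_bits)         # +2 bits of slack for the sign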


> Bill.
>
> On Tuesday, 27 September 2016 05:49:47 UTC+2, Jonathan Bober wrote:
>>
>> On Tue, Sep 27, 2016 at 4:18 AM, William Stein  wrote:
>>
>>> On Mon, Sep 26, 2016 at 6:55 PM, Jonathan Bober 
>>> wrote:
>>> > On Mon, Sep 26, 2016 at 11:52 PM, William Stein 
>>> wrote:
>>>
>>> >>
>>> >> On Mon, Sep 26, 2016 at 3:27 PM, Jonathan Bober 
>>> wrote:
>>> >> > In the matrix_integer_dense charpoly() function, there is a note in
>>> the
>>> >> > docstring which says "Linbox charpoly disabled on 64-bit machines,
>>> since
>>> >> > it
>>> >> > hangs in many cases."
>>> >> >
>>> >> > As far as I can tell, that is not true, in the sense that (1) I have
>>> >> > 64-bit
>>> >> > machines, and Linbox charpoly is not disabled, (2)
>>> >> > charpoly(algorithm='flint') is so horribly broken that if it were
>>> ever
>>> >> > used
>>> >> > it should be quickly noticed that it is broken, and (3) I can't see
>>> >> > anywhere
>>> >> > where it is actually disabled.
>>> >> >
>>> >> > So I actually just submitted a patch which removes this note while
>>> >> > fixing
>>> >> > point (2). (Trac #21596).
>>> >> >
>>> >> > However...
>>> >> >
>>> >> > In some testing I'm noticing problems with charpoly(), so I'm
>>> wondering
>>> >> > where that message came from, and who knows something about it.
>>> >>
>>> >> Do you know about "git blame", or the "blame" button when viewing any
>>> >> file here: https://github.com/sagemath/sage/tree/master/src
>>> >
>>> >
>>> > Ah, yes. Of course I know about that. And it was you!
>>> >
>>> > You added that message here:
>>>
>>> Dang... I had a bad feeling that would be the conclusion :-)
>>>
>>
>> Well, I'm sure you've done one or two things in the meantime that will
>> allow me to forgive this one oversight.
>>
>>
>>> In my defense, Linbox/FLINT have themselves changed a lot over the
>>> years...  We added Linbox in 2007, I think.
>>>
>>>
>> Yes. As I said, this comment, and the design change, is ancient. In some
>> limiting testing, linbox tends to be faster than flint, but has very high
>> variance in the timings. (I haven't actually checked flint much.) Right now
>> I'm running the following code on 64 cores, which should test linbox:
>>
>> import time
>>
>> @parallel
>> def test(n):
>>     start = time.clock()
>>     f = B.charpoly()
>>     end = time.clock()
>>     runtime = end - start
>>     if f != g:
>>         print n, 'ohno'
>>         return runtime, 'ohno'
>>     else:
>>         return runtime, 'ok'
>>
>> A = load('hecke_matrix')
>> A._clear_cache()
>> B, denom = A._clear_denom()
>> g = B.charpoly()
>> B._clear_cache()
>>
>> import sys
>>
>> for result in test(range(10)):
>>     print result[0][0][0], ' '.join([str(x) for x in result[1]])
>>     sys.stdout.flush()
>>
>> where the file hecke_matrix was produced by
>>
>> sage: M = ModularSymbols(3633, 2, -1)
>> sage: S = M.cuspidal_subspace().new_subspace()
>> sage: H = S.hecke_matrix(2)
>> sage: H.save('hecke_matrix')
>>
>> and the results are interesting:
>>
>> jb12407@lmfdb5:~/sage-bug$ sort -n -k 2 test_output3 | head
>> 30 27.98 ok
>> 64 28.0 ok
>> 2762 28.02 ok
>> 2790 28.02 ok
>> 3066 28.02 ok
>> 3495 28.03 ok
>> 3540 28.03 ok
>> 292 28.04 ok
>> 437 28.04 ok
>> 941 28.04 ok
>>
>> 

Re: [sage-devel] linbox 64-bit charpoly

2016-09-27 Thread 'Bill Hart' via sage-devel
I'm pretty sure the charpoly routine in Flint is much more recent than 2 
years old. Are you referring to a Sage implementation on top of Flint 
arithmetic or something?

The only timing that I can find right at the moment had us about 5x faster 
than Sage. It's not in a released version of Flint though, just in master.

Bill.

On Tuesday, 27 September 2016 05:49:47 UTC+2, Jonathan Bober wrote:
>
> On Tue, Sep 27, 2016 at 4:18 AM, William Stein  > wrote:
>
>> On Mon, Sep 26, 2016 at 6:55 PM, Jonathan Bober > > wrote:
>> > On Mon, Sep 26, 2016 at 11:52 PM, William Stein > > wrote:
>> >>
>> >> On Mon, Sep 26, 2016 at 3:27 PM, Jonathan Bober > > wrote:
>> >> > In the matrix_integer_dense charpoly() function, there is a note in 
>> the
>> >> > docstring which says "Linbox charpoly disabled on 64-bit machines, 
>> since
>> >> > it
>> >> > hangs in many cases."
>> >> >
>> >> > As far as I can tell, that is not true, in the sense that (1) I have
>> >> > 64-bit
>> >> > machines, and Linbox charpoly is not disabled, (2)
>> >> > charpoly(algorithm='flint') is so horribly broken that if it were 
>> ever
>> >> > used
>> >> > it should be quickly noticed that it is broken, and (3) I can't see
>> >> > anywhere
>> >> > where it is actually disabled.
>> >> >
>> >> > So I actually just submitted a patch which removes this note while
>> >> > fixing
>> >> > point (2). (Trac #21596).
>> >> >
>> >> > However...
>> >> >
>> >> > In some testing I'm noticing problems with charpoly(), so I'm 
>> wondering
>> >> > where that message came from, and who knows something about it.
>> >>
>> >> Do you know about "git blame", or the "blame" button when viewing any
>> >> file here: https://github.com/sagemath/sage/tree/master/src
>> >
>> >
>> > Ah, yes. Of course I know about that. And it was you!
>> >
>> > You added that message here:
>>
>> Dang... I had a bad feeling that would be the conclusion :-) 
>
>
> Well, I'm sure you've done one or two things in the meantime that will 
> allow me to forgive this one oversight.
>  
>
>> In my defense, Linbox/FLINT have themselves changed a lot over the
>> years...  We added Linbox in 2007, I think.
>>
>>
> Yes. As I said, this comment, and the design change, is ancient. In some 
> limiting testing, linbox tends to be faster than flint, but has very high 
> variance in the timings. (I haven't actually checked flint much.) Right now 
> I'm running the following code on 64 cores, which should test linbox:
>
> import time
>
> @parallel
> def test(n):
>     start = time.clock()
>     f = B.charpoly()
>     end = time.clock()
>     runtime = end - start
>     if f != g:
>         print n, 'ohno'
>         return runtime, 'ohno'
>     else:
>         return runtime, 'ok'
>
> A = load('hecke_matrix')
> A._clear_cache()
> B, denom = A._clear_denom()
> g = B.charpoly()
> B._clear_cache()
>
> import sys
>
> for result in test(range(10)):
>     print result[0][0][0], ' '.join([str(x) for x in result[1]])
>     sys.stdout.flush()
>
> where the file hecke_matrix was produced by
>
> sage: M = ModularSymbols(3633, 2, -1)
> sage: S = M.cuspidal_subspace().new_subspace()
> sage: H = S.hecke_matrix(2)
> sage: H.save('hecke_matrix')
>
> and the results are interesting:
>
> jb12407@lmfdb5:~/sage-bug$ sort -n -k 2 test_output3 | head
> 30 27.98 ok
> 64 28.0 ok
> 2762 28.02 ok
> 2790 28.02 ok
> 3066 28.02 ok
> 3495 28.03 ok
> 3540 28.03 ok
> 292 28.04 ok
> 437 28.04 ok
> 941 28.04 ok
>
> jb12407@lmfdb5:~/sage-bug$ sort -n -k 2 test_output3 | tail
> 817 2426.04 ok
> 1487 2466.3 ok
> 1440 2686.43 ok
> 459 2745.74 ok
> 776 2994.01 ok
> 912 3166.9 ok
> 56 3189.98 ok
> 546 3278.22 ok
> 1008 3322.74 ok
> 881 3392.73 ok
>
> jb12407@lmfdb5:~/sage-bug$ python analyze_output.py test_output3 
> average time: 53.9404572616
> unfinished: [490, 523, 1009, 1132, 1274, 1319, 1589, 1726, 1955, 2019, 
> 2283, 2418, 2500, 2598, 2826, 2979, 2982, 3030, 3057, 3112, 3166, 3190, 
> 3199, 3210, 3273, 3310, 3358, 3401, 3407, 3434, 3481, 3487, 3534, 3546, 
> 3593, 3594, 3681, 3685, 3695, 3748, 3782, 3812, 3858, 3864, 3887]
>
> There hasn't yet been an ohno, but on a similar run of 5000 tests 
> computing A.charpoly() instead of B I have 1 ohno and 4 still running after 
> 5 hours. (So I'm expecting an error in the morning...)
>
> I think that maybe I was getting a higher error rate in Sage 7.3. The 
> current beta is using a newer linbox, so maybe it fixed something, but 
> maybe it isn't quite fixed.
>
> Maybe I should use a small matrix to run more tests more quickly, but this 
> came from a "real world" example.
>  
>
>> --
>> William (http://wstein.org)
>>
Re: [sage-devel] linbox 64-bit charpoly

2016-09-26 Thread Jonathan Bober
On Tue, Sep 27, 2016 at 4:18 AM, William Stein  wrote:

> On Mon, Sep 26, 2016 at 6:55 PM, Jonathan Bober  wrote:
> > On Mon, Sep 26, 2016 at 11:52 PM, William Stein 
> wrote:
> >>
> >> On Mon, Sep 26, 2016 at 3:27 PM, Jonathan Bober 
> wrote:
> >> > In the matrix_integer_dense charpoly() function, there is a note in
> the
> >> > docstring which says "Linbox charpoly disabled on 64-bit machines,
> since
> >> > it
> >> > hangs in many cases."
> >> >
> >> > As far as I can tell, that is not true, in the sense that (1) I have
> >> > 64-bit
> >> > machines, and Linbox charpoly is not disabled, (2)
> >> > charpoly(algorithm='flint') is so horribly broken that if it were ever
> >> > used
> >> > it should be quickly noticed that it is broken, and (3) I can't see
> >> > anywhere
> >> > where it is actually disabled.
> >> >
> >> > So I actually just submitted a patch which removes this note while
> >> > fixing
> >> > point (2). (Trac #21596).
> >> >
> >> > However...
> >> >
> >> > In some testing I'm noticing problems with charpoly(), so I'm
> wondering
> >> > where that message came from, and who knows something about it.
> >>
> >> Do you know about "git blame", or the "blame" button when viewing any
> >> file here: https://github.com/sagemath/sage/tree/master/src
> >
> >
> > Ah, yes. Of course I know about that. And it was you!
> >
> > You added that message here:
>
> Dang... I had a bad feeling that would be the conclusion :-)


Well, I'm sure you've done one or two things in the meantime that will
allow me to forgive this one oversight.


> In my defense, Linbox/FLINT have themselves changed a lot over the
> years...  We added Linbox in 2007, I think.
>
>
Yes. As I said, this comment, and the design change, is ancient. In some
limited testing, linbox tends to be faster than flint, but has very high
variance in the timings. (I haven't actually checked flint much.) Right now
I'm running the following code on 64 cores, which should test linbox:

import time

@parallel
def test(n):
    start = time.clock()
    f = B.charpoly()
    end = time.clock()
    runtime = end - start
    if f != g:
        print n, 'ohno'
        return runtime, 'ohno'
    else:
        return runtime, 'ok'

A = load('hecke_matrix')
A._clear_cache()
B, denom = A._clear_denom()
g = B.charpoly()
B._clear_cache()

import sys

for result in test(range(10)):
    print result[0][0][0], ' '.join([str(x) for x in result[1]])
    sys.stdout.flush()

where the file hecke_matrix was produced by

sage: M = ModularSymbols(3633, 2, -1)
sage: S = M.cuspidal_subspace().new_subspace()
sage: H = S.hecke_matrix(2)
sage: H.save('hecke_matrix')

and the results are interesting:

jb12407@lmfdb5:~/sage-bug$ sort -n -k 2 test_output3 | head
30 27.98 ok
64 28.0 ok
2762 28.02 ok
2790 28.02 ok
3066 28.02 ok
3495 28.03 ok
3540 28.03 ok
292 28.04 ok
437 28.04 ok
941 28.04 ok

jb12407@lmfdb5:~/sage-bug$ sort -n -k 2 test_output3 | tail
817 2426.04 ok
1487 2466.3 ok
1440 2686.43 ok
459 2745.74 ok
776 2994.01 ok
912 3166.9 ok
56 3189.98 ok
546 3278.22 ok
1008 3322.74 ok
881 3392.73 ok

jb12407@lmfdb5:~/sage-bug$ python analyze_output.py test_output3
average time: 53.9404572616
unfinished: [490, 523, 1009, 1132, 1274, 1319, 1589, 1726, 1955, 2019,
2283, 2418, 2500, 2598, 2826, 2979, 2982, 3030, 3057, 3112, 3166, 3190,
3199, 3210, 3273, 3310, 3358, 3401, 3407, 3434, 3481, 3487, 3534, 3546,
3593, 3594, 3681, 3685, 3695, 3748, 3782, 3812, 3858, 3864, 3887]

There hasn't yet been an ohno, but on a similar run of 5000 tests computing
A.charpoly() instead of B I have 1 ohno and 4 still running after 5 hours.
(So I'm expecting an error in the morning...)

I think that maybe I was getting a higher error rate in Sage 7.3. The
current beta is using a newer linbox, so maybe it fixed something, but
maybe it isn't quite fixed.

Maybe I should use a small matrix to run more tests more quickly, but this
came from a "real world" example.
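
If it helps, the same construction at a smaller level gives a matrix of the
same flavour that allows many more trials per hour (level 389 below is an
arbitrary smaller choice, purely for faster iteration):

sage: M = ModularSymbols(389, 2, -1)
sage: S = M.cuspidal_subspace().new_subspace()
sage: B, denom = S.hecke_matrix(2)._clear_denom()   # a few dozen rows instead of 171
sage: g = B.charpoly()                              # reference answer for repeated runs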


> --
> William (http://wstein.org)
>



Re: [sage-devel] linbox 64-bit charpoly

2016-09-26 Thread William Stein
On Mon, Sep 26, 2016 at 6:55 PM, Jonathan Bober  wrote:
> On Mon, Sep 26, 2016 at 11:52 PM, William Stein  wrote:
>>
>> On Mon, Sep 26, 2016 at 3:27 PM, Jonathan Bober  wrote:
>> > In the matrix_integer_dense charpoly() function, there is a note in the
>> > docstring which says "Linbox charpoly disabled on 64-bit machines, since
>> > it
>> > hangs in many cases."
>> >
>> > As far as I can tell, that is not true, in the sense that (1) I have
>> > 64-bit
>> > machines, and Linbox charpoly is not disabled, (2)
>> > charpoly(algorithm='flint') is so horribly broken that if it were ever
>> > used
>> > it should be quickly noticed that it is broken, and (3) I can't see
>> > anywhere
>> > where it is actually disabled.
>> >
>> > So I actually just submitted a patch which removes this note while
>> > fixing
>> > point (2). (Trac #21596).
>> >
>> > However...
>> >
>> > In some testing I'm noticing problems with charpoly(), so I'm wondering
>> > where that message came from, and who knows something about it.
>>
>> Do you know about "git blame", or the "blame" button when viewing any
>> file here: https://github.com/sagemath/sage/tree/master/src
>
>
> Ah, yes. Of course I know about that. And it was you!
>
> You added that message here:

Dang... I had a bad feeling that would be the conclusion :-)

In my defense, Linbox/FLINT have themselves changed a lot over the
years...  We added Linbox in 2007, I think.

-- 
William (http://wstein.org)



Re: [sage-devel] linbox 64-bit charpoly

2016-09-26 Thread Jonathan Bober
On Mon, Sep 26, 2016 at 11:52 PM, William Stein  wrote:

> On Mon, Sep 26, 2016 at 3:27 PM, Jonathan Bober  wrote:
> > In the matrix_integer_dense charpoly() function, there is a note in the
> > docstring which says "Linbox charpoly disabled on 64-bit machines, since
> it
> > hangs in many cases."
> >
> > As far as I can tell, that is not true, in the sense that (1) I have
> 64-bit
> > machines, and Linbox charpoly is not disabled, (2)
> > charpoly(algorithm='flint') is so horribly broken that if it were ever
> used
> > it should be quickly noticed that it is broken, and (3) I can't see
> anywhere
> > where it is actually disabled.
> >
> > So I actually just submitted a patch which removes this note while fixing
> > point (2). (Trac #21596).
> >
> > However...
> >
> > In some testing I'm noticing problems with charpoly(), so I'm wondering
> > where that message came from, and who knows something about it.
>
> Do you know about "git blame", or the "blame" button when viewing any
> file here: https://github.com/sagemath/sage/tree/master/src


Ah, yes. Of course I know about that. And it was you!

You added that message here:
https://github.com/sagemath/sage/commit/ce8c59b53cb46c338ca89cebc50e26ff0e0643cc
and didn't remove it here:
https://github.com/sagemath/sage/commit/7030ad944f6023825fbd4a30e1c800d18a2c0b4c

(though it isn't completely clear to me that the second commit really comes
after the first, since dates might not tell the full story.)

Anyway, it seems likely that this comment is ancient history.

Using flint to compute the characteristic polynomial has only been broken
for 2 years (though not because of brokenness in flint). I'll have to keep
testing to see if linbox is broken.

>
>
> >
> > The problems seem the likely cause of #21579, though I haven't actually
> been
> > able to conclusively blame that on linbox yet. I'm also not sure I've
> seen
> > linbox's charpoly() hang, exactly, but I do see very erratic behavior in
> how
> > long it takes: on a fixed matrix I have it typically takes 30 seconds,
> but
> > I've also seen it return the correct answer after 50 minutes.
> >
> > (I've also got the wrong answer sometimes, but there are some conversions
> > going on, and I've so far only seen the wrong answer for rational
> matrices,
> > which is why I've not get blamed linbox, though I am certainly leaning
> > towards blaming it.)
> >
> > Note: I'm currently testing on 7.4.beta6, so this is after the recent
> linbox
> > upgrade. But I was also having some problems before that. It is possible
> > that the recent upgrade made errors less likely, though.
> >
>
>
>
> --
> William (http://wstein.org)
>
>



Re: [sage-devel] linbox 64-bit charpoly

2016-09-26 Thread William Stein
On Mon, Sep 26, 2016 at 3:27 PM, Jonathan Bober  wrote:
> In the matrix_integer_dense charpoly() function, there is a note in the
> docstring which says "Linbox charpoly disabled on 64-bit machines, since it
> hangs in many cases."
>
> As far as I can tell, that is not true, in the sense that (1) I have 64-bit
> machines, and Linbox charpoly is not disabled, (2)
> charpoly(algorithm='flint') is so horribly broken that if it were ever used
> it should be quickly noticed that it is broken, and (3) I can't see anywhere
> where it is actually disabled.
>
> So I actually just submitted a patch which removes this note while fixing
> point (2). (Trac #21596).
>
> However...
>
> In some testing I'm noticing problems with charpoly(), so I'm wondering
> where that message came from, and who knows something about it.

Do you know about "git blame", or the "blame" button when viewing any
file here: https://github.com/sagemath/sage/tree/master/src

>
> The problems seem the likely cause of #21579, though I haven't actually been
> able to conclusively blame that on linbox yet. I'm also not sure I've seen
> linbox's charpoly() hang, exactly, but I do see very erratic behavior in how
> long it takes: on a fixed matrix I have it typically takes 30 seconds, but
> I've also seen it return the correct answer after 50 minutes.
>
> (I've also got the wrong answer sometimes, but there are some conversions
> going on, and I've so far only seen the wrong answer for rational matrices,
> which is why I've not get blamed linbox, though I am certainly leaning
> towards blaming it.)
>
> Note: I'm currently testing on 7.4.beta6, so this is after the recent linbox
> upgrade. But I was also having some problems before that. It is possible
> that the recent upgrade made errors less likely, though.
>



-- 
William (http://wstein.org)
