Sorry to jump in late; I just found this discussion by googling
something else.
Maxima was berated for being too slow in factoring this..
-p10^170*X1^10*X2^10 + p10^130*X1^5*X2^10 + p10^130*X1^10*X2^5
- p10^90*X1^5*X2^5 + p10^80*X1^5*X2^5 - p10^40*X2^5 - p10^40*X1^5 + 1
which apparently did not terminate
> > This is the developer doc I plan to write someday, but it is currently
> > not a main priority.
>
> Sure, I understand that and since it is your time it is your call how
> you allocate your resources. But it would make it much easier for us
> to wrap the library if documentation existed.
>
Th
On Apr 13, 2:04 am, Alexander Dreyer
<[EMAIL PROTECTED]> wrote:
> On 12 Apr., 17:53, mabshoff <[EMAIL PROTECTED]> wrote:
>
> [...]
>
> > Since we are not sure yet if it is the interface's or PolyBoRi's fault
> > I haven't pinged you about this. Malb said that he would poke around
> > this weeke
On Apr 12, 9:44 pm, parisse <[EMAIL PROTECTED]> wrote:
Hi Bernard,
> > * unknown memory leak issues [Did you ever run valgrind? If not I
> > would highly recommend it]
>
> yes
Ok. What about the following then?
[EMAIL PROTECTED] bin]$ /usr/local/valgrind-3.3.0/bin/valgrind --tool=memcheck
On Apr 12, 9:44 pm, parisse <[EMAIL PROTECTED]> wrote:
Hi,
> > Nope, it isn't. After initially switching to GPL V3+ we have decided
> > to remain at GPL V2+ for now. Since we have discussed this quite
> extensively in the past there is no need to rehash this here. I don't
> > need another dr
On 12 Apr., 17:53, mabshoff <[EMAIL PROTECTED]
dortmund.de> wrote:
[...]
> Since we are not sure yet if it is the interface's or PolyBoRi's fault
> I haven't pinged you about this. Malb said that he would poke around
> this weekend since he was working with PolyBoRi anyway. The above is
> new in
>
> Nope, it isn't. After initially switching to GPL V3+ we have decided
> to remain at GPL V2+ for now. Since we have discussed this quite
> extensively in the past there is no need to rehash this here. I don't
> need another drawn out licensing discussion.
>
Well, I don't see why it is a concer
On Apr 12, 5:33 pm, Michael Brickenstein <[EMAIL PROTECTED]> wrote:
> > * unknown number of external library users: I am not aware of anybody
> > external using giac as a library. From my experience with libSingular
> > I am not too optimistic that we will not run into a number of issues
> > her
> * unknown number of external library users: I am not aware of anybody
> external using giac as a library. From my experience with libSingular
> I am not too optimistic that we will not run into a number of issues
> here. Look at #2822: PolyBoRi is designed as a library to be used from
> Python.
On Apr 12, 8:52 am, parisse <[EMAIL PROTECTED]> wrote:
Hi,
> ;; This buffer is for notes you don't want to save, and for Lisp
> evaluation.
> ;; If you want to create a file, visit that file with C-x C-f,
> ;; then enter the text in that file's own buffer.
Emacs people ;)
> > You can disable
> You can disable the X component of GIAC, so that isn't too much of a
> problem. But the fact that it isn
Hi,
On Apr 12, 4:23 am, "Joel B. Mohler" <[EMAIL PROTECTED]> wrote:
> On Friday 11 April 2008 05:39:03 pm Martin Albrecht wrote:
>
> > PS: In any case, if anyone wants to work on an optional GIAC spkg, please
> > speak up!
>
> I tried to build GIAC and it didn't go so well. The seemingly obviou
On Friday 11 April 2008 05:39:03 pm Martin Albrecht wrote:
> PS: In any case, if anyone wants to work on an optional GIAC spkg, please
> speak up!
I tried to build GIAC and it didn't go so well. The seemingly obvious way to
have it pick up sage pre-installed components (--prefix=...) allowed
c
On Wednesday 09 April 2008, parisse wrote:
> On 8 avr, 21:25, "Mike Hansen" <[EMAIL PROTECTED]> wrote:
> > > I have added a benchmark link with Fermat gcd tests, giac seems 5 to
> > > 10 * faster than maxima. I don't have magma, it is most probably
> > >
> > > another factor of 10 * faster.
> >
On 8 avr, 21:25, "Mike Hansen" <[EMAIL PROTECTED]> wrote:
> > I have added a benchmark link with Fermat gcd tests, giac seems 5 to
> > 10 * faster than maxima. I don't have magma, it is most probably
>
> > another factor of 10 * faster.
>
> I think that comparison with Magma is a little optimi
On Tuesday 08 April 2008 03:25:40 pm Mike Hansen wrote:
> > I have added a benchmark link with Fermat gcd tests, giac seems 5 to
> > 10 * faster than maxima. I don't have magma, it is most probably
> >
> > another factor of 10 * faster.
>
> I think that comparison with Magma
> I have added a benchmark link with Fermat gcd tests, giac seems 5 to
> 10 * faster than maxima. I don't have magma, it is most probably
>
> another factor of 10 * faster.
I think that comparison with Magma is a little optimistic, especially
in the modular case.
http://www-fourier.ujf-grenobl
> Just because an algorithm is probabilistic
> doesn't mean it necessarily gives wrong answers. By the way, see
>
> http://www-fourier.ujf-grenoble.fr/~parisse/publi/gcdheu.pdf
>
> which is I think a paper that proves correctness of the algorithm
> we're talking about. (I've not carefully read
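For readers who haven't seen the gcdheu paper: the heuristic reduces polynomial gcd to a single integer gcd. Evaluate both polynomials at a large integer point xi, take the gcd of the two values, reconstruct a candidate polynomial from the balanced base-xi digits of that integer, and verify by trial division, retrying with a larger xi if the divisions fail. A rough univariate sketch in pure Python follows; it is illustrative only, not giac's actual implementation.

```python
from math import gcd
from functools import reduce

def poly_eval(p, x):
    """Evaluate a polynomial given as a coefficient list (lowest degree first)."""
    r = 0
    for c in reversed(p):
        r = r * x + c
    return r

def exact_div(p, d):
    """Quotient of p by d over the integers, or None if the division is not exact."""
    if len(p) < len(d):
        return None
    p = p[:]
    q = [0] * (len(p) - len(d) + 1)
    for i in range(len(q) - 1, -1, -1):
        if p[i + len(d) - 1] % d[-1]:
            return None
        c = p[i + len(d) - 1] // d[-1]
        q[i] = c
        for j, dj in enumerate(d):
            p[i + j] -= c * dj
    return q if not any(p) else None

def primitive(p):
    """Divide out the integer content of a coefficient list."""
    c = reduce(gcd, (abs(x) for x in p), 0)
    return [x // c for x in p] if c else p

def gcdheu(a, b, tries=6):
    """Heuristic gcd of two integer polynomials; None if the heuristic fails."""
    xi = 2 * max(abs(c) for c in a + b) + 29  # evaluation point, well above the coefficients
    for _ in range(tries):
        g = gcd(poly_eval(a, xi), poly_eval(b, xi))
        # reconstruct a candidate polynomial from the balanced base-xi digits of g
        cand = []
        while g:
            d = g % xi
            if d > xi // 2:
                d -= xi
            cand.append(d)
            g = (g - d) // xi
        cand = primitive(cand)
        # verify the candidate really divides both inputs before accepting it
        if cand and exact_div(a, cand) is not None and exact_div(b, cand) is not None:
            return cand
        xi = 7 * xi // 3 + 1  # retry with a larger evaluation point
    return None

# gcd((x+1)(x-2), (x+1)(x+3)) = x + 1, i.e. [1, 1] lowest-degree-first
print(gcdheu([-2, -1, 1], [3, 4, 1]))
```

The heuristic can fail (hence the retries and the None fallback), but whenever the trial divisions succeed the answer is provably a common divisor, which is exactly the point about probabilistic-flavoured algorithms not necessarily giving wrong answers.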
On Apr 1, 8:58 am, root <[EMAIL PROTECTED]> wrote:
> >Michael Abshoff made that comment. He's motivated by wanting
> >to port Sage to a wide range of architectures and keep everything
> >maintainable, since he works incredibly hard on that. He suffers
> >a huge amount trying to deal with buil
On 1 Apr, 05:21, "Mike Hansen" <[EMAIL PROTECTED]> wrote:
> I've posted some benchmarks
> at http://wiki.sagemath.org/MultivariateGCDBenchmarks.
>
> --Mike
I can't do timings for the degree 1000 or 2000 (at least Allan Steel
gives it as degree 2000, whereas your page Mike seems to say it is
deg
On Apr 1, 7:19 am, "William Stein" <[EMAIL PROTECTED]> wrote:
> On Mon, Mar 31, 2008 at 11:21 PM, root <[EMAIL PROTECTED]> wrote:
[Apologies in advance for going somewhat off topic in this thread and
ranting a little too much :)]
Hi Tim,
> > Both Axiom
> > and Maxima have implementations o
Roman, I thoroughly agree with you that the multivariate polynomial gcd and factoring
problem is not going to go away overnight. I'm sure by your comments
that you can guess what we've been doing with FLINT for univariate gcd
and how long even that is taking.
Also, I too get frustrated by some of the simple-min
On Mar 31, 10:55 pm, "William Stein" <[EMAIL PROTECTED]> wrote:
> On Mon, Mar 31, 2008 at 6:48 PM, Roman Pearce <[EMAIL PROTECTED]> wrote:
>
> > You need "Algorithms for Computer Algebra" by Geddes, Czapor, and
> > Labahn:
> > Chapter 5: Chinese Remainder Theorem
> > Chapter 6: Newton's Iterat
Since the Debian distribution of SAGE uses Maxima with GCL, I figured
I'd run the benchmarks Mike posted on my installation. The SAGE times are
comparable to those in Mike's test, while the Maxima tests are faster:
sage: load /home/tabbott/fermat_gcd_1var.py
sage: time a = p1.gcd(p2, algo
On Mon, Mar 31, 2008 at 11:58 PM, root <[EMAIL PROTECTED]> wrote:
> >Michael Abshoff made that comment. He's motivated by wanting
> >to port Sage to a wide range of architectures and keep everything
> >maintainable, since he works incredibly hard on that. He suffers
> >a huge amount trying t
Hi!
I think multivariate gcd is a never-ending topic, discussed very often
in many communities.
Actually, I never tried to implement that and hopefully I never will.
But I *enjoyed* ;-) many discussions about it.
So I think that if you want to succeed, the first task is what
Roman said:
Learni
>Michael Abshoff made that comment. He's motivated by wanting
>to port Sage to a wide range of architectures and keep everything
>maintainable, since he works incredibly hard on that. He suffers
>a huge amount trying to deal with build issues on various platforms
>such as solaris, Linux PPC, et
On Mon, Mar 31, 2008 at 11:21 PM, root <[EMAIL PROTECTED]> wrote:
> William,
>
>
> >By the way, Richard Fateman pointed out to me offlist that
> >Maxima running on top of clisp _might_ be much
> >slower than Maxima on gcl. This could be relevant to
> >our benchmarking.
>
> Not to start an im
William,
>By the way, Richard Fateman pointed out to me offlist that
>Maxima running on top of clisp _might_ be much
>slower than Maxima on gcl. This could be relevant to
>our benchmarking.
Not to start an implementation war but GCL compiles to C which
compiles to machine code whereas clisp is
>
> > Also, to Joel Mohler:
> > You need "Algorithms for Computer Algebra" by Geddes, Czapor, and
> > Labahn:
> > Chapter 5: Chinese Remainder Theorem
> > Chapter 6: Newton's Iteration and Hensel Lifting
> > Chapter 7: Polynomial GCD
> > Chapter 8: Polynomial Factorization
> >
> >
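The modular gcd techniques those chapters describe rest on Chinese remaindering: compute the gcd modulo several small primes, then recombine each coefficient from its residue images. The integer recombination step, as a minimal illustrative sketch (not any particular system's implementation):

```python
from math import prod

def crt(residues, moduli):
    """Combine residues r_i mod m_i (pairwise coprime moduli) into x mod prod(m_i)."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # pow(..., -1, m) is the modular inverse (Python 3.8+)
    return x % M

# x = 2 (mod 3), x = 3 (mod 5), x = 2 (mod 7)  ->  23
print(crt([2, 3, 2], [3, 5, 7]))
```

In a modular gcd this runs once per coefficient, adding primes until the reconstructed value stabilizes below the coefficient bound.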
On Mon, Mar 31, 2008 at 6:48 PM, Roman Pearce <[EMAIL PROTECTED]> wrote:
>
> > The important thing isn't what algorithm is implemented, but that the
> > result is fast(er than Magma).
>
> The important thing is whether users can hope to get an answer at all.
> The old Singular factoring code w
On Mon, Mar 31, 2008 at 9:21 PM, Mike Hansen <[EMAIL PROTECTED]> wrote:
>
> I've posted some benchmarks at
> http://wiki.sagemath.org/MultivariateGCDBenchmarks .
>
Just out of curiosity, is Magma vastly faster
than everything else in every single benchmark? I'm having
some trouble matching up
I've posted some benchmarks at
http://wiki.sagemath.org/MultivariateGCDBenchmarks .
--Mike
On Mon, Mar 31, 2008 at 6:48 PM, Roman Pearce <[EMAIL PROTECTED]> wrote:
>
> > The important thing isn't what algorithm is implemented, but that the
> > result is fast(er than Magma).
>
> The important
> The important thing isn't what algorithm is implemented, but that the
> result is fast(er than Magma).
The important thing is whether users can hope to get an answer at all.
The old Singular factoring code was hopeless and I bet multivariate
gcd
is still hopeless. And what about correctness? The
On Monday 31 March 2008 07:38:48 pm William Stein wrote:
> On Mon, Mar 31, 2008 at 4:27 PM, Mike Hansen <[EMAIL PROTECTED]> wrote:
> > Hi Roman,
> >
> > It seems that for characteristic 0, Maxima is quite a bit faster than
> > Singular, but there's some overhead in talking to Maxima over the
>
On Mon, Mar 31, 2008 at 4:27 PM, Mike Hansen <[EMAIL PROTECTED]> wrote:
>
> Hi Roman,
>
> It seems that for characteristic 0, Maxima is quite a bit faster than
> Singular, but there's some overhead in talking to Maxima over the
> pexpect interface. In any case, it is still _way_ slower than w
Hi Roman,
It seems that for characteristic 0, Maxima is quite a bit faster than
Singular, but there's some overhead in talking to Maxima over the
pexpect interface. In any case, it is still _way_ slower than what we
need. For example, Maxima takes around 10 seconds to do the bivariate
gcd of de