On Dec 5, 2006, at 12:55 AM, Nick Alexander wrote:
Hello everyone,
I've posted to the list a few times, but William asked me to post a
short introduction. My name is Nick Alexander; I'm a graduate student
at University of California, Irvine. Before UCI I was at University of
On Dec 5, 2006, at 12:29 PM, William Stein wrote:
You know, honestly, the problem of how to do Calculus with a computer algebra system is not exactly a new one. It's been to some degree completely and totally solved by Mathematica.
Maybe the real discussion we should be having is
On Dec 3, 2006, at 3:25 PM, Joel B. Mohler wrote:
My comments above lead me to believe that we really need to step outside the polynomial ring box, though.
I tentatively agree with this assessment. Polynomial rings don't seem
to be the right thing here.
David
On Dec 1, 2006, at 12:29 AM, William Stein wrote:
It's really incredible that MAGMA goes faster than python ints here.
From memory, at sage days 2, our Integer stuff was still a factor of
7-10 away from python ints, at least for addition.
Python ints:
(1) Have a custom optimized
On Nov 30, 2006, at 1:35 AM, William Stein wrote:
Note -- there's a new potentially controversial change!
Now by default the left control pane (with the worksheet list, etc.) is *off*. To see it, click on Control Bar in the upper left of the screen. What do you think?
Fantastic. Thanks
On Nov 30, 2006, at 8:50 PM, William Stein wrote:
IntegerModRing now has a method precompute_table() which will create a cached table of all ring elements which cuts down on the overhead a lot. Perhaps this should be
I've applied your patch and it does indeed speed things up. Thanks.
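The caching trick being described can be sketched in plain Python. TinyModRing and TinyModElement are made-up names for illustration, not SAGE's actual classes:

```python
# Sketch of the precompute_table() idea: build one object per residue class
# of Z/nZ up front, so arithmetic hands back cached objects instead of
# allocating a fresh wrapper on every operation.

class TinyModElement:
    def __init__(self, ring, value):
        self.ring = ring
        self.value = value

    def __add__(self, other):
        # Delegates to the ring, which returns a cached element if
        # precompute_table() has been called.
        return self.ring(self.value + other.value)

class TinyModRing:
    def __init__(self, n):
        self.n = n
        self._table = None

    def precompute_table(self):
        # Build every element once, up front.
        self._table = [TinyModElement(self, v) for v in range(self.n)]

    def __call__(self, v):
        v %= self.n
        if self._table is not None:
            return self._table[v]       # fast path: no allocation
        return TinyModElement(self, v)  # slow path: fresh object

R = TinyModRing(7)
R.precompute_table()
assert (R(3) + R(5)) is R(1)  # identical cached object, not merely equal
```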
On Nov 30, 2006, at 10:34 PM, William Stein wrote:
It's really incredible that MAGMA goes faster than python ints here.
From memory, at sage days 2, our Integer stuff was still a factor of
7-10 away from python ints, at least for addition.
I don't know what benchmark you were doing
On Nov 16, 2006, at 1:53 AM, Martin Albrecht wrote:
But I can see why it would be faster, given all the crap that sits
between us and those 16 bits.
I don't necessarily have a problem with what you're doing, but in the
long run, we're better off just bloody well implementing the fields
On Nov 16, 2006, at 7:45 AM, Martin Albrecht wrote:
Ah. Are you saying what you've done is a bit like the situation where
Python caches int objects?
Yes, but I can cache all elements, whereas they have to maintain a FIFO or
something similar. We might want some generic FIFO / dict hybrid for
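A generic FIFO / dict hybrid of the kind mentioned could look like the following sketch; the class name, size bound, and pure-FIFO eviction policy are assumptions for illustration, not an existing SAGE type:

```python
from collections import OrderedDict

# A dict bounded at max_size entries: once full, inserting a new key
# evicts the oldest insertion to make room.

class FIFODict(OrderedDict):
    def __init__(self, max_size):
        super().__init__()
        self.max_size = max_size

    def __setitem__(self, key, value):
        # Evict the oldest entry only when inserting a genuinely new key.
        if key not in self and len(self) >= self.max_size:
            self.popitem(last=False)
        super().__setitem__(key, value)

cache = FIFODict(max_size=2)
cache["a"], cache["b"] = 1, 2
cache["c"] = 3                     # evicts "a", the oldest insertion
assert list(cache) == ["b", "c"]
```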
On Nov 15, 2006, at 5:00 PM, Martin Albrecht wrote:
Hi there,
I've implemented a very naive cache for finite extension field elements in the Givaro wrapper. Basically, all elements are created when the field is created and references are returned by the arithmetic methods: Thus, no
William (Stein),
Thanks for doing these. At some point I'm going to do all of these
benchmarks *properly*, and this stuff will be a useful reference to
work from.
In particular I'm going to work within each system as far as possible
(not through a SAGE wrapper) so we can get as fair a sense
On Nov 3, 2006, at 1:57 PM, William Stein wrote:
Also, since I'm making so many changes to Pyrex, to avoid confusion (or making the Pyrex author angry) I'm going to call the SAGE branch of Pyrex by the name Syrex, which means SAGE Pyrex.
Surely you meant to say Spyrex.
Also, Spyrexx,
On Oct 30, 2006, at 11:51 PM, William Stein wrote:
Generators is badly named. It should be SageObjectWithGenerators, since inheritance is an "is a" relationship, and everything that inherits from a class should satisfy an "is a" relationship with it. E.g., a polynomial ring is an object
On Oct 29, 2006, at 6:36 PM, William Stein wrote:
additive_order, multiplicative_order, and is_zero all make perfect
sense for p-adics, so i don't want to delete them.
I'm not sure I agree with this. When I ask for e.g. the
multiplicative order of an element of a p-adic field, the best
On Oct 28, 2006, at 2:09 PM, William Stein wrote:
On Sat, 28 Oct 2006 06:47:59 -0500, David Harvey
[EMAIL PROTECTED] wrote:
On Oct 28, 2006, at 3:18 AM, William Stein wrote:
Moreover, if you use R['...'] notation anywhere in library code it doesn't affect the interpreter's variables
On Oct 28, 2006, at 3:22 PM, David Harvey wrote:
I can get *really* close to modifying locals:
Oh no I can't. I see what's going on. I'm not even close.
David
--~--~-~--~~~---~--~~
To post to this group, send email to sage-devel@googlegroups.com
On Oct 28, 2006, at 2:50 PM, William Stein wrote:
You might think you could do:

def func1():
    T = QQ[x]
    # do some calculations with T
    func2()
    # do some calculations with T

def func2():
    S = ZZ[x]
    S.inject_variables(locals())
    print x^3 + 5
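The reason this cannot work inside a function body: in CPython, locals() called in a function returns a snapshot, so writing into it never creates a real local variable. A minimal demonstration (modern Python, illustrative names):

```python
# locals() inside a function is a throwaway snapshot dict in CPython, so
# injecting names into it -- as inject_variables(locals()) would try to
# do above -- never creates an actual local variable.

def attempt():
    locals()["injected"] = 5   # writes to a throwaway snapshot
    try:
        return injected        # NameError: no such local (or global) exists
    except NameError:
        return "no such variable"

assert attempt() == "no such variable"
```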
On Oct 28, 2006, at 4:39 PM, Fernando Perez wrote:
http://bytecodehacks.sourceforge.net/bch-docs/bch/bch.html
Omigosh that's insane. My favourite:
http://bytecodehacks.sourceforge.net/bch-docs/bch/module-bytecodehacks.assemble.html
Why use Pyrex to write C in Python, when you could be
On Oct 28, 2006, at 5:47 PM, David Harvey wrote:
(5)
I am concerned that there is a speed bump when calling _coerce_ to
determine *whether a coercion is possible*. In many cases it is
probably cheap to determine whether a coercion is possible, and
relatively expensive to actually perform
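One way to remove that speed bump is to separate the cheap possibility test from the expensive conversion. The sketch below uses hypothetical toy classes, not SAGE's actual coercion interface:

```python
# Hypothetical toy classes, purely to illustrate splitting the cheap
# question "is a coercion possible?" from the expensive act of building
# the converted element.

class ToyParent:
    coerces_from = ()              # names of accepted parents: cheap lookup

    def can_coerce_from(self, source):
        # Cheap test -- no element is ever constructed here.
        return type(source).__name__ in self.coerces_from

    def coerce(self, value, source):
        # The (relatively) expensive part runs only after the cheap test.
        if not self.can_coerce_from(source):
            raise TypeError("no canonical coercion")
        return self.convert(value)

class ToyZZ(ToyParent):
    def convert(self, value):
        return int(value)

class ToyQQ(ToyParent):
    coerces_from = ("ToyZZ",)

    def convert(self, value):
        return (value, 1)          # represent the integer n as n/1

ZZ, QQ = ToyZZ(), ToyQQ()
assert QQ.can_coerce_from(ZZ)      # answered without building anything
assert QQ.coerce(3, ZZ) == (3, 1)
assert not ZZ.can_coerce_from(QQ)
```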
On Oct 28, 2006, at 8:23 PM, William Stein wrote:
On Sat, 28 Oct 2006 16:39:10 -0700, David Harvey
[EMAIL PROTECTED] wrote:
Another issue that slightly complicates this is base rings. When I do
x * y, I don't always want to coerce into the same parent; sometimes
I want to coerce x
On Oct 28, 2006, at 8:23 PM, William Stein wrote:
What are the rules for algebras going to be?
Suppose R and S are commutative rings (for simplicity), M and N are R-modules, and K is an S-module.
Choose x in M and y in N. Then coerce should work exactly as before.
All the _coerce_
On Oct 28, 2006, at 9:28 PM, Martin Albrecht wrote:
To elaborate, this is how the suggestion translates into Python/Pyrex code. Please remember the idea is to avoid Python calls (at all costs).
Pyrex:
---
cdef class RingElement:
    def
Following our discussion on IRC, I tried to put together my idea in
detail, with as many scenarios as I could think of. I think it works,
but maybe someone can prove me wrong.
http://sage.math.washington.edu:8100/29
Please make a copy if you want to edit it, thanks.
David
I just had eight lisp.run processes running on sage.math. They were
owned by me, so I killed them. I'm not sure how they got there. The
only weird thing I was doing was running a broken version of SAGE
where I was accidentally not incrementing a reference count
somewhere, and I had to
Is there any particular reason that the cdef functions in arith.pyx
are wrapped in classes? Why not just make them globals?
David
On Oct 25, 2006, at 5:02 AM, William Stein wrote:
* foo?? now gives the source code of foo, even if foo is defined in Pyrex.
[...]
The source code brick wall you get when you hit Pyrex code in the interpreter is gone. I did
On Oct 24, 2006, at 12:34 AM, Bill Hart wrote:
Now MAGMA uses SS/FFT down to degree 16 at least, for 1000 bits.
But now they really screwed up their algorithm, because I can use MAGMA to multiply degree 2400 polynomials considerably faster than they do it themselves.
!!! :-)
David
On Oct 24, 2006, at 7:25 AM, Bill Hart wrote:
David, did your comparative GMP/Magma timings take into account this
MAGMA binary issue, which I presume William told you about? I.e. which
binary of MAGMA did you measure against?
I'm not sure. I think it must have been the V12, 64-bit one. I
On Oct 24, 2006, at 12:34 AM, Bill Hart wrote:
Now MAGMA uses SS/FFT down to degree 16 at least, for 1000 bits.
But now they really screwed up their algorithm, because I can use MAGMA to multiply degree 2400 polynomials considerably faster than they do it themselves.
I think part of
On Oct 23, 2006, at 12:16 AM, William Stein wrote:
Perhaps there is a fast way to tell whether a class is a Python class or a Pyrex class (say in the base class __add__ method), and always call _add_sibling_cdef if it's a Pyrex class and _add_sibling if it's a Python class. There
On Oct 23, 2006, at 10:43 AM, Bill Hart wrote:
At one stage MAGMA were boasting that their integer multiplication was
a lot faster than GMP, but I suspect GMP has caught them up now, and I
think it only made a difference to numbers of a million bits or more.
MAGMA now seem to claim that
On Oct 22, 2006, at 6:32 PM, Bill Hart wrote:
I am now absolutely certain MAGMA uses the FFT for multiplying
polynomials over ZZ right down to degree 16 (when the bit length is
1000). This is a **much** lower cutoff than NTL uses, which is
indicative of the fact that MAGMA's FFT is way
On Fri, 20 Oct 2006, Bill Hart wrote:
I can't get MAGMA to go that fast.
I timed it doing 1000 multiplies of degree 255 polynomials with
coefficients of 1000 bits. It took 15 seconds or thereabouts.
I'm getting 3.7 seconds with version V2.12-20 and 6.9 seconds with
version V2.13-5. I've
On Oct 20, 2006, at 9:43 AM, Bill Hart wrote:
Anyhow, I'm now wondering whether MAGMA just uses Toom-3 instead of
Karatsuba by the time you get to degree 250 or so. I'll implement a
Toom-3 algorithm once I get my Karatsuba implementation sorted out, and
we'll see.
I believe GMP has mpn-level
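The Karatsuba-versus-classical cutoff being discussed can be sketched for dense polynomials given as coefficient lists (lowest degree first). Everything here, including the CUTOFF value, is illustrative and has nothing to do with GMP's or MAGMA's tuned implementations:

```python
# Karatsuba with a schoolbook base case below a small cutoff.

CUTOFF = 8

def add(f, g):
    return [(f[i] if i < len(f) else 0) + (g[i] if i < len(g) else 0)
            for i in range(max(len(f), len(g)))]

def sub(f, g):
    return [(f[i] if i < len(f) else 0) - (g[i] if i < len(g) else 0)
            for i in range(max(len(f), len(g)))]

def mul_classical(f, g):
    # Schoolbook O(n^2) multiplication.
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

def mul_karatsuba(f, g):
    # Three half-size products instead of four; the result may carry
    # trailing zero padding from the internal zero-extension.
    n = max(len(f), len(g))
    if n <= CUTOFF:
        return mul_classical(f, g)
    f = f + [0] * (n - len(f))
    g = g + [0] * (n - len(g))
    m = n // 2
    f0, f1 = f[:m], f[m:]
    g0, g1 = g[:m], g[m:]
    low = mul_karatsuba(f0, g0)
    high = mul_karatsuba(f1, g1)
    mid = sub(sub(mul_karatsuba(add(f0, f1), add(g0, g1)), low), high)
    out = [0] * (2 * n - 1)
    for i, c in enumerate(low):
        out[i] += c
    for i, c in enumerate(mid):
        out[i + m] += c
    for i, c in enumerate(high):
        out[i + 2 * m] += c
    return out

f, g = list(range(1, 13)), list(range(2, 12))
expected = mul_classical(f, g)
got = mul_karatsuba(f, g)
assert got[:len(expected)] == expected and not any(got[len(expected):])
```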
Hi guys,
Something about the canonical coercion framework is bothering me a bit.
There's some weird asymmetry which I don't think should be there.
Suppose we have two hierarchies of finite field objects, say with
different underlying representations. Suppose I do something like:
sage:
On Oct 18, 2006, at 9:57 AM, David Joyner wrote:
sage: F1small = FiniteField1(9)
sage: F1big = FiniteField1(27)
sage: F2small = FiniteField2(9)
sage: F2big = FiniteField2(27)
I don't understand the notation. If FiniteField1 is always defined
using the Conway polynomials, how would
On Oct 17, 2006, at 2:18 PM, Bill Hart wrote:
The following is completely just for fun, and not meant to be taken
seriously:
Understood.
Just to make sure the above was accurate, I computed the factors of
p^100 for random primes p below 16. It was only 10% slower, i.e. it
could compute
On Oct 17, 2006, at 6:40 PM, Bill Hart wrote:
I will try implementing some polynomial multiplication routines over
the next week and see just how bad NTL's routines are. I don't expect
to beat NTL straight away, since there are so many possible algorithms
to use, and so many variants, that
On Oct 16, 2006, at 11:36 AM, Bill Hart wrote:
Clearly MAGMA is using a different algorithm. NTL is using SSMul
(Schönhage-Strassen) in this range, at least on my machine.
It is possibly using a different algorithm, but I'd be more confident
of that if I saw the MAGMA code. One should of
On Oct 14, 2006, at 11:15 PM, Bill Hart wrote:
I decided to see just how much of an overhead there
is using NTL as opposed to something written
specifically *for* GMP.
So over the weekend I wrote my own library (some of it
from code I already had lying around) of a selection
of functions
On Oct 13, 2006, at 6:43 PM, Martin Albrecht wrote:
Thoughts?
(1) I assume you're using omalloc as a drop-in replacement for
malloc, i.e. just substituting malloc/free/realloc or something like
that. Does omalloc have a more sophisticated interface though?
(2) Have you run the timings
On Oct 13, 2006, at 12:20 PM, William Stein wrote:
Thinking about this, one *major* worry I have if we define == to be
canonical isomorphism is that any object with automorphisms is not
even == to itself!
I don't think this is true. There's always the identity map, which is
canonical.
Hi guys,
I promised I would take a few days off from SAGE when I got home, but
it seems I have failed for the moment.
I have created a new page on the wiki:
http://sage.math.washington.edu:9001/DevelopersRoom
The idea is to have a summary of current development activity in
various areas