On Sat, Apr 30, 2011 at 1:27 AM, Simon King <[email protected]> wrote:
> Hi Nicolas,
>
> On 29 Apr., 23:06, "Nicolas M. Thiery" <[email protected]>
> wrote:
>> Can you give it a shot with GF(2), so as to hit one of the smallest
>> possible sage Element?
>
> Sage-4.7.alpha5 with patches:
>
> sage: K = GF(2)
> sage: get_memory_usage()
> 842.01171875
> sage: %time L = [K(i) for i in xrange(10^7)]
> CPU times: user 25.85 s, sys: 0.05 s, total: 25.91 s
> Wall time: 25.91 s
> sage: get_memory_usage()
> 920.51171875
> sage: 920.51171875 - 842.01171875
> 78.5000000000000
>
> Sage-4.7.alpha5 without patches:
>
> sage: K = GF(2)
> sage: get_memory_usage()
> 839.96875
> sage: %time L = [K(i) for i in xrange(10^7)]
> CPU times: user 24.71 s, sys: 0.04 s, total: 24.75 s
> Wall time: 24.75 s
> sage: get_memory_usage()
> 918.4609375
> sage: 918.4609375 - 839.96875
> 78.4921875000000
>
> So, the additional memory usage seems affordable to me.
Elements of small finite fields are cached, so this test doesn't
really measure per-element memory usage:
sage: GF(2)(1) is GF(2)(-1)
True
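As a consequence, a list of 10^7 GF(2) elements is really just 10^7
pointers to two cached objects; a check along the following lines
(expected output inferred from the caching above, not copied from an
actual run) would make that visible, and 10^7 * 8 bytes is already
~76 MB, which accounts for most of the ~78 MB measured:

sage: K = GF(2)
sage: L = [K(i) for i in xrange(10^7)]
sage: len(set(id(x) for x in L))
2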
> But I guess I
> should try to find out why the element creation became slower for
> GF(2), while it became faster for GF(101), and again slower for
> GF(next_prime(10^6)) (see example on the ticket).
Noise?
I think the increased memory usage is a bigger issue than performance
(it can impact the latter too, but in ways that are hard to measure).
I'm less worried about this one field than about the precedent it
sets, and I'm glad you brought this up (even if it just sat starred in
my inbox for a while). Perhaps for infrequently accessed fields a
level of indirection would be worthwhile: a single lazily-initialized
field, either an actual Python dictionary or some cdef object with
dedicated slots (plus a dict as well?).
In fact, some objects (e.g. matrices) already have a "cache" dict;
this could be done at a higher level.
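For concreteness, here is a minimal pure-Python sketch of that
indirection (hypothetical names, not existing Sage API); the element
carries a single slot that stays None until something is actually
cached, so elements that never cache anything pay only for one extra
pointer:

class LazilyCachedElement(object):
    __slots__ = ('_value', '_cache')

    def __init__(self, value):
        self._value = value
        self._cache = None              # allocated only on first use

    def cached_call(self, name, compute):
        # look up (or compute and store) a value under the given key
        if self._cache is None:
            self._cache = {}
        if name not in self._cache:
            self._cache[name] = compute()
        return self._cache[name]

A cdef class with a dedicated slot (and optionally a dict hanging off
it) would be the compiled analogue of the same idea.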
Creating a new element class means existing class hierarchies
(e.g. RingElement) will have to be changed to take advantage of this,
but that may be what's required.
Also, blindly marking all self-only methods as cached is a huge change
in the time-memory performance tradeoff. In terms of memory, imagine I
did
def all_possible_orders(element_list):
    return set(g.order() for g in element_list)
I have now doubled my memory usage if g was small. Some things may be
better to re-compute rather than store (especially if they're biggish
and cheap to re-compute). If you're setting up a generic
infrastructure, perhaps the default could be enabled/disabled with a
flag (and a with-statement context).
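A rough sketch of what that flag plus with-statement could look like
(hypothetical helper, not existing Sage functionality):

from contextlib import contextmanager

_caching_enabled = True   # consulted by the (hypothetical) caching decorator

@contextmanager
def caching_disabled():
    # temporarily switch generic caching off, restoring the old state
    global _caching_enabled
    old = _caching_enabled
    _caching_enabled = False
    try:
        yield
    finally:
        _caching_enabled = old

so that one could write, e.g.,

with caching_disabled():
    orders = all_possible_orders(element_list)   # nothing gets stored on g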
- Robert