On May 12, 2008, at 11:10 AM, Dag Sverre Seljebotn wrote:
> (Rearranged email to order of urgency.)
>
>> type of the object. Another problem is that it specifically requires
>> one to declare ahead of time what compile-time assumptions can be
>> made, rather than letting the user of the .pxd file specify things
>> ahead of time for explicit optimization.
>
> This "problem" was the exact reason I did this, and is a feature!!
>
> Perhaps there's more I don't understand. Tell me then how you prevent
> something like this:
>
> cdef timer(seconds=0) t = timer(0)
> print t.seconds # prints 0
> sleep(10)
> print t.seconds # prints 0 (!)
>
> The whole point was to leave what one can safely (!) make assumptions
> about up to the class designer (and presumably pxd author). Otherwise
> this just becomes some dangerous shoestring feature, not something you
> give to engineers using NumPy without deep programming knowledge...
>
> Possible assumptions should be part of the published API of the class!
>
> (Is this something we will simply never agree on? I really hope I am
> misunderstanding something, would make all of this much easier.)
No, you have a very good point here that I missed. It should probably
be an attribute of the attributes themselves, e.g.
cdef class A:
    cdef int can_be_set_at_compile_time len  # No, I'm not seriously suggesting that name
    ...

Perhaps there could be keywords designating type parameters too...
(These could then be used in function signatures? Maybe) They're
attributes, they just belong to the type rather than (necessarily) to
an instance.
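As a loose plain-Python analogy (just to illustrate "belongs to the type", not a proposal; the class and attribute names are made up):

```python
class Timer:
    # A class attribute lives on the type and is shared by all
    # instances -- loosely the role a compile-time-fixed attribute
    # would play, except plain Python happily lets anyone rebind
    # it at runtime, which is exactly what we'd want to forbid.
    resolution = 1

    def __init__(self, seconds=0):
        self.seconds = seconds  # ordinary per-instance attribute

t = Timer()
print(Timer.resolution)  # 1, read off the type itself
print(t.resolution)      # 1, instances fall back to the type
print(t.seconds)         # 0, instance state
```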
> (About __assume__:)
>> and also that it requires the use of full control flow to do any
>> reasoning (e.g. there's the variable before assumption, the variable
>> after, and the variable which (depending on branching) may or may not
>> have been certified to have a given property. Then further
>> __assumes__ would be illegal? Or just ones that contradict?) It just
>> gets a lot messier than simply adding the data to the compile-time
>
> You misunderstood me here! I specifically noted that __assume__
> only did
> the checking part, and does *not* magically constitute the assumptions
> themselves (that would be insane :-), probably challenging halting and
> NP-completeness and whatnot).
>
> If we want to up the bets from there, I think __assume__ could
> return a
> dict like this:
>
> cdef __assume__(self, len):
>     ...
>     return { "_len" : len }
>
> in order to provide renames etc. This *does* have the problems you
> mention though, but it is somewhat easier to arbitrarily raise errors
> for complex expressions. But I'm not advocating it this time around.
The body of __assume__ gets executed at compile time? Is it checking
or setting the object's parameters? It's more like an assert, I
suppose. It's still unclear to me what __assume__ really is--it's not
really a cdef, a def, or a special function...
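For my own clarity, here is how I currently picture the "checking, not setting" reading, as a plain-Python sketch (the class and its nd parameter are entirely made up, and obviously the real thing would run at compile time rather than at runtime):

```python
class NDArrayLike:
    def __init__(self, nd):
        self._nd = nd

    def __assume__(self, nd):
        # Checking only: verify the stated assumption against the
        # object's actual state; never set anything on the object.
        if self._nd != nd:
            raise AssertionError("assumption nd=%r does not hold" % (nd,))

a = NDArrayLike(nd=2)
a.__assume__(nd=2)  # assumption holds, passes silently
try:
    a.__assume__(nd=3)
except AssertionError as exc:
    print("rejected:", exc)
```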
>> The mapping of __init__ parameters to type parameters (for use with
>> type inference) could be arbitrarily complicated, and I don't know
>> how to do that without having the compiler actually execute code at
>> compile time.
>
> I don't see why. What I meant is simply:
>
> - Take parameters passed to constructor.
> - Take intersection of the names of these with __assume__ method
> signature.
> - Pass same parameters (set to same expressions) to __assume__.
>
> OK, I suppose if you have non-trivial expressions as parameters this
> fails, so add this rule then:
>
> - However, if the expression of a parameter is not a compile-time
>   value, don't pass it to __assume__ anyway.
>
> If we know enough to attempt type inference, we'll know enough to do
> this I think.
>
> If this is not accepted, I'm leaning against using the () syntax,
> because you'll want to do stuff like (using [] syntax):
>
> x = ndarray[nd=2, dtype=float64](shape=(2,2), dtype=float64)
>
> in a type-inferred environment, to ensure that x has efficient access,
> and using () is very ambiguous in the expression above.
I have a slight preference for the () notation, but I can see
(especially in the above case) how [] is much clearer. Being able to
do type inference based on the __init__ parameters seems desirable,
but one can't even be sure that A() returns an object of type A (if A
implements a Python __new__ method).
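Concretely, in plain Python (and the same escape hatch exists for extension types via tp_new):

```python
class B:
    pass

class A:
    def __new__(cls):
        # __new__ may return anything; the result is not required to
        # be an instance of A, in which case __init__ is not even run.
        return B()

x = A()
print(type(x).__name__)  # B, not A
```

So inferring "x is an A" from "x = A()" is unsound in general; it would only be safe when the compiler knows A has no custom __new__.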
- Robert
_______________________________________________
Cython-dev mailing list
[email protected]
http://codespeak.net/mailman/listinfo/cython-dev