(Note: Perhaps we should just postpone this until after your exam and 
then have an IRC session about it? The communication overhead seems 
bigger here than in other matters, perhaps because of more complexity...)

> No, you have a very good point here that I missed. It should probably  
> be an attribute of the attributes themselves, e.g.
> 
> cdef class A:
>      cdef int can_be_set_at_compile_time len # No, I'm not seriously  
> suggesting that name
>      ...
> 
> perhaps there could be keywords designating type parameters too...  
> (These could then be used in function signatures? Maybe) They're  
> attributes, they just belong to the type rather than (necessarily) to  
> an instance.

Yes, this seems ok (for clarity, note this as an alternative proposal to 
__assume__).

>> in order to provide renames etc. This *does* have the problems you
>> mention though, but it is somewhat easier to arbitrarily raise errors
>> for complex expressions. But I'm not advocating it this time around.
> 
> The body of __assume__ gets executed at compile time? Is it checking  
> or setting the object's parameters? It's more like assert I guess. I  
> guess it's unclear what __assume__ really is--it's not really a cdef,  
> def, or special function...

In my original proposal for __assume__ (simply forget about the dict 
return), it gets executed at run-time. That is, you could just compile 
it in a different pyx and only declare it in a pxd if you want to. As 
long as the signature is available at compile-time you are fine (so it 
would be a cdef method, predeclared or implemented inline in the pxd file).

Reasons:

  * It provides a place to specify an argument order, which I'd prefer 
to have available (i.e. ndarray(2, float64) rather than 
ndarray(ndim=2, dtype=float64)). What we have here is essentially a 
function call, and it seems nice to be able to use a function signature 
to declare it.
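As a plain-Python sketch of that point: a declared signature gives positional arguments a well-defined binding order, which is exactly what lets `ndarray(2, float64)` mean the same as the keyword form (the `assume` function and its parameter names are my own illustration, not proposed syntax):

```python
import inspect

# Hypothetical assumption signature: declares that the first positional
# parameter is ndim and the second is dtype.
def assume(ndim, dtype):
    return {"ndim": ndim, "dtype": dtype}

# A positional call binds by declared order, like ndarray(2, float64)...
bound = inspect.signature(assume).bind(2, "float64")
print(dict(bound.arguments))  # {'ndim': 2, 'dtype': 'float64'}

# ...and the keyword form is interchangeable, like ndarray(ndim=2, dtype=float64)
assert assume(dtype="float64", ndim=2) == assume(2, "float64")
```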

  * It provides a place to specify "arguments which are not fields", i.e.

arr.pxd:

cdef class MyArray:
     # no dtype declared!
     cdef __assume__(self, dtype)

client.pyx:

cdef MyArray(dtype=int) arr = ...
cdef arr.dtype x = arr[23]

  But a declarative syntax (like you suggest) can be used for this as well.

  * It gives a place to also check arbitrarily complex constraints on 
the parameters. For instance, if "use_foo=False" is specified as one 
assumption, then also assuming "foo=3" can raise an exception.
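A minimal plain-Python sketch of such a cross-parameter check (the class, the AssumptionError name, and the parameter names are all my own inventions):

```python
class AssumptionError(Exception):
    """Raised when a requested assumption is inconsistent or does not hold."""

class Obj:
    def __init__(self):
        self.use_foo = False

    def __assume__(self, use_foo=None, foo=None):
        # Arbitrary cross-parameter check: assuming foo=3 together with
        # use_foo=False is inconsistent, so refuse it at run-time.
        if use_foo is False and foo is not None:
            raise AssumptionError("foo is meaningless when use_foo=False")
        if use_foo is not None and self.use_foo != use_foo:
            raise AssumptionError("use_foo assumption does not hold")

obj = Obj()
obj.__assume__(use_foo=False)  # consistent: passes silently
try:
    obj.__assume__(use_foo=False, foo=3)  # inconsistent pair
except AssumptionError as e:
    print("rejected:", e)
```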

(There's a weakness here: errors from inconsistent assumptions would 
surface as run-time errors, even though they could in theory be detected 
at compile-time. However, it would still "work" -- that run-time error 
prevents the inconsistently generated code from being run, even if it 
*is* generated.)

  * Checking the assumptions for an object could be done automatically, 
but requiring that it be done manually has some advantages:

  a) The class coder is made very responsible and conscious about not 
changing the fields after an assumption is made. This kind of unenforced 
"run-time contract" feels very Pythonic.

  b) Having an assumption match might not always mean "==". Consider C 
booleans, for instance, where "attr=True" should map to checking whether 
"self.attr != 0". One could argue that one should instead create a 
Python property that converts attr to a Python boolean object, so this 
point might not hold; but having manual run-time code just seems more 
flexible... Still, this aspect can be dropped from __assume__; it can 
still play the other roles.
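A sketch of point (b) in plain Python, where matching means truthiness rather than "==" (the Flagged class and AssumptionError are hypothetical names):

```python
class AssumptionError(Exception):
    pass

class Flagged:
    def __init__(self, attr):
        self.attr = attr  # C-style flag: any nonzero int counts as true

    def __assume__(self, attr):
        # "attr=True" should match any nonzero value; a plain "==" check
        # would wrongly reject self.attr == 2.
        if bool(self.attr) != bool(attr):
            raise AssumptionError("attr assumption does not hold")

Flagged(2).__assume__(True)  # passes: 2 != 0, so "attr=True" holds
try:
    Flagged(0).__assume__(True)
except AssumptionError:
    print("attr=True rejected for attr == 0")
```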

  * It makes it possible to have side effects on assumptions. One 
possibility is that in special situations the object might be able to 
change itself to accommodate the assumption (though I cannot think of 
examples where this would make sense; it is much better to be explicit 
in such cases). A much better use case is that an object might want to 
freeze itself, provide copy-on-write semantics, and so on.

In fact, perhaps __assume__ could simply return what should be assigned 
to the "assumed" variable, so that it can return a "frozen proxy" in 
special cases?
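A plain-Python sketch of that idea: __assume__ returns a frozen proxy, and that return value is what gets bound to the "assumed" variable (all names here are hypothetical):

```python
class FrozenProxy:
    """Read-only view returned by __assume__; attribute writes are blocked."""
    def __init__(self, target):
        object.__setattr__(self, "_target", target)

    def __getattr__(self, name):
        # Reads fall through to the wrapped object.
        return getattr(object.__getattribute__(self, "_target"), name)

    def __setattr__(self, name, value):
        raise AttributeError("object is frozen under an assumption")

class Arr:
    def __init__(self, length):
        self.len = length

    def __assume__(self, length):
        if self.len != length:
            raise ValueError("len assumption does not hold")
        return FrozenProxy(self)  # this is what gets assigned

o = Arr(3)
x = o.__assume__(3)  # stands in for: cdef Arr(len=3) x = o
print(x.len)         # 3, read through the proxy
try:
    x.len = 4
except AttributeError:
    print("write blocked while the assumption is active")
```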

Example (requires that an __unassume__ is added as well):

arr.pxd:

cdef class Arr:
     cdef int len
     cdef int assumption_refcount
     cdef __assume__(self, int len)
     cdef __unassume__(self, int len)
     cdef append(self, int value)

arr.pyx:

cdef class Arr:
     cdef int len
     cdef int assumption_refcount
     cdef __assume__(self, int len):
         if self.len != len: raise AssumptionError(...)
         self.assumption_refcount += 1

     # same arguments passed for convenience
     cdef __unassume__(self, int len):
         self.assumption_refcount -= 1

     cdef append(self, int value):
         if self.assumption_refcount > 0: raise RuntimeError(...)

client.pyx:

cimport arr

o = get_arr()
print o.assumption_refcount # => 0
cdef Arr(len=3) x = o # calls __assume__ on new x
print o.assumption_refcount # => 1
try:
    o.append(3) # => exception
except RuntimeError:
    pass
x = some_other_object
# ^^^ calls __unassume__ on old x and __assume__ on new x
print o.assumption_refcount # => 0
o.append(3) # ok
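Since the syntax above is hypothetical Cython, here is the same refcount protocol as runnable plain Python; the explicit dunder calls stand in for what the compiler would insert at the typed assignments:

```python
class Arr:
    def __init__(self, length):
        self.len = length
        self.assumption_refcount = 0

    def __assume__(self, length):
        if self.len != length:
            raise ValueError("len assumption does not hold")
        self.assumption_refcount += 1

    def __unassume__(self, length):
        self.assumption_refcount -= 1

    def append(self, value):
        if self.assumption_refcount > 0:
            raise RuntimeError("cannot mutate while an assumption is active")

o = Arr(3)
print(o.assumption_refcount)  # => 0
o.__assume__(3)               # stands in for: cdef Arr(len=3) x = o
print(o.assumption_refcount)  # => 1
try:
    o.append(3)               # => exception
except RuntimeError:
    pass
o.__unassume__(3)             # stands in for rebinding x elsewhere
print(o.assumption_refcount)  # => 0
o.append(3)                   # ok again
```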

-- 
Dag Sverre
_______________________________________________
Cython-dev mailing list
[email protected]
http://codespeak.net/mailman/listinfo/cython-dev
