> buffers, then this could be provided automatically. Overloading the =
> operator would work for the one direction, and perhaps something
> similar would be provided to do a "convert self to other type"
> declaration.
Right. This will keep me thinking :-)
Have there been any thoughts about doing something as brute-force as
allowing Python code in pxd files that is simply run at compile time and
returns strings of C code? Or, if not there, as a kind of plugin
architecture for the Cython compiler engine? Like this (apologies for my
poor knowledge of this aspect of Cython):
(Lots of stuff here that doesn't work, just sketching wildly...)
cdef class std_vector_int:
    def __compile_index_assign__(lefthand, indices, righthand):
        return "%s[%s] = %s;" % (as_c(lefthand), as_c(indices), as_c(righthand))
    def __compile_index_lookup__(lefthand, indices):
        return "%s[%s];" % (as_c(lefthand), as_c(indices))
    def __compile_assign__(lefthand, righthand): ...
cdef class numpy.numarray ...:
    def __compile_index_assign__(lefthand, indices, righthand):
        l = indices.split(":")
        if len(l) != int(lefthand.type_params[0]): raise ValueError(...)
        idx = " + ".join([as_c(x) + " * stride[%d]" % d
                          for x, d in zip(l, range(len(l)))])
        return "%s[%s] = %s;" % (as_c(lefthand), idx, as_c(righthand))
With the "macro as cdef" trick from earlier on the mailing list, this
certainly doesn't seem *too* far-fetched to me...
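Just to make the sketch concrete: here is a runnable plain-Python version of the two hooks above. Everything in it is invented for illustration (the `__compile_*__` protocol, `as_c`, the class names); nothing like this exists in Cython today, and `as_c` is a trivial stand-in for whatever the compiler engine would provide.

```python
def as_c(expr):
    # Stand-in: pretend every expression object knows its own C spelling.
    return str(expr)

class StdVectorInt:
    """Pretend pxd-level type that emits its own indexing code."""
    def __compile_index_assign__(self, lefthand, indices, righthand):
        return "%s[%s] = %s;" % (as_c(lefthand), as_c(indices), as_c(righthand))

class StridedArray:
    """Pretend ndim-parametrized array type emitting strided index code."""
    def __init__(self, ndim):
        self.ndim = ndim

    def __compile_index_assign__(self, lefthand, indices, righthand):
        l = indices.split(":")
        if len(l) != self.ndim:
            raise ValueError("expected %d indices" % self.ndim)
        # Flatten the multi-dimensional index using per-axis strides.
        idx = " + ".join(as_c(x) + " * stride[%d]" % d
                         for d, x in enumerate(l))
        return "%s[%s] = %s;" % (as_c(lefthand), idx, as_c(righthand))

print(StdVectorInt().__compile_index_assign__("v", "i", "x"))
# -> v[i] = x;
print(StridedArray(2).__compile_index_assign__("arr", "i:j", "x"))
# -> arr[i * stride[0] + j * stride[1]] = x;
```

The point is only that the type object, not the compiler core, decides what C text an indexing expression turns into.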
>
> These data types can be declared, right now, by doing:
>
> cdef extern from "numpy.h":
> ctypedef int uint8
Ah, right :-)
>>
>> @Compile
>> def myfunc(a: uint8, b: array(2, uint8), c: int = 10):
>> d: ptr(int) = &a
>> print a, b, c, d
>>
>
> One thing I really like about the cdef keyword is that it makes it
> very clear which parts are special Cython commands and which are not.
> I also find this harder to read (and parse). (BTW, what's the @Compile
> supposed to mean?) However, if people want to move in this direction I
> wouldn't protest against it as an alternative.
The idea was to focus on making Cython usable for numerical computation
-- and for that use (as opposed to writing wrappers for C code) you do
NOT want people to feel that it is not part of Python; rather, they
should treat it as "typed Python that will run fast". The use case is
simply somebody who uses NumPy for everything, but comes across this one
operation that a) isn't in the libraries, and b) can't be vectorized but
must be implemented as for-loops. If they can simply drop in a @Compile
decorator, add some types, and get on with their work, the pure
convenience of it might make Cython a regular tool for this kind of
thing...
@Compile simply uses the standard Python function decorator syntax. (It
might be called @Cython instead, actually.) The idea is that while
Python syntax allows annotating function arguments, it doesn't say
anything about what the annotations are for (documentation? something
else?), so you would often have a function decorator that acts on them;
here it serves to document that the argument annotations are Cython
types.
Long term, @Compile might be used to replace the function with calls to
compiled code in Python-land, and to extract functions to compile from a
py-file in Cython-land, so that one can mix Cython and Python in one
source file. It can also take the options that Cython needs (i.e.
@Compile(native=True, except=1) instead of cdef ... except 1).
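A hypothetical sketch of what @Compile could do today, using ordinary
Python function annotations to carry the types. No compilation takes
place; the decorator just records what a compiler pass would need. (All
attribute names here are made up; also, "except" is a Python keyword, so
an option spelled except=1 would in practice need another name, e.g.
except_=1.)

```python
def Compile(**options):
    def decorator(func):
        func._cython_options = options              # e.g. native=True
        func._cython_types = dict(func.__annotations__)
        return func   # long term: return a wrapper calling compiled code
    return decorator

@Compile(native=True)
def myfunc(a: "uint8", c: "int" = 10):
    # Still runs as plain Python until a compiler picks it up.
    return a + c

print(myfunc(1))             # -> 11
print(myfunc._cython_types)  # -> {'a': 'uint8', 'c': 'int'}
```

Until compilation actually happens, the decorated function keeps working
as plain Python, which is exactly the "drop in a decorator and get on
with your work" property described above.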
Dag Sverre
_______________________________________________
Cython-dev mailing list
[email protected]
http://codespeak.net/mailman/listinfo/cython-dev