On Dec 15, 2008, at 17:57 , Dag Sverre Seljebotn wrote:

> Hi there!

Hi, yes :)

> Yes, this is to be expected on 64-bit systems; I suppose Robert's
> slides are in error.
>
> Basically, the "np.int" type object is defined (by NumPy) to be long
> (probably on merit of being the most convenient int type; read
> "np.int" as saying "Integer", not a C int).

I see... But, as I mentioned, x.dtype is int32 (or seems to be, when I
print out repr(x))... I also believe I tried explicitly setting the
type to int32 (although I am now on a different machine, without the
proper software installed, so I can't check right now :)

> To be safe with these things, you should use the compile-time types
> defined in the numpy cimport:
>
> cdef numpy.ndarray[numpy.int_t, ndim=1] arr = x
>
> numpy.int_t is typedef-ed to always be whatever np.int refers to.
>
> If you really want a C int, use e.g. numpy.int32_t.

Right. That might be an issue at times, I guess.

I'll have a look to see what happens if I use int32. As I said, I  
thought I had tried that (and failed), but I wouldn't rule out a bit  
of PEBKAC there ;-)
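For reference, here is a quick check from plain Python (a minimal sketch, assuming a reasonably recent NumPy; not something I ran on the original machine) that shows why a buffer declared with one int type can fail to match on 64-bit:

```python
import numpy as np

# np.int_ is NumPy's default integer scalar type; it has historically
# mapped to C long, so its width is platform-dependent (4 bytes on
# 32-bit systems and on Windows, 8 bytes on most 64-bit Unix systems).
print(np.dtype(np.int_).itemsize)

# np.int32 / np.int64 are fixed-width regardless of platform:
print(np.dtype(np.int32).itemsize)  # always 4
print(np.dtype(np.int64).itemsize)  # always 8

# An array built without an explicit dtype follows the default integer
# type, so on a 64-bit Unix machine x.dtype is typically int64, and a
# Cython buffer declared as numpy.int32_t would then raise at runtime.
x = np.arange(10)
print(x.dtype)
```

So declaring the buffer with numpy.int_t (which tracks whatever np.int refers to) sidesteps the mismatch, while a fixed numpy.int32_t only works when the array's dtype really is int32.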

Thanks,

- M

-- 
Magnus Lie Hetland
http://hetland.org


_______________________________________________
Cython-dev mailing list
[email protected]
http://codespeak.net/mailman/listinfo/cython-dev