On Fri, Jul 31, 2015 at 5:19 PM, Nick Papior <nickpap...@gmail.com> wrote:

> --
>
> Kind regards Nick Papior
> On 31 Jul 2015 17:53, "Chris Barker" <chris.bar...@noaa.gov> wrote:
> >
> > On Thu, Jul 30, 2015 at 11:24 PM, Jason Newton <nev...@gmail.com> wrote:
> >>
> >> This really needs changing, though. Scientific researchers don't catch
> this subtlety and expect it to be just like the C and MATLAB types they
> know a little about.
> >
> >
> > well, C types are a %&$ nightmare as well! In fact, one of the biggest
> issues comes from CPython's use of a C "long" for an integer -- whose
> width is not clearly defined. If you are writing code that needs any kind
> of binary compatibility, cross-platform compatibility, and particularly if
> you want to be able to distribute pre-compiled binaries of extensions,
> etc., then you'd better use well-defined types.
>
There was some truth to this, but if you, like the majority of scientific
researchers, only produce code for x86 or x86_64 on Windows and Linux,
then as long as you aren't treating pointers as ints, everything behaves
in accordance with general expectations. The standards did and still do
allow for a bit of flux, but things like OpenCL [
https://www.khronos.org/registry/cl/sdk/1.0/docs/man/xhtml/scalarDataTypes.html
] made this really strict so we can stop writing ifdefs to deal with
varying bitwidths and just implement the algorithms - which is typically a
researcher's top priority.
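To make the width question concrete, here's a short Python sketch (using only the standard library's ctypes module; the exact sizes printed for native types are platform-dependent, which is the point):

```python
import ctypes

# Fixed-width types are the same size everywhere.
assert ctypes.sizeof(ctypes.c_int32) == 4
assert ctypes.sizeof(ctypes.c_int64) == 8

# The native C "long" follows the platform ABI: 8 bytes on x86_64
# Linux/macOS (LP64), but only 4 bytes on 64-bit Windows (LLP64).
print(ctypes.sizeof(ctypes.c_long))  # 4 or 8, depending on platform
```

This is exactly the variability the ifdefs used to paper over; with strictly sized types the question never comes up.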

I'd say I use the strongly defined types (e.g. int32/float32) whenever
doing protocol or communications work - it makes complete sense there. But
often for computation, especially when interfacing with C extensions, it
makes more sense for the developer to use types/typenames that ought to
match 1:1 with C in every case.
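A minimal sketch of why protocol work wants explicit widths, using only the standard library's struct module (the format codes shown are the module's standard ones, not anything NumPy-specific):

```python
import struct

# Wire format: explicit width and byte order ('<i' = little-endian int32).
wire = struct.pack('<i', 1234)
assert len(wire) == 4  # always 4 bytes, on every platform

# Native format ('@l' = the platform's C long): the size follows the local
# ABI, so the same code can emit different byte counts on different machines
# - fine for talking to a local C extension, unsuitable for data that
# crosses machines.
native = struct.pack('@l', 1234)
print(len(native))  # 4 or 8, depending on the platform's C long
```

The split mirrors the point above: fixed-width names for anything serialized, native-matching names when the only contract is the local C ABI.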

-Jason
_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
