David Cournapeau wrote:
> Pierre GM wrote:
>> FYI,
>> I can't reproduce David's failures on my machine (intel core2 duo w/
>> 10.5.5)
>> * python 2.6 from macports
>
> I think that's the main difference. I feel more and more that the
> problem is linked to fat binaries (more exactly, multi-arch builds in one
> autoconf run: since only one pyconfig.h is generated for all archs, only
> one value is defined for CPU-specific configurations). On my machine,
> pyconfig.h has WORDS_BIGENDIAN defined to one, which I can only explain
> by the binary having been built on ppc (unfortunately, I can't find this
> information from python itself - maybe in the release notes). And that
> cannot work on Intel.
Ok, I think I fixed the problem in the dynamic_cpu_configuration branch. I get only two test failures, which also appear on Windows and Linux (the same as yours). I think the code is OK, but if anyone has two minutes to review it before it is merged into the trunk, that would be better.

I took the path of least resistance: instead of using the WORDS_BIGENDIAN macro, I added a numpy header which gives the endianness every time it is included. IOW, instead of the endianness being fixed at numpy build time (which fails for universal builds), it is determined every time the numpy headers are included (which is the only way to make this work). A better solution IMO would be to avoid any endianness dependency in the headers at all, but that does not seem possible without breaking the API (because PyArray_NBO and the other endianness-related macros would need to become functions instead).

cheers,

David

_______________________________________________
Numpy-discussion mailing list
[email protected]
http://projects.scipy.org/mailman/listinfo/numpy-discussion
