Re: [Numpy-discussion] Slightly off-topic - accuracy of C exp function?

2014-04-26 Thread josef . pktd
On Sat, Apr 26, 2014 at 6:37 PM, Matthew Brett wrote:

> Hi,
>
> On Wed, Apr 23, 2014 at 11:59 AM, Matthew Brett 
> wrote:
> > Hi,
> >
> > On Wed, Apr 23, 2014 at 1:43 AM, Nathaniel Smith  wrote:
> >> On Wed, Apr 23, 2014 at 6:22 AM, Matthew Brett 
> wrote:
> >>> Hi,
> >>>
> >>> I'm exploring Mingw-w64 for numpy building, and I've found it gives a
> >>> slightly different answer for 'exp' than - say - gcc on OSX.
> >>>
> >>> The difference is of the order of the eps value for the output number
> >>> (2 * eps for a result of ~2.0).
> >>>
> >>> Is accuracy somewhere specified for C functions like exp?  Or is
> >>> accuracy left as an implementation detail for the C library author?
> >>
> >> C99 says (sec 5.2.4.2.2) that "The accuracy of the floating point
> >> operations ... and of the library functions in <math.h> and
> >> <complex.h> that return floating point results is implementation
> >> defined. The implementation may state that the accuracy is unknown."
> >> (This last sentence is basically saying that with regard to some
> >> higher up clauses that required all conforming implementations to
> >> document this stuff, saying "eh, who knows" counts as documenting it.
> >> Hooray for standards!)
> >>
> >> Presumably the accuracy in this case is a function of the C library
> >> anyway, not the compiler?
> >
> > Mingw-w64 implementation is in assembly:
> >
> >
> http://sourceforge.net/p/mingw-w64/code/HEAD/tree/trunk/mingw-w64-crt/math/exp.def.h
> >
> >> Numpy has its own implementations for a
> >> bunch of the math functions, and it's been unclear in the past whether
> >> numpy or the libc implementations were better in any particular case.
> >
> > I only investigated this particular value, in which case it looked as
> > though the OSX value was closer to the exact value (via sympy.mpmath)
> > - by ~1 unit-at-the-last-place.  This was causing a divergence in the
> > powell optimization path and therefore a single scipy test failure.  I
> > haven't investigated further - was wondering what investigation I
> > should do, more than running the numpy / scipy test suites.
>
> Investigating further, with this script:
>
> https://gist.github.com/matthew-brett/11301221
>
> The following are tests of np.exp accuracy for input values between 0
> and 10, for numpy 1.8.1.
>
> If np.exp(x) performs perfectly, it will return the nearest floating
> point value to the exact value of exp(x).  If it does, this scores a
> zero for error in the tables below.  If 'proportion of zeros' is 1, then
> np.exp performs perfectly for all tested values of exp (as is the
> case for linux here).
>
> OSX 10.9
>
> Proportion of zeros: 0.99789
> Sum of error: 2.15021267458e-09
> Sum of squared error: 2.47149370032e-14
> Max / min error: 5.96046447754e-08 -2.98023223877e-08
> Sum of squared relative error: 5.22456992025e-30
> Max / min relative error: 2.19700100681e-16 -2.2098803255e-16
> eps:  2.22044604925e-16
> Proportion of relative err >= eps: 0.0
>
> Debian Jessie / Sid
>
> Proportion of zeros: 1.0
> Sum of error: 0.0
> Sum of squared error: 0.0
> Max / min error: 0.0 0.0
> Sum of squared relative error: 0.0
> Max / min relative error: 0.0 0.0
> eps:  2.22044604925e-16
> Proportion of relative err >= eps: 0.0
>
> Mingw-w64 Windows 7
>
> Proportion of zeros: 0.82089
> Sum of error: 8.08415331122e-07
> Sum of squared error: 2.90045099615e-12
> Max / min error: 5.96046447754e-08 -5.96046447754e-08
> Sum of squared relative error: 4.18466468175e-28
> Max / min relative error: 2.22041308226e-16 -2.22042100773e-16
> eps:  2.22044604925e-16
> Proportion of relative err >= eps: 0.0
>
> Take-home: the exp implementation for mingw-w64 is exactly (floating
> point) correct 82% of the time, and one unit-at-the-last-place off for
> the rest [1].  OSX is off by 1 ULP only 0.2% of the time.
>


Windows 64 with MKL

\WinPython-64bit-3.3.2.2\python-3.3.2.amd64>python
"E:\Josef\eclipsegworkspace\statsmodels-git\local_scripts\local_scripts\try_exp_error.py"
Proportion of zeros: 0.99793
Sum of error: -2.10546855506e-07
Sum of squared error: 3.33304327526e-14
Max / min error: 5.96046447754e-08 -5.96046447754e-08
Sum of squared relative error: 4.98420694339e-30
Max / min relative error: 2.20881302691e-16 -2.18321571939e-16
eps:  2.22044604925e-16
Proportion of relative err >= eps: 0.0


Windows 32 bit python with official MingW binaries

Python 2.7.1 (r271:86832, Nov 27 2010, 18:30:46) [MSC v.1500 32 bit
(Intel)] on win32

Proportion of zeros: 0.99464
Sum of error: -3.91621083118e-07
Sum of squared error: 9.2239247812e-14
Max / min error: 5.96046447754e-08 -5.96046447754e-08
Sum of squared relative error: 1.3334972729e-29
Max / min relative error: 2.21593462148e-16 -2.2098803255e-16
eps:  2.22044604925e-16
Proportion of relative err >= eps: 0.0



>
> Is mingw-w64 accurate enough?  Do we have any policy on this?
>

I wouldn't worry about a missing or an extra eps in our applications, but
the competition is more accurate.

Josef


>
> Cheers,
>
> Matthew
>
> [1] http://matthew-brett.github.io/pydagogue/floating_error.html

Re: [Numpy-discussion] Proposed new function for joining arrays: np.interleave

2014-04-26 Thread Alexander Belopolsky
On Mon, Apr 7, 2014 at 11:12 AM, Björn Dahlgren  wrote:

> I think the code needed for the general n dimensional case with m number
> of arrays
> is non-trivial enough for it to be useful to provide such a function in
> numpy
>

As of version 1.8.1, I count 571 public names in numpy namespace:

>>> len([x for x in dir(numpy) if not x.startswith('_')])
571

Rather than adding a 572nd name, we should investigate why it is
"non-trivial enough" to express this using existing functions.
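For what it's worth, the general case does seem expressible with existing primitives; here is one sketch (not a proposal for numpy itself — note that np.stack used below was only added later, in numpy 1.10, and is itself built on np.concatenate):

```python
import numpy as np

# Interleave m same-shape arrays along `axis`: stack them on a fresh axis
# just after the target axis, then merge the two axes with a reshape.
def interleave(arrays, axis=0):
    stacked = np.stack(arrays, axis=axis + 1)
    shape = list(arrays[0].shape)
    shape[axis] *= len(arrays)
    return stacked.reshape(shape)

a = np.array([1, 3, 5])
b = np.array([2, 4, 6])
print(interleave([a, b]))  # [1 2 3 4 5 6]
```

The same two-step pattern (stack, then reshape) handles any number of arrays and any axis of an n-dimensional input, which is probably why a dedicated function feels unnecessary to some.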
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Slightly off-topic - accuracy of C exp function?

2014-04-26 Thread Matthew Brett
Hi,

On Wed, Apr 23, 2014 at 11:59 AM, Matthew Brett  wrote:
> Hi,
>
> On Wed, Apr 23, 2014 at 1:43 AM, Nathaniel Smith  wrote:
>> On Wed, Apr 23, 2014 at 6:22 AM, Matthew Brett  
>> wrote:
>>> Hi,
>>>
>>> I'm exploring Mingw-w64 for numpy building, and I've found it gives a
>>> slightly different answer for 'exp' than - say - gcc on OSX.
>>>
>>> The difference is of the order of the eps value for the output number
>>> (2 * eps for a result of ~2.0).
>>>
>>> Is accuracy somewhere specified for C functions like exp?  Or is
>>> accuracy left as an implementation detail for the C library author?
>>
>> C99 says (sec 5.2.4.2.2) that "The accuracy of the floating point
>> operations ... and of the library functions in <math.h> and
>> <complex.h> that return floating point results is implementation
>> defined. The implementation may state that the accuracy is unknown."
>> (This last sentence is basically saying that with regard to some
>> higher up clauses that required all conforming implementations to
>> document this stuff, saying "eh, who knows" counts as documenting it.
>> Hooray for standards!)
>>
>> Presumably the accuracy in this case is a function of the C library
>> anyway, not the compiler?
>
> Mingw-w64 implementation is in assembly:
>
> http://sourceforge.net/p/mingw-w64/code/HEAD/tree/trunk/mingw-w64-crt/math/exp.def.h
>
>> Numpy has its own implementations for a
>> bunch of the math functions, and it's been unclear in the past whether
>> numpy or the libc implementations were better in any particular case.
>
> I only investigated this particular value, in which case it looked as
> though the OSX value was closer to the exact value (via sympy.mpmath)
> - by ~1 unit-at-the-last-place.  This was causing a divergence in the
> powell optimization path and therefore a single scipy test failure.  I
> haven't investigated further - was wondering what investigation I
> should do, more than running the numpy / scipy test suites.

Investigating further, with this script:

https://gist.github.com/matthew-brett/11301221

The following are tests of np.exp accuracy for input values between 0
and 10, for numpy 1.8.1.

If np.exp(x) performs perfectly, it will return the nearest floating
point value to the exact value of exp(x).  If it does, this scores a
zero for error in the tables below.  If 'proportion of zeros' is 1, then
np.exp performs perfectly for all tested values of exp (as is the
case for linux here).
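In case the gist is unavailable: the measurement described above can be sketched with only the standard library. Here `decimal` plays the role of sympy.mpmath as the high-precision reference and `math.exp` stands in for `np.exp`, so this illustrates the method rather than reproducing the actual script (`math.ulp` needs Python 3.9+):

```python
import math
import random
from decimal import Decimal, getcontext

getcontext().prec = 50  # plenty of guard digits for the reference value

def exp_error_ulps(x):
    """Signed error of math.exp(x) in units of the result's last place."""
    got = math.exp(x)
    # Decimal(x) is an exact conversion; .exp() is evaluated to 50 digits,
    # and float() rounds that reference to the nearest double -- i.e. the
    # answer a perfectly rounded exp would return.
    best = float(Decimal(x).exp())
    return (got - best) / math.ulp(best)

random.seed(0)
xs = [random.uniform(0.0, 10.0) for _ in range(1000)]
errs = [exp_error_ulps(x) for x in xs]
print("Proportion of zeros:", sum(e == 0.0 for e in errs) / len(errs))
print("Max / min error (ULP):", max(errs), min(errs))
```

A correctly rounded exp scores exactly 0.0 for every input; an implementation that is off by one unit-at-the-last-place scores +1.0 or -1.0 for that input.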

OSX 10.9

Proportion of zeros: 0.99789
Sum of error: 2.15021267458e-09
Sum of squared error: 2.47149370032e-14
Max / min error: 5.96046447754e-08 -2.98023223877e-08
Sum of squared relative error: 5.22456992025e-30
Max / min relative error: 2.19700100681e-16 -2.2098803255e-16
eps:  2.22044604925e-16
Proportion of relative err >= eps: 0.0

Debian Jessie / Sid

Proportion of zeros: 1.0
Sum of error: 0.0
Sum of squared error: 0.0
Max / min error: 0.0 0.0
Sum of squared relative error: 0.0
Max / min relative error: 0.0 0.0
eps:  2.22044604925e-16
Proportion of relative err >= eps: 0.0

Mingw-w64 Windows 7

Proportion of zeros: 0.82089
Sum of error: 8.08415331122e-07
Sum of squared error: 2.90045099615e-12
Max / min error: 5.96046447754e-08 -5.96046447754e-08
Sum of squared relative error: 4.18466468175e-28
Max / min relative error: 2.22041308226e-16 -2.22042100773e-16
eps:  2.22044604925e-16
Proportion of relative err >= eps: 0.0

Take-home: the exp implementation for mingw-w64 is exactly (floating
point) correct 82% of the time, and one unit-at-the-last-place off for
the rest [1].  OSX is off by 1 ULP only 0.2% of the time.
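As a reminder of the scale involved: one ULP near 2.0 is exactly the "2 * eps for a result of ~2.0" difference mentioned at the top of the thread, which this small check (an illustration, not part of the thread) confirms:

```python
import numpy as np

x = 2.0
eps = np.finfo(float).eps           # 2**-52, about 2.22e-16
gap = np.spacing(x)                 # distance to the next float above 2.0
print(gap == 2 * eps)               # True: the spacing doubles at each power of 2
print(np.nextafter(x, np.inf) - x)  # the same gap, via nextafter
```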

Is mingw-w64 accurate enough?  Do we have any policy on this?

Cheers,

Matthew

[1] http://matthew-brett.github.io/pydagogue/floating_error.html


Re: [Numpy-discussion] 64-bit windows numpy / scipy wheels for testing

2014-04-26 Thread Carl Kleffner
Hi,

Basically, the toolchain was created with a local fork of the "mingw-builds"
build process, along with some addons and patches. It is NOT a mingw-w64
fork. BTW: there are numerous mingw-w64 based toolchains out there, most of
them built without any information about the build process and patches they
used.

As long as the "mingw-builds" maintainers continue working on their
project, maintaining a usable toolchain for Python development on Windows
should be feasible.

More details are given here:
http://article.gmane.org/gmane.comp.python.numeric.general/57446

Regards

Carl



2014-04-25 7:57 GMT+02:00 Sturla Molden :

> Matthew Brett  wrote:
>
> > Thanks to Carl Kleffner's toolchain and some help from Clint Whaley
> > (main author of ATLAS), I've built 64-bit windows numpy and scipy
> > wheels for testing.
>
> Thanks for your great effort to solve this mess.
>
> By Murphy's law, I do not have access to a Windows computer on which to
> test now. :-(
>
> This approach worries me a bit though: Will we have to maintain a fork of
> MinGW-w64 for building NumPy and SciPy? Should this toolset be distributed
> along with NumPy and SciPy on Windows? I presume it is needed to build C
> and Cython extensions?
>
> On the positive side: Does this mean we finally can use gfortran on
> Windows? And if so, can we use Fortran versions beyond Fortran 77 in SciPy
> now? Or is Mac OS X a blocker?
>
> Sturla
>
>


Re: [Numpy-discussion] 1.9.x branch

2014-04-26 Thread josef . pktd
On Sat, Apr 26, 2014 at 1:12 PM, Sebastian Berg
wrote:

> On Mi, 2014-04-23 at 10:11 -0400, josef.p...@gmail.com wrote:
> > On Wed, Apr 23, 2014 at 5:32 AM, Sebastian Berg
> >  wrote:
> > > On Di, 2014-04-22 at 15:35 -0600, Charles R Harris wrote:
> > >> Hi All,
> > >>
> > >>
> > >> I'd like to branch 1.9.x at the end of the month. There are a couple
> > >> of reasons for the timing. First, we have a lot of new stuff in the
> > >> development branch. Second, there is work ongoing in masked arrays
> > >> that I'd like to keep out of the release so that it has more time to
> > >> settle. Third, it's past time ;)
> > >
> > > Sounds good.
> > >
> > >> There are currently a number of 1.9.0 blockers, which can be seen
> > >> here.
> > >>
> > >> Datetime timezone handling broken in 1.7.x
> > >>
> > >> I don't think there is time to get this done for 1.9.0 and it needs to
> > >> be pushed off to 1.10.0.
> > >>
> > >> Return multiple field selection as ro view
> > >>
> > >> I have a branch for this, but because the returned type differs from a
> > >> copy by alignment spacing there was a test failure. Merging that
> > >> branch might cause some incompatibilities.
> > >>
> > >
> > > I am a bit worried here that comparisons might make trouble.
> > >
> > >> Object array creation new conversion to int
> > >>
> > >>
> > >> This one needs a decision. Julian, Sebastian, thoughts?
> > >>
> > >
> > > Maybe for all to consider this is about what happens for object arrays
> > > if you do things like:
> > >
> > > # Array cast to object array (np.array(arr) would be identical):
> > > a = np.arange(10).astype(object)
> > > # Array passed into new array creation (not just *one* array):
> > > b = np.array([np.arange(10)], dtype=object)
> > > # Numerical array is assigned to object array:
> > > c = np.empty(10, dtype=object)
> > > c[...] = np.arange(10)
> > >
> > > Before this change, the result was:
> > > type(a[0]) is int
> > > type(b[0,0]) is np.int_  # Note the numpy type
> > > type(c[0]) is int
> > >
> > > After this change, they are all `int`. Though note that the numpy type
> > > is preserved for example for long double. On the one hand preserving
> the
> > > numpy type might be nice, but on the other hand we don't care much
> about
> > > the dtypes of scalars and in practice the python types are probably
> more
> > > often wanted.
> >
> > what if I don't like python?
> >
>
> Fair point :). I also think it is more consistent if we use the numpy
> types (and the user has to cast to the python type explicitly). Could
> argue that if someone casts to object, they might like python objects,
> but if you don't want them that is tricky, too.
>
> There is the option that the numerical array -> object array cast always
> returns an array of numpy scalars. Making it consistent (opposite to
> what is currently in numpy master).
>
> This would mean that `numerical_arr.astype(object)` would give an array
> of numpy scalars always. Getting python scalars would only be possible
> using `arr.item()` (or `tolist`). I guess that change is less likely to
> cause problems, because the numpy types can do more things normally
> though they are slower.
>
> So a (still a bit unsure) +1 from me for making numeric -> object casts
> return arrays of numpy scalars unless we have reason to expect that to
> cause problems. Not sure how easy that would be to change, it wasn't a
> one line change when I tried, so there is something slightly tricky to
> clear out, but probably not too hard either.
>

More general background question.

Why is there casting involved in object arrays?

I thought object arrays would just take and return whatever you put in.
If I use python ints, I might want python ints.
If I use numpy int_s, I might want numpy ints.

b1 = np.array([np.arange(10)], dtype=object)
versus
b2 = np.array([list(range(10))], dtype=object)


>>> b1 = np.array([np.arange(10)], dtype=object)
>>> type(b1[0,2])
<type 'numpy.int32'>
>>> type(b1[0][2])
<type 'numpy.int32'>
>>>

>>> b2 = np.array([list(range(10))], dtype=object)
>>> type(b2[0,2])
<type 'int'>
>>> type(b2[0][2])
<type 'int'>

another version

>>> type(np.array(np.arange(10).tolist(), dtype=object)[2])
<type 'int'>
>>> type(np.array(list(np.arange(10)), dtype=object)[2])
<type 'numpy.int32'>

Josef


>
> - Sebastian
>
> > >>> np.int_(0)**(-1)
> > inf
> > >>> 0**-1
> > Traceback (most recent call last):
> >   File "<stdin>", line 1, in <module>
> > 0**-1
> > ZeroDivisionError: 0.0 cannot be raised to a negative power
> >
> >
> > >>> type(np.arange(5)[0])
> > <type 'numpy.int32'>
> > >>> np.arange(5)[0]**-1
> > inf
> >
> > >>> type(np.arange(5)[0].item())
> > <type 'int'>
> > >>> np.arange(5)[0].item()**-1
> > Traceback (most recent call last):
> >   File "<stdin>", line 1, in <module>
> > np.arange(5)[0].item()**-1
> > ZeroDivisionError: 0.0 cannot be raised to a negative power
> >
> > >>> np.__version__
> > '1.6.1'
> >
> >
> > I remember struggling through this (avoiding python operations) quite
> > a bit in my early bugfixes to scipy.stats.distributions.
> >
> > (IIRC I ended up avoiding most ints.)
> >
> > Josef
> >
> > >
> > > Since I just realized that things are safe (float128 does not cast to
> > > float after all), I changed my mind and am tempted to keep the new
> > > behaviour. That is, if it does not create any problems (there was some
> > > issue in scipy, not sure how bad).

Re: [Numpy-discussion] 1.9.x branch

2014-04-26 Thread Sebastian Berg
On Mi, 2014-04-23 at 10:11 -0400, josef.p...@gmail.com wrote:
> On Wed, Apr 23, 2014 at 5:32 AM, Sebastian Berg
>  wrote:
> > On Di, 2014-04-22 at 15:35 -0600, Charles R Harris wrote:
> >> Hi All,
> >>
> >>
> >> I'd like to branch 1.9.x at the end of the month. There are a couple
> >> of reasons for the timing. First, we have a lot of new stuff in the
> >> development branch. Second, there is work ongoing in masked arrays
> >> that I'd like to keep out of the release so that it has more time to
> >> settle. Third, it's past time ;)
> >
> > Sounds good.
> >
> >> There are currently a number of 1.9.0 blockers, which can be seen
> >> here.
> >>
> >> Datetime timezone handling broken in 1.7.x
> >>
> >> I don't think there is time to get this done for 1.9.0 and it needs to
> >> be pushed off to 1.10.0.
> >>
> >> Return multiple field selection as ro view
> >>
> >> I have a branch for this, but because the returned type differs from a
> >> copy by alignment spacing there was a test failure. Merging that
> >> branch might cause some incompatibilities.
> >>
> >
> > I am a bit worried here that comparisons might make trouble.
> >
> >> Object array creation new conversion to int
> >>
> >>
> >> This one needs a decision. Julian, Sebastian, thoughts?
> >>
> >
> > Maybe for all to consider this is about what happens for object arrays
> > if you do things like:
> >
> > # Array cast to object array (np.array(arr) would be identical):
> > a = np.arange(10).astype(object)
> > # Array passed into new array creation (not just *one* array):
> > b = np.array([np.arange(10)], dtype=object)
> > # Numerical array is assigned to object array:
> > c = np.empty(10, dtype=object)
> > c[...] = np.arange(10)
> >
> > Before this change, the result was:
> > type(a[0]) is int
> > type(b[0,0]) is np.int_  # Note the numpy type
> > type(c[0]) is int
> >
> > After this change, they are all `int`. Though note that the numpy type
> > is preserved for example for long double. On the one hand preserving the
> > numpy type might be nice, but on the other hand we don't care much about
> > the dtypes of scalars and in practice the python types are probably more
> > often wanted.
> 
> what if I don't like python?
> 

Fair point :). I also think it is more consistent if we use the numpy
types (and the user has to cast to the python type explicitly). Could
argue that if someone casts to object, they might like python objects,
but if you don't want them that is tricky, too.

There is the option that the numerical array -> object array cast always
returns an array of numpy scalars. Making it consistent (opposite to
what is currently in numpy master).

This would mean that `numerical_arr.astype(object)` would give an array
of numpy scalars always. Getting python scalars would only be possible
using `arr.item()` (or `tolist`). I guess that change is less likely to
cause problems, because the numpy types can do more things normally
though they are slower.

So a (still a bit unsure) +1 from me for making numeric -> object casts
return arrays of numpy scalars unless we have reason to expect that to
cause problems. Not sure how easy that would be to change, it wasn't a
one line change when I tried, so there is something slightly tricky to
clear out, but probably not too hard either.
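The behaviour under discussion is easy to probe directly; a small check (illustrative, and version-dependent by design — the python-vs-numpy scalar type is exactly what would change):

```python
import numpy as np

# Numeric -> object cast: the values are always preserved; whether the
# elements come back as python ints or numpy scalars depends on the
# numpy version (that is the change being debated).
a = np.arange(3).astype(object)
elem = a[0]
print(type(elem))
print(isinstance(elem, (int, np.integer)))  # True on either behaviour
```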

- Sebastian

> >>> np.int_(0)**(-1)
> inf
> >>> 0**-1
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
> 0**-1
> ZeroDivisionError: 0.0 cannot be raised to a negative power
> 
> 
> >>> type(np.arange(5)[0])
> <type 'numpy.int32'>
> >>> np.arange(5)[0]**-1
> inf
> 
> >>> type(np.arange(5)[0].item())
> <type 'int'>
> >>> np.arange(5)[0].item()**-1
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
> np.arange(5)[0].item()**-1
> ZeroDivisionError: 0.0 cannot be raised to a negative power
> 
> >>> np.__version__
> '1.6.1'
> 
> 
> I remember struggling through this (avoiding python operations) quite
> a bit in my early bugfixes to scipy.stats.distributions.
> 
> (IIRC I ended up avoiding most ints.)
> 
> Josef
> 
> >
> > Since I just realized that things are safe (float128 does not cast to
> > float after all), I changed my mind and am tempted to keep the new
> > behaviour. That is, if it does not create any problems (there was some
> > issue in scipy, not sure how bad).
> >
> > - Sebastian
> >
> >> Median of np.matrix is broken
> >>
> >>
> >> Not sure what the status of this one is.
> >>
> >> 1.8 deprecations: Follow-up ticket
> >>
> >>
> >> Things that might should be removed.
> >>
> >> ERROR: test_big_arrays (test_io.TestSavezLoad) on OS X + Python 3.3
> >>
> >>
> >> I believe this one was fixed. For general problems reading/writing big
> >> files on OS X, I believe they were fixed in Maverick and I'm inclined
> >> to recommend an OS upgrade rather than work to chunk all the io.
> >>
> >> Deprecate NPY_CHAR
> >> This one is waiting on a fix from Pearu to make f2py use numpy
> >> strings. I think we will need to do this ourselves if we want to carry
> >> through the deprecation. In

Re: [Numpy-discussion] 64-bit windows numpy / scipy wheels for testing

2014-04-26 Thread Pauli Virtanen
On 25.04.2014 08:57, Sturla Molden wrote:
[clip]
> On the positive side: Does this mean we finally can use gfortran on
> Windows? And if so, can we use Fortran versions beyond Fortran 77 in SciPy
> now? Or is Mac OS X a blocker?

Yes, Windows is the only platform on which Fortran was problematic. OSX
is somewhat saner in this respect.

-- 
Pauli Virtanen


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] 64-bit windows numpy / scipy wheels for testing

2014-04-26 Thread Sturla Molden
Matthew Brett  wrote:

> Thanks to Carl Kleffner's toolchain and some help from Clint Whaley
> (main author of ATLAS), I've built 64-bit windows numpy and scipy
> wheels for testing.

Thanks for your great effort to solve this mess.

By Murphy's law, I do not have access to a Windows computer on which to
test now. :-(

This approach worries me a bit though: Will we have to maintain a fork of
MinGW-w64 for building NumPy and SciPy? Should this toolset be distributed
along with NumPy and SciPy on Windows? I presume it is needed to build C
and Cython extensions?

On the positive side: Does this mean we finally can use gfortran on
Windows? And if so, can we use Fortran versions beyond Fortran 77 in SciPy
now? Or is Mac OS X a blocker?

Sturla



Re: [Numpy-discussion] 64-bit windows numpy / scipy wheels for testing

2014-04-26 Thread josef . pktd
On Sat, Apr 26, 2014 at 10:20 AM,  wrote:

>
>
>
> On Sat, Apr 26, 2014 at 10:10 AM,  wrote:
>
>>
>>
>>
>> On Fri, Apr 25, 2014 at 1:21 AM, Matthew Brett 
>> wrote:
>>
>>> Hi,
>>>
>>> On Thu, Apr 24, 2014 at 5:26 PM,   wrote:
>>> >
>>> >
>>> >
>>> > On Thu, Apr 24, 2014 at 7:29 PM,  wrote:
>>> >>
>>> >>
>>> >>
>>> >>
>>> >> On Thu, Apr 24, 2014 at 7:20 PM, Charles R Harris
>>> >>  wrote:
>>> >>>
>>> >>>
>>> >>>
>>> >>>
>>> >>> On Thu, Apr 24, 2014 at 5:08 PM,  wrote:
>>> 
>>> 
>>> 
>>> 
>>>  On Thu, Apr 24, 2014 at 7:00 PM, Charles R Harris
>>>   wrote:
>>> >
>>> >
>>> > Hi Matthew,
>>> >
>>> > On Thu, Apr 24, 2014 at 3:56 PM, Matthew Brett
>>> >  wrote:
>>> >>
>>> >> Hi,
>>> >>
>>> >> Thanks to Carl Kleffner's toolchain and some help from Clint
>>> Whaley
>>> >> (main author of ATLAS), I've built 64-bit windows numpy and scipy
>>> >> wheels for testing.
>>> >>
>>> >> The build uses Carl's custom mingw-w64 build with static linking.
>>> >>
>>> >> There are two harmless test failures on scipy (being discussed on
>>> the
>>> >> list at the moment) - tests otherwise clean.
>>> >>
>>> >> Wheels are here:
>>> >>
>>> >>
>>> >>
>>> https://nipy.bic.berkeley.edu/scipy_installers/numpy-1.8.1-cp27-none-win_amd64.whl
>>> >>
>>> >>
>>> https://nipy.bic.berkeley.edu/scipy_installers/scipy-0.13.3-cp27-none-win_amd64.whl
>>> >>
>>> >> You can test with:
>>> >>
>>> >> pip install -U pip # to upgrade pip to latest
>>> >> pip install -f https://nipy.bic.berkeley.edu/scipy_installers numpy
>>> >> scipy
>>> >>
>>> >> Please do send feedback.
>>> >>
>>> >> ATLAS binary here:
>>> >>
>>> >>
>>> >>
>>> https://nipy.bic.berkeley.edu/scipy_installers/atlas_builds/atlas-64-full-sse2.tar.bz2
>>> >>
>>> >> Many thanks to Carl in particular for doing all the hard work,
>>> >>
>>> >
>>> > Cool. After all these long years... Now all we need is a box
>>> running
>>> > tests for CI.
>>> >
>>> > Chuck
>>> >
>>> >
>>> 
>>>  I get two test failures with numpy
>>> 
>>>  Josef
>>> 
>>>  >>> np.test()
>>>  Running unit tests for numpy
>>>  NumPy version 1.8.1
>>>  NumPy is installed in C:\Python27\lib\site-packages\numpy
>>>  Python version 2.7.3 (default, Apr 10 2012, 23:24:47) [MSC v.1500
>>> 64 bit
>>>  (AMD64)]
>>>  nose version 1.1.2
>>> 
>>> 
>>> ==
>>>  FAIL: test_iterator.test_iter_broadcasting_errors
>>> 
>>> --
>>>  Traceback (most recent call last):
>>>    File "C:\Python27\lib\site-packages\nose\case.py", line 197, in
>>>  runTest
>>>  self.test(*self.arg)
>>>    File
>>>  "C:\Python27\lib\site-packages\numpy\core\tests\test_iterator.py",
>>> line 657,
>>>  in test_iter_broadcasting_errors
>>>  '(2)->(2,newaxis)') % msg)
>>>    File "C:\Python27\lib\site-packages\numpy\testing\utils.py", line
>>> 44,
>>>  in assert_
>>>  raise AssertionError(msg)
>>>  AssertionError: Message "operands could not be broadcast together
>>> with
>>>  remapped shapes [original->remapped]: (2,3)->(2,3)
>>> (2,)->(2,newaxis) and
>>>  requested shape (4,3)" doesn't contain remapped operand
>>>  shape(2)->(2,newaxis)
>>> 
>>> 
>>> ==
>>>  FAIL: test_iterator.test_iter_array_cast
>>> 
>>> --
>>>  Traceback (most recent call last):
>>>    File "C:\Python27\lib\site-packages\nose\case.py", line 197, in
>>>  runTest
>>>  self.test(*self.arg)
>>>    File
>>>  "C:\Python27\lib\site-packages\numpy\core\tests\test_iterator.py",
>>> line 836,
>>>  in test_iter_array_cast
>>>  assert_equal(i.operands[0].strides, (-96,8,-32))
>>>    File "C:\Python27\lib\site-packages\numpy\testing\utils.py", line
>>> 255,
>>>  in assert_equal
>>>  assert_equal(actual[k], desired[k], 'item=%r\n%s' % (k,
>>> err_msg),
>>>  verbose)
>>>    File "C:\Python27\lib\site-packages\numpy\testing\utils.py", line
>>> 317,
>>>  in assert_equal
>>>  raise AssertionError(msg)
>>>  AssertionError:
>>>  Items are not equal:
>>>  item=0
>>> 
>>>   ACTUAL: 96L
>>>   DESIRED: -96
>>> 
>>> 
>>> --
>>>  Ran 4828 tests in 46.306s
>>> 
>>>  FAILED (KNOWNFAIL=10, SKIP=8, failures=2)
>>>  
>>> 
>>>
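
[Editor's note: the test_iter_array_cast failure above concerns nditer
preserving negative strides through a casting buffer. A minimal illustration
of where negative strides come from (sliced views), assuming 8-byte integers
in C order:]

```python
import numpy as np

a = np.arange(24, dtype=np.int64).reshape(2, 3, 4)
b = a[::-1]  # reversing an axis flips the sign of that axis's stride
print(a.strides)  # (96, 32, 8): 8-byte items, C-contiguous
print(b.strides)  # (-96, 32, 8): same data, first axis walks backwards
```

The failing assertion expected the iterator to keep such a negative stride
(-96) on its operand rather than flipping it to a positive 96.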

Re: [Numpy-discussion] 64-bit windows numpy / scipy wheels for testing

2014-04-26 Thread josef . pktd
On Sat, Apr 26, 2014 at 10:10 AM,  wrote:

>
>
>
> On Fri, Apr 25, 2014 at 1:21 AM, Matthew Brett wrote:
>
>> [clip]
>
>> >>> Strange. That second one looks familiar, at least the "-96" part. Wonder
>> >>> why this doesn't show up with the MKL builds.
>> >>
>> >>
>> >> ok tried again, this time deleting the old numpy directories before
>>

Re: [Numpy-discussion] 64-bit windows numpy / scipy wheels for testing

2014-04-26 Thread josef . pktd
On Fri, Apr 25, 2014 at 1:21 AM, Matthew Brett wrote:

> [clip]
> >>>
> >>> Strange. That second one looks familiar, at least the "-96" part. Wonder
> >>> why this doesn't show up with the MKL builds.
> >>
> >>
> >> ok tried again, this time deleting the old numpy directories before
> >> installing
> >>
> >> Ran 4760 tests in 42.124s
> >>
> >> OK (KNOWNFAIL=10, SKIP=8)
> >> 
> >>
> >>
> >> so pip also seems to be reusing leftover files.
> >>
> >> all clear.
> >
> >
> > Running the statsmodels test suite, I

Re: [Numpy-discussion] 64-bit windows numpy / scipy wheels for testing

2014-04-26 Thread Matthew Brett
Hi,

On Thu, Apr 24, 2014 at 5:26 PM,   wrote:
> [clip]
>>>
>>> Strange. That second one looks familiar, at least the "-96" part. Wonder
>>> why this doesn't show up with the MKL builds.
>>
>>
>> ok tried again, this time deleting the old numpy directories before
>> installing
>>
>> Ran 4760 tests in 42.124s
>>
>> OK (KNOWNFAIL=10, SKIP=8)
>> 
>>
>>
>> so pip also seems to be reusing leftover files.
>>
>> all clear.
>
>
> Running the statsmodels test suite, I get a failure in
> test_discrete.TestProbitCG where fmin_cg converges to something that differs
> in the 3rd decimal.
>
> I usually only test the 32-bit version, so I don't know if this is specific
> to this scipy version, but we haven't seen this in a long time.
> I used our nightly binaries http://statsmodels.sourceforge.net/binaries/
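> 
> [Editor's note: for optimizer-derived results like the TestProbitCG case,
> cross-build comparisons are usually made with an explicit tolerance rather
> than exact equality. A sketch with hypothetical parameter values differing
> in the 3rd decimal, as reported:]

```python
import numpy as np

# Hypothetical results from two builds of the same fmin_cg run,
# differing in the 3rd decimal (as in the TestProbitCG report).
params_linux = np.array([1.2345, -0.6789])
params_mingw = np.array([1.2361, -0.6771])

# Exact comparison fails, but a relative tolerance of ~1e-2 passes.
np.testing.assert_allclose(params_mingw, params_linux, rtol=1e-2)
print("within tolerance")
```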

That's interesting, you saw also we're ge