Re: [Numpy-discussion] Odd numerical difference between Numpy 1.5.1 and Numpy > 1.5.1

2011-04-12 Thread Robert Kern
On Tue, Apr 12, 2011 at 13:17, Charles R Harris
 wrote:
>
>
> On Tue, Apr 12, 2011 at 11:56 AM, Robert Kern  wrote:
>>
>> On Tue, Apr 12, 2011 at 12:27, Charles R Harris
>>  wrote:
>>
>> > IIRC, the behavior with respect to scalars sort of happened in the code
>> > on
>> > the fly, so this is a good discussion to have. We should end up with
>> > documented rules and tests to enforce them. I agree with Mark that the
>> > tests
>> > have been deficient up to this point.
>>
>> It's been documented for a long time now.
>>
>> http://docs.scipy.org/doc/numpy/reference/ufuncs.html#casting-rules
>>
>
> Nope, the kind stuff is missing. Note the cast to float32 that Mark pointed
> out.

The float32-array*float64-scalar case? That's covered in the last
paragraph; they are the same kind so array dtype wins.

> Also that the casting of python integers depends on their sign and
> magnitude.
>
> In [1]: ones(3, '?') + 0
> Out[1]: array([1, 1, 1], dtype=int8)
>
> In [2]: ones(3, '?') + 1000
> Out[2]: array([1001, 1001, 1001], dtype=int16)

bool and int cross kinds. Not a counterexample. I'm not saying that
the size of the values should be ignored for cross-kind upcasting. I'm
saying that you don't need value-size calculations to preserve the
float32-array*float64-scalar behavior.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Odd numerical difference between Numpy 1.5.1 and Numpy > 1.5.1

2011-04-12 Thread Robert Kern
On Tue, Apr 12, 2011 at 11:49, Mark Wiebe  wrote:
> On Tue, Apr 12, 2011 at 9:30 AM, Robert Kern  wrote:

>> You're missing the key part of the rule that numpy uses: for
>> array*scalar cases, when both array and scalar are the same kind (both
>> floating point or both integers), then the array dtype always wins.
>> Only when they are different kinds do you try to negotiate a common
>> safe type between the scalar and the array.
>
> I'm afraid I'm not seeing the point you're driving at, can you provide some
> examples which tease apart these issues? Here's the same example but with
> different kinds, and to me it seems to have the same character as the case
> with float32/float64:
> >>> np.__version__
> '1.4.1'
> >>> 1e60*np.ones(2,dtype=np.complex64)
> array([ Inf NaNj,  Inf NaNj], dtype=complex64)
>
> >>> np.__version__
> '2.0.0.dev-4cb2eb4'
> >>> 1e60*np.ones(2,dtype=np.complex64)
> array([  1.e+60+0.j,   1.e+60+0.j])

The point is that when you multiply an array by a scalar, and the
array-dtype is the same kind as the scalar-dtype, the output dtype is
the array-dtype. That's what gets you the behavior of the
float32-array staying the same when you multiply it with a Python
float(64). min_scalar_type should never be consulted in this case, so
you don't need to try to account for this case in its rules. This
cross-kind example is irrelevant to the point I'm trying to make.

For cross-kind operations, then you do need to find a common output
type that is safe for both array and scalar. However, please keep in
mind that for floating point types, keeping precision is more
important than range!
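A minimal sketch of the two cases (dtypes only; the float64 scalar is written as a plain Python float):

```python
import numpy as np

# Same kind: float32 array * float scalar -> the array dtype (float32) wins,
# preserving the array's storage rather than upcasting to float64.
a = np.ones(2, dtype=np.float32)
print((a * 3.5).dtype)                          # float32

# Cross kind: negotiate a common type that is safe for both operands.
print(np.promote_types(np.float32, np.int64))   # float64
```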

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] f2py pass by reference

2011-04-12 Thread Pearu Peterson
Note that hello.foo(a) returns the value of the Fortran variable `a`. This
explains the printed value 2.
So, use

>>> a = hello.foo(a)

and not

>>> hello.foo(a)

As Sameer noted in a previous mail, passing Python scalar values to Fortran by
reference is not possible because Python scalars are immutable. Hence the need
to use `a = foo(a)`.
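The underlying Python behavior, independent of f2py (a hypothetical pure-Python foo):

```python
# Python ints are immutable: a callee can rebind its local name,
# but the caller's binding is untouched -- hence `a = foo(a)`.
def foo(a):
    a = 2          # rebinds the local name only
    return a

a = 1
foo(a)
print(a)   # still 1
a = foo(a)
print(a)   # 2
```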

HTH,
Pearu

On Tue, Apr 12, 2011 at 9:52 PM, Mathew Yeates  wrote:

> bizarre
> I get
> =
> >>> hello.foo(a)
>  Hello from Fortran!
>  a= 1
> 2
> >>> a
> 1
> >>> hello.foo(a)
>  Hello from Fortran!
>  a= 1
> 2
> >>> print a
> 1
> >>>
> =
>
> i.e. The value of 2 gets printed! This is numpy 1.3.0
>
> -Mathew
>
>
> On Tue, Apr 12, 2011 at 11:45 AM, Pearu Peterson
>  wrote:
> >
> >
> > On Tue, Apr 12, 2011 at 9:06 PM, Mathew Yeates 
> wrote:
> >>
> >> I have
> >> subroutine foo (a)
> >>  integer a
> >>  print*, "Hello from Fortran!"
> >>  print*, "a=",a
> >>  a=2
> >>  end
> >>
> >> and from python I want to do
> >> >>> a=1
> >> >>> foo(a)
> >>
> >> and I want a's value to now be 2.
> >> How do I do this?
> >
> > With
> >
> >  subroutine foo (a)
> >  integer a
> > !f2py intent(in, out) a
> >  print*, "Hello from Fortran!"
> >  print*, "a=",a
> >  a=2
> >  end
> >
> > you will have the desired effect:
> >
> > >>> a=1
> > >>> a = foo(a)
> > >>> print a
> > 2
> >
> > HTH,
> > Pearu
> >
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] f2py pass by reference

2011-04-12 Thread Mathew Yeates
bizarre
I get
=
>>> hello.foo(a)
 Hello from Fortran!
 a= 1
2
>>> a
1
>>> hello.foo(a)
 Hello from Fortran!
 a= 1
2
>>> print a
1
>>>
=

i.e. The value of 2 gets printed! This is numpy 1.3.0

-Mathew


On Tue, Apr 12, 2011 at 11:45 AM, Pearu Peterson
 wrote:
>
>
> On Tue, Apr 12, 2011 at 9:06 PM, Mathew Yeates  wrote:
>>
>> I have
>> subroutine foo (a)
>>      integer a
>>      print*, "Hello from Fortran!"
>>      print*, "a=",a
>>      a=2
>>      end
>>
>> and from python I want to do
>> >>> a=1
>> >>> foo(a)
>>
>> and I want a's value to now be 2.
>> How do I do this?
>
> With
>
>  subroutine foo (a)
>      integer a
> !f2py intent(in, out) a
>      print*, "Hello from Fortran!"
>      print*, "a=",a
>      a=2
>      end
>
> you will have the desired effect:
>
> >>> a=1
> >>> a = foo(a)
> >>> print a
> 2
>
> HTH,
> Pearu
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Odd numerical difference between Numpy 1.5.1 and Numpy > 1.5.1

2011-04-12 Thread Mark Wiebe
On Tue, Apr 12, 2011 at 11:17 AM, Charles R Harris <
charlesr.har...@gmail.com> wrote:

>
>
> On Tue, Apr 12, 2011 at 11:56 AM, Robert Kern wrote:
>
>> On Tue, Apr 12, 2011 at 12:27, Charles R Harris
>>  wrote:
>>
>> > IIRC, the behavior with respect to scalars sort of happened in the code
>> on
>> > the fly, so this is a good discussion to have. We should end up with
>> > documented rules and tests to enforce them. I agree with Mark that the
>> tests
>> > have been deficient up to this point.
>>
>> It's been documented for a long time now.
>>
>> http://docs.scipy.org/doc/numpy/reference/ufuncs.html#casting-rules
>>
>>
> Nope, the kind stuff is missing. Note the cast to float32 that Mark pointed
> out. Also that the casting of python integers depends on their sign and
> magnitude.
>
> In [1]: ones(3, '?') + 0
> Out[1]: array([1, 1, 1], dtype=int8)
>
> In [2]: ones(3, '?') + 1000
> Out[2]: array([1001, 1001, 1001], dtype=int16)
>

This is the behaviour with master - it's a good idea to cross-check with an
older NumPy. I think we're discussing 3 things here, what NumPy 1.5 and
earlier did, what NumPy 1.6 beta currently does, and what people think NumPy
did. The old implementation had a bit of a spaghetti-factor to it, and had
problems like asymmetry and silent overflows. The new implementation is in
my opinion cleaner and follows well-defined semantics while trying to stay
true to the old implementation. I admit the documentation I wrote doesn't
fully explain them, but here's the rule for a set of arbitrary arrays (not
necessarily just 2):

- if all the arrays are scalars, do type promotion on the types as is
- otherwise, do type promotion on min_scalar_type(a) of each array a

The function min_scalar_type returns the array type if a has >= 1
dimensions, or the smallest type of the same kind (allowing int->uint in the
case of positive-valued signed integers) to which the value can be cast
without overflow if a has 0 dimensions.

The promote_types function used for the type promotion is symmetric and
associative, so the result won't change when shuffling the inputs. There's a
bit of a wrinkle in the implementation to handle the fact that the uint type
values aren't a strict subset of the same-sized int type values, but
otherwise this is what happens.

https://github.com/numpy/numpy/blob/master/numpy/core/src/multiarray/convert_datatype.c#L1075
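The symmetry can be spot-checked directly; the uint wrinkle shows up in pairs like int8/uint16, where the promotion has to step up a size:

```python
import numpy as np
from itertools import permutations

# Symmetric: swapping the two arguments never changes the result.
kinds = [np.bool_, np.int8, np.uint16, np.float32, np.complex64]
for a, b in permutations(kinds, 2):
    assert np.promote_types(a, b) == np.promote_types(b, a)

# uint16 values are not a subset of int16's, so int8/uint16 promotes to int32.
print(np.promote_types(np.int8, np.uint16))  # int32
```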

The change I'm proposing is to modify this as follows:

- if all the arrays are scalars, do type promotion on the types as is
- if the maximum kind of all the scalars is > the maximum kind of all the
arrays, do type promotion on the types as is
- otherwise, do type promotion on min_scalar_type(a) of each array a

One case where this may not capture a possible desired semantics is
[complex128 scalar] * [float32 array] -> [complex128]. In this case
[complex64] may be desired. This is directly analogous to the original
[float64 scalar] * [int8 array], however, and in the latter case it's clear
a float64 should result.
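For the promotion step itself (an illustration with promote_types, not the proposed scalar/array rule), the higher kind already wins cross-kind, which is the analogy being drawn:

```python
import numpy as np

# complex outranks float: a complex128 scalar against a float32 array
# would promote to complex128 under the proposed rule...
print(np.promote_types(np.float32, np.complex128))  # complex128

# ...just as a float64 scalar against an int8 array yields float64.
print(np.promote_types(np.int8, np.float64))        # float64
```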

-Mark
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Data standardizing

2011-04-12 Thread Climate Research
Hi
I am new to Python and NumPy. I am using Python to do statistical
calculations on climate data.

I have a data set in the following format:

Year  Jan   Feb   Mar   Apr  ...  Dec
1900  1000  1001  ...
1901  1011  1012  ...
1902  1009  1007  ...
...
2010  1008  1002  ...

I want to standardize each of these values using the corresponding
standard deviation of its monthly data column.
I have found the standard deviations for each column, but now I need
to compute the standard deviation about a prescribed mean value.
That is, when finding the standard deviation for the January data
column, the mean should be calculated only from part of the January data, say
from 1950-1970. With this mean I want to calculate the SD for the entire
column. Any help will be appreciated.
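One way to do this (a sketch with hypothetical stand-in data; `data` would come from however the table is loaded, rows = years 1900-2010, columns = Jan-Dec):

```python
import numpy as np

years = np.arange(1900, 2011)
data = 1000.0 + np.random.randn(years.size, 12)   # stand-in for the real table

# Monthly means over the 1950-1970 base period only.
base = (years >= 1950) & (years <= 1970)
base_mean = data[base].mean(axis=0)               # shape (12,)

# Standard deviation of each full column, taken about the base-period mean.
sd = np.sqrt(((data - base_mean) ** 2).mean(axis=0))

# Standardized anomalies: each column in units of its SD.
standardized = (data - base_mean) / sd
print(standardized.shape)                         # (111, 12)
```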
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] f2py pass by reference

2011-04-12 Thread Pearu Peterson
On Tue, Apr 12, 2011 at 9:06 PM, Mathew Yeates  wrote:

> I have
> subroutine foo (a)
>  integer a
>  print*, "Hello from Fortran!"
>  print*, "a=",a
>  a=2
>  end
>
> and from python I want to do
> >>> a=1
> >>> foo(a)
>
> and I want a's value to now be 2.
> How do I do this?
>

With

 subroutine foo (a)
 integer a
!f2py intent(in, out) a
 print*, "Hello from Fortran!"
 print*, "a=",a
 a=2
 end

you will have desired effect:

>>> a=1
>>> a = foo(a)
>>> print a
2

HTH,
Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] f2py pass by reference

2011-04-12 Thread Sameer Grover
On 12/04/11 23:36, Mathew Yeates wrote:
> I have
> subroutine foo (a)
>integer a
>print*, "Hello from Fortran!"
>print*, "a=",a
>a=2
>end
>
> and from python I want to do
> >>> a=1
> >>> foo(a)
> and I want a's value to now be 2.
> How do I do this?
>
> Mathew

Someone can correct me if I'm wrong, but I don't think this is possible
with integers because they are immutable types in Python.
You can, however, do this with numpy arrays with "intent(inplace)". For
example,

subroutine foo(a)
   integer::a(2)
!f2py intent(inplace)::a
   write(*,*) "Hello from Fortran!"
   write(*,*) "a=",a
   a(1)=2
end subroutine foo

import numpy as np

a = np.array([1, 2])
foo(a)
# a is now [2, 2]

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Odd numerical difference between Numpy 1.5.1 and Numpy > 1.5.1

2011-04-12 Thread Charles R Harris
On Tue, Apr 12, 2011 at 11:56 AM, Robert Kern  wrote:

> On Tue, Apr 12, 2011 at 12:27, Charles R Harris
>  wrote:
>
> > IIRC, the behavior with respect to scalars sort of happened in the code
> on
> > the fly, so this is a good discussion to have. We should end up with
> > documented rules and tests to enforce them. I agree with Mark that the
> tests
> > have been deficient up to this point.
>
> It's been documented for a long time now.
>
> http://docs.scipy.org/doc/numpy/reference/ufuncs.html#casting-rules
>
>
Nope, the kind stuff is missing. Note the cast to float32 that Mark pointed
out. Also that the casting of python integers depends on their sign and
magnitude.

In [1]: ones(3, '?') + 0
Out[1]: array([1, 1, 1], dtype=int8)

In [2]: ones(3, '?') + 1000
Out[2]: array([1001, 1001, 1001], dtype=int16)


Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] f2py pass by reference

2011-04-12 Thread Mathew Yeates
I have
subroutine foo (a)
  integer a
  print*, "Hello from Fortran!"
  print*, "a=",a
  a=2
  end

and from python I want to do
>>> a=1
>>> foo(a)

and I want a's value to now be 2.
How do I do this?

Mathew
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Odd numerical difference between Numpy 1.5.1 and Numpy > 1.5.1

2011-04-12 Thread Robert Kern
On Tue, Apr 12, 2011 at 12:27, Charles R Harris
 wrote:

> IIRC, the behavior with respect to scalars sort of happened in the code on
> the fly, so this is a good discussion to have. We should end up with
> documented rules and tests to enforce them. I agree with Mark that the tests
> have been deficient up to this point.

It's been documented for a long time now.

http://docs.scipy.org/doc/numpy/reference/ufuncs.html#casting-rules

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Odd numerical difference between Numpy 1.5.1 and Numpy > 1.5.1

2011-04-12 Thread Charles R Harris
On Tue, Apr 12, 2011 at 10:20 AM, Mark Wiebe  wrote:

> On Tue, Apr 12, 2011 at 8:24 AM, Robert Kern wrote:
>
>> On Mon, Apr 11, 2011 at 23:43, Mark Wiebe  wrote:
>> > On Mon, Apr 11, 2011 at 8:48 PM, Travis Oliphant <
>> oliph...@enthought.com>
>> > wrote:
>>
>> >> It would be good to see a simple test case and understand why the
>> boolean
>> >> multiplied by the scalar double is becoming a float16. In other
>> words,
>> >>  why does
>> >> (1-test)*t
>> >> return a float16 array
>> >> This does not sound right at all and it would be good to understand why
>> >> this occurs, now.   How are you handling scalars multiplied by arrays
>> in
>> >> general?
>> >
>> > The reason it's float16 is that the first function in the multiply
>> function
>> > list for which both types can be safely cast to the output type,
>>
>> Except that float64 cannot be safely cast to float16.
>>
>
> That's true, but it was already being done this way with respect to
> float32. Rereading the documentation for min_scalar_type, I see the
> explanation could elaborate on the purpose of the function further. Float64
> cannot be safely cast to float32, but this is what NumPy does:
>
>
Yep, I remember noticing that on occasion. I didn't think it was really the
right thing to do...



Chuck

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Odd numerical difference between Numpy 1.5.1 and Numpy > 1.5.1

2011-04-12 Thread Charles R Harris
On Tue, Apr 12, 2011 at 10:49 AM, Mark Wiebe  wrote:

> On Tue, Apr 12, 2011 at 9:30 AM, Robert Kern wrote:
>
>> On Tue, Apr 12, 2011 at 11:20, Mark Wiebe  wrote:
>> > On Tue, Apr 12, 2011 at 8:24 AM, Robert Kern 
>> wrote:
>> >>
>> >> On Mon, Apr 11, 2011 at 23:43, Mark Wiebe  wrote:
>> >> > On Mon, Apr 11, 2011 at 8:48 PM, Travis Oliphant
>> >> > 
>> >> > wrote:
>> >>
>> >> >> It would be good to see a simple test case and understand why the
>> >> >> boolean
>> >> >> multiplied by the scalar double is becoming a float16. In other
>> >> >> words,
>> >> >>  why does
>> >> >> (1-test)*t
>> >> >> return a float16 array
>> >> >> This does not sound right at all and it would be good to understand
>> why
>> >> >> this occurs, now.   How are you handling scalars multiplied by
>> arrays
>> >> >> in
>> >> >> general?
>> >> >
>> >> > The reason it's float16 is that the first function in the multiply
>> >> > function
>> >> > list for which both types can be safely cast to the output type,
>> >>
>> >> Except that float64 cannot be safely cast to float16.
>> >
>> > That's true, but it was already being done this way with respect to
>> float32.
>> > Rereading the documentation for min_scalar_type, I see the explanation
>> could
>> > elaborate on the purpose of the function further. Float64 cannot be
>> safely
>> > cast to float32, but this is what NumPy does:
> > >>> import numpy as np
> > >>> np.__version__
> > '1.4.1'
> > >>> np.float64(3.5) * np.ones(2,dtype=np.float32)
> > array([ 3.5,  3.5], dtype=float32)
> >
>>
>> You're missing the key part of the rule that numpy uses: for
>> array*scalar cases, when both array and scalar are the same kind (both
>> floating point or both integers), then the array dtype always wins.
>> Only when they are different kinds do you try to negotiate a common
>> safe type between the scalar and the array.
>
>
> I'm afraid I'm not seeing the point you're driving at, can you provide some
> examples which tease apart these issues? Here's the same example but with
> different kinds, and to me it seems to have the same character as the case
> with float32/float64:
>
> >>> np.__version__
> '1.4.1'
> >>> 1e60*np.ones(2,dtype=np.complex64)
> array([ Inf NaNj,  Inf NaNj], dtype=complex64)
>
> >>> np.__version__
> '2.0.0.dev-4cb2eb4'
> >>> 1e60*np.ones(2,dtype=np.complex64)
> array([  1.e+60+0.j,   1.e+60+0.j])
>
>
IIRC, the behavior with respect to scalars sort of happened in the code on
the fly, so this is a good discussion to have. We should end up with
documented rules and tests to enforce them. I agree with Mark that the tests
have been deficient up to this point.

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Error importing numpy in cygwin

2011-04-12 Thread Ralf Gommers
On Tue, Apr 12, 2011 at 9:17 AM, David Cournapeau  wrote:
> On Tue, Apr 12, 2011 at 10:25 AM, Gabriella Turek  wrote:
>>
>> Hello, I'm working with cygwin 1.7.9. I've installed python 2.6 from the
>> cygwin distro. I've also installed numpy from the distro (v. 1.4.1), and
>> when that failed, I tried to install directly from source (v. 1.5.1).
>> In both cases when I try to run a script that imports numpy (including
>> running the numpy tests) I get the following
>> error message:
>
It seems the ctypes import fails. Are you sure that your Python is
correctly installed? What does the following do:

python -c "import ctypes"

If that does not work, the problem is with how Python was installed, not
with numpy.

It's a known problem that ctypes is not always built for Python, see for example
http://projects.scipy.org/numpy/ticket/1475
http://bugs.python.org/issue1516
http://bugs.python.org/issue2401

Also this ticket (ctypes + Cygwin) may be related:
http://projects.scipy.org/numpy/ticket/904

The submitter of #1475 suggests that it's worth considering if numpy
should depend on ctypes. I agree that it would be better not to.
Ctypes is not in the Python 2.4 stdlib, and even after that can give
problems on anything that's not Linux. Would it be possible to use
Cython instead with a reasonable amount of effort?

Ralf
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Odd numerical difference between Numpy 1.5.1 and Numpy > 1.5.1

2011-04-12 Thread Mark Wiebe
On Tue, Apr 12, 2011 at 9:30 AM, Robert Kern  wrote:

> On Tue, Apr 12, 2011 at 11:20, Mark Wiebe  wrote:
> > On Tue, Apr 12, 2011 at 8:24 AM, Robert Kern 
> wrote:
> >>
> >> On Mon, Apr 11, 2011 at 23:43, Mark Wiebe  wrote:
> >> > On Mon, Apr 11, 2011 at 8:48 PM, Travis Oliphant
> >> > 
> >> > wrote:
> >>
> >> >> It would be good to see a simple test case and understand why the
> >> >> boolean
> >> >> multiplied by the scalar double is becoming a float16. In other
> >> >> words,
> >> >>  why does
> >> >> (1-test)*t
> >> >> return a float16 array
> >> >> This does not sound right at all and it would be good to understand
> why
> >> >> this occurs, now.   How are you handling scalars multiplied by arrays
> >> >> in
> >> >> general?
> >> >
> >> > The reason it's float16 is that the first function in the multiply
> >> > function
> >> > list for which both types can be safely cast to the output type,
> >>
> >> Except that float64 cannot be safely cast to float16.
> >
> > That's true, but it was already being done this way with respect to
> float32.
> > Rereading the documentation for min_scalar_type, I see the explanation
> could
> > elaborate on the purpose of the function further. Float64 cannot be
> safely
> > cast to float32, but this is what NumPy does:
> > >>> import numpy as np
> > >>> np.__version__
> > '1.4.1'
> > >>> np.float64(3.5) * np.ones(2,dtype=np.float32)
> > array([ 3.5,  3.5], dtype=float32)
>
>
> You're missing the key part of the rule that numpy uses: for
> array*scalar cases, when both array and scalar are the same kind (both
> floating point or both integers), then the array dtype always wins.
> Only when they are different kinds do you try to negotiate a common
> safe type between the scalar and the array.


I'm afraid I'm not seeing the point you're driving at, can you provide some
examples which tease apart these issues? Here's the same example but with
different kinds, and to me it seems to have the same character as the case
with float32/float64:

>>> np.__version__
'1.4.1'
>>> 1e60*np.ones(2,dtype=np.complex64)
array([ Inf NaNj,  Inf NaNj], dtype=complex64)

>>> np.__version__
'2.0.0.dev-4cb2eb4'
>>> 1e60*np.ones(2,dtype=np.complex64)
array([  1.e+60+0.j,   1.e+60+0.j])

 -Mark
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Odd numerical difference between Numpy 1.5.1 and Numpy > 1.5.1

2011-04-12 Thread Robert Kern
On Tue, Apr 12, 2011 at 11:20, Mark Wiebe  wrote:
> On Tue, Apr 12, 2011 at 8:24 AM, Robert Kern  wrote:
>>
>> On Mon, Apr 11, 2011 at 23:43, Mark Wiebe  wrote:
>> > On Mon, Apr 11, 2011 at 8:48 PM, Travis Oliphant
>> > 
>> > wrote:
>>
>> >> It would be good to see a simple test case and understand why the
>> >> boolean
>> >> multiplied by the scalar double is becoming a float16.     In other
>> >> words,
>> >>  why does
>> >> (1-test)*t
>> >> return a float16 array
>> >> This does not sound right at all and it would be good to understand why
>> >> this occurs, now.   How are you handling scalars multiplied by arrays
>> >> in
>> >> general?
>> >
>> > The reason it's float16 is that the first function in the multiply
>> > function
>> > list for which both types can be safely cast to the output type,
>>
>> Except that float64 cannot be safely cast to float16.
>
> That's true, but it was already being done this way with respect to float32.
> Rereading the documentation for min_scalar_type, I see the explanation could
> elaborate on the purpose of the function further. Float64 cannot be safely
> cast to float32, but this is what NumPy does:
> >>> import numpy as np
> >>> np.__version__
> '1.4.1'
> >>> np.float64(3.5) * np.ones(2,dtype=np.float32)
> array([ 3.5,  3.5], dtype=float32)


You're missing the key part of the rule that numpy uses: for
array*scalar cases, when both array and scalar are the same kind (both
floating point or both integers), then the array dtype always wins.
Only when they are different kinds do you try to negotiate a common
safe type between the scalar and the array.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Odd numerical difference between Numpy 1.5.1 and Numpy > 1.5.1

2011-04-12 Thread Mark Wiebe
On Tue, Apr 12, 2011 at 8:24 AM, Robert Kern  wrote:

> On Mon, Apr 11, 2011 at 23:43, Mark Wiebe  wrote:
> > On Mon, Apr 11, 2011 at 8:48 PM, Travis Oliphant  >
> > wrote:
>
> >> It would be good to see a simple test case and understand why the
> boolean
> >> multiplied by the scalar double is becoming a float16. In other
> words,
> >>  why does
> >> (1-test)*t
> >> return a float16 array
> >> This does not sound right at all and it would be good to understand why
> >> this occurs, now.   How are you handling scalars multiplied by arrays in
> >> general?
> >
> > The reason it's float16 is that the first function in the multiply
> function
> > list for which both types can be safely cast to the output type,
>
> Except that float64 cannot be safely cast to float16.
>

That's true, but it was already being done this way with respect to float32.
Rereading the documentation for min_scalar_type, I see the explanation could
elaborate on the purpose of the function further. Float64 cannot be safely
cast to float32, but this is what NumPy does:

>>> import numpy as np
>>> np.__version__
'1.4.1'
>>> np.float64(3.5) * np.ones(2,dtype=np.float32)
array([ 3.5,  3.5], dtype=float32)
>>>

> > after
> > applying the min_scalar_type function to the scalars, is float16.
>
> This is implemented incorrectly, then. It makes no sense for floats,
> for which the limiting attribute is precision, not range. For floats,
> the result of min_scalar_type should be the type of the object itself,
> nothing else. E.g. min_scalar_type(x)==float64 if type(x) is float no
> matter what value it has.
>

I believe this behavior is necessary to avoid undesirable promotion of
arrays to float64. If you are working with extremely large float32 arrays,
and multiply or add a Python float, you would always have to say
np.float32(value), something rather tedious. Looking at the archives, I see
you've explained this before. ;)

http://mail.scipy.org/pipermail/numpy-discussion/2007-April/027079.html

The re-implementation of type casting, other than for the case at hand which
I consider to be a wrong behavior, follows the existing pattern as closely
as I could understand it from the previous implementation. I tightened the
rules just slightly to avoid some problematic downcasts which previously
were occurring:

>>> np.__version__
'1.4.1'
>>> 10*np.ones(2,dtype=np.int8)
array([-31072, -31072], dtype=int16)
>>> 1e60*np.ones(2,dtype=np.float32)
array([ Inf,  Inf], dtype=float32)
>>>

>>> np.__version__
'2.0.0.dev-4cb2eb4'
>>> 10*np.ones(2,dtype=np.int8)
array([10, 10], dtype=int32)
>>> 1e60*np.ones(2,dtype=np.float32)
array([  1.e+60,   1.e+60])
>>>

-Mark
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Odd numerical difference between Numpy 1.5.1 and Numpy > 1.5.1

2011-04-12 Thread Robert Kern
On Mon, Apr 11, 2011 at 23:43, Mark Wiebe  wrote:
> On Mon, Apr 11, 2011 at 8:48 PM, Travis Oliphant 
> wrote:

>> It would be good to see a simple test case and understand why the boolean
>> multiplied by the scalar double is becoming a float16.     In other words,
>>  why does
>> (1-test)*t
>> return a float16 array
>> This does not sound right at all and it would be good to understand why
>> this occurs, now.   How are you handling scalars multiplied by arrays in
>> general?
>
> The reason it's float16 is that the first function in the multiply function
> list for which both types can be safely cast to the output type,

Except that float64 cannot be safely cast to float16.

> after
> applying the min_scalar_type function to the scalars, is float16.

This is implemented incorrectly, then. It makes no sense for floats,
for which the limiting attribute is precision, not range. For floats,
the result of min_scalar_type should be the type of the object itself,
nothing else. E.g. min_scalar_type(x)==float64 if type(x) is float no
matter what value it has.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Extending numpy statistics functions (like mean)

2011-04-12 Thread Bruce Southey
On 04/11/2011 05:03 PM, Keith Goodman wrote:
> On Mon, Apr 11, 2011 at 2:36 PM, Sergio Pascual  
> wrote:
>> Hi list.
>>
>> For my application, I would like to implement some new statistics
>> functions over numpy arrays, such as truncated mean. Ideally this new
>> function should have the same arguments
>> as numpy.mean: axis, dtype and out. Is there a way of writing this
>> function that doesn't imply writing it in C from scratch?
>>
>> I have read the documentation, but as far as I see, ufuncs convert an N
>> dimensional array into another and generalized ufuncs require fixed
>> dimensions. numpy mean converts an N dimensional array either into a
>> number or an N - 1 dimensional array.
> Here's a slow, brute force method:
>
>>> a = np.arange(9).reshape(3,3)
>>> a
> array([[0, 1, 2],
> [3, 4, 5],
> [6, 7, 8]])
>>> idx = a > 6
>>> b = a.copy()
>>> b[idx] = 0
>>> b
> array([[0, 1, 2],
> [3, 4, 5],
> [6, 0, 0]])
>>> 1.0 * b.sum(axis=0) / (~idx).sum(axis=0)
> array([ 3. ,  2.5,  3.5])
The truncated functions are easily handled by masked arrays and somewhat 
harder by using indexing (as seen below). There is limited functionality 
in scipy.stats as well. So first check scipy.stats to see if the 
functions you need are there. Otherwise please post a list of possible 
functions to the scipy-dev list because that is the most likely home.


 >>> import numpy as np
 >>> from numpy import ma
 >>> y = np.arange(35).reshape(5,7)
 >>> b=y>20
 >>> z=ma.masked_where(y <= 20, y)
 >>> z.mean()
27.5
 >>> z.mean(axis=0)
masked_array(data = [24.5 25.5 26.5 27.5 28.5 29.5 30.5],
  mask = [False False False False False False False],
fill_value = 1e+20)

 >>> z.mean(axis=1)
masked_array(data = [-- -- -- 24.0 31.0],
  mask = [ True  True  True False False],
fill_value = 1e+20)

 >>> y[b].mean()
27.5
 >>> y[b[:,5]].mean(axis=0)
array([ 24.5,  25.5,  26.5,  27.5,  28.5,  29.5,  30.5])
 >>> y[b[:,5]].mean(axis=1)
array([ 24.,  31.])
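If a reusable routine with numpy.mean's axis/dtype/out signature is wanted, a masked-array wrapper is one option (a sketch; the name truncated_mean and the lo/hi bounds are made up for illustration):

```python
import numpy as np
from numpy import ma

def truncated_mean(a, lo=None, hi=None, axis=None, dtype=None, out=None):
    """Mean of `a`, ignoring values below `lo` or above `hi`."""
    m = ma.masked_array(a)
    if lo is not None:
        m = ma.masked_less(m, lo)     # mask values < lo
    if hi is not None:
        m = ma.masked_greater(m, hi)  # mask values > hi
    return m.mean(axis=axis, dtype=dtype, out=out)

y = np.arange(35).reshape(5, 7)
print(truncated_mean(y, lo=21))          # 27.5
print(truncated_mean(y, lo=21, axis=0))
```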


Bruce
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Error importing numpy in cygwin

2011-04-12 Thread David Cournapeau
On Tue, Apr 12, 2011 at 10:25 AM, Gabriella Turek  wrote:
>
> Hello, I'm working with cygwin 1.7.9. I've installed python 2.6 from the
> cygwin distro. I've also installed numpy from the distro (v. 1.4.1), and when
> that failed, I tried to install directly from source (v. 1.5.1).
> In both cases when I try to run a script that imports numpy (including
> running the numpy tests) I get the following
> error message:

It seems the ctypes import fails. Are you sure that your Python is
correctly installed? What does the following do:

python -c "import ctypes"

If that does not work, the problem is with how Python was installed, not with numpy.

cheers,

David
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion