[Numpy-discussion] type conversion question

2013-04-18 Thread K.-Michael Aye
I don't understand why a direct assignment of a new dtype is sometimes 
possible (but messes up the values), and why at other times a seemingly 
harmless upcast (from my potentially ignorant point of view) is not 
possible.
So maybe a direct assignment of a new dtype is actually never a good 
idea? (I'm asking.) Should one always go the route of newarray = 
array(oldarray, dtype=newdtype)? But why then does the upcast sometimes 
raise an error and sometimes not?


Examples:

In [140]: slope.read_center_window()

In [141]: slope.data.dtype
Out[141]: dtype('float32')

In [142]: slope.data[1,1]
Out[142]: 10.044398

In [143]: val = slope.data[1,1]

In [144]: slope.data.dtype='float64'

In [145]: slope.data[1,1]
Out[145]: 586.98938070189865

#-
# Here, the value of data[1,1] has completely changed (and so has the
# rest of the array), and no error was given.
# But then...
#

In [146]: val.dtype
Out[146]: dtype('float32')

In [147]: val
Out[147]: 10.044398

In [148]: val.dtype='float64'
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-148-52a373a41cac> in <module>()
----> 1 val.dtype='float64'

AttributeError: attribute 'dtype' of 'numpy.generic' objects is not writable

=== end of code

So why is there an error in the 2nd case, but no error in the first 
case? Is there a logic to it?

Thanks,
Michael





Re: [Numpy-discussion] type conversion question

2013-04-18 Thread K.-Michael Aye
On 2013-04-19 01:02:59 +0000, Benjamin Root said:

 On Thu, Apr 18, 2013 at 7:31 PM, K.-Michael Aye kmichael@gmail.com wrote:
 [snip -- original question quoted in full above]
 
 When you change a dtype like that in the first one, you aren't really 
 upcasting anything.  You are changing how numpy interprets the 
 underlying bits.  Because you went from a 32-bit element size to a 
 64-bit element size, you are actually seeing the double-precision 
 representation of 2 of your original data points together.
 
 The correct way to cast is to do something like a = 
 slope.data.astype('float64').  That makes a copy and does the casting 
 as safely as possible.
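 
 A minimal sketch of the difference, using a small stand-in array in
 place of slope.data:
 
 import numpy as np
 
 a = np.arange(4, dtype=np.float32)
 
 # Reinterpreting the same 16 bytes: pairs of float32 values fuse into
 # single float64 values, so shape and values both change.
 # (Assigning a.dtype = np.float64 does the same reinterpretation in place.)
 view64 = a.view(np.float64)       # shape (2,), scrambled values
 
 # Casting copies: a new float64 array with the values preserved.
 cast64 = a.astype(np.float64)     # shape (4,), values 0.0 ... 3.0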
 
 As for the second one, you have what is called a numpy scalar.  These 
 aren't quite the same thing as a numpy array, and can be a bit more 
 restrictive.  Can you imagine what sort of issues that would pose if 
 one could start viewing and modifying neighboring chunks of memory 
 without ever having to mess around with pointers?  It would be a 
 hacker's dream!
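 
 The same distinction shows up with any numpy scalar; casting makes a
 new scalar instead of reinterpreting memory. A sketch:
 
 import numpy as np
 
 val = np.float32(10.044398)       # a numpy scalar, like slope.data[1,1]
 # val.dtype = 'float64'           # AttributeError: dtype is not writable
 val64 = val.astype(np.float64)    # fine: a new float64 scalar, same value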
 
 I hope that clears things up.
 Ben Root

yes, thanks!

Michael




[Numpy-discussion] bug in numpy.mean() ?

2012-01-24 Thread K.-Michael Aye
I know I know, that's pretty outrageous to even suggest, but please 
bear with me, I am stumped as you may be:

2-D data file here:
http://dl.dropbox.com/u/139035/data.npy

Then:
In [3]: data.mean()
Out[3]: 3067.024383998

In [4]: data.max()
Out[4]: 3052.4343

In [5]: data.shape
Out[5]: (1000, 1000)

In [6]: data.min()
Out[6]: 3040.498

In [7]: data.dtype
Out[7]: dtype('float32')


A mean value calculated in a loop over the data gives me 3045.747251076416.
I first thought I still misunderstood how data.mean() works, per axis 
and so on, but I did the same with a flattened version, with the same 
result.

Am I really so tired that I can't see what I am doing wrong here?
For completeness: the data was read by an osgeo.gdal dataset method 
called ReadAsArray().
My numpy.__version__ gives me 1.6.1 and my whole setup is based on 
Enthought's EPD.

Best regards,
Michael





Re: [Numpy-discussion] bug in numpy.mean() ?

2012-01-24 Thread K.-Michael Aye
Thank you Bruce and all, 
I knew I was doing something wrong (I should have read the mean method's 
documentation more closely). I am of course glad that it's so easily 
understandable.
But: if the error can get this big, wouldn't it be a better idea for the 
accumulator to always be of type 'float64' and then convert back to the 
type of the original array afterwards?
As one can see in this case, the result would be much closer to the true value.
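
For reference, the accumulator type can already be chosen per call; a
minimal sketch, assuming 'data' is the float32 array from above:

import numpy as np
good = data.mean(dtype=np.float64)   # accumulate in float64

This gives essentially the loop result without copying the array.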


Michael


On 2012-01-24 19:01:40 +0000, Val Kalatsky said:



Just what Bruce said. 

You can run the following to confirm:
np.mean(data - data.mean())

If for some reason you do not want to convert to float64 you can add 
the result of the previous line to the bad mean:

bad_mean = data.mean()
good_mean = bad_mean + np.mean(data - bad_mean)
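
A quick way to reproduce the effect with synthetic data, assuming a
numpy version (like the 1.6 above) that accumulates float32 sums naively:

import numpy as np

data = np.zeros((1000, 1000), dtype=np.float32) + 3045.747
bad_mean = data.mean()                            # drifts away from 3045.747
good_mean = bad_mean + np.mean(data - bad_mean)   # recovers ~3045.747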

Val

On Tue, Jan 24, 2012 at 12:33 PM, K.-Michael Aye 
kmichael@gmail.com wrote:

[snip -- original question quoted in full above]




[Numpy-discussion] howto store 2D function values and their grid points

2011-12-06 Thread K.-Michael Aye
Dear all,

I can't wrap my head around this. Mathematically it's not hard, I just 
don't know how to store and access it without many loops.

I have a function f(x,y).

I would like to calculate it at x = arange(20,101,20) and y = arange(2,30,2)

How do I store that in a multi-dimensional array, preserving the grid 
points where I did the calculation, so that I can later plot groups of 
function plots like this:

for elem in y_values:
    plot(x_values, f(x_values, y=elem))

or

for elem in x_values:
    plot(y_values, f(x=elem, y=y_values))

I can smell that the solution can't be that hard, but I currently don't 
understand how to keep the points where I did the evaluation of the 
function.

Maybe it is also possible to do something with functional tricks such as 'map'?
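
One direction that seems to avoid loops for the evaluation itself,
assuming f accepts array arguments (a sketch with a made-up f):

import numpy as np
import matplotlib.pyplot as plt

def f(x, y):                  # stand-in for the real function
    return x * np.sqrt(y)

x = np.arange(20, 101, 20)    # x grid points, shape (5,)
y = np.arange(2, 30, 2)       # y grid points, shape (14,)

# meshgrid keeps the grid points alongside the values:
X, Y = np.meshgrid(x, y)      # both have shape (14, 5)
F = f(X, Y)                   # f evaluated on the whole grid at once

for i, yval in enumerate(y):  # one curve per y value
    plt.plot(x, F[i], label='y=%g' % yval)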

Thanks for any suggestions!

Best regards,
Michael




[Numpy-discussion] Why arange has no stop-point opt-in?

2010-12-30 Thread K.-Michael Aye
Dear all,

I'm a bit puzzled that there seems to be no way to cleanly code an 
interval of evenly spaced numbers that includes the given stop point.
linspace offers to include the stop point, but arange does not?
Am I missing something? (I am aware that I could do 
arange(9, 15.0001, 0.1), but that's exactly what I want to avoid!)

Best regards and Happy New Year!
Michael




Re: [Numpy-discussion] Why arange has no stop-point opt-in?

2010-12-30 Thread K.-Michael Aye
On 2010-12-30 16:43:12 +0200, josef.p...@gmail.com said:

 
 Since linspace exists, I don't see much point in adding the stop point
 in arange. I use arange mainly for integers as numpy equivalent of
 python's range. And I often need arange(n+1) which is less writing
 than arange(n, include_end_point=True)

I agree that in some cases it means more writing.
But arange(a, n+1, 0.1) would of course fail in this case.
And the big difference is that for linspace I first need to calculate 
how many steps it is, to achieve what I believe is a frequent use 
case.
As we already have the 'convenience' of both linspace and arange, which 
in principle could be done by one function alone if we precalculated 
all required information ourselves, why not go the full way and take 
all the overhead away from the user?
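
For the record, the precalculation linspace requires, as a minimal
sketch for the arange(9, 15.0001, 0.1) example:

import numpy as np

start, stop, step = 9.0, 15.0, 0.1
n = int(round((stop - start) / step)) + 1   # 61 points
vals = np.linspace(start, stop, n)          # stop point included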

Michael

 
 Josef
 
 





[Numpy-discussion] 3 dim array unpacking

2010-07-12 Thread K.-Michael Aye
Dear numpy hackers,

I can't find the syntax for unpacking the 3 colour planes of an RGB array.
So I have an MxNx3 image array 'img' and would like to do:

red, green, blue = img[magical_slicing]

Which slicing magic do I need to apply?
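
Two possibilities, assuming img is a numpy array of shape (M, N, 3) --
a sketch, not the only spelling:

import numpy as np

img = np.zeros((4, 5, 3))               # stand-in M x N x 3 image

# Move the colour axis to the front; unpacking iterates over it:
red, green, blue = np.rollaxis(img, 2)  # same as img.transpose(2, 0, 1)

# Plain slicing works as well:
red, green, blue = img[..., 0], img[..., 1], img[..., 2]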

Thanks for your help!

BR,
Michael

