Re: [Numpy-discussion] Manipulate neighboring points in 2D array

2012-12-27 Thread Zachary Pincus
> I have a 2D array, let's say `np.random.random((100,100))`, and I want to do a
> simple manipulation on each point's neighbors, like dividing their values by 3.
> 
> So for each array value, x, and its neighbors n:
> 
> n n n      n/3 n/3 n/3
> n x n  ->  n/3  x  n/3
> n n n      n/3 n/3 n/3
> 
> I searched a bit and found out about the scipy ndimage filters, but if I'm not
> wrong, there is no such function. Of course me being wrong is quite possible,
> as I have not comprehended the whole ndimage module, but I tried generic_filter
> for example and browsed other functions.
> 
> Is there a better way to do the above manipulation, instead of using a for loop
> over every array element?

I am not sure I understand the above manipulation... typically neighborhood 
operators take an array element and its neighborhood and then give a single 
output that becomes the value of the new array at that point. That is, a 3x3 
neighborhood filter would act as a function F(R^{3x3}) -> R. It appears that 
what you're talking about above is a function F(R^{3x3}) -> R^{3x3}. But how is 
this output to map onto the original array positions? Is the function to be 
applied to non-overlapping neighborhoods? Is it to be applied to all 
neighborhoods and then summed at each position to give the output array?
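
For comparison, a standard F(R^{3x3}) -> R operator in scipy.ndimage looks like
this (a minimal sketch; the mean is just an arbitrary example of the reducing
function):

import numpy as np
from scipy import ndimage

a = np.random.random((100, 100))

# Each 3x3 window is reduced to a single number, which becomes the output
# value at the window's center position.
out = ndimage.generic_filter(a, np.mean, size=3)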

If you can describe the problem in a bit more detail, with perhaps some sample 
input and output for what you desire (and/or with some pseudocode describing 
how it would work in a looping-over-each-element approach), I'm sure folks can 
figure out how best to do this in numpy.

Zach
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] numpy.testing.asserts and masked array

2012-12-27 Thread Chao YUE
Thanks. I tried again, it works.
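
For the record, what works for me now is roughly this (a minimal sketch, with
made-up arrays just to show the call):

import numpy as np
from numpy.ma import testutils

a = np.ma.array([1.0, 2.0, 3.0], mask=[False, True, False])
b = np.ma.array([1.0, 9.9, 3.0], mask=[False, True, False])

# Masked positions are treated as equal, so this passes even though the
# underlying data differ at the masked element.
testutils.assert_almost_equal(a, b)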

On Thu, Dec 27, 2012 at 10:35 PM, Ralf Gommers wrote:

> from numpy.ma import testutils
>



-- 
***
Chao YUE
Laboratoire des Sciences du Climat et de l'Environnement (LSCE-IPSL)
UMR 1572 CEA-CNRS-UVSQ
Batiment 712 - Pe 119
91191 GIF Sur YVETTE Cedex
Tel: (33) 01 69 08 29 02; Fax:01.69.08.77.16

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] numpy.testing.asserts and masked array

2012-12-27 Thread Ralf Gommers
On Thu, Dec 27, 2012 at 12:23 AM, Chao YUE  wrote:

> Dear all,
>
> I found here
> http://mail.scipy.org/pipermail/numpy-discussion/2009-January/039681.html
> that one should use numpy.ma.testutils.assert_almost_equal for masked array
> assertions, but I cannot find the np.ma.testutils module.
> Am I doing something wrong? My numpy version is 1.6.2. Thanks!


"from numpy.ma import testutils" works for me.

Ralf
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Manipulate neighboring points in 2D array

2012-12-27 Thread deb
Hi,

I have a 2D array, let's say `np.random.random((100,100))`, and I want
to do a simple manipulation on each point's neighbors, like dividing
their values by 3.

So for each array value, x, and its neighbors n:

n n n      n/3 n/3 n/3
n x n  ->  n/3  x  n/3
n n n      n/3 n/3 n/3

I searched a bit and found out about the scipy ndimage filters, but if I'm
not wrong, there is no such function. Of course me being wrong is quite
possible, as I have not comprehended the whole ndimage module, but I tried
generic_filter for example and browsed other functions.

Is there a better way to do the above manipulation, instead of using a for
loop over every array element?
TIA
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Pre-allocate array

2012-12-27 Thread Chris Barker - NOAA Federal
On Thu, Dec 27, 2012 at 8:44 AM, Nikolaus Rath  wrote:

> I have an array that I know will need to grow to X elements. However, I
> will need to work with it before it's completely filled.

what sort of "work with it" do you mean? -- resize() is dangerous if
there are any other views on the data block...
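
For instance (a small sketch of the failure mode; run it as a script rather
than at an interactive prompt, since extra interpreter references also trip
the check):

import numpy as np

a = np.arange(10)
v = a[2:5]      # a view sharing a's memory
a.resize(20)    # raises ValueError: cannot resize an array that references
                # or is referenced by another array (message paraphrased)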


> bigarray = np.empty(X)
> current_size = 0
> for i in something:
>     buf = produce_data(i)
>     bigarray[current_size:current_size+len(buf)] = buf
>     current_size += len(buf)
>     # Do things with bigarray[:current_size]
>
> This avoids having to allocate new buffers and copying data around, but
> I have to separately manage the current array size.

yup -- but not a bad option, really.

> Alternatively, I
> could do
>
> bigarray = np.empty(0)
> current_size = 0
> for i in something:
>     buf = produce_data(i)
>     bigarray.resize(len(bigarray)+len(buf))
>     bigarray[-len(buf):] = buf
>     # Do things with bigarray
>
> this is much more elegant, but the resize() calls may have to copy data
> around.

Yes, they will -- but whether that's a problem or not depends on your
use-case. If you are adding elements one-by-one, the re-allocating
and copying of memory could be a big overhead. But if buf is not that
"small", then the overhead gets lost in the wash. You'd have to
profile to be sure, but I found that if, in this case, "buf" is on the
order of 1/16 the size of bigarray or larger, you won't see it
(vague memory...)

> Is there any way to tell numpy to allocate all the required memory while
> using only a part of it for the array? Something like:
>
> bigarray = np.empty(50, will_grow_to=X)
> bigarray.resize(X) # Guaranteed to work without copying stuff  around

no -- you could probably fudge it by messing with the strides, though
you'd need to keep track yourself of either how much memory was
originally allocated, or how much is currently used, like you
did above.
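
i.e. something along these lines (a rough sketch of that bookkeeping; the
names and sizes are made up):

import numpy as np

X = 1000000              # final size, known in advance
storage = np.empty(X)    # allocate the whole block up front
used = 0                 # how much of it is filled so far

def append(buf):
    # Copy buf into the pre-allocated block and return a view of the filled
    # part. Nothing is ever reallocated, but the fill level is tracked by
    # hand, just as in your first version.
    global used
    storage[used:used + len(buf)] = buf
    used += len(buf)
    return storage[:used]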

NOTE: I've written a couple of "growable array" classes for just this
problem. One in pure Python, and one in Cython that isn't quite
finished. I've enclosed the pure Python one; let me know if you're
interested in the Cython version (it may need some work to be fully
functional).

-Chris




-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

chris.bar...@noaa.gov


accumulator.py
Description: Binary data


test_accumulator.py
Description: Binary data
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Pre-allocate array

2012-12-27 Thread Nikolaus Rath
Hello,

I have an array that I know will need to grow to X elements. However, I
will need to work with it before it's completely filled. I see two ways
of doing this:

bigarray = np.empty(X)
current_size = 0
for i in something:
    buf = produce_data(i)
    bigarray[current_size:current_size+len(buf)] = buf
    current_size += len(buf)
    # Do things with bigarray[:current_size]

This avoids having to allocate new buffers and copying data around, but
I have to separately manage the current array size. Alternatively, I
could do

bigarray = np.empty(0)
current_size = 0
for i in something:
    buf = produce_data(i)
    bigarray.resize(len(bigarray)+len(buf))
    bigarray[-len(buf):] = buf
    # Do things with bigarray

this is much more elegant, but the resize() calls may have to copy data
around.

Is there any way to tell numpy to allocate all the required memory while
using only a part of it for the array? Something like:

bigarray = np.empty(50, will_grow_to=X)
bigarray.resize(X) # Guaranteed to work without copying stuff  around


Thanks,
-Nikolaus


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] dtype "reduction"

2012-12-27 Thread Nicolas Rougier

Yep, I'm trying to construct dtype2 programmatically and was hoping for some 
function giving me a "canonical" expression of the dtype. I've started playing 
with fields, but it's just a bit harder than I thought (lots of different cases 
and recursion).
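
For the record, the direction I'm heading is roughly this (a rough sketch only:
it handles a single level of nesting and assumes all subfields of a compound
field share the same base type; anything else is left untouched):

import numpy as np

def reduce_dtype(dt):
    # Collapse each nested compound field whose subfields all share one
    # base type into a plain (name, base, count) field.
    fields = []
    for name in dt.names:
        sub = dt.fields[name][0]          # the field's own dtype
        if sub.names:                     # it is itself a compound dtype
            bases = {sub.fields[n][0] for n in sub.names}
            if len(bases) == 1:           # homogeneous subfields
                fields.append((name, bases.pop(), len(sub.names)))
                continue
        fields.append((name, sub))        # fallback: keep the field as-is
    return np.dtype(fields)

# With dtype1 from my original message, reduce_dtype(dtype1) gives the same
# layout as dtype2.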

Thanks for the answer.


Nicolas

On Dec 27, 2012, at 1:32, Nathaniel Smith wrote:

> On Wed, Dec 26, 2012 at 8:09 PM, Nicolas Rougier
>  wrote:
>> 
>> 
>> Hi all,
>> 
>> 
>> I'm looking for a way to "reduce" dtype1 into dtype2 (when it is possible of 
>> course).
>> Is there some easy way to do that by any chance ?
>> 
>> 
>> dtype1 = np.dtype( [ ('vertex',  [('x', 'f4'),
>>                                   ('y', 'f4'),
>>                                   ('z', 'f4')]),
>>                      ('normal',  [('x', 'f4'),
>>                                   ('y', 'f4'),
>>                                   ('z', 'f4')]),
>>                      ('color',   [('r', 'f4'),
>>                                   ('g', 'f4'),
>>                                   ('b', 'f4'),
>>                                   ('a', 'f4')]) ] )
>> 
>> dtype2 = np.dtype( [ ('vertex',  'f4', 3),
>>                      ('normal',  'f4', 3),
>>                      ('color',   'f4', 4)] )
>> 
> 
> If you have an array whose dtype is dtype1, and you want to convert it
> into an array with dtype2, then you just do
>  my_dtype2_array = my_dtype1_array.view(dtype2)
> 
> If you have dtype1 and you want to programmatically construct dtype2,
> then that's a little more fiddly and depends on what exactly you're
> trying to do, but start by poking around with dtype1.names and
> dtype1.fields, which contain information on how dtype1 is put together
> in the form of regular python structures.
> 
> -n
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion