Re: [Numpy-discussion] trim_zeros in more than one dimension?

2010-04-21 Thread Keith Goodman
On Tue, Apr 20, 2010 at 6:03 AM, Andreas Hilboll li...@hilboll.de wrote:
 Hi there,

 is there an easy way to do something like trim_zeros() does, but for a
 n-dimensional array? I have a 2d array with only zeros in the first and
 last rows and columns, and would like to trim this array to only the
 non-zero part ...

 Thanks,

 Andreas.

I have some code that does something similar for NaNs.

Create an array:

>>> import numpy as np
>>> x = np.arange(12, dtype=float).reshape(3,4)
>>> x[[0,-1],:] = np.nan
>>> x[:,[0,-1]] = np.nan
>>> x
array([[ NaN,  NaN,  NaN,  NaN],
       [ NaN,   5.,   6.,  NaN],
       [ NaN,  NaN,  NaN,  NaN]])

Use the la package to convert the array to a labeled array:

>>> import la
>>> y = la.larry(x)

Then remove all rows and columns that contain only NaN:

>>> y.vacuum().A
array([[ 5.,  6.]])

or

>>> y.vacuum().squeeze().A
array([ 5.,  6.])

It should work for an arbitrary number of dimensions.

The code (Simplified BSD License) is here:

http://bazaar.launchpad.net/~kwgoodman/larry/trunk/annotate/head%3A/la/deflarry.py
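
If you would rather stay in plain NumPy, here is a minimal, hedged sketch of
the same idea (it is not larry's implementation; it assumes a 2d float array
with at least one non-NaN value, and it only trims the leading and trailing
all-NaN rows and columns, i.e. a 2d analogue of trim_zeros):

import numpy as np

def trim_nan_border(a):
    # Bounding box of the non-NaN entries: leading/trailing all-NaN rows
    # and columns are dropped, interior NaNs are kept.
    mask = ~np.isnan(a)
    rows = np.where(mask.any(axis=1))[0]
    cols = np.where(mask.any(axis=0))[0]
    return a[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]

For the array above this returns array([[ 5.,  6.]]). Repeating the same
any()-per-axis test in a loop over the axes would extend it to n dimensions.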


Re: [Numpy-discussion] Release candidate 3 for NumPy 1.4.1 and SciPy 0.7.2

2010-04-21 Thread Sebastian Haase
On Tue, Apr 20, 2010 at 2:23 AM, Ralf Gommers
ralf.gomm...@googlemail.com wrote:


 On Mon, Apr 19, 2010 at 9:19 PM, Ralf Gommers ralf.gomm...@googlemail.com
 wrote:


 On Mon, Apr 19, 2010 at 4:21 PM, Ralf Gommers
 ralf.gomm...@googlemail.com wrote:


 On Mon, Apr 19, 2010 at 3:35 PM, Sebastian Haase seb.ha...@gmail.com
 wrote:

 Hi,
 Congratulations. I might be unnecessarily dense - but what SciPy am I
 supposed to use with the new numpy 1.4.1 for Python 2.5? I'm surprised
 that there are no SciPy 0.7.2 binaries for Python 2.5 - is that
 technically not possible?

 You're not being dense, there are no 2.5 scipy binaries. I did not
 succeed in building them. The scipy 0.7 branch is so old (from Jan 2009)
 that it has never been compiled on OS X 10.6, and I did not yet find a way
 to get it to work. For Windows I also had problems, there it should be
 compiled against numpy 1.2 while the 2.6 binaries are compiled against numpy
 1.3. The variations in numpy, python and OS just added up to make it
 unworkable.

 I can give it another try after the final release, but first priority is
 to finally release.

 To remind myself of the issue I tried building it again, and managed to
 build a 2.5 binary against numpy 1.3 on OS X at least. Can anyone tell me
 why 2.5 binaries are supposed to be built against numpy 1.2 and 2.6 binaries
 against numpy 1.3?


 It seems I've crawled a bit further up the learning curve since last time I
 tried. Scipy binaries for python 2.5 (built against numpy 1.2) are now on
 Sourceforge, please test them.

What does "built against numpy 1.2" actually mean exactly? Is that
just a build-time thing, and do they actually work together (at run
time) with numpy 1.4.1? That would be all fine then ... what
platform(s) are you talking about? (What about Windows?)

- Sebastian


Re: [Numpy-discussion] trim_zeros in more than one dimension?

2010-04-21 Thread Charles R Harris
On Tue, Apr 20, 2010 at 7:03 AM, Andreas Hilboll li...@hilboll.de wrote:

 Hi there,

 is there an easy way to do something like trim_zeros() does, but for a
 n-dimensional array? I have a 2d array with only zeros in the first and
 last rows and columns, and would like to trim this array to only the
 non-zero part ...


I think for your application it would be easier to use a subarray, something
like a[1:-1, 1:-1].

Chuck


Re: [Numpy-discussion] Release candidate 3 for NumPy 1.4.1 and SciPy 0.7.2

2010-04-21 Thread Dag Sverre Seljebotn
Ralf Gommers wrote:
 Hi,
 
 I am pleased to announce the third release candidate of both Scipy 0.7.2 
 and NumPy 1.4.1. Please test, and report any problems on the NumPy or 
 SciPy list.

I had a round of segfaults in the SciPy/NumPy interlink, which I 
eventually tracked down to a leftover _dotblas.so from an older NumPy 
version lying in site-packages (apparently that file is gone in the 
newer NumPy, but would still be imported under some circumstances, and I 
foolishly just installed over the old version and subsequently forgot 
that I had done so).

This could have been avoided with a warning about removing any old numpy 
lying around first (though perhaps others are not as stupid as I was).

-- 
Dag Sverre


Re: [Numpy-discussion] Release candidate 3 for NumPy 1.4.1 and SciPy 0.7.2

2010-04-21 Thread Dag Sverre Seljebotn
Dag Sverre Seljebotn wrote:
 Ralf Gommers wrote:
 Hi,

 I am pleased to announce the third release candidate of both Scipy 
 0.7.2 and NumPy 1.4.1. Please test, and report any problems on the 
 NumPy or SciPy list.
 
 I had a round of segfaults in the SciPy/NumPy interlink, which I 
 eventually tracked down to a leftover _dotblas.so from an older NumPy 
 version lying in site-packages (apparently that file is gone in the 
 newer NumPy, but would still be imported under some circumstances, and I 
 foolishly just installed over the old version and subsequently forgot 
 that I had done so).

Sorry for the noise, the segfault was due to something else, I only 
thought I had fixed it. But it's my own MKL-specific mess causing it, so 
nothing to worry about.


-- 
Dag Sverre


Re: [Numpy-discussion] reformulating a series of correlations as one fft, ifft pair

2010-04-21 Thread Paul Northug
Stéfan van der Walt stefan at sun.ac.za writes:
<snip>
 
 I haven't checked your code in detail, but I'll mention two common
 problems with this approach in case it fits:
 
 1) When convolving a KxL with an MxN array, they both need to be zero
 padded to (K+M-1)x(L+N-1).
 2) One of the signals needs to be reversed and conjugated.
 
 Also have a look at scipy.signal.fftconvolve.  For real 2D images, I use:
 
 import scipy.signal as ss

 def fft_corr(A, B, *args, **kwargs):
     return ss.fftconvolve(A, B[::-1, ::-1, ...], *args, **kwargs)

Thanks Stefan. 

Do you mean either reverse or conjugate, not both? I am still a little confused.
Suppose you had a 3d problem and you wanted to convolve on two axes and
correlate on the third. I can see now how you could do so by using fftconvolve
and reversing the axis you wanted to correlate, as you've shown.

But how would you implement it the other way, by fft'ing and then conjugating
one of the fft pair, multiplying and ifft'ing? You can't conjugate along just
one axis. Also, I am not sure which is faster, conjugating an array or
reversing it.

I would like to correct another mistake I made when I said that the fft method
is much slower in this case. This is only true when the padding doesn't give
you dimensions that are split-radix-friendly or simply powers of 2.
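
For what it's worth, here is a minimal, hedged 1d sketch of the
conjugate-multiply-ifft route for real signals (the function name and the
padding choice are only for illustration, not anyone's production code):

import numpy as np

def fft_correlate(a, b):
    # Cross-correlation via the FFT: conjugating one spectrum is equivalent
    # to (circularly) reversing that real signal in the time domain.
    # Zero-padding to len(a) + len(b) - 1 gives the full linear correlation,
    # with non-negative lags first and negative lags wrapped to the end of
    # the result (reorder with slicing or np.roll if needed).  Padding
    # further, up to a power of two, can speed up the FFTs.
    n = len(a) + len(b) - 1
    A = np.fft.fft(a, n)
    B = np.fft.fft(b, n)
    return np.fft.ifft(A * np.conj(B)).real

For the 3d case you would, roughly, take the FFT over all three axes,
conjugate the full spectrum of one input, and pre-reverse that input along
the two axes you want convolved rather than correlated (up to the usual
circular-shift bookkeeping).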



Re: [Numpy-discussion] My GSoC Proposal to Implement a Subset of NumPy for PyPy

2010-04-21 Thread Dan Roberts
Thanks for the reply.  You're certainly right that your work is extremely
beneficial to mine.  At present I'm afraid a great deal of NumPy C code
isn't easily reusable and it's great you're addressing that.  I may not have
been thinking in line with Maciej, but I was thinking ufuncs would be
written in pure Python and jit compiled to an efficient form.  (We can make
lots of nice assumptions about them) That said, I think being able to write
generic ufuncs is a very good idea, and absolutely doable.

On Apr 20, 2010 7:48 AM, Travis Oliphant oliph...@enthought.com wrote:

On Apr 16, 2010, at 11:50 PM, Dan Roberts wrote:

 Hello NumPy Users,
 Hi everybody, my name i...

Hi Daniel,

This sounds like a great project, and I think it has promise.   I would
especially pay attention to the requests to make it easy to write ufuncs and
generalized ufuncs in RPython.   That has the most possibility of being
immediately useful.

Your timing is also very good. I am going to be spending some time
re-factoring NumPy to separate out the CPython interface from the underlying
algorithms.   I think this re-factoring should help you in your long-term
goals.   If you have any input or suggestions while the refactoring is
taking place, we are always open to suggestions and criticisms.

Thanks for writing a NumPy-related proposal.

Best regards,

-Travis

Thanks,
Daniel Roberts





Re: [Numpy-discussion] My GSoC Proposal to Implement a Subset of NumPy for PyPy

2010-04-21 Thread Dan Roberts
Oops, I intended to dig more through the NumPy source before I sent the
final version of that message, so I could be speaking from an informed
standpoint.

Thanks,
Daniel Roberts





Re: [Numpy-discussion] Numpy compilation error

2010-04-21 Thread David Cournapeau
On Tue, Apr 20, 2010 at 10:34 AM, Christopher Barker
chris.bar...@noaa.gov wrote:


 Pradeep Jha wrote:
 Thank you so much Robert. You are awesome :) That was totally the problem.
 One more question for you. Which are the things that you have to
 declare in PYTHONPATH manually?

 I never put anything in PYTHONPATH -- if you install everything you
 need, you won't need to. When I'm using something under development, I
 use setuptools' setup.py develop

I don't think it is wise to advocate the use of develop for python
newcomers. It comes with issues which are difficult to track down
(stale files for entry points which are not removed by uninstall -u,
etc...). Those are non-issues for experienced users, but a pain in
my experience for beginners.

The easy and reliable solution for a non-root install is PYTHONPATH for
python < 2.6 and install --user for python >= 2.6.

cheers,

David


Re: [Numpy-discussion] My GSoC Proposal to Implement a Subset of NumPy for PyPy

2010-04-21 Thread Dag Sverre Seljebotn
Dan Roberts wrote:
 Thanks for the reply.  You're certainly right that your work is 
 extremely beneficial to mine.  At present I'm afraid a great deal of 
 NumPy C code isn't easily reusable and it's great you're addressing 
 that.  I may not have been thinking in line with Maciej, but I was 
 thinking ufuncs would be written in pure Python and jit compiled to an 
 efficient form.  (We can make lots of nice assumptions about them) That 
 said, I think being able to write generic ufuncs is a very good idea, 
 and absolutely doable.

This might be relevant?

http://conference.scipy.org/proceedings/SciPy2008/paper_16/

-- 
Dag Sverre


Re: [Numpy-discussion] Release candidate 3 for NumPy 1.4.1 and SciPy 0.7.2

2010-04-21 Thread Ralf Gommers
On Wed, Apr 21, 2010 at 1:52 AM, Sebastian Haase seb.ha...@gmail.com wrote:

 On Tue, Apr 20, 2010 at 2:23 AM, Ralf Gommers
 ralf.gomm...@googlemail.com wrote:
  It seems I've crawled a bit further up the learning curve since last time
 I
  tried. Scipy binaries for python 2.5 (built against numpy 1.2) are now on
  Sourceforge, please test them.
 
 What does "built against numpy 1.2" actually mean exactly? Is that
 just a build-time thing, and do they actually work together (at run
 time) with numpy 1.4.1? That would be all fine then ... what
 platform(s) are you talking about? (What about Windows?)

 Correct, and it's the same for Windows and OS X binaries. Some scipy
modules include a numpy header file (mostly arrayobject.h). This is now
forward-compatible, so scipy compiled against numpy 1.2 works with 1.3 and
1.4.1 as well.

It won't work with numpy 2.0 though, and binary compatibility was also the
reason for the issues with numpy 1.4.0: in 1.4.0 the layout of the ndarray
object in memory changed (hence this minor release to undo that change), which
caused segfaults when it was used with scipy or other extensions compiled
against an older numpy.

Cheers,
Ralf


Re: [Numpy-discussion] trim_zeros in more than one dimension?

2010-04-21 Thread Robert Kern
On Tue, Apr 20, 2010 at 18:45, Charles R Harris
charlesr.har...@gmail.com wrote:

 On Tue, Apr 20, 2010 at 7:03 AM, Andreas Hilboll li...@hilboll.de wrote:

 Hi there,

 is there an easy way to do something like trim_zeros() does, but for a
 n-dimensional array? I have a 2d array with only zeros in the first and
 last rows and columns, and would like to trim this array to only the
 non-zero part ...

 I think for your application it would be easier to use a subarray, something
 like a[1:-1, 1:-1].

Yes, but I think he's asking for a function that will find the
appropriate slices for him, just like trim_zeros() does for 1D
arrays.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


[Numpy-discussion] Disabling Extended Precision in NumPy (like -ffloat-store)

2010-04-21 Thread Adrien Guillon
Hello all,

I've recently started to use NumPy to prototype some numerical
algorithms, which will eventually find their way to a GPU (where I
want to limit myself to single-precision operations for performance
reasons).  I have recently switched to the use of the single type in
NumPy to ensure I use single-precision floating point operations.

My understanding, however, is that Intel processors may use extended
precision for some operations anyways unless this is explicitly
disabled, which is done with gcc via the -ffloat-store operation.
Since I am prototyping algorithms for a different processor
architecture, where the extended precision registers simply do not
exist, I would really like to force NumPy to limit itself to using
single-precision operations throughout the calculation (no extended
precision in registers).

How can I do this?

Thanks!

AJ


Re: [Numpy-discussion] Disabling Extended Precision in NumPy (like -ffloat-store)

2010-04-21 Thread Pauli Virtanen
Wed, 2010-04-21 at 10:47 -0400, Adrien Guillon wrote:
[clip]
 My understanding, however, is that Intel processors may use extended
 precision for some operations anyways unless this is explicitly
 disabled, which is done with gcc via the -ffloat-store operation.
 Since I am prototyping algorithms for a different processor
 architecture, where the extended precision registers simply do not
 exist, I would really like to force NumPy to limit itself to using
 single-precision operations throughout the calculation (no extended
 precision in registers).

Probably the only way to ensure this is to recompile Numpy (and
possibly, also Python) with the -ffloat-store option turned on.

When compiling Numpy, you should be able to insert the flag via

OPT=-ffloat-store python setup.py build

-- 
Pauli Virtanen




Re: [Numpy-discussion] Disabling Extended Precision in NumPy (like -ffloat-store)

2010-04-21 Thread Charles R Harris
On Wed, Apr 21, 2010 at 8:47 AM, Adrien Guillon aj.guil...@gmail.com wrote:

 Hello all,

 I've recently started to use NumPy to prototype some numerical
 algorithms, which will eventually find their way to a GPU (where I
 want to limit myself to single-precision operations for performance
 reasons).  I have recently switched to the use of the single type in
 NumPy to ensure I use single-precision floating point operations.

 My understanding, however, is that Intel processors may use extended
 precision for some operations anyways unless this is explicitly
 disabled, which is done with gcc via the -ffloat-store operation.
 Since I am prototyping algorithms for a different processor
 architecture, where the extended precision registers simply do not
 exist, I would really like to force NumPy to limit itself to using
 single-precision operations throughout the calculation (no extended
 precision in registers).

 How can I do this?


Interesting question. But what precisely do you mean by limiting? Do you
want *all* floating-point calculations done in float32? Why does the extended
precision cause you problems? How do the operations of the GPU conflict with
those of the Intel FPU? And so on and so forth.

Chuck


Re: [Numpy-discussion] trim_zeros in more than one dimension?

2010-04-21 Thread Bruce Southey
On 04/21/2010 08:36 AM, Robert Kern wrote:
 On Tue, Apr 20, 2010 at 18:45, Charles R Harris
 charlesr.har...@gmail.com  wrote:

 On Tue, Apr 20, 2010 at 7:03 AM, Andreas Hilboll li...@hilboll.de wrote:
 Hi there,

 is there an easy way to do something like trim_zeros() does, but for a
 n-dimensional array? I have a 2d array with only zeros in the first and
 last rows and columns, and would like to trim this array to only the
 non-zero part ...

 I think for your application it would be easier to use a subarray, something
 like a[1:-1, 1:-1].
  
 Yes, but I think he's asking for a function that will find the
 appropriate slices for him, just like trim_zeros() does for 1D
 arrays.


If a row or column to be removed sums to zero then you can conditionally
remove it. For 2-d you can do:

import numpy as np
ba = np.array([[0,0,0], [1,2,0], [3,4,0], [0,0,0]])
ndx0 = ba.sum(axis=0) > 0  # mask of columns whose sum is greater than zero
ac = ba[:, ndx0]
ndx1 = ac.sum(axis=1) > 0  # mask of rows whose sum is greater than zero
ad = ac[ndx1, :]
>>> ba
array([[0, 0, 0],
       [1, 2, 0],
       [3, 4, 0],
       [0, 0, 0]])
>>> ad
array([[1, 2],
       [3, 4]])


If the sum along a dimension is also zero somewhere else in the array then
you need to correct the appropriate index first (which is why I created
a separate variable). Obviously, higher dimensions are left untested.
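
For what it's worth, a hedged n-d generalization of the same idea (untested
here; the function name is just for illustration) could loop the test over
every axis, using any(a != 0) rather than a positive sum so that negative
entries are handled too:

import numpy as np

def drop_zero_slices(a):
    # Drop every slice, along every axis, that is entirely zero.
    # Note: np.any over a tuple of axes needs a reasonably recent NumPy.
    for axis in range(a.ndim):
        other = tuple(i for i in range(a.ndim) if i != axis)
        keep = np.any(a != 0, axis=other)
        a = np.compress(keep, a, axis=axis)
    return a

Like the 2-d code above, this removes interior all-zero rows/columns as well,
not only the ones on the border.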

Bruce



[Numpy-discussion] Is this a bug, and if so, who's?

2010-04-21 Thread Ken Basye
Folks,
   Apologies for asking here, but I ran across this problem yesterday 
and probably need to file a bug.  The problem is I don't know if this is 
a Numpy bug, a Python bug, or both.  Here's an illustration, platform 
information follows.
   TIA,
   Ken


#
import collections
import numpy as np

class A (collections.namedtuple('ANT', ('x', 'y'))):
    def __float__(self):
        return self.y

# Same as A, but explicitly convert y to a float in __float__() - this
# works around the assert failure
class B (collections.namedtuple('BNT', ('x', 'y'))):
    def __float__(self):
        return float(self.y)

a0 = A(1.0, 2.0)
f0 = np.float64(a0)
print f0

a1 = A(float(1.0), float(2.0))
f1 = np.float64(a1)
print f1

b1 = B(np.float64(1.0), np.float64(2.0))
f2 = np.float64(b1)
print f2

a2 = A(np.float64(1.0), np.float64(2.0))
# On some platforms, the next line will trigger an assert:
# python: Objects/floatobject.c:1674: float_subtype_new: Assertion
# `(((PyObject*)(tmp))->ob_type == &PyFloat_Type)' failed.
f3 = np.float64(a2)
print f3
#

Platform info:

Python 2.6.5 (r265:79063, Apr 14 2010, 13:32:56)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-46)] on linux2

>>> numpy.__version__
'1.3.0'

~--$ uname -srvmpio
Linux 2.6.18-164.el5 #1 SMP Thu Sep 3 03:28:30 EDT 2009 x86_64 x86_64 
x86_64 GNU/Linux



Re: [Numpy-discussion] Disabling Extended Precision in NumPy (like -ffloat-store)

2010-04-21 Thread Christopher Barker
Adrien Guillon wrote:
  I use single-precision floating point operations.
 
 My understanding, however, is that Intel processors may use extended
 precision for some operations anyways unless this is explicitly
 disabled, which is done with gcc via the -ffloat-store operation.

IIUC, that forces the FPU to not use the 80 bits in temps, but I think
the FPU still uses 64 bits -- or does that flag actually force singles
to stay single (32 bit) through the whole process as well?

-CHB



-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR(206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

chris.bar...@noaa.gov


Re: [Numpy-discussion] Numpy compilation error

2010-04-21 Thread Christopher Barker
David Cournapeau wrote:
 I don't think it is wise to advocate the use of develop for python
 newcomers.

Fair enough.

What I know is that every scheme I've come up with for working with my
own under-development packages has been a pain in the #$@, and develop
has worked well for me.

 The easy and reliable solution for a non-root install is PYTHONPATH for
 python < 2.6 and install --user for python >= 2.6.

I hadn't read the thread carefully enough to realize that the OP was
asking about a non-root install, but in any case, I'd still encourage
folks to set ONE standard place in their PYTHONPATH, rather than one
for each package, and to be careful about running more than one
version of Python.

-Chris




-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR(206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

chris.bar...@noaa.gov