Re: [Numpy-discussion] get range of numpy type

2008-06-04 Thread Nathan Bell
On Wed, Jun 4, 2008 at 12:16 AM, Christopher Burns [EMAIL PROTECTED] wrote:
 Is there a way to get the range of a numpy type?  I'd like to clamp a
 parameter to be within the range of a numpy type, np.uint8, np.uint32...

 Something like:
 if x > max_value_of(np.uint8):
     x = max_value_of(np.uint8)

That kind of information is available via numpy.finfo() and numpy.iinfo():

In [12]: finfo('d').max
Out[12]: 1.7976931348623157e+308

In [13]: iinfo('i').max
Out[13]: 2147483647

In [14]: iinfo(uint8).max
Out[14]: 255
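
To make the clamp concrete, here is a small sketch of the idiom Christopher described, using iinfo/finfo to look up the range (the helper name clamp_to_dtype is made up for illustration):

```python
import numpy as np

def clamp_to_dtype(x, dtype):
    # iinfo covers integer dtypes, finfo covers floating-point ones
    info = np.iinfo(dtype) if np.issubdtype(dtype, np.integer) else np.finfo(dtype)
    return min(max(x, info.min), info.max)
```

For example, clamp_to_dtype(300, np.uint8) gives 255.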


-- 
Nathan Bell [EMAIL PROTECTED]
http://graphics.cs.uiuc.edu/~wnbell/
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] 1.1.0 OSX Installer Fails Under 10.5.3?

2008-06-04 Thread J. Stark
Chris,

many thanks. Could I suggest that this information be featured 
prominently in the Read Me in the Installer, and perhaps also at 
http://www.scipy.org/Download where this is given as the official 
binary distribution for MacOSX. You might want to change the error 
message too, since I think that some people will interpret "System 
Python" to mean the default Python provided by the standard system 
install. Since this is 2.5.1 on Leopard, the error message could be 
confusing.

On this topic, I would be interested to hear people's advice on using 
the system-provided Python vs. an independent install. In 25 years of 
using Macs I have learned through several painful lessons that it's 
wise to customize the system as little as possible: this minimizes 
conflicts and reduces problems when doing system upgrades. I 
have therefore always used the default Python provided by OSX, so far 
with no obvious disadvantages for the types of scripts I use 
(primarily home-written SciPy scientific code). However, I note that 
many people run either the pythonmac.org distribution, or the 
ActiveState one. What are the advantages to this?

Many thanks

J.

Jaroslav,

The installer works with the MacPython from python.org, not Apple's python
(the one that ships with Leopard).

The MacPython is installed in the /Library/Frameworks... It should work if
your python is here:

cburns$  python -c "import sys; print sys.prefix"
/Library/Frameworks/Python.framework/Versions/2.5

Won't work if your python is here:

cburns$  python-apple -c "import sys; print sys.prefix"
/System/Library/Frameworks/Python.framework/Versions/2.5

Sorry for the confusion,
Chris

On Tue, Jun 3, 2008 at 2:33 PM, J. Stark [EMAIL PROTECTED] wrote:

  I have just tried to run the 1.1.0 OSX installer on a MacBookAir
  running 10.5.3 and the installer fails with

  You cannot install numpy 1.1.0 on this volume. numpy requires System
  Python 2.5 to install.

  The system python version reports as

  jaroslav$ python
  Python 2.5.1 (r251:54863, Apr 15 2008, 22:57:26)
  [GCC 4.0.1 (Apple Inc. build 5465)] on darwin

  which is the same version that Leopard has had all along, as far as I
  am aware. On the other hand, there have been some reports on
  PythonMac about odd python behaviour following the 10.5.3 upgrade.

  Has anybody used this installer successfully under 10.5.3, and/or
  have any idea of what is going on?

  Incidentally, this is a new machine with just the default system
  installation.

  Jaroslav


[Numpy-discussion] inconsistent behavior in binary_repr

2008-06-04 Thread Damian Eads
Hi,

I noticed some odd behavior in binary_repr when the width parameter is 
used. In most cases it works,

In [23]: numpy.binary_repr(1, width=8)
Out[23]: '00000001'

In [24]: numpy.binary_repr(2, width=8)
Out[24]: '00000010'

In [25]: numpy.binary_repr(3, width=8)
Out[25]: '00000011'

In [26]: numpy.binary_repr(4, width=8)
Out[26]: '00000100'

except when 0 is passed, I get the following

In [27]: numpy.binary_repr(0, width=8)
Out[27]: '0'

Is this what the output is intended to be for the 0 case?
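
(Not from the thread: a hedged workaround sketch is to left-pad the unpadded output yourself; this ignores the negative-number / two's-complement case, and the helper name is made up.)

```python
import numpy as np

def binary_repr_padded(num, width):
    # zfill pads on the left with zeros, so the zero case also
    # comes out at the requested width
    return np.binary_repr(num).zfill(width)
```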

Damian


Re: [Numpy-discussion] inconsistent behavior in binary_repr

2008-06-04 Thread Robert Kern
On Wed, Jun 4, 2008 at 2:56 AM, Damian Eads [EMAIL PROTECTED] wrote:
 Hi,

 I noticed some odd behavior in binary_repr when the width parameter is
 used. In most cases it works,

 In [23]: numpy.binary_repr(1, width=8)
 Out[23]: '00000001'

 In [24]: numpy.binary_repr(2, width=8)
 Out[24]: '00000010'

 In [25]: numpy.binary_repr(3, width=8)
 Out[25]: '00000011'

 In [26]: numpy.binary_repr(4, width=8)
 Out[26]: '00000100'

 except when 0 is passed, I get the following

 In [27]: numpy.binary_repr(0, width=8)
 Out[27]: '0'

 Is this what the output is intended to be for the 0 case?

No. SVN trunk seems to work correctly. What version of numpy do you have?

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
 -- Umberto Eco


Re: [Numpy-discussion] Installation info

2008-06-04 Thread David Cournapeau
Robert Kern wrote:


 There are a lot of them. Feel free to add any additional tests you
 think are necessary, and we'll see how painful it is at build-time.
   

What would be acceptable? I quickly tested on my macbook, on mac os X: 
it takes ~2 seconds per 25 function tests. If speed really is a problem, 
then we can first test them as a group, and then one by one for the 
platforms where it does not work (something like AC_CHECK_FUNCS_ONCE, if 
you are familiar with autotools)?

It should not be too complicated to add this to distutils, and I believe 
it would make the configuration more robust,

cheers,

David



Re: [Numpy-discussion] Installation info

2008-06-04 Thread David Cournapeau
Charles R Harris wrote:

 It probably just grew to fix problems as they arose. It should be 
 possible to test for every function and fall back to the double 
 versions that are more reliably present. It would be nicer if all 
 compilers tried to conform to recent standards, i.e., be less than 9 
 years out of date, but that is a bit much to ask for.

Most compilers are not compatible (none of them supports C99 at 100%; 
the better question is which subset is powerful and implemented 
thoroughly).

I had some surprising problems with numscons and mingw, and obviously MS 
compilers/runtime (each version has a different configuration for the 
part of the code we are talking about).

cheers,

David


Re: [Numpy-discussion] [numpy-discussion] inconsistent behavior in binary_repr

2008-06-04 Thread Damian Eads
Whoops. In one xterm, I'm going off the Fedora package and in the other, 
the SVN source tree. SVN seems to work. Sorry for the unnecessary message.

On Wed, Jun 4, 2008 at 2:59 AM, Robert Kern wrote:
  In [27]: numpy.binary_repr(0, width=8)
  Out[27]: '0'
 
  Is this what the output is intended to be for the 0 case?

No. SVN trunk seems to work correctly. What version of numpy do you have?



[Numpy-discussion] Which Python to Use on OSX, Was: 1.1.0 OSX Installer Fails Under 10.5.3?

2008-06-04 Thread J. Stark
Robert,

I see your point, but why not just install a separate NumPy to run 
with the system Python? That is what I have always done in the past 
without problems.

I guess I always feel a sense of uncertainty with having two separate 
Python installations as to which actually gets used in any particular 
situation. I appreciate that for experts who use Python daily, this 
isn't an issue, but for someone like myself who may have gaps of 
several months between projects that use Python, this is a real issue 
as I forget those kinds of subtleties.

J.

On Wed, Jun 4, 2008 at 1:48 AM, J. Stark [EMAIL PROTECTED] wrote:
  On this topic, I would be interested to hear people's advice on using
  the system provided Python v an independent install. In 25 years of
  using Macs I have learned through several painful lessons that its
  wise to customize the system as little as possible: this minimizes
  both conflicts and reduces problems when doing system upgrades. I
  have therefore always used the default Python provided by OSX, so far
  with no obvious disadvantages for the types of scripts I use
  (primarily home written SciPy scientific code). However, I note that
  many people run either the pythomac.org distribution, or the
  ActiveState. What are the advantages to this?

By installing a separate Python, you are actually customizing the
system *less* than if you used the system Python and installed a bunch
of extra packages. Parts of Apple's software uses the system Python.
If you upgrade packages inside there (like numpy!) you might run into
problems.

--
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] Which Python to Use on OSX, Was: 1.1.0 OSX Installer Fails Under 10.5.3?

2008-06-04 Thread Robin
On Wed, Jun 4, 2008 at 9:25 AM, J. Stark [EMAIL PROTECTED] wrote:
 Robert,

 I see your point, but why not just install a separate NumPy to run
 with the system Python? That is what I have always done in the past
 without problems.

I think the problem is the system python already comes with a (much
older) cut-down version of numpy which you can find in:
/System/Library/Frameworks/Python.framework/Versions/2.5/Extras/lib/python/numpy
This causes all sorts of problems when installing a new version...
Obviously you can't have two different versions of the same package
with the same name in the same python installation (how do you choose
which one you mean with "import numpy"?).
I think there were problems with the path so when a new numpy is
installed in 2.5/Extras/lib/site-packages it is actually after the
existing one on the path and doesn't get picked up. Even if it does
work, the worry is that you're changing a supplied component and Apple
stuff might depend upon the version supplied (or other software people
distribute to use the 'system' python might expect it to be there).
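
(A generic sketch, not from the thread: one low-tech way to see which numpy a given interpreter actually picks up is to print its location and version. Run it with each python binary you have installed.)

```python
# Shows where the imported numpy lives and which version it is,
# which tells you whether the Extras copy or your own install won.
import numpy
print(numpy.__file__)
print(numpy.__version__)
```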

I think there's much less chance of problems using the system python
for system things and leaving it well alone - and installing the
python.org one for everyday use. The only problem with this is that the
system python works with dtrace while the normal one doesn't...

Cheers

Robin

 I guess I always feel a sense of uncertainty with having two separate
 Python installations as to which actually gets used in any particular
 situation. I appreciate that for experts who use Python daily, this
 isn't an issue, but for someone like myself who may have gaps of
 several months between projects that use Python, this is a real issue
 as I forget those kinds of subtleties.

 J.


Re: [Numpy-discussion] Which Python to Use on OSX, Was: 1.1.0 OSX Installer Fails Under 10.5.3?

2008-06-04 Thread Vincent Noel
Another way to do things, which might be useful if you're not afraid
to modify the system python install (more-or-less suggested at
http://wiki.python.org/moin/MacPython/Leopard), is to create a
symbolic link to make everything look as if you had installed
macpython, i.e.

sudo ln -s /System/Library/Frameworks/Python.framework/
/Library/Frameworks/Python.framework

Since, according to the MacPython page, the Leopard python is the same
as the MacPython (2.5.1),
all the packages you'll find on the web that assume you have
MacPython installed should be happy (easy_installing eggs works fine
as well). HOWEVER, you gotta add

export PATH=/Library/Frameworks/Python.framework/Versions/Current/bin:$PATH
export 
PYTHONPATH=/Library/Frameworks/Python.framework/Versions/Current/lib/python2.5/site-packages

in your ~/.bash_profile, otherwise the (older) system numpy will get
used. This is because the system python adds /System/.../2.5/Extras in
front of the /site-packages directory (weird, but hey).

Following this road, I was able to install NumPy 1.1, matplotlib 0.98
and ipython without any problem -- the best thing is that the system
wxPython is used, when it can be a PITA to setup correctly through
other ways. As was said by others, I guess there might be unforeseen
consequences, but everything seems to work fine for now.

Cheers
Vincent


On Wed, Jun 4, 2008 at 10:25 AM, J. Stark [EMAIL PROTECTED] wrote:
 Robert,

 I see your point, but why not just install a separate NumPy to run
 with the system Python? That is what I have always done in the past
 without problems.

 I guess I always feel a sense of uncertainty with having two separate
 Python installations as to which actually gets used in any particular
 situation. I appreciate that for experts who use Python daily, this
 isn't an issue, but for someone like myself who may have gaps of
 several months between projects that use Python, this is a real issue
 as I forget those kinds of subtleties.

 J.

On Wed, Jun 4, 2008 at 1:48 AM, J. Stark [EMAIL PROTECTED] wrote:
  On this topic, I would be interested to hear people's advice on using
  the system provided Python v an independent install. In 25 years of
  using Macs I have learned through several painful lessons that its
  wise to customize the system as little as possible: this minimizes
  both conflicts and reduces problems when doing system upgrades. I
  have therefore always used the default Python provided by OSX, so far
  with no obvious disadvantages for the types of scripts I use
  (primarily home written SciPy scientific code). However, I note that
  many people run either the pythomac.org distribution, or the
  ActiveState. What are the advantages to this?

By installing a separate Python, you are actually customizing the
system *less* than if you used the system Python and installed a bunch
of extra packages. Parts of Apple's software uses the system Python.
If you upgrade packages inside there (like numpy!) you might run into
problems.

--
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


[Numpy-discussion] bvp on 64 bits machine

2008-06-04 Thread lorenzo bolla
Hello all.

I'm not sure that this is the correct mailing list to post to: please excuse
me if it's not.

I've been using bvp (http://www.elisanet.fi/ptvirtan/software/bvp/index.html)
by Pauli Virtanen happily on 32 bits machines.
When I used it on 64 bits machines I found a bug that I think I've solved
with the following patch:

=

$ diff colnew.py.old colnew.py
347c347
<     ispace = _N.empty([nispace], _N.int_)
---
>     ispace = _N.empty([nispace], _N.int32)
402c402
<     ], _N.int_)
---
>     ], _N.int32)

=

The problem is caused by the fact that _N.int_ is different on 32- and
64-bit machines. Forcing it to be _N.int32 did the trick.
Pauli, would you like to commit it to your source distribution?
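
(Not part of the patch: a quick way to see the platform difference Lorenzo describes is to compare the dtype item sizes directly.)

```python
import numpy as np

# np.int_ follows the platform's default C integer: 8 bytes on most
# 64-bit Unix systems, 4 bytes on 32-bit ones. np.int32 is always
# 4 bytes, which is why code mixing the two breaks on 64-bit machines.
print(np.dtype(np.int_).itemsize)   # platform dependent
print(np.dtype(np.int32).itemsize)  # always 4
```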

Regards,
Lorenzo.

-- 
Whereof one cannot speak, thereof one must be silent. -- Ludwig
Wittgenstein


Re: [Numpy-discussion] Which Python to Use on OSX, Was: 1.1.0 OSX Installer Fails Under 10.5.3?

2008-06-04 Thread David Cournapeau
Robin wrote:
 I think theres much less chance of problems using the system python
 for system things and leaving it well alone - and installing the
 python.org for everyday use. The only problem with this is that the
 system python works with dtrace while the normal one doesn't...
   

Aren't the sources for the dtrace integration freely available?

David


Re: [Numpy-discussion] Which Python to Use on OSX, Was: 1.1.0 OSX Installer Fails Under 10.5.3?

2008-06-04 Thread Robin
On Wed, Jun 4, 2008 at 10:59 AM, David Cournapeau
[EMAIL PROTECTED] wrote:
 Robin wrote:
 I think theres much less chance of problems using the system python
 for system things and leaving it well alone - and installing the
 python.org for everyday use. The only problem with this is that the
 system python works with dtrace while the normal one doesn't...


 The source for dtrace integration are not freely available ?

Perhaps they are (I suppose they should be) but I can't find them - in
any case I don't think I would patch the python distribution and build
it from source just for this feature... I only mentioned it in passing.


Re: [Numpy-discussion] Which Python to Use on OSX, Was: 1.1.0 OSX Installer Fails Under 10.5.3?

2008-06-04 Thread Tommy Grav
You have to be very careful when you do this. For example
the system numpy is in ../python2.5/Extras/lib/ under the
framework, while I think the numpy binary installer installs
things in ../python2.5/lib/site-packages/. So if one is not
careful one ends up with two numpy packages with all
the problems that can cause.

I have installed Activepython on my machine (PPC w/ 10.5.3)
and it has worked more or less flawlessly. The system python
is still there and is untouched since I installed Leopard and
I do all my development against the activepython distribution.

Cheers
Tommy


On Jun 4, 2008, at 6:02 AM, Vincent Noel wrote:

 Another way to do things which might be useful, if you're not afraid
 to modify the system python install, (more-or-less suggested at
 http://wiki.python.org/moin/MacPython/Leopard), is to create a
 symbolic link to make everything look as if you had installed
 macpython, ie

 sudo ln -s /System/Library/Frameworks/Python.framework/
 /Library/Frameworks/Python.framework

 Since, according to the MacPython page, the Leopard python is the same
 as the MacPython (2.5.1),
 all the packages you'll find on the web that suppose you have
 MacPython installed should be happy (easy_installing eggs works fine
 as well). HOWEVER you gotta add

 export PATH=/Library/Frameworks/Python.framework/Versions/Current/bin:$PATH
 export PYTHONPATH=/Library/Frameworks/Python.framework/Versions/Current/lib/python2.5/site-packages

 in your ~/.bash_profile, otherwise the (older) system numpy will get
 used. This is because the system python adds /System/.../2.5/Extras in
 front of the /site-packages directory (weird, but hey).

 Following this road, I was able to install NumPy 1.1, matplotlib 0.98
 and ipython without any problem -- the best thing is that the system
 wxPython is used, when it can be a PITA to setup correctly through
 other ways. As was said by others, I guess there might be unforeseen
 consequences, but everything seems to work fine for now.

 Cheers
 Vincent


 On Wed, Jun 4, 2008 at 10:25 AM, J. Stark [EMAIL PROTECTED]  
 wrote:
 Robert,

 I see your point, but why not just install a separate NumPy to run
 with the system Python? That is what I have always done in the past
 without problems.

 I guess I always feel a sense of uncertainty with having two separate
 Python installations as to which actually gets used in any particular
 situation. I appreciate that for experts who use Python daily, this
 isn't an issue, but for someone like myself who may have gaps of
 several months between projects that use Python, this is a real issue
 as I forget those kinds of subtleties.

 J.

 On Wed, Jun 4, 2008 at 1:48 AM, J. Stark [EMAIL PROTECTED]  
 wrote:
 On this topic, I would be interested to hear people's advice on using
 the system provided Python v an independent install. In 25 years of
 using Macs I have learned through several painful lessons that its
 wise to customize the system as little as possible: this minimizes
 both conflicts and reduces problems when doing system upgrades. I
 have therefore always used the default Python provided by OSX, so far
 with no obvious disadvantages for the types of scripts I use
 (primarily home written SciPy scientific code). However, I note that
 many people run either the pythomac.org distribution, or the
 ActiveState. What are the advantages to this?

 By installing a separate Python, you are actually customizing the
 system *less* than if you used the system Python and installed a bunch
 of extra packages. Parts of Apple's software uses the system Python.
 If you upgrade packages inside there (like numpy!) you might run into
 problems.

 --
 Robert Kern

 I have come to believe that the whole world is an enigma, a harmless
 enigma that is made terrible by our own mad attempt to interpret it as
 though it had an underlying truth.
 -- Umberto Eco
 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion

 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion



Re: [Numpy-discussion] Which Python to Use on OSX, Was: 1.1.0 OSX Installer Fails Under 10.5.3?

2008-06-04 Thread Christopher Barker
Tommy Grav wrote:
 I have installed Activepython on my machine (PPC w/ 10.5.3)
 and it has worked more or less flawlessly.

And I've been using the python.org one for ages, also with NO issues. I 
tried to use Apple's Python for a while back with 10.2, but there were 
always problems, and Apple has never patched or upgraded anything 
python within an OS version. This really is an issue: I just ran into a 
python bug that was fixed between 2.5.1 and 2.5.2 -- do you really not 
want the option of fixing that? You've also got an old wxPython, 
an old numpy, an old who-knows-what?

Also, it seems some folks have been having breakage of already-compiled 
.pyc files when upgrading to the latest 10.5.

If you install your own Python, you have control over everything you 
need to, and you won't break anything Apple is using.

One other issue -- if you want to build re-distributable apps with 
py2app, or Universal binaries of packages, you'll need the python.org 
python (or some monkey-patching of the system python).

 export PYTHONPATH=/Library/Frameworks/Python.framework/Versions/ 
 Current/lib/python2.5/site-packages

This will probably not break anything Apple did, as their stuff won't be 
using your bash_profile, but it still scares me a bit.

  The best thing is that the system
 wxPython is used, when it can be a PITA to setup correctly through
 other ways.

huh? The installer provided at the wxPython pages has always worked 
flawlessly for me (for the python.org build) -- what could be easier?

By the way, this was discussed a lot on the pythonmac list -- though 
with no consensus reached :-( . For historical interest however, the 
Python supplied with 10.5 is the first one that core pythonmac folks 
have considered not too broken to use.

One more plug for my opinion -- the Python-on-OS-X world is far too 
fractured -- Apple's build, python.org's, Activestate's, fink, macports, 
build-from tarball... It's a mess, and a real pain for folks that want 
to just build/install a binary package. If the community could settle on 
one build to support, life would be so much easier. Python.org's 2.5 
build is really the only option for that (I think ActiveState's may be 
compatible), as it is the only one that supports:

10.3.9 and 10.4 and 10.5
PPC and Intel
py2app
building Universal binary packages.

That's why the binary distributed by numpy is for that build, and it's a 
good choice. (The wording should be changed, though; it's not the 
"system python".)

-Chris




-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR             (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

[EMAIL PROTECTED]


Re: [Numpy-discussion] Which Python to Use on OSX, Was: 1.1.0 OSX Installer Fails Under 10.5.3?

2008-06-04 Thread Vincent Noel
On Wed, Jun 4, 2008 at 6:36 PM, Christopher Barker
[EMAIL PROTECTED] wrote:
   The best thing is that the system
 wxPython is used, when it can be a PITA to setup correctly through
 other ways.

 huh? The installer provided at the wxPython pages has always worked
 flawlessly for me (for the python.org build) -- what could be easier?

Not installing anything is easier :-)
More seriously though, today I understand how things are supposed to work
and the distinction between the pythons in /System/Library/ and /Library,
but it took me a while to figure it out, and during the process I ended up
with lots of things installed in weird places (I still have some stuff in
/Library/Python that I'm not sure what is for) and not being recognized
(since I was not sure which Python to use). My remark came from these
bad memories (including a few times recompiling wxPython),
I apologize if things usually go smoothly.

I guess it's just another way of saying that it might be good, as you
suggest, to standardize on a single specific distribution. It would
limit the confusion.

Out of curiosity, what is the difference between python.org's python
and macpython's? I couldn't find a clear explanation. Is there any
reason to use macpython's package instead of python.org's?

Cheers
Vincent


[Numpy-discussion] numpy, py2exe, and SSE

2008-06-04 Thread Zachary Pincus
Hello all,

I've been toying around with bundling up a numpy-using python program 
for windows by using py2exe. All in all, it works great, except for 
one thing: the numpy superpack installer for windows has (correctly) 
selected SSE3 binary libraries to install on my machine. This causes 
(of course) crashes when the binary py2exe bundle, which includes the 
numpy libraries installed on my machine, is run on a non-SSE3 machine.

So, two questions:
  1) What's the best target for safe binaries on i386 machines? No 
SSE at all? SSE? SSE2?
  2) What's the best way to get the desired libraries installed for 
numpy? E.g. compile it myself with some --no-sse option? (And if so, 
any clue as to the required options?) Or is there some way to get the 
numpy windows installer to install a specific kind of binary?

Thanks,
Zach Pincus


Re: [Numpy-discussion] Which Python to Use on OSX, Was: 1.1.0 OSX Installer Fails Under 10.5.3?

2008-06-04 Thread Barry Wark
On Wed, Jun 4, 2008 at 2:40 AM, Robin [EMAIL PROTECTED] wrote:
 On Wed, Jun 4, 2008 at 9:25 AM, J. Stark [EMAIL PROTECTED] wrote:
 Robert,

 I see your point, but why not just install a separate NumPy to run
 with the system Python? That is what I have always done in the past
 without problems.

 I think the problem is the system python already comes with a (much
 older) cut down version of numpy which you can find in:
 /System/Library/Frameworks/Python.framework/Versions/2.5/Extras/lib/python/numpy
 This makes all sorts of problems when installing a new version...
 Obviously you can't have two different versions of the same package
 with the same name in the same python installation (how do you choose
 which one you mean with import numpy.)
 I think there were problems with the path so when a new numpy is
 installed in 2.5/Extras/lib/site-packages it is actually after the
 existing one on the path and doesn't get picked up. Even if it does
 work, the worry is that you're changing a supplied component and Apple
 stuff might depend upon the version supplied (or other software people
 distribute to use the 'system' python might expect it to be there).

This is not entirely accurate on OS X 10.5. If you install anything
using the system python, it is put in
/Library/Python/2.5/site-packages. Apple's system-supplied packages
are kept in /System/Library/Frameworks/Python.framework/... and, as
you note, the site-packages directory in
/System/Library/Frameworks/Python.framework does come first on the
sys.path. In this way, it is very difficult to override the
Apple-provided packages in a way that breaks system tools that depend
on them. HOWEVER, if you install your packages using setuptools
(including an updated numpy), setuptools will place them first on the
sys.path at import (Apple's tools intentionally restrict their
sys.path to the System/Library/Frameworks/Python.framework
site-packages for this reason). So, if you use setuptools to install
numpy, matplotlib, even Twisted (as of version 8, it is
easy_install-able), etc., you can continue to use the system python
without fear of messing up apple's system tools and without having to
install a separate python instance in /Library/Frameworks.

Apple did a pretty good job on this one for Leopard. As others have
noted, sticking with this system python gains you dtrace support, and
continued improvements from Apple (many apple tools now use python
and/or PyObjC).


 I think theres much less chance of problems using the system python
 for system things and leaving it well alone - and installing the
 python.org for everyday use. The only problem with this is that the
 system python works with dtrace while the normal one doesn't...

 Cheers

 Robin

 I guess I always feel a sense of uncertainty with having two separate
 Python installations as to which actually gets used in any particular
 situation. I appreciate that for experts who use Python daily, this
 isn't an issue, but for someone like myself who may have gaps of
 several months between projects that use Python, this is a real issue
 as I forget those kinds of subtleties.

 J.


[Numpy-discussion] Renaming record array fields (bug)

2008-06-04 Thread Sameer DCosta
Hi,

There is a bug renaming record array fields if some field names are
the same. I reopened this ticket
http://scipy.org/scipy/numpy/ticket/674 and attached a tiny patch.
Maybe I should have opened a new ticket. Anyway, here is an example
that causes a segfault on the latest svn version.

import numpy as np
dt = np.dtype([('foo', float), ('bar', float)])
a = np.zeros(10, dt)
b = list(a.dtype.names)
b[0] = "notfoo"
#b[1] = "notbar"  # uncomment this line for no segfault
a.dtype.names = b
print a, a.dtype  # this will crash r5253
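
(Not from the patch: as a sketch of a workaround that avoids assigning to a.dtype.names in place, one can build a fresh dtype with the renamed fields and re-view the array. The names below are taken from the example above.)

```python
import numpy as np

a = np.zeros(10, np.dtype([('foo', float), ('bar', float)]))

# Build a new dtype with the renamed fields, keeping the field formats.
new_dt = np.dtype({'names': ['notfoo', 'bar'],
                   'formats': [a.dtype[n] for n in a.dtype.names]})

# Same memory, new field names -- the original array is untouched.
b = a.view(new_dt)
```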


Sameer


[Numpy-discussion] PyArray_Resize with scipy.weave

2008-06-04 Thread Orest Kozyar
The following code fails:

from scipy import weave
from numpy import zeros

arr = zeros((10,2))
code = """
PyArray_Dims dims;
dims.len = 2;
dims.ptr = Narr;
dims.ptr[0] += 10;
PyArray_Resize(arr_array, &dims, 1);
"""
weave.inline(code, ['arr'], verbose=1)

The error message is:
In function 'PyObject* compiled_func(PyObject*, PyObject*)':
filename:678: error: too few arguments to function

678 is the line number for PyArray_Resize.  According to the NumPy
Handbook, PyArray_Resize requires three arguments, which I am
providing. Am I missing something obvious here?  There are times when
I need to be able to resize the array in C++ because I cannot predict
exactly how big the array needs to be before I pass it to weave.  Any
advice or pointers greatly appreciated!

Thanks,
Orest
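If the C-level resize keeps fighting back, one alternative is to grow the array from Python before (or instead of) the weave call; `ndarray.resize` zero-fills the new rows. This is a sketch that sidesteps, rather than answers, the PyArray_Resize signature question:

```python
import numpy as np

arr = np.zeros((10, 2))
# Grow the first dimension by 10 rows in place; refcheck=False skips the
# check that no other Python references to the buffer exist.
arr.resize((20, 2), refcheck=False)
print(arr.shape)  # (20, 2)
```

The trade-off is that the final size must be decided on the Python side rather than inside the C++ snippet.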
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Is there a function to calculate ecnomic beta coefficient in numpy given two time series data.

2008-06-04 Thread Vineet Jain (gmail)
Timeseries1 = daily or weekly close of stock a 

Timeseries2 = daily or weekly close of market index (spx, , etc)

 

Beta of stock a is what I would like to compute as explained in this article
on Wikipedia:

 

http://en.wikipedia.org/wiki/Beta_coefficient

 

I'm trying to compute the beta of the entire stock market (about 15,000
instruments) one stock at a time and would like to use the spiders and 
to represent the overall market.

 

Thanks,

 

Vineet

 

 

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] A memory problem: why does mmap come up in numpy.inner?

2008-06-04 Thread Dan Yamins
I'm using python 2.5.2 on OS X, with 8 GB of RAM and a 64-bit processor.
In this setting, I'm working with large arrays of binary data.  E.g., I want
to make calls like:
   Z = numpy.inner(a,b)
where a and b are fairly large -- e.g. 2 rows by 100 columns.

However, when such a call is made, I get a memory error that I don't
understand.
Specifically:

 s = numpy.random.binomial(1,.5,(2,100))   #creates 2x100 bin.
array
 r = numpy.inner(s,s)
Python(1714) malloc: *** mmap(size=16) failed (error code=12)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug

Naively, the numpy.inner call should be fine on my system, since my computer
has
enough memory. (And, when it's run, I have checked to see that at least 5 GB
of
RAM is free.)  The error message thus suggests there's some problem to do
with
memory mapping going on here: that somehow, numpy.inner is calling the mmap
module, and that the address space is being exceeded.  And that all my extra
RAM
isn't being used here at all.

So, I have three questions about this:
1) Why is mmap being called in the first place?  I've written to Travis
Oliphant, and he's explained that numpy.inner does NOT directly do any
memory
mapping and shouldn't call mmap.  Instead, it should just operate with
things in
memory -- in which case my 8 GB should allow the computation to go through
just
fine.  What's going on?

2) How can I stop this from happening?  I want to be able to leverage
large
amounts of ram on my machine to scale up my computations and not be
dependent on
the limitations of the address space size.  If the mmap is somehow being
called
by the OS, is there some option I can set that will make it do things in
regular
memory instead?  (Sorry if this is a stupid question.)

3) Even if I had to use memory mapping, why is the 1.6 GB requirement
failing?  I'm using a recent enough version of python, and I have a 64-bit
processor with sufficient amount of memory.  I should be able to allocate at
least 4 GB of address space, right?  But the system seems to be balking at
the
1.6 GB request. (Again, sorry if this is stupid.)

Any help would be greatly appreciated!  Thanks,
Dan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Is there a function to calculate ecnomic beta coefficient in numpy given two time series data.

2008-06-04 Thread Keith Goodman
On Wed, Jun 4, 2008 at 5:39 PM, Vineet Jain (gmail) [EMAIL PROTECTED] wrote:
 Timeseries1 = daily or weekly close of stock a

 Timeseries2 = daily or weekly close of market index (spx, , etc)



 Beta of stock a is what I would like to compute as explained in this article
 on Wikipedia:



 http://en.wikipedia.org/wiki/Beta_coefficient



 I'm trying to compute the beta of entire stock market (about 15,000
 instruments) one stock at a time and would like to use the spiders and 
 to represent the overall market.

Unless you run out of memory (or if you want to handle missing returns
which may occur on different dates in each series) there is no need to
do it one stock at a time:

 import numpy.matlib as mp
 mrkt = mp.randn(250,1)  # - 250 days of returns
 stocks = mp.randn(250, 4)  #  4 stocks
 beta, resids, rank, s = mp.linalg.lstsq(mrkt, stocks)
 beta
   matrix([[-0.01701467,  0.11242168,  0.00207398,  0.03920687]])

And you can use mp.log to convert the price ratios to log returns.

It might also be useful to shuffle (mp.random.shuffle) the market
returns and repeat the beta calculation many times to estimate the
noise level of your beta estimates.
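The least-squares fit above (with no intercept) is closely related to the textbook definition beta = Cov(stock, market) / Var(market); a minimal sketch with plain numpy and synthetic returns (the factor of 2.0 is arbitrary, chosen so the recovered beta is easy to eyeball):

```python
import numpy as np

rng = np.random.RandomState(0)
market = rng.randn(250)                      # 250 periods of market returns
stock = 2.0 * market + 0.1 * rng.randn(250)  # stock constructed with beta ~= 2

# beta = Cov(stock, market) / Var(market); ddof=1 matches np.cov's default
beta = np.cov(stock, market)[0, 1] / np.var(market, ddof=1)
print(beta)  # close to 2.0
```

With real price series you would first convert to returns (e.g. log price ratios, as suggested above) before applying this.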
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] A memory problem: why does mmap come up in numpy.inner?

2008-06-04 Thread Charles R Harris
On Wed, Jun 4, 2008 at 6:42 PM, Dan Yamins [EMAIL PROTECTED] wrote:

 I'm using python 2.5.2 on OS X, with 8 GB of ram, and a 64-bit processor.
 In
 this, setting, I'm working with large arrays of binary data.  E.g, I want
 to
 make calls like:
Z = numpy.inner(a,b)
 where and b are fairly large  -- e.g. 2 rows by 100 columns.

 However, when such a call is made, I get a memory error that I don't
 understand.
 Specifically:

  s = numpy.random.binomial(1,.5,(2,100))   #creates 2x100 bin.
 array
  r = numpy.inner(s,s)
 Python(1714) malloc: *** mmap(size=16) failed (error code=12)
 *** error: can't allocate region
 *** set a breakpoint in malloc_error_break to debug

 Naively, the numpy.inner call should be fine on my system, since my
 computer has
 enough memory. (And, when it's run, I have checked to see that at least 5
 GB of
 RAM is free.)  The error message thus suggests there's some problem to do
 with
 memory mapping going on here: that somehow, numpy.inner is calling the mmap
 modeul, and that the address space is being exceeded.  And that all my
 extra RAM
 isn't being used here at all.

 So, I have three questions about this:
 1) Why is mmap being called in the first place?  I've written to Travis
 Oliphant, and he's explained that numpy.inner does NOT directly do any
 memory
 mapping and shouldn't call mmap.  Instead, it should just operate with
 things in
 memory -- in which case my 8 GB should allow the computation to go through
 just
 fine.  What's going on?

 2) How can I stop this from happening?  I want to be able to leverage
 large
 amounts of ram on my machine to scale up my computations and not be
 dependent on
 the limitations of the address space size.  If the mmap is somehow being
 called
 by the OS, is there some option I can set that will make it do things in
 regular
 memory instead?  (Sorry if this is a stupid question.)

 3) Even if I had to use memory mapping, why is the 1.6 GB requirement
 failing?  I'm using a recent enough version of python, and I have a 64-bit
 processor with sufficient amount of memory.  I should be able to allocate
 at
 least 4 GB of address space, right?  But the system seems to be balking at
 the
 1.6 GB request. (Again, sorry if this is stupid.)


Are both python and your version of OS X fully 64 bits?

Chuck
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Is there a function to calculate ecnomic beta coefficient in numpy given two time series data.

2008-06-04 Thread Keith Goodman
On Wed, Jun 4, 2008 at 6:04 PM, Keith Goodman [EMAIL PROTECTED] wrote:
 It might also be useful to shuffle (mp.random.shuffle) the market
 returns and repeat the beta calculation many times to estimate the
 noise level of your beta estimates.

I guess that is more of a measure of how different from zero your
betas are. Maybe use a bootstrap to measure the noise in beta.
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] A memory problem: why does mmap come up in numpy.inner?

2008-06-04 Thread Anne Archibald
2008/6/4 Dan Yamins [EMAIL PROTECTED]:

 So, I have three questions about this:
 1) Why is mmap being called in the first place?  I've written to Travis
 Oliphant, and he's explained that numpy.inner does NOT directly do any
 memory
 mapping and shouldn't call mmap.  Instead, it should just operate with
 things in
 memory -- in which case my 8 GB should allow the computation to go through
 just
 fine.  What's going on?

 2) How can I stop this from happening?  I want to be able to leverage
 large
 amounts of ram on my machine to scale up my computations and not be
 dependent on
 the limitations of the address space size.  If the mmap is somehow being
 called
 by the OS, is there some option I can set that will make it do things in
 regular
 memory instead?  (Sorry if this is a stupid question.)

I don't know much about OSX, but I do know that many malloc()
implementations take advantage of a modern operating system's virtual
memory when allocating large blocks of memory. For small blocks,
malloc uses memory arenas, but if you ask for a large block malloc()
will request a whole bunch of pages from the operating system. This
way when the memory is freed, free() can easily return the chunk of
memory to the OS. On some systems, one way to get such a big hunk of
memory from the system is with an anonymous mmap(). I think that's
what's going on here. So I don't think you want to stop malloc() from
using mmap().

You do, of course, want the memory allocation to succeed, and I'm
afraid I don't have any idea why it can't. Under Linux, I know that
you can run a 64-bit processor in 32-bit mode, which gives you the
usual 4 GB address space limit. I've no idea what OSX does.

Good luck,
Anne
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Is there a function to calculate ecnomic beta coefficient in numpy given two time series data.

2008-06-04 Thread Vineet Jain (gmail)
Thanks Keith!

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Keith Goodman
Sent: Wednesday, June 04, 2008 9:04 PM
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] Is there a function to calculate ecnomic
beta coefficient in numpy given two time series data.

On Wed, Jun 4, 2008 at 5:39 PM, Vineet Jain (gmail) [EMAIL PROTECTED]
wrote:
 Timeseries1 = daily or weekly close of stock a

 Timeseries2 = daily or weekly close of market index (spx, , etc)



 Beta of stock a is what I would like to compute as explained in this
article
 on Wikipedia:



 http://en.wikipedia.org/wiki/Beta_coefficient



 I'm trying to compute the beta of entire stock market (about 15,000
 instruments) one stock at a time and would like to use the spiders and

 to represent the overall market.

Unless you run out of memory (or if you want to handle missing returns
which may occur on different dates in each series) there is no need to
do it one stock at a time:

 import numpy.matlib as mp
 mrkt = mp.randn(250,1)  # - 250 days of returns
 stocks = mp.randn(250, 4)  #  4 stocks
 beta, resids, rank, s = mp.linalg.lstsq(mrkt, stocks)
 beta
   matrix([[-0.01701467,  0.11242168,  0.00207398,  0.03920687]])

And you can use mp.log to convert the price ratios to log returns.

It might also be useful to shuffle (mp.random.shuffle) the market
returns and repeat the beta calculation many times to estimate the
noise level of your beta estimates.
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] A memory problem: why does mmap come up in numpy.inner?

2008-06-04 Thread Dan Yamins
I don't know much about OSX, but I do know that many malloc()
 implementations take advantage of a modern operating system's virtual
 memory when allocating large blocks of memory. For small blocks,
 malloc uses memory arenas, but if you ask for a large block malloc()
 will request a whole bunch of pages from the operating system. This
 way when the memory is freed, free() can easily return the chunk of
 memory to the OS. On some systems, one way to get such a big hunk of
 memory from the system is with an anonymous mmap(). I think that's
 what's going on here. So I don't think you want to stop malloc() from
 using mmap().

 You do, of course, want the memory allocation to succeed, and I'm
 afraid I don't have any idea why it can't. Under Linux, I know that
 you can run a 64-bit processor in 32-bit mode, which gives you the
 usual 4 GB address space limit. I've no idea what OSX does.

 Good luck,
 Anne


Anne, thanks so much for your help.  I'm still a little confused.   If your
scenario about how the memory allocation works is right, does that mean
that even if I put a lot of RAM on the machine, e.g. 16GB, I still can't
request it in blocks larger than the limit imposed by the processor
architecture (e.g. 4 GB for 32-bit, 8 GB for 64-bit)?   What I really want is
to be able to have my ability to request memory limited just by the amount
of memory on the machine, and not have it depend on something about
paging/memory mapping limits.  Is this a stupid/naive thing?

(Sorry for my ignorance, and thanks again for the help!)

best,
Dan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] A memory problem: why does mmap come up in numpy.inner?

2008-06-04 Thread Dan Yamins
On Wed, Jun 4, 2008 at 9:06 PM, Charles R Harris [EMAIL PROTECTED]
wrote:



 On Wed, Jun 4, 2008 at 6:42 PM, Dan Yamins [EMAIL PROTECTED] wrote:

 I'm using python 2.5.2 on OS X, with 8 GB of ram, and a 64-bit processor.
 In
 this, setting, I'm working with large arrays of binary data.  E.g, I want
 to
 make calls like:
Z = numpy.inner(a,b)
 where and b are fairly large  -- e.g. 2 rows by 100 columns.

 However, when such a call is made, I get a memory error that I don't
 understand.
 Specifically:

  s = numpy.random.binomial(1,.5,(2,100))   #creates 2x100 bin.
 array
  r = numpy.inner(s,s)
 Python(1714) malloc: *** mmap(size=16) failed (error code=12)
 *** error: can't allocate region
 *** set a breakpoint in malloc_error_break to debug



 Are both python and your version of OS X fully 64 bits?



I'm not sure.   My version of OS X is the most recent version, the one that
ships with a new MacPro Dual Quad-core Xeon 3.2GHz chipset.  The processor
is definitely 64-bit, so I think the operating system is probably enabled for
that, but am not sure. (How would I find out?)  As for the python version, I
thought that 2.5 and above were 64-enabled, but I'm not sure how I'd check
it.

Thanks!
Dan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] A memory problem: why does mmap come up in numpy.inner?

2008-06-04 Thread Anne Archibald
2008/6/4 Dan Yamins [EMAIL PROTECTED]:

 Anne, thanks so much for your help.  I still a little confused.   If your
 scenario about the the memory allocation is working is right, does that mean
 that even if I put a lot of ram on the machine, e.g.  16GB, I still can't
 request it in blocks larger than the limit imposed by the processor
 architecture (e.g. 4 GB for 32, 8 GB for 64-bit)?What I really want is
 to be able to have my ability to request memory limited just by the amount
 of memory on the machine, and not have it depend on something about
 paging/memory mapping limits.  Is this a stupid/naive thing?

No, that's a perfectly reasonable thing to want, and it *should* be
possible with your hardware. Maybe I'm barking up the wrong tree and
the problem is something else. But if you'll bear with me:

For the last ten years or so, normal computers have used 32 bits to
address memory. This means  that any given process can only get at
2**32 different bytes of memory, which is 4 GB. The magic of virtual
memory means you can possibly have a number of different processes
addressing different 4 GB chunks, some of which may temporarily be
stored on disk at any given time. If you want to get at a chunk of
memory bigger than 4 GB, you need all of these things:

* Your CPU must be 64-bit, or else it has no hope of being able to
access more than 4 GB (in fact more than 3 GB for complicated reasons)
of physical memory.
* Your operating system must be 64-bit (64-bit PC CPUs have a 32-bit
compatibility mode they often find themselves running in) so that it
can know about all the real RAM you have installed and so that it can
support 64-bit programs.
* Your program must be 64-bit, so that it can access huge chunks of RAM.

For compatibility reasons, many 64-bit machines have most of their
software compiled in 32-bit mode. This isn't necessarily a problem -
the programs probably even run faster - but they can't access more
than 4 GB each. So it's worth finding out whether your python
interpreter is compiled as a 64-bit program (and whether that's even
possible under your version of OSX). Unfortunately I  don't know how
to find that out in your case.
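One way to find out from inside Python itself (a sketch; `struct.calcsize('P')` is the size in bytes of a C pointer for the running interpreter, so it distinguishes a 32-bit build from a 64-bit one regardless of the hardware):

```python
import platform
import struct

bits = struct.calcsize('P') * 8    # pointer size in bits: 32 or 64
print(bits)
print(platform.architecture()[0])  # e.g. '32bit' or '64bit'
```

A 64-bit CPU running a 32-bit interpreter will report 32 here, which is exactly the situation suspected in this thread.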

Anne
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] A memory problem: why does mmap come up in numpy.inner?

2008-06-04 Thread Charles R Harris
On Wed, Jun 4, 2008 at 7:41 PM, Dan Yamins [EMAIL PROTECTED] wrote:



 On Wed, Jun 4, 2008 at 9:06 PM, Charles R Harris 
 [EMAIL PROTECTED] wrote:



 On Wed, Jun 4, 2008 at 6:42 PM, Dan Yamins [EMAIL PROTECTED] wrote:

 I'm using python 2.5.2 on OS X, with 8 GB of ram, and a 64-bit
 processor.  In
 this, setting, I'm working with large arrays of binary data.  E.g, I want
 to
 make calls like:
Z = numpy.inner(a,b)
 where and b are fairly large  -- e.g. 2 rows by 100 columns.

 However, when such a call is made, I get a memory error that I don't
 understand.
 Specifically:

  s = numpy.random.binomial(1,.5,(2,100))   #creates 2x100 bin.
 array
  r = numpy.inner(s,s)
 Python(1714) malloc: *** mmap(size=16) failed (error code=12)
 *** error: can't allocate region
 *** set a breakpoint in malloc_error_break to debug



 Are both python and your version of OS X fully 64 bits?



 I'm not sure.   My version of OS X is the most recent version, the one that
 ships with a new MacPro Dual Quad-core Xeon 3.2MHz chipset.  The processor
 is definitely 64-bit, so I think the operating system probably is enable for
 that, but am not sure. (How would I find out?)  As for the python version, I
 thought that 2.5 and above were 64-enabled, but I'm not sure how I'd check
 it.


Hmm,

In [1]: s = numpy.random.binomial(1,.5,(2,100))

In [2]: inner(s,s)
Out[2]:
array([[45, 22, 17, ..., 20, 26, 23],
   [22, 52, 26, ..., 23, 33, 24],
   [17, 26, 52, ..., 27, 27, 19],
   ...,
   [20, 23, 27, ..., 46, 26, 22],
   [26, 33, 27, ..., 26, 54, 25],
   [23, 24, 19, ..., 22, 25, 44]])

This on 32 bit fedora 8 with 2GiB of actual memory. It was slow and a couple
of hundred megs of something went into swap, but it did complete. So this
looks to me like an OS X problem. Are there any limitations on the user
memory sizes? There might be some system setting accounting for this.

Chuck
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] A memory problem: why does mmap come up in numpy.inner?

2008-06-04 Thread David Cournapeau
On Wed, 2008-06-04 at 21:38 -0400, Dan Yamins wrote:

 
 Anne, thanks so much for your help.  I still a little confused.   If
 your scenario about the the memory allocation is working is right,
 does that mean that even if I put a lot of ram on the machine, e.g. 
 16GB, I still can't request it in blocks larger than the limit imposed
 by the processor architecture (e.g. 4 GB for 32, 8 GB for 64-bit)?

Definitely. For 32 bits, it is actually smaller than 4 GB, because the
whole address range is shared by the process and the OS, and the OS
generally needs at least 1 GB (I don't know the default for Mac OS X,
but both Linux and Windows default to 2 GB: so a process cannot use
more than 2 GB of memory).

The whole point of having a 64 bits CPU in 64 bits mode is to have more
than 32 bits for the (virtual) address space. If it is not in 64 bits
mode, the pointer is 32 bits (void* is 4 bytes, as returned by malloc),
and you would have a hard time accessing anything above 32 bits. It is
possible to address more than 4 GB on a 32 bits OS using some weird
extensions, with a semi-segmented address (you address the memory with
an offset + an address, like in the old times), but I have never used
it.

IOW: with more than 4 Gb of real memory, the hardware can address it,
in 32 bits or 64 bits, but *you* cannot address  it in 32 bits mode
because your pointers are still 4 bytes. To address more than 4 Gb, you
need 8 bytes pointers. 

So the real question is how to do it on Mac OS X, and unfortunately, I
cannot help you. I know that Mac OS X supports executing both 32 bits
and 64 bits binaries (super fat binaries with the ability to hold up to 4
different kinds of binaries: ppc and x86, in 32 bits and 64 bits in both
cases), but I don't know which subsystem you can use, and how to compile
it (you will certainly have to compile your application differently). I
remember having seen it mentioned several times that for Leopard, you
cannot build a 64 bits binary that uses the GUI, because no GUI subsystem
is 64 bits compatible yet.

 What I really want is to be able to have my ability to request
 memory limited just by the amount of memory on the machine, and not
 have it depend on something about paging/memory mapping limits.  Is
 this a stupid/naive thing? 

It is not stupid, I think everyone wishes it were possible.
Unfortunately, in Mac OS X's case at least, you cannot do it so simply
(yet).

David


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] A memory problem: why does mmap come up in numpy.inner?

2008-06-04 Thread Travis E. Oliphant
Dan Yamins wrote:
 I'm using python 2.5.2 on OS X, with 8 GB of ram, and a 64-bit 
 processor.  In
 this, setting, I'm working with large arrays of binary data.  E.g, I 
 want to
 make calls like:
Z = numpy.inner(a,b)
 where and b are fairly large  -- e.g. 2 rows by 100 columns.

Hey Dan.   Now, that you mention you are using OS X, I'm fairly 
confident that the problem is that you are using a 32-bit version of 
Python (i.e. you are not running in full 64-bit mode and so the 4GB 
limit applies).

The most common Python on OS X is 32-bit python.  I think a few people 
in the SAGE project have successfully built Python in 64-bit mode on OSX 
(but I don't think they have released anything yet).  You would have to 
use a 64-bit version of Python to compile NumPy if you want to access 
large memory.

-Travis

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] A memory problem: why does mmap come up in numpy.inner?

2008-06-04 Thread Dan Yamins
On Wed, Jun 4, 2008 at 10:07 PM, David Cournapeau 
[EMAIL PROTECTED] wrote:

 On Wed, 2008-06-04 at 21:38 -0400, Dan Yamins wrote:

 
  Anne, thanks so much for your help.  I still a little confused.   If
  your scenario about the the memory allocation is working is right,
  does that mean that even if I put a lot of ram on the machine, e.g. 
  16GB, I still can't request it in blocks larger than the limit imposed
  by the processor architecture (e.g. 4 GB for 32, 8 GB for 64-bit)?

 Definitely. For 32 bits, it is actually smaller than 4 Gb, because the
 whole address range is used by the processus and the OS, and the OS
 generally needs at least 1 Gb (I don't know the default for mac os X,
 but both linux and windows default to 2 Gb: so a processus cannot use
 more than 2 Gb of memory).

 It is not stupid, I think everyone whished it was possible.
 Unfortunately, in mac os X case at least, you cannot do it so simply
 (yet).

 David



Anne and David, thank you both so much for your very lucid explanations.  (I
apologize for my ignorance -- with your help and some additional reading, I
think I understand how memory allocation works a bit better now.)

Best,
Dan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] A memory problem: why does mmap come up in numpy.inner?

2008-06-04 Thread Charles R Harris
On Wed, Jun 4, 2008 at 7:41 PM, Dan Yamins [EMAIL PROTECTED] wrote:



 On Wed, Jun 4, 2008 at 9:06 PM, Charles R Harris 
 [EMAIL PROTECTED] wrote:



 On Wed, Jun 4, 2008 at 6:42 PM, Dan Yamins [EMAIL PROTECTED] wrote:

 I'm using python 2.5.2 on OS X, with 8 GB of ram, and a 64-bit
 processor.  In
 this, setting, I'm working with large arrays of binary data.  E.g, I want
 to
 make calls like:
Z = numpy.inner(a,b)
 where and b are fairly large  -- e.g. 2 rows by 100 columns.

 However, when such a call is made, I get a memory error that I don't
 understand.
 Specifically:

  s = numpy.random.binomial(1,.5,(2,100))   #creates 2x100 bin.
 array
  r = numpy.inner(s,s)
 Python(1714) malloc: *** mmap(size=16) failed (error code=12)
 *** error: can't allocate region
 *** set a breakpoint in malloc_error_break to debug



 Are both python and your version of OS X fully 64 bits?



 I'm not sure.   My version of OS X is the most recent version, the one that
 ships with a new MacPro Dual Quad-core Xeon 3.2MHz chipset.  The processor
 is definitely 64-bit, so I think the operating system probably is enable for
 that, but am not sure. (How would I find out?)  As for the python version, I
 thought that 2.5 and above were 64-enabled, but I'm not sure how I'd check
 it.


Try

In [3]: numpy.dtype(numpy.uintp).itemsize
Out[3]: 4

which is the size in bytes of the integer needed to hold a pointer. The
output above is for 32 bit python/numpy.

Chuck
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] A memory problem: why does mmap come up in numpy.inner?

2008-06-04 Thread Dan Yamins

 Hey Dan.   Now, that you mention you are using OS X, I'm fairly
 confident that the problem is that you are using a 32-bit version of
 Python (i.e. you are not running in full 64-bit mode and so the 4GB
 limit applies).

 The most common Python on OS X is 32-bit python.  I think a few people
 in the SAGE project have successfully built Python in 64-bit mode on OSX
 (but I don't think they have released anything yet).  You would have to
 use a 64-bit version of Python to compile NumPy if you want to access
 large memory.

 -Travis


Travis, thanks for the message.   I think you're probably right -- I didn't
build python myself but instead downloaded the universal OSX binary from the
python download page -- and that surely wasn't built for a 64-bit system.   So
I guess I'll have to figure out how to do the 64-bit build.

Which leaves a question that I think Chuck brought up in a way:

 In [1]: s = numpy.random.binomial(1,.5,(2,100))
 In [2]: inner(s,s)
 Out[2]:
 array([[45, 22, 17, ..., 20, 26, 23],
  [22, 52, 26, ..., 23, 33, 24],
   [17, 26, 52, ..., 27, 27, 19],
   ...,
   [20, 23, 27, ..., 46, 26, 22],
   [26, 33, 27, ..., 26, 54, 25],
   [23, 24, 19, ..., 22, 25, 44]])

This on 32 bit fedora 8 with 2GiB of actual memory. It was slow and a
couple of hundred megs of something went into swap, but it did complete. So
this looks to me like an OS X problem. Are there any limitations on the user
memory sizes? There might be some system setting accounting for this.

Chuck, is this another way of asking:  why is my OS X system not paging
memory the way you'd expect a system to respond to the malloc command?  Has
python somehow overloaded the malloc command so that when the OS says a swap
would have to occur, instead of just the swap, an error message
involving mmap is triggered?   (Sorry if this makes no sense.)   I
should add that I tried the same code on a 32-bit Windows machine and got
the same error as on OS X.   Maybe the Linux python builds manage this
stuff better.
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] A memory problem: why does mmap come up in numpy.inner?

2008-06-04 Thread Anne Archibald
2008/6/4 Dan Yamins [EMAIL PROTECTED]:




 Try

 In [3]: numpy.dtype(numpy.uintp).itemsize
 Out[3]: 4

 which is the size in bytes of the integer needed to hold a pointer. The
 output above is for 32 bit python/numpy.

 Chuck

 Check, the answer is 4, as you got for the 32-bit.   What would the answer
 be on a 64-bit architecture?  Why is this diagnostic?

In a 64-bit setting, a pointer needs to be 64 bits long, that is,
eight bytes, not four.

What Charles pointed out was that while the inner product is very big,
it seems to fit into memory on his 32-bit Linux machine; is it
possible that OSX is preventing your python process from using even
the meager 2-3 GB that a 32-bit process ought to get? In particular,
try running Charles' script in a fresh python interpreter and see if
it works; it may be that other arrays you had allocated are taking up
some of the space that this one could.

You will probably still want a 64-bit python, though, in order to have
a little elbow room.

Anne
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] A memory problem: why does mmap come up in numpy.inner?

2008-06-04 Thread Nathan Bell
On Wed, Jun 4, 2008 at 9:50 PM, Dan Yamins [EMAIL PROTECTED] wrote:

 In [3]: numpy.dtype(numpy.uintp).itemsize
 Out[3]: 4

 which is the size in bytes of the integer needed to hold a pointer. The
 output above is for 32 bit python/numpy.


 Check, the answer is 4, as you got for the 32-bit.   What would the answer
 be on a 64-bit architecture?  Why is this diagnostic?

It would be 8 on a 64-bit architecture (with a 64-bit binary):  8
bytes = 64 bits, 4 bytes = 32 bits.

-- 
Nathan Bell [EMAIL PROTECTED]
http://graphics.cs.uiuc.edu/~wnbell/
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] A memory problem: why does mmap come up in numpy.inner?

2008-06-04 Thread Charles R Harris
Hi Dan,

On Wed, Jun 4, 2008 at 8:50 PM, Dan Yamins [EMAIL PROTECTED] wrote:





 Try

 In [3]: numpy.dtype(numpy.uintp).itemsize
 Out[3]: 4

 which is the size in bytes of the integer needed to hold a pointer. The
 output above is for 32 bit python/numpy.

 Chuck


 Check, the answer is 4, as you got for the 32-bit.   What would the answer
 be on a 64-bit architecture?  Why is this diagnostic?


On 64 bit ubuntu

In [1]: import numpy

In [2]: numpy.dtype(numpy.uintp).itemsize
Out[2]: 8


This is diagnostic because a pointer of 4 bytes, 32 bits, can only address 4
GiB, whereas an 8 byte pointer has 64 bits to address memory. So it looks
like your numpy was compiled by a 32 bit python executable.

I've been googling this a bit and it looks like the standard python
executable on the Mac is 32 bits, even though 64-bit libraries are available
if you can get a 64-bit version up and running. A lot of libraries are
available in both 32- and 64-bit versions.
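One practical consequence: on a 32-bit interpreter it can pay to estimate an allocation before attempting it. The helper below is hypothetical (not from this thread), just a sketch of that idea with a rough, platform-dependent 2 GiB guess:

```python
import numpy as np

def estimated_bytes(shape, dtype=np.float64):
    """Bytes a dense array of the given shape and dtype would need."""
    return int(np.prod(shape, dtype=np.int64)) * np.dtype(dtype).itemsize

def warn_if_too_big(shape, dtype=np.float64, limit_gib=2.0):
    """Raise before attempting an allocation unlikely to fit in a 32-bit
    process.  limit_gib is a rough guess, not an exact ceiling."""
    needed = estimated_bytes(shape, dtype)
    if np.dtype(np.uintp).itemsize == 4 and needed > limit_gib * 2**30:
        raise MemoryError("%.2f GiB requested on a 32-bit build"
                          % (needed / 2.0**30))
    return needed

print(warn_if_too_big((1000, 1000)))  # 8000000 bytes, safe anywhere
```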

Chuck


 Thanks!
 Dan


 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion




Re: [Numpy-discussion] A memory problem: why does mmap come up in numpy.inner?

2008-06-04 Thread Charles R Harris
On Wed, Jun 4, 2008 at 9:07 PM, Anne Archibald [EMAIL PROTECTED]
wrote:

 2008/6/4 Dan Yamins [EMAIL PROTECTED]:
 
 
 
 
  Try
 
  In [3]: numpy.dtype(numpy.uintp).itemsize
  Out[3]: 4
 
  which is the size in bytes of the integer needed to hold a pointer. The
  output above is for 32 bit python/numpy.
 
  Chuck
 
  Check, the answer is 4, as you got for the 32-bit.   What would the
 answer
  be on a 64-bit architecture?  Why is this diagnostic?

 In a 64-bit setting, a pointer needs to be 64 bits long, that is,
 eight bytes, not four.

 What Charles pointed out was that while the inner product is very big,
 it seems to fit into memory on his 32-bit Linux machine; is it
 possible that OSX is preventing your python process from using even
 the meager 2-3 GB that a 32-bit process ought to get? In particular,
 try running Charles' script in a fresh python interpreter and see if
 it works; it may be that other arrays you had allocated are taking up
 some of the space that this one could.

 You will probably still want a 64-bit python, though, in order to have
 a little elbow room.


I think the difference is that Linux takes up 1 GiB, leaving 3 GiB to the
process, while I suspect OS X is taking up 2 GiB. I don't have a Mac, so I
don't really know. When I ran the problem it took about 1.8 GiB, making it
a close-run thing if the Mac only gives 32-bit processes 2 GiB.
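The arithmetic behind that estimate is easy to check directly. The shape below is illustrative (the thread's exact matrix isn't quoted here), but a 15000 x 100 float64 input already gives a 15000 x 15000 inner-product result of about 1.7 GiB:

```python
import numpy as np

m, n = 15000, 100                          # illustrative, not the OP's shape
itemsize = np.dtype(np.float64).itemsize   # 8 bytes

input_gib = m * n * itemsize / 2.0**30
# np.inner(a, a) for a of shape (m, n) produces an (m, m) result.
result_gib = m * m * itemsize / 2.0**30

print("input : %.3f GiB" % input_gib)   # ~0.011 GiB
print("result: %.3f GiB" % result_gib)  # ~1.676 GiB
```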

Chuck


Re: [Numpy-discussion] A memory problem: why does mmap come up in numpy.inner?

2008-06-04 Thread Dan Yamins
What Charles pointed out was that while the inner product is very big,
 it seems to fit into memory on his 32-bit Linux machine; is it
 possible that OSX is preventing your python process from using even
 the meager 2-3 GB that a 32-bit process ought to get?



Yes -- I think this is what is happening, because it's choking on allocating
1.6 GiB.



 In particular,
 try running Charles' script in a fresh python interpreter and see if
 it works; it may be that other arrays you had allocated are taking up
 some of the space that this one could.



I did try this a number of times.   The result was that by starting python
freshly or deleting other arrays from memory, I am able to increase the size
of the largest array I can compute the inner product on.   However, even
with nothing else in memory other than numpy and the original matrix whose
inner product I'm taking


Re: [Numpy-discussion] A memory problem: why does mmap come up in numpy.inner?

2008-06-04 Thread Michael Abshoff
Dan Yamins wrote:


Hello folks,

I did port Sage, and hence Python with numpy and scipy, to 64-bit OSX, and
below are some sample build instructions for building just python and
numpy in 64-bit mode.


 Try

 In [3]: numpy.dtype(numpy.uintp).itemsize
 Out[3]: 4

 which is the size in bytes of the integer needed to hold a
 pointer. The output above is for 32 bit python/numpy.

 Chuck


 Check, the answer is 4, as you got for the 32-bit.   What would the 
 answer be on a 64-bit architecture?  Why is this diagnostic?

 Thanks!
 Dan

First try:

export OPT="-g -fwrapv -O3 -m64 -Wall -Wstrict-prototypes"
./configure --disable-toolbox-glue --prefix=/Users/mabshoff/64bitnumpy/python-2.5.2-bin

SNIP
checking for int... yes
checking size of int... 4
checking for long... yes
checking size of long... 4
SNIP

Oops, make fail because of the above. Let's try again:

 ./configure --disable-toolbox-glue --prefix=/Users/mabshoff/64bitnumpy/python-2.5.2-bin --with-gcc="gcc -m64"

SNIP
checking for int... yes
checking size of int... 4
checking for long... yes
checking size of long... 8
SNIP

make && make install

then:

bsd:python-2.5.2-bin mabshoff$ file bin/python
bin/python: Mach-O 64-bit executable x86_64

Let's make the 64 bit python default:

bsd:64bitnumpy mabshoff$ export 
PATH=/Users/mabshoff/64bitnumpy/python-2.5.2-bin/bin/:$PATH
bsd:64bitnumpy mabshoff$ which python
/Users/mabshoff/64bitnumpy/python-2.5.2-bin/bin//python
bsd:64bitnumpy mabshoff$ file `which python`
/Users/mabshoff/64bitnumpy/python-2.5.2-bin/bin//python: Mach-O 64-bit 
executable x86_64

Let's build numpy 1.1.0:

bsd:64bitnumpy mabshoff$ tar xf numpy-1.1.0.tar.gz
bsd:64bitnumpy mabshoff$ cd numpy-1.1.0
bsd:numpy-1.1.0 mabshoff$ python setup.py install
SNIP

bsd:python-2.5.2-bin mabshoff$ python
Python 2.5.2 (r252:60911, Jun  4 2008, 20:47:16)
[GCC 4.0.1 (Apple Inc. build 5465)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> numpy.dtype(numpy.uintp).itemsize
8
>>> ^D
bsd:python-2.5.2-bin mabshoff$  

Voila ;)

With numpy 1.0.4svn from 20080104 the numpy setup did not work since a
conftest failed. I did report that to Stefan V. in IRC via #sage-devel
and I also thought that this was still a problem with numpy 1.1.0, but
fortunately that was fixed.

Now on to the test suite:

>>> numpy.test()
Numpy is installed in 
/Users/mabshoff/64bitnumpy/python-2.5.2-bin/lib/python2.5/site-packages/numpy
Numpy version 1.1.0
Python version 2.5.2 (r252:60911, Jun  4 2008, 20:47:16) [GCC 4.0.1 
(Apple Inc. build 5465)]
  Found 18/18 tests for numpy.core.tests.test_defmatrix
  Found 3/3 tests for numpy.core.tests.test_errstate
  Found 3/3 tests for numpy.core.tests.test_memmap
  Found 286/286 tests for numpy.core.tests.test_multiarray
  Found 70/70 tests for numpy.core.tests.test_numeric
  Found 36/36 tests for numpy.core.tests.test_numerictypes
  Found 12/12 tests for numpy.core.tests.test_records
  Found 143/143 tests for numpy.core.tests.test_regression
  Found 7/7 tests for numpy.core.tests.test_scalarmath
  Found 2/2 tests for numpy.core.tests.test_ufunc
  Found 16/16 tests for numpy.core.tests.test_umath
  Found 63/63 tests for numpy.core.tests.test_unicode
  Found 4/4 tests for numpy.distutils.tests.test_fcompiler_gnu
  Found 5/5 tests for numpy.distutils.tests.test_misc_util
  Found 2/2 tests for numpy.fft.tests.test_fftpack
  Found 3/3 tests for numpy.fft.tests.test_helper
  Found 24/24 tests for numpy.lib.tests.test__datasource
  Found 10/10 tests for numpy.lib.tests.test_arraysetops
  Found 1/1 tests for numpy.lib.tests.test_financial
  Found 53/53 tests for numpy.lib.tests.test_function_base
  Found 5/5 tests for numpy.lib.tests.test_getlimits
  Found 6/6 tests for numpy.lib.tests.test_index_tricks
  Found 15/15 tests for numpy.lib.tests.test_io
  Found 1/1 tests for numpy.lib.tests.test_machar
  Found 4/4 tests for numpy.lib.tests.test_polynomial
  Found 1/1 tests for numpy.lib.tests.test_regression
  Found 49/49 tests for numpy.lib.tests.test_shape_base
  Found 15/15 tests for numpy.lib.tests.test_twodim_base
  Found 43/43 tests for numpy.lib.tests.test_type_check
  Found 1/1 tests for numpy.lib.tests.test_ufunclike
  Found 89/89 tests for numpy.linalg.tests.test_linalg
  Found 3/3 tests for numpy.linalg.tests.test_regression
  Found 94/94 tests for numpy.ma.tests.test_core
  Found 15/15 tests for numpy.ma.tests.test_extras
  Found 17/17 tests for numpy.ma.tests.test_mrecords
  Found 36/36 tests for numpy.ma.tests.test_old_ma
  Found 4/4 tests for numpy.ma.tests.test_subclassing
  Found 7/7 tests for numpy.tests.test_random
  Found 16/16 tests for numpy.testing.tests.test_utils
  Found 5/5 tests for numpy.tests.test_ctypeslib

Re: [Numpy-discussion] A memory problem: why does mmap come up in numpy.inner?

2008-06-04 Thread Jonathan Wright
Dan Yamins wrote:
 On Wed, Jun 4, 2008 at 9:06 PM, Charles R Harris 
 [EMAIL PROTECTED] mailto:[EMAIL PROTECTED] wrote:


 Are both python and your version of OS X fully 64 bits?


 I'm not sure.  
From  python:

python2.5 -c 'import platform;print platform.architecture()'
('32bit', 'ELF')

versus, on a 64-bit build:

('64bit', 'ELF')

You can also try the unix file command (eg: from a terminal):

$ file `which python2.5`
/sware/exp/fable/standalone/redhate4-a64/bin/python: ELF 64-bit LSB 
executable, AMD x86-64, version 1 (SYSV), for GNU/Linux 2.4.0, 
dynamically linked (uses shared libs), not stripped

...etc. We needed this for generating the .so library file name for 
ctypes, and got the answer from comp.lang.python. I hope it also works 
for OS X.
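On newer Pythons (2.6+) there is one more quick check available from inside the interpreter; a sketch (on the 2.5 builds discussed above, platform.architecture() is the closer fit, since sys.maxsize does not exist there):

```python
import platform
import struct
import sys

# Three independent answers to "is this a 64-bit interpreter?"
print(platform.architecture()[0])   # '64bit' or '32bit'
print(struct.calcsize("P") * 8)     # pointer width in bits
print(sys.maxsize > 2**32)          # True only on a 64-bit build
```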

Best,

Jon



