Re: [Numpy-discussion] ValueError: matrices are not aligned!!!

2013-09-18 Thread Bradley M. Froehle
In [2]: %debug
> <ipython-input-1-2084f8a2223e>(5)Av()
      4 def Av(A,v):
----> 5     return np.dot(A,v)
      6

ipdb> !A.shape
(4, 8)
ipdb> !v.shape
(4,)

In your code it looks like you are essentially computing A.dot(v)
where A is a 4-by-8 matrix and v is a vector of length 4.  That's what
the error is telling you --- that the matrix and vector have
incompatible dimensions.

I've never seen the conjugate gradient method used on non-square
matrices... are you sure this is what you want to be doing?
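For reference, a minimal sketch of what a well-posed setup might look
like: the same iteration, but applied to a square symmetric
positive-definite matrix (the 4-by-8 A above cannot work; the SPD matrix
built here is purely illustrative):

```python
import numpy as np

def conj_grad(A, b, tol=1.0e-9):
    """Minimal conjugate gradient solve of A x = b for square SPD A."""
    x = np.zeros_like(b)
    r = b - A.dot(x)
    s = r.copy()
    for i in range(len(b)):
        u = A.dot(s)
        alpha = np.dot(s, r) / np.dot(s, u)
        x = x + alpha * s
        r = b - A.dot(x)
        if np.sqrt(np.dot(r, r)) < tol:
            break
        beta = -np.dot(r, u) / np.dot(s, u)
        s = r + beta * s
    return x, i

# M.T @ M + I is symmetric positive definite by construction.
M = np.arange(16, dtype=float).reshape(4, 4)
A = M.T.dot(M) + np.eye(4)
b = np.array([2.5, 4.5, 6.5, 8.0])
x, n_iter = conj_grad(A, b)
```

For an n-by-n SPD system, CG converges in at most n iterations in exact
arithmetic, so the residual here should be at machine-precision level.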

-Brad

On Wed, Sep 18, 2013 at 11:11 AM, Happyman bahtiyor_zohi...@mail.ru wrote:
 Hello,

 I am trying to solve linear Ax=b problem, but some error occurs in the
 process like:
 --

 Traceback (most recent call last):
   File "C:\Python27\Conjugate_Gradient.py", line 66, in <module>
     x, iter_number = conjGrad(Av, A, x, b)
   File "C:\Python27\Conjugate_Gradient.py", line 51, in conjGrad
     u = Av(A, s)
   File "C:\Python27\Conjugate_Gradient.py", line 41, in Av
     return np.dot(A, v)
 ValueError: matrices are not aligned
 ---

 I put the code below to check it.

 import numpy as np
 import math

 def Av(A, v):
     return np.dot(A, v)

 def conjGrad(Av, A, x, b, tol=1.0e-9):
     n = len(x)
     r = b - Av(A, x)
     s = r.copy()
     for i in range(n):
         u = Av(A, s)
         alpha = np.dot(s, r)/np.dot(s, u)
         x = x + alpha*s
         r = b - Av(A, x)
         if math.sqrt(np.dot(r, r)) < tol:
             break
         else:
             beta = -np.dot(r, u)/np.dot(s, u)
             s = r + beta*s
     return x, i

 if __name__ == '__main__':
     # Input values
     A = np.arange(32, dtype=float).reshape((4, 8))
     x = np.zeros(8)
     b = np.array([2.5, 4.5, 6.5, 8.0])
     x, iter_number = conjGrad(Av, A, x, b)

 I would appreciate any solution to this problem...
 Thanks in advance
 --
 happy man

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] A bug in numpy.random.shuffle?

2013-09-05 Thread Bradley M. Froehle
Looks like a bug. FWIW, NumPy 1.6.1 on Scientific Linux 6.4 does not
suffer from this malady.

-Brad

$ python
Python 2.6.6 (r266:84292, Feb 21 2013, 19:26:11)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-3)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> import numpy as np
>>> from numpy.random import shuffle

>>> print 'Numpy version:', np.__version__
Numpy version: 1.6.2

>>> a = np.arange(5)
>>> for i in range(5):
...     print a
...     shuffle(a)
...
[0 1 2 3 4]
[2 1 0 4 3]
[0 4 3 1 2]
[4 1 0 2 3]
[2 0 4 3 1]

>>> b = np.zeros(5, dtype=[('n', 'S1'), ('i', int)])
>>> b['i'] = range(5)
>>> print b
[('', 0) ('', 1) ('', 2) ('', 3) ('', 4)]
>>> print b.dtype
[('n', '|S1'), ('i', 'i8')]
>>> for i in range(5):
...     print b
...     shuffle(b)
...
[('', 0) ('', 1) ('', 2) ('', 3) ('', 4)]
[('', 0) ('', 2) ('', 4) ('', 1) ('', 3)]
[('', 2) ('', 0) ('', 4) ('', 1) ('', 3)]
[('', 2) ('', 1) ('', 3) ('', 4) ('', 0)]
[('', 0) ('', 1) ('', 2) ('', 4) ('', 3)]
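Until a fix lands, one workaround (assuming the bug affects only the
structured-dtype path, as the sessions above suggest) is to shuffle a
plain integer index array and fancy-index with it:

```python
import numpy as np

def shuffle_structured(a):
    """Return a shuffled copy of a 1-D (possibly structured) array."""
    idx = np.arange(len(a))
    np.random.shuffle(idx)   # shuffling a plain int array is unaffected
    return a[idx]            # fancy indexing keeps each record intact

b = np.zeros(5, dtype=[('n', 'S1'), ('i', int)])
b['i'] = range(5)
shuffled = shuffle_structured(b)
```

Unlike an in-place shuffle this returns a copy, but the records are
permuted whole, which is exactly the behavior the broken path loses.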

On Thu, Sep 5, 2013 at 11:11 AM, Fernando Perez fperez@gmail.com wrote:
 Hi all,

 I just ran into this rather weird behavior:

 http://nbviewer.ipython.org/6453869

 In summary, as far as I can tell, shuffle is misbehaving when acting
 on arrays that have structured dtypes. I've seen the problem on 1.7.1
 (official on ubuntu 13.04) as well as master as of a few minutes ago.

 Is this my misuse? It really looks like a bug to me...

 Cheers,

 f

 --
 Fernando Perez (@fperez_org; http://fperez.org)
 fperez.net-at-gmail: mailing lists only (I ignore this when swamped!)
 fernando.perez-at-berkeley: contact me here for any direct mail


Re: [Numpy-discussion] A bug in numpy.random.shuffle?

2013-09-05 Thread Bradley M. Froehle
I put this test case through `git bisect run` and here's what came
back.  I haven't confirmed this manually yet, but the blamed commit
does seem reasonable:

b26c675e2a91e1042f8f8d634763942c87fbbb6e is the first bad commit
commit b26c675e2a91e1042f8f8d634763942c87fbbb6e
Author: Nathaniel J. Smith n...@pobox.com
Date:   Thu Jul 12 13:20:20 2012 +0100

[FIX] Make np.random.shuffle less brain-dead

The logic in np.random.shuffle was... not very sensible. Fixes trac
ticket #2074.

This patch also exposes a completely unrelated issue in
numpy.testing. Filed as Github issue #347 and marked as knownfail for
now.

:04 04 6f3cf0c85a64664db6a71bd59909903f18b51639
0b6c8571dd3c9de8f023389f6bd963e42b12cc26 M numpy
bisect run success

On Thu, Sep 5, 2013 at 11:58 AM, Charles R Harris
charlesr.har...@gmail.com wrote:



 On Thu, Sep 5, 2013 at 12:50 PM, Fernando Perez fperez@gmail.com
 wrote:

 On Thu, Sep 5, 2013 at 11:43 AM, Charles R Harris
 charlesr.har...@gmail.com wrote:


  Oh, nice one ;) Should be fixable if you want to submit a patch.

 Strategy? One option is to do, for structured arrays, a shuffle of the
 indices and then an in-place

 arr = arr[shuffled_indices]

 But there may be a cleaner/faster way to do it.

 I'm happy to submit a patch, but I'm not familiar enough with the
 internals to know what the best approach should be.


 Better open an issue. It looks like a bug in the indexing code.

 Chuck




Re: [Numpy-discussion] f2py build with mkl lapack

2013-07-08 Thread Bradley M. Froehle
On Mon, Jul 8, 2013 at 10:15 AM, David Cournapeau courn...@gmail.com wrote:


 On Mon, Jul 8, 2013 at 5:05 PM, sunghyun Kim kimsungh...@kaist.ac.kr
 wrote:

 Hi

 I'm trying to use fortran wrapper f2py with intel's mkl

 following is my command

 LIB='-L/opt/intel/Compiler/11.1/064/mkl/lib/em64t/ -lguide -lpthread
 -lmkl_core -lmkl_intel_lp64 -lmkl_sequential'


 Linking order matters: if A needs B, A should appear before B, so
 -lpthread/-lguide should be at the end, mkl_intel_lp64 before mkl_core, and
 mkl_sequential in front of that.

 See the MKL manual for more details,

You may also want to consult the MKL Link Line Advisor [1], which in
your case recommends an ordering like:

-lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm

[1]: http://software.intel.com/en-us/articles/intel-mkl-link-line-advisor

-Brad


[Numpy-discussion] Fancy indexing oddity

2013-07-02 Thread Bradley M. Froehle
A colleague just showed me this indexing behavior and I was at a loss
to explain what was going on.  Can anybody else chime in and help me
understand this indexing behavior?

>>> import numpy as np
>>> np.__version__
'1.7.1'
>>> A = np.ones((2,3,5))
>>> mask = np.array([True]*4 + [False], dtype=bool)
>>> A.shape
(2, 3, 5)
>>> A[:,:,mask].shape
(2, 3, 4)
>>> A[:,1,mask].shape
(2, 4)
>>> A[1,:,mask].shape
(4, 3) # Why is this not (3, 4)?
>>> A[1][:,mask].shape
(3, 4)

Thanks!
Brad
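For what it's worth, the behavior matches NumPy's documented
advanced-indexing rule: when the advanced indices (here the scalar 1 and
the boolean mask) are separated by a slice, the broadcast advanced-index
dimensions are moved to the front of the result. A small sketch:

```python
import numpy as np

A = np.ones((2, 3, 5))
mask = np.array([True] * 4 + [False], dtype=bool)

# The scalar 1 and the mask are both "advanced" indices.  Because a
# slice sits between them, NumPy cannot keep their broadcast result in
# place and moves it to the front, giving (4, 3) rather than (3, 4).
assert A[1, :, mask].shape == (4, 3)

# Indexing in two steps leaves only one advanced index per operation,
# so the mask's dimension stays on the axis it selects:
assert A[1][:, mask].shape == (3, 4)
```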


Re: [Numpy-discussion] svd + multiprocessing hangs

2013-06-12 Thread Bradley M. Froehle
Dear Uwe:

I can't reproduce this using the system default versions of Python and
NumPy on Ubuntu 13.04:

$ python uwe.py
before svd
this message is not printed

>>> sys.version_info
sys.version_info(major=2, minor=7, micro=4, releaselevel='final', serial=0)
>>> numpy.__version__
'1.7.1'

Any idea how your hand-compiled versions might differ from the system
provided versions?
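One thing worth trying, on the assumption (mine, not confirmed) that the
hang comes from a threaded BLAS interacting badly with fork(): on newer
Pythons, use the "spawn" start method so the child starts with a clean
interpreter instead of inheriting the parent's BLAS thread state.

```python
import multiprocessing as mp
import numpy

def classify():
    X = numpy.random.random((10, 3))
    numpy.linalg.svd(X)
    return True

if __name__ == "__main__":
    # "spawn" starts a fresh interpreter rather than fork()ing, so the
    # child does not inherit any (possibly wedged) BLAS thread state.
    ctx = mp.get_context("spawn")
    p = ctx.Process(target=classify)
    p.start()
    p.join()
```

This is a diagnostic sketch, not a confirmed fix for your build; if it
helps, it points at the fork/threading interaction rather than at SVD.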

-Brad


On Wed, Jun 12, 2013 at 9:07 AM, Uwe Schmitt uschm...@mineway.de wrote:

 Dear all,

 the following code hangs on my Ubuntu machine.
 I use self compiled numpy 1.7.1 and Python
 2.7.3

 -

 import numpy
 import numpy.linalg
 import multiprocessing

 def classify():
     X = numpy.random.random((10, 3))
     print "before svd"
     numpy.linalg.svd(X)
     print "this message is not printed"


 if __name__ == "__main__":
     p = multiprocessing.Process(target=classify, args=())
     p.start()
     p.join()

 -

 Regards, Uwe.

 --
 Dr. rer. nat. Uwe Schmitt
 Leitung F/E Mathematik

 mineway GmbH
 Gebäude 4
 Im Helmerswald 2
 66121 Saarbrücken

 Telefon: +49 (0)681 8390 5334
 Telefax: +49 (0)681 830 4376

 uschm...@mineway.de
 www.mineway.de

 Geschäftsführung: Dr.-Ing. Mathias Bauer
 Amtsgericht Saarbrücken HRB 12339










Re: [Numpy-discussion] could anyone check on a 32bit system?

2013-04-30 Thread Bradley M. Froehle
On Tue, Apr 30, 2013 at 8:08 PM, Yaroslav Halchenko li...@onerussian.com wrote:

 could anyone on 32bit system with fresh numpy (1.7.1) test following:

  wget -nc http://www.onerussian.com/tmp/data.npy ; python -c 'import
 numpy as np; data1 = np.load("/tmp/data.npy"); print
  np.sum(data1[1,:,0,1]) - np.sum(data1, axis=1)[1,0,1]'

 0.0

 because unfortunately it seems on fresh ubuntu raring (in 32bit build only,
 seems ok in 64 bit... also never ran into it on older numpy releases):

  python -c 'import numpy as np; data1 = np.load("/tmp/data.npy"); print
  np.sum(data1[1,:,0,1]) - np.sum(data1, axis=1)[1,0,1]'
 -1.11022302463e-16


Perhaps on the 32-bit system one call is using the 80-bit extended
precision register for the summation and the other one is not?
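The underlying effect is easy to demonstrate even on 64-bit hardware,
because floating-point addition is not associative: any change in
accumulation order or register width can move the result by about one
ulp, which is exactly the -1.11e-16 discrepancy above.

```python
import numpy as np

# Regrouping the same three terms changes the rounding and hence the
# last bit of the result.
assert (0.1 + 0.2) + 0.3 != 0.1 + (0.2 + 0.3)

# The same thing with array reductions: a strict left-to-right loop is
# a different accumulation order than np.sum's internal one, so the two
# results may disagree slightly.
x = np.random.RandomState(0).rand(1000).astype(np.float32)
loop_sum = np.float32(0.0)
for v in x:
    loop_sum = np.float32(loop_sum + v)
print(abs(float(np.sum(x)) - float(loop_sum)))
```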

-Brad


Re: [Numpy-discussion] Import error while freezing with cxfreeze

2013-04-05 Thread Bradley M. Froehle
Hi Anand,

On Friday, April 5, 2013, Anand Gadiyar wrote:

 Hi all,

 I have a small program that uses numpy and scipy. I ran into a couple of
 errors while trying to use cxfreeze to create a windows executable.

 I'm running Windows 7 x64, Python 2.7.3 64-bit, Numpy 1.7.1rc1 64-bit,
 Scipy-0.11.0 64-bit, all binary installs from 
 http://www.lfd.uci.edu/~gohlke/pythonlibs/

 I was able to replicate this with scipy-0.12.0c1 as well.

 1) from scipy import constants triggers the below:
 Traceback (most recent call last):
   File "D:\Python27\lib\site-packages\cx_Freeze\initscripts\Console.py",
 line 27, in <module>
     exec code in m.__dict__
   File "mSimpleGui.py", line 10, in <module>
   File "mSystem.py", line 7, in <module>
   File "D:\Python27\lib\site-packages\scipy\__init__.py", line 64, in
 <module>
     from numpy import show_config as show_numpy_config
   File "D:\Python27\lib\site-packages\numpy\__init__.py", line 165, in
 <module>
     from core import *
 AttributeError: 'module' object has no attribute 'sys'


It's a bug in cx_freeze that has been fixed in the development branch.

See
https://bitbucket.org/anthony_tuininga/cx_freeze/pull-request/17/avoid-polluting-extension-module-namespace/diff


 2) from scipy import interpolate triggers the below:
 Traceback (most recent call last):
   File "D:\Python27\lib\site-packages\cx_Freeze\initscripts\Console.py",
 line 27, in <module>
     exec code in m.__dict__
   File "mSimpleGui.py", line 10, in <module>
   File "mSystem.py", line 9, in <module>
   File "mSensor.py", line 10, in <module>
   File "D:\Python27\lib\site-packages\scipy\interpolate\__init__.py", line
 154, in <module>
     from rbf import Rbf
   File "D:\Python27\lib\site-packages\scipy\interpolate\rbf.py", line 50, in
 <module>
     from scipy import linalg
 ImportError: cannot import name linalg


You might want to try the dev branch of cxfreeze to see if this has been
fixed as well.

Brad


Re: [Numpy-discussion] Raveling, reshape order keyword unnecessarily confuses index and memory ordering

2013-03-30 Thread Bradley M. Froehle
On Sat, Mar 30, 2013 at 3:21 PM, Matthew Brett matthew.br...@gmail.com wrote:

 On Sat, Mar 30, 2013 at 2:20 PM,  josef.p...@gmail.com wrote:
  On Sat, Mar 30, 2013 at 4:57 PM,  josef.p...@gmail.com wrote:
  On Sat, Mar 30, 2013 at 3:51 PM, Matthew Brett matthew.br...@gmail.com
 wrote:
  On Sat, Mar 30, 2013 at 4:14 AM,  josef.p...@gmail.com wrote:
  On Fri, Mar 29, 2013 at 10:08 PM, Matthew Brett 
 matthew.br...@gmail.com wrote:
 
   Ravel and reshape use the terms 'C' and 'F' in the sense of index
  ordering.
 
  This is very confusing.  We think the index ordering and memory
  ordering ideas need to be separated, and specifically, we should
 avoid
  using C and F to refer to index ordering.
 
  Proposal
  -
 
  * Deprecate the use of C and F meaning backwards and forwards
  index ordering for ravel, reshape
  * Prefer Z and N, being graphical representations of unraveling
 in
  2 dimensions, axis1 first and axis0 first respectively (excellent
  naming idea by Paul Ivanov)
 
  What do y'all think?
 
  I always thought F and C are easy to understand, I always thought
 about
  the content and never about the memory when using it.
 
  changing the names doesn't make it easier to understand.
  I think the confusion is because the new A and K refer to existing
 memory
 

 I disagree, I think it's confusing, but I have evidence, and that is
 that four out of four of us tested ourselves and got it wrong.

 Perhaps we are particularly dumb or poorly informed, but I think it's
 rash to assert there is no problem here.


I got all four correct.  I think the concept --- at least for ravel --- is
pretty simple: would you like to read the data off in C ordering or Fortran
ordering.  Since the output array is one-dimensional, its ordering is
irrelevant.
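Concretely, the two options just mean "last index varies fastest" (C)
versus "first index varies fastest" (Fortran):

```python
import numpy as np

a = np.array([[1, 2, 3],
              [4, 5, 6]])

# 'C' order reads the last index fastest, i.e. row by row...
assert a.ravel(order='C').tolist() == [1, 2, 3, 4, 5, 6]
# ...while 'F' order reads the first index fastest, i.e. column by column.
assert a.ravel(order='F').tolist() == [1, 4, 2, 5, 3, 6]

# Since the output is 1-D, the memory layout of the input is irrelevant:
assert np.asfortranarray(a).ravel(order='C').tolist() == [1, 2, 3, 4, 5, 6]
```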

I don't understand the 'Z' / 'N' suggestion at all.  Are they part of some
mnemonic?

I'd STRONGLY advise against deprecating the 'F' and 'C' options.  NumPy
already suffers from too much bikeshedding with names --- I rarely am able
to pull out a script I wrote using NumPy even a few years ago and have
it immediately work.

Cheers,
Brad


Re: [Numpy-discussion] variables not defined in numpy.random __init.py__ ?

2013-03-25 Thread Bradley M. Froehle
On Mon, Mar 25, 2013 at 12:51 PM, Ralf Gommers ralf.gomm...@gmail.com wrote:

 On Mon, Mar 25, 2013 at 4:23 PM, Dinesh B Vadhia 
 dineshbvad...@hotmail.com wrote:

 **
 Using PyInstaller, the following error occurs:

 Traceback (most recent call last):
   File "<string>", line 9, in <module>
   File "//usr/lib/python2.7/dist-packages/PIL/Image.py", line 355, in init
     __import__(f, globals(), locals(), [])
   File "//usr/lib/python2.7/dist-packages/PIL/IptcImagePlugin.py", line
 23, in <module>
     import os, tempfile
   File "/usr/lib/python2.7/tempfile.py", line 34, in <module>
     from random import Random as _Random
   File "//usr/lib/python2.7/dist-packages/numpy/random/__init__.py", line
 90, in <module>
     ranf = random = sample = random_sample
 NameError: name 'random_sample' is not defined

 Is line 90 in __init.py__ valid?


 It is.


In my reading of this the main problem is that `tempfile` is trying to
import `random` from the Python standard library but instead is importing
the one from within NumPy (i.e., `numpy.random`).  I suspect that somehow
`sys.path` is being set incorrectly --- perhaps because of the `PYTHONPATH`
environment variable.
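A quick diagnostic for that kind of shadowing (this only confirms the
hypothesis; fixing it means repairing the frozen app's path setup) is to
check where `random` actually resolves from:

```python
import random

# If sys.path is broken, "import random" can pick up
# numpy/random/__init__.py instead of the standard library module.
# The stdlib module defines Random; numpy.random does not, and its
# __file__ would contain "numpy".
print(random.__file__)
assert hasattr(random, "Random")
assert "numpy" not in random.__file__
```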

-Brad


Re: [Numpy-discussion] Vectorize and ufunc attribute

2013-03-12 Thread Bradley M. Froehle
T J:

You may want to look into `numpy.frompyfunc` (
http://docs.scipy.org/doc/numpy/reference/generated/numpy.frompyfunc.html).
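Unlike vectorize, frompyfunc returns a genuine ufunc, so reduce and
accumulate come for free; the cost is that it produces object arrays,
so a cast back to float is usually wanted. A sketch (the _logaddexp
helper here is illustrative, not your original):

```python
import math
import numpy as np

def _logaddexp(x, y):
    # log(exp(x) + exp(y)), written to avoid overflow for large inputs.
    m = max(x, y)
    return m + math.log(math.exp(x - m) + math.exp(y - m))

# Two inputs, one output.  The result is a real ufunc...
logaddexp = np.frompyfunc(_logaddexp, 2, 1)

# ...so elementwise calls and reduce both work, with object-dtype
# outputs that we cast back to float.
out = logaddexp([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]).astype(float)
red = float(logaddexp.reduce(np.array([1.0, 2.0, 3.0])))
```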

-Brad


On Tue, Mar 12, 2013 at 12:40 AM, T J tjhn...@gmail.com wrote:

 Prior to 1.7, I had working compatibility code such as the following:


 if has_good_functions:
 # http://projects.scipy.org/numpy/ticket/1096
 from numpy import logaddexp, logaddexp2
 else:
 logaddexp = vectorize(_logaddexp, otypes=[numpy.float64])
 logaddexp2 = vectorize(_logaddexp2, otypes=[numpy.float64])

 # Run these at least once so that .ufunc.reduce exists
 logaddexp([1.,2.,3.],[1.,2.,3.])
 logaddexp2([1.,2.,3.],[1.,2.,3.])

 # And then make reduce available at the top level
 logaddexp.reduce = logaddexp.ufunc.reduce
 logaddexp2.reduce = logaddexp2.ufunc.reduce


 The point was that I wanted to treat the output of vectorize as a hacky
 drop-in replacement for a ufunc.  In 1.7, I discovered that vectorize had
 changed (https://github.com/numpy/numpy/pull/290), and now there is no
 longer a ufunc attribute at all.

 Should this be added back in?  Besides hackish drop-in replacements, I see
 value in to being able to call reduce, accumulate, etc (when possible) on
 the output of vectorize().






Re: [Numpy-discussion] Remove interactive setup

2013-03-04 Thread Bradley M. Froehle
On Mon, Mar 4, 2013 at 10:34 PM, Charles R Harris charlesr.har...@gmail.com
 wrote:

 I note that the way to access it is to run python setup.py with no
 arguments. I wonder what the proper message should be in that case?


How about usage instructions and an error message, similar to what a basic
distutils setup script would provide?

-Brad


Re: [Numpy-discussion] reshaping arrays

2013-03-02 Thread Bradley M. Froehle
On Sat, Mar 2, 2013 at 6:03 PM, Sudheer Joseph sudheer.jos...@yahoo.com wrote:

 Hi all,
 For a 3d array in matlab, I can do the below to reshape it before
 an eof analysis. Is there a way to do the same using numpy? Please help.

 [nlat,nlon,ntim ]=size(ssh);
 tssh=reshape(ssh,nlat*nlon,ntim);
 and afterwards
 eofout=[]
 eofout=reshape(eof1,nlat,nlon,ntime)


Yes, easy:

nlat, nlon, ntim = ssh.shape
tssh = ssh.reshape(nlat*nlon, ntim, order='F')
and afterwards
eofout = eofl.reshape(nlat, nlon, ntim, order='F')

You probably want to go read through
http://www.scipy.org/NumPy_for_Matlab_Users.
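A quick round-trip check of the order='F' equivalence with MATLAB's
reshape (the array contents here are illustrative):

```python
import numpy as np

nlat, nlon, ntim = 3, 4, 5
ssh = np.arange(nlat * nlon * ntim, dtype=float).reshape(nlat, nlon, ntim)

# MATLAB-style (column-major) reshape: reshape(ssh, nlat*nlon, ntim)
tssh = ssh.reshape(nlat * nlon, ntim, order='F')
# ...and back again: reshape(eof1, nlat, nlon, ntim)
back = tssh.reshape(nlat, nlon, ntim, order='F')

assert tssh.shape == (12, 5)
assert np.array_equal(back, ssh)
```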

Cheers,
Brad


Re: [Numpy-discussion] a question about freeze on numpy 1.7.0

2013-02-25 Thread Bradley M. Froehle
I submitted a bug report (and patch) to cx_freeze.  You can follow up with
them at http://sourceforge.net/p/cx-freeze/bugs/36/.

-Brad


On Mon, Feb 25, 2013 at 12:06 AM, Gelin Yan dynami...@gmail.com wrote:



 On Mon, Feb 25, 2013 at 3:53 PM, Bradley M. Froehle 
 brad.froe...@gmail.com wrote:

 I can reproduce with NumPy 1.7.0, but I'm not convinced the bug lies
 within NumPy.

 The exception is not being raised on the `del sys` line.  Rather it is
 being raised in numpy.__init__:

   File
 "/home/bfroehle/.local/lib/python2.7/site-packages/cx_Freeze/initscripts/Console.py",
 line 27, in <module>
     exec code in m.__dict__
   File "numpytest.py", line 1, in <module>
     import numpy
   File
 "/home/bfroehle/.local/lib/python2.7/site-packages/numpy/__init__.py", line
 147, in <module>
     from core import *
 AttributeError: 'module' object has no attribute 'sys'

 This is because, somehow, `'sys' in numpy.core.__all__` returns True in
 the cx_Freeze context but False in the regular Python context.

 -Brad


 On Sun, Feb 24, 2013 at 10:49 PM, Gelin Yan dynami...@gmail.com wrote:



 On Mon, Feb 25, 2013 at 9:16 AM, Ondřej Čertík 
 ondrej.cer...@gmail.com wrote:

 Hi Gelin,

 On Sun, Feb 24, 2013 at 12:08 AM, Gelin Yan dynami...@gmail.com
 wrote:
  Hi All
 
   When I used numpy 1.7.0 with cx_freeze 4.3.1 on Windows, I quickly
  found out that even a simple "import numpy" may cause the program to
  fail with the following exception:
 
  AttributeError: 'module' object has no attribute 'sys'
 
  After poking around the code I noticed /numpy/core/__init__.py has a
  line 'del sys' at the bottom. After I commented out this line and
  repacked the whole program, it ran fine.
  I also noticed this 'del sys' didn't exist in numpy 1.6.2.
 
  I am curious why this 'del sys' should be here and whether it is safe
  to omit it. Thanks.

 The del sys line was introduced in the commit:


 https://github.com/numpy/numpy/commit/4c0576fe9947ef2af8351405e0990cebd83ccbb6

 and it seems to me that it is needed so that the numpy.core namespace
 is not
 cluttered by it.

 Can you post the full stacktrace of your program (and preferably some
 instructions
 how to reproduce the problem)? It should become clear where the problem
 is.

 Thanks,
 Ondrej


 Hi Ondrej

 I attached two files here for demonstration. you need cx_freeze to
 build a standalone executable file. simply running python setup.py build
 and try to run the executable file you may see this exception. This
 example works with numpy 1.6.2. Thanks.

 Regards

 gelin yan




 Hi Bradley

 So is it supposed to be a bug of cx_freeze? Any work around for that
 except omit 'del sys'? If the answer is no, I may consider submit a ticket
 on cx_freeze site. Thanks

 Regards

 gelin yan



Re: [Numpy-discussion] a question about freeze on numpy 1.7.0

2013-02-24 Thread Bradley M. Froehle
I can reproduce with NumPy 1.7.0, but I'm not convinced the bug lies within
NumPy.

The exception is not being raised on the `del sys` line.  Rather it is
being raised in numpy.__init__:

  File
"/home/bfroehle/.local/lib/python2.7/site-packages/cx_Freeze/initscripts/Console.py",
line 27, in <module>
    exec code in m.__dict__
  File "numpytest.py", line 1, in <module>
    import numpy
  File
"/home/bfroehle/.local/lib/python2.7/site-packages/numpy/__init__.py", line
147, in <module>
    from core import *
AttributeError: 'module' object has no attribute 'sys'

This is because, somehow, `'sys' in numpy.core.__all__` returns True in the
cx_Freeze context but False in the regular Python context.

-Brad


On Sun, Feb 24, 2013 at 10:49 PM, Gelin Yan dynami...@gmail.com wrote:



 On Mon, Feb 25, 2013 at 9:16 AM, Ondřej Čertík ondrej.cer...@gmail.com wrote:

 Hi Gelin,

 On Sun, Feb 24, 2013 at 12:08 AM, Gelin Yan dynami...@gmail.com wrote:
  Hi All
 
    When I used numpy 1.7.0 with cx_freeze 4.3.1 on Windows, I quickly
   found out that even a simple "import numpy" may cause the program to
   fail with the following exception:
  
   AttributeError: 'module' object has no attribute 'sys'
  
   After poking around the code I noticed /numpy/core/__init__.py has a
   line 'del sys' at the bottom. After I commented out this line and
   repacked the whole program, it ran fine.
   I also noticed this 'del sys' didn't exist in numpy 1.6.2.
  
   I am curious why this 'del sys' should be here and whether it is safe
   to omit it. Thanks.

 The del sys line was introduced in the commit:


 https://github.com/numpy/numpy/commit/4c0576fe9947ef2af8351405e0990cebd83ccbb6

 and it seems to me that it is needed so that the numpy.core namespace is
 not
 cluttered by it.

 Can you post the full stacktrace of your program (and preferably some
 instructions
 how to reproduce the problem)? It should become clear where the problem
 is.

 Thanks,
 Ondrej


 Hi Ondrej

 I attached two files here for demonstration. you need cx_freeze to
 build a standalone executable file. simply running python setup.py build
 and try to run the executable file you may see this exception. This
 example works with numpy 1.6.2. Thanks.

 Regards

 gelin yan




Re: [Numpy-discussion] A proposed change to rollaxis() behavior for negative 'start' values

2013-02-16 Thread Bradley M. Froehle
Have you considered using .transpose(...) instead?

In [4]: a = numpy.ones((3,4,5,6))

In [5]: a.transpose(2,0,1,3).shape
Out[5]: (5, 3, 4, 6)

In [6]: a.transpose(0,2,1,3).shape
Out[6]: (3, 5, 4, 6)

In [7]: a.transpose(0,1,2,3).shape
Out[7]: (3, 4, 5, 6)

In [8]: a.transpose(0,1,3,2).shape
Out[8]: (3, 4, 6, 5)


Re: [Numpy-discussion] A proposed change to rollaxis() behavior for negative 'start' values

2013-02-16 Thread Bradley M. Froehle
On Sat, Feb 16, 2013 at 10:45 AM, Bradley M. Froehle brad.froe...@gmail.com
 wrote:

 Have you considered using .transpose(...) instead?


My apologies... I didn't read enough of the thread to see what the issue
was about.  I personally think rollaxis(...) is quite confusing and instead
choose to use .transpose(...) for clarity.

You can interpret my suggestion as a means of implementing moveaxis w/o
calling rollaxis.  In fact rollaxis is built out of the .transpose(...)
primitive anyway.
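To make that concrete, here is a sketch of a moveaxis-style helper built
directly on the transpose primitive (newer NumPy ships np.moveaxis,
which handles the general multi-axis case):

```python
import numpy as np

def moveaxis(a, source, destination):
    """Move one axis of `a` to a new position, leaving the rest in order."""
    src = source % a.ndim
    dst = destination % a.ndim
    order = [i for i in range(a.ndim) if i != src]
    order.insert(dst, src)
    return a.transpose(order)

a = np.ones((3, 4, 5, 6))
assert moveaxis(a, 2, 0).shape == (5, 3, 4, 6)
assert moveaxis(a, 0, -1).shape == (4, 5, 6, 3)
```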

Cheers,
Brad


Re: [Numpy-discussion] confused about tensordot

2013-02-15 Thread Bradley M. Froehle
Hi Neal:

The tensordot part:
  np.tensordot(a, b.conj(), ((0,), (0,)))

is returning a (13, 13) array whose [i, j]-th entry is  sum( a[k, i] *
b.conj()[k, j] for k in xrange(1004) ).

-Brad


The print statement outputs this:

 (1004, 13) (1004, 13) (13,) (13, 13)

 The correct output should be (13,), but the tensordot output is (13,13).

 It's supposed to take 2 matrixes, each (1004, 13) and do element-wise
 multiply,
 then sum over axis 0.



Re: [Numpy-discussion] confused about tensordot

2013-02-15 Thread Bradley M. Froehle

  It's supposed to take 2 matrixes, each (1004, 13) and do element-wise
  multiply,
  then sum over axis 0.
 

 Can I use tensordot to do what I want?


No.  In your case I'd just do (a*b.conj()).sum(0).  (Assuming that a and b
are arrays, not matrices).

It is most helpful to think of tensordot as a generalization on
matrix multiplication where the axes argument gives the axes of the first
and second arrays which should be summed over.

a = np.random.rand(4,5,6,7)
b = np.random.rand(8,7,5,2)
c = np.tensordot(a, b, axes=((1, 3), (2, 1))) # contract over dimensions
with size 5 and 7
assert c.shape == (4, 6, 8, 2) # the resulting shape is a.shape + b.shape
# with the contracted dimensions removed.

-Brad


Re: [Numpy-discussion] Dealing with the mode argument in qr.

2013-02-06 Thread Bradley M. Froehle
 This would be a problem imho. But I don't see why you can't add raw to
 numpy's qr. And if you add big and thin in numpy, you can add those
 modes in scipy too.


 Currently I've used bfroehle's suggestions, although I'm tempted by 'thin'
 instead of 'reduced'


Thin sounds fine to me.  Either way I think we need to clean up the
docstring to make the different calling styles more obvious.  Perhaps we
can just add a quick list of variants:
q, r = qr(a) # q is m-by-k, r is k-by-n
q, r = qr(a, 'thin')  # same as qr(a)
q, r = qr(a, 'complete') # q is m-by-n, r is n-by-n
   r = qr(a, 'r') # r is 
a2  = qr(a, 'economic') # r is contained in the upper triangular part of a2
a2, tau = qr(a, 'raw') # ...
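For reference, the mode names that eventually shipped in numpy.linalg.qr
are 'reduced' (the default), 'complete', 'r', and 'raw'; a quick shape
check of the first three, assuming a tall m-by-n input:

```python
import numpy as np

m, n = 6, 4
a = np.random.RandomState(0).rand(m, n)

q, r = np.linalg.qr(a)                    # 'reduced': q is m-by-n, r is n-by-n
assert q.shape == (m, n) and r.shape == (n, n)

q_full, r_full = np.linalg.qr(a, mode='complete')  # q is m-by-m, r is m-by-n
assert q_full.shape == (m, m) and r_full.shape == (m, n)

r_only = np.linalg.qr(a, mode='r')        # just the triangular factor
assert r_only.shape == (n, n)

assert np.allclose(q.dot(r), a)           # q @ r reconstructs a
```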

Regards,
Brad


Re: [Numpy-discussion] PyArray_FromAny silently converts None to a singleton nan

2013-01-28 Thread Bradley M. Froehle
>>> import numpy as np
>>> np.double(None)
nan

On Mon, Jan 28, 2013 at 3:48 PM, Geoffrey Irving irv...@naml.us wrote:

 I discovered this from C via the PyArray_FromAny function, but here it
 is in Python:

  >>> asarray(None, dtype=float)
  array(nan)

 Is this expected or documented behavior?


Re: [Numpy-discussion] Casting Bug or a Feature?

2013-01-16 Thread Bradley M. Froehle
Hi Patrick:

I think it is the behavior I have come to expect.  The only gotcha here
might be the difference between var = var + 0.5 and var += 0.5

For example:

>>> import numpy as np

>>> x = np.arange(5); x += 0.5; x
array([0, 1, 2, 3, 4])

>>> x = np.arange(5); x = x + 0.5; x
array([ 0.5,  1.5,  2.5,  3.5,  4.5])

The first line is definitely what I expect.  The second, the automatic
casting from int64 to double, is documented and generally desirable.

It's hard to avoid these casting issues without making code unnecessarily
complex or allowing only one data type (e.g., as MATLAB does).

If you worry about standardizing behavior you can always use `var =
np.array(var, dtype=np.double, copy=True)` or similar at the start of your
function.
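Concretely, normalizing up front makes the two call styles agree; a
sketch of that defensive copy applied to the add_n_multiply example
below:

```python
import numpy as np

def add_n_multiply(var):
    # Normalize to a float64 copy so integer input can't silently
    # truncate the in-place += 0.5, and the caller's array is never
    # mutated.
    var = np.array(var, dtype=np.double, copy=True)
    var += 0.5
    var *= 10
    return var

aaa = np.arange(5)
result = add_n_multiply(aaa)
assert result.tolist() == [5.0, 15.0, 25.0, 35.0, 45.0]
assert aaa.tolist() == [0, 1, 2, 3, 4]   # original left untouched
```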

-Brad


On Wed, Jan 16, 2013 at 4:16 PM, Patrick Marsh patrickmars...@gmail.com wrote:

 Greetings,

 I spent a couple hours today tracking down a bug in one of my programs. I
 was getting different answers depending on whether I passed in a numpy
 array or a single number. Ultimately, I tracked it down to something I
 would consider a bug, but I'm not sure if others do. The case comes from
 taking a numpy integer array and adding a float to it.  When doing var =
 np.array(ints) + float, var is cast to an array of floats, which is what I
 would expect. However, if I do np.array(ints) += float, the result is an
 array of integers. I can understand why this happens -- you are shoving the
 sum back into an integer array -- but without thinking through that I would
 expect the behavior of the two additions to be equal...or at least be
 consistent with what occurs with numbers, instead of arrays.  Here's a
 trivial example demonstrating this


 import numpy as np
 a = np.arange(10)
 print a.dtype
 b = a + 0.5
 print b.dtype
 a += 0.5
 print a.dtype

   int64
  float64
  int64
  <type 'int'>
  <type 'float'>
  <type 'float'>


 An implication of this arrises from a simple function that does math.
 The function returns different values depending on whether a number or
 array was passed in.


 def add_n_multiply(var):
     var += 0.5
     var *= 10
     return var

 aaa = np.arange(5)
 print aaa
 print add_n_multiply(aaa.copy())
 print [add_n_multiply(x) for x in aaa.copy()]


  [0 1 2 3 4]
  [ 0 10 20 30 40]
  [5.0, 15.0, 25.0, 35.0, 45.0]




 Am I alone in thinking this is a bug? Or is this the behavior that others
 would have expected?



 Cheers,
 Patrick
 ---
 Patrick Marsh
 Ph.D. Candidate / Liaison to the HWT
 School of Meteorology / University of Oklahoma
 Cooperative Institute for Mesoscale Meteorological Studies
 National Severe Storms Laboratory
 http://www.patricktmarsh.com



Re: [Numpy-discussion] Building numpy with OpenBLAS

2012-12-14 Thread Bradley M. Froehle
Hi Sergey: 

I recently ran into similar problems with ACML.

See the original bug report (https://github.com/numpy/numpy/issues/2728) and 
documentation fix (https://github.com/numpy/numpy/pull/2809).

Personally, I ended up using a patch similar to 
https://github.com/numpy/numpy/pull/2751 to force NumPy to respect site.cfg (so 
that I could put the libacml in the [blas_opt] & [lapack_opt] sections).  But 
this seems unlikely to get merged into NumPy as it changes the behavior of 
site.cfg.  Instead I think we should discuss adding a "have cblas" flag of some 
sort to the [blas] section so that the user can still get _dotblas to compile.
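For the record, the workaround in that pull request amounts to letting an explicit site.cfg entry win over auto-detection; under that patch a configuration along these lines would select ACML (the section keys and library list here are illustrative, mirroring the ACML examples later in this digest, not a verified syntax):

```ini
[blas_opt]
libraries = cblas, acml
library_dirs = /opt/acml5.2.0/gfortan64_fma4/lib

[lapack_opt]
libraries = acml
library_dirs = /opt/acml5.2.0/gfortan64_fma4/lib
```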

-Brad 


On Friday, December 14, 2012 at 1:17 AM, Sergey Bartunov wrote:

 Hi. I'm trying to build numpy (1.6.2 and master from git) with
 OpenBLAS on Ubuntu server 11.10.
 
 I succeeded with this just once, and the performance boost was really big for
 me, but unfortunately something went wrong with my application and I
 had to reinstall numpy. After that I couldn't reproduce that result,
 or even make it any faster than a default numpy installation with no
 external libraries.
 
 Now things have gotten even worse. I assume that numpy built with BLAS and
 LAPACK should do the dot operation faster than a clean installation on
 relatively large matrices (say 2000 x 2000). Here I don't use OpenBLAS
 anyway.
 
 I install libblas-dev and liblapack-dev by apt-get and after that
 build numpy from sources / by pip (that doesn't matter for the
 result). Building tool reports that BLAS and LAPACK are detected on my
 system, so says numpy.distutils.system_info after installation. But
 matrix multiplication by dot takes the same time as clean installation
 (12 s vs 0.16 s with OpenBLAS). That's the first thing I'm wondering
 about.
 
 Nevertheless I tried to compile numpy with OpenBLAS only. I
 forced this by setting ATLAS= BLAS=/usr/lib/libopenblas.a
 LAPACK=/usr/lib/libopenblas.a, as I saw somewhere on the internet. I
 had installed numpy exactly this way the first time, when I was
 lucky. But now it doesn't work for me. I tried installing OpenBLAS
 both from sources and as the libopenblas-dev ubuntu package.
 
 So how can I fix this? Many thanks in advance.
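A quick way to see which BLAS actually got linked is to time a large matrix product; figures like the 12 s vs 0.16 s above typically come from a comparison along these lines (the size and expectations here are illustrative):

```python
import time
import numpy as np

n = 2000
a = np.random.rand(n, n)
b = np.random.rand(n, n)

t0 = time.time()
c = a.dot(b)            # dispatched to BLAS dgemm when a BLAS is linked in
elapsed = time.time() - t0

print("%d x %d dot took %.2f s" % (n, n, elapsed))
# Unoptimized builds typically take several seconds at this size;
# OpenBLAS/ATLAS/MKL are usually well under one second.
```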





Re: [Numpy-discussion] Proposal to drop python 2.4 support in numpy 1.8

2012-12-13 Thread Bradley M. Froehle
Targeting >= 2.6 would be preferable to me.  Several other packages, including 
IPython, support only Python >= 2.6, >= 3.2.

This change would help keep me from accidentally writing Python syntax which is 
allowable in 2.6 & 2.7 (but not in 2.4 or 2.5).

Compiling a newer Python interpreter isn't very hard… probably about as 
difficult as installing NumPy.

-Brad  


On Thursday, December 13, 2012 at 9:03 AM, Skipper Seabold wrote:

 On Thu, Dec 13, 2012 at 12:00 PM, David Cournapeau courn...@gmail.com 
 (mailto:courn...@gmail.com) wrote:
 [snip]
  I would even go as far as dropping 2.5 as well then (RHEL 6
  uses python 2.6).
  
 +1




Re: [Numpy-discussion] Support for python 2.4 dropped. Should we drop 2.5 also?

2012-12-13 Thread Bradley M. Froehle
Yes, but the point was that since you can live with an older version of
Python, you can probably live with an older version of NumPy.

On Thursday, December 13, 2012, David Cournapeau wrote:

  I'm still dumbfounded that people are working on projects where they
  are free to use the latest and greatest numpy, but *have* to use a
  more-than-four-year-old Python:

 It happens very easily in corporate environments. Compiling python is
 a major headache compared to numpy, not because of python itself, but
 because you need to recompile every single extension you're gonna use.



Re: [Numpy-discussion] np.dstack vs np.concatenate?

2012-12-10 Thread Bradley M. Froehle
The source for np.dstack would point the way towards a simpler
implementation:

array = np.concatenate(map(np.atleast_3d, (arr1, arr2, arr3, arr4, arr5,
arr6)), axis=2)

array_list_old = [arr1, arr2, arr3, arr4, arr5, arr6]

 array_list = [arr[...,np.newaxis] for arr in array_list_old]
 array = np.concatenate(tuple(array_list),axis=2)

 So is there some inconsistency in the documentation?


Maybe.
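To make the equivalence concrete, a minimal check (plain NumPy; a four-array stand-in for the six-array example above):

```python
import numpy as np

# Four 2x3 arrays to stack along a new third axis.
arrays = [np.arange(6).reshape(2, 3) + i for i in range(4)]

stacked = np.dstack(arrays)
# np.atleast_3d appends a length-1 third axis to each 2-D array,
# so concatenating along axis=2 reproduces dstack exactly.
via_concat = np.concatenate([np.atleast_3d(a) for a in arrays], axis=2)
# The newaxis spelling from the thread is equivalent for 2-D inputs.
via_newaxis = np.concatenate([a[..., np.newaxis] for a in arrays], axis=2)

assert stacked.shape == (2, 3, 4)
assert (stacked == via_concat).all()
assert (stacked == via_newaxis).all()
```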


Re: [Numpy-discussion] site.cfg: Custom BLAS / LAPACK configuration

2012-12-07 Thread Bradley M. Froehle
Aha, thanks for the clarification.  I've always been surprised that NumPy 
doesn't ship with a copy of CBLAS.  It's easy to compile --- just a thin 
wrapper over BLAS, if I remember correctly. 

-Brad 


On Friday, December 7, 2012 at 4:01 AM, David Cournapeau wrote:

 On Thu, Dec 6, 2012 at 7:35 PM, Bradley M. Froehle
 brad.froe...@gmail.com (mailto:brad.froe...@gmail.com) wrote:
  Right, but if I link to libcblas, cblas would be available, no?
 
 
 
 No, because we don't explicitly check for CBLAS. We assume it is there
 if Atlas, Accelerate or MKL is found.
 
 cheers,
 David





Re: [Numpy-discussion] site.cfg: Custom BLAS / LAPACK configuration

2012-12-06 Thread Bradley M. Froehle
Thanks Alexander, that was quite helpful, but unfortunately does not
actually work. The recommendations there are akin to a site.cfg file:

[atlas]
atlas_libs =
library_dirs =

[blas]
blas_libs = cblas,acml
library_dirs = /opt/acml5.2.0/gfortan64_fma4/lib

[lapack]
blas_libs = cblas,acml
library_dirs = /opt/acml5.2.0/gfortan64_fma4/lib
$ python setup.py build

However this makes numpy think that there is no optimized blas available
and prevents the numpy.core._dotblas module from being built.

-Brad


On Thu, Dec 6, 2012 at 4:29 AM, Alexander Eberspächer 
alex.eberspaec...@gmail.com wrote:

 On Fri, 30 Nov 2012 12:13:58 -0800
 Bradley M. Froehle brad.froe...@gmail.com wrote:

  As far as I can tell, it's IMPOSSIBLE to create a site.cfg which will
  link to ACML when a system installed ATLAS is present.

 setup.py respects environment variables. You can set ATLAS to None and
 force the setup to use $LAPACK and $BLAS. See also this link:

 http://www.der-schnorz.de/2012/06/optimized-linear-algebra-and-numpyscipy/

 Greetings

 Alex



Re: [Numpy-discussion] site.cfg: Custom BLAS / LAPACK configuration

2012-12-06 Thread Bradley M. Froehle
Right, but if I link to libcblas, cblas would be available, no?


On Thu, Dec 6, 2012 at 10:34 AM, David Cournapeau courn...@gmail.com wrote:

 On Thu, Dec 6, 2012 at 7:13 PM, Bradley M. Froehle
 brad.froe...@gmail.com wrote:
  Thanks Alexander, that was quite helpful, but unfortunately does not
  actually work. The recommendations there are akin to a site.cfg file:
 
  [atlas]
  atlas_libs =
  library_dirs =
 
  [blas]
  blas_libs = cblas,acml
  library_dirs = /opt/acml5.2.0/gfortan64_fma4/lib
 
  [lapack]
  blas_libs = cblas,acml
  library_dirs = /opt/acml5.2.0/gfortan64_fma4/lib
  $ python setup.py build
 
  However this makes numpy think that there is no optimized blas available
 and
  prevents the numpy.core._dotblas module from being built.

 _dotblas is only built if *C*blas is available (atlas, accelerate and
 mkl only are supported ATM).

 David



[Numpy-discussion] site.cfg: Custom BLAS / LAPACK configuration

2012-11-30 Thread Bradley M. Froehle
I recently installed NumPy 1.6.2 on a new computer and wanted to use ACML
as the BLAS/LAPACK library.  [I'm aware that ACML doesn't provide CBLAS,
but that is easy to work around by compiling it yourself to produce
libcblas.a or libcblas.so].

I experienced a great bit of difficulty in getting NumPy to use ACML
(-lcblas -lacml), primarily stemming from the fact that there was a working
ATLAS installation already in /usr/lib64.

As far as I can tell, it's IMPOSSIBLE to create a site.cfg which will link
to ACML when a system installed ATLAS is present.

The detection routine for blas_opt (and similarly for lapack_opt) seem to
operate as:
* Is MKL present?  If so, use it.
* Is ATLAS present? If so, use it.
* Use [blas] section from site.cfg.

Instead I would have expected the detection routine to be more like:
* Is [blas_opt] present in site.cfg? If so, use it.
* Is MKL present? ...
* Is ATLAS present? ...
* Use [blas] section from site.cfg.
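The expected ordering can be sketched in plain Python (the helper names and dict-based configuration here are hypothetical stand-ins, not NumPy's actual distutils internals):

```python
def get_blas_info(site_cfg, detect_mkl, detect_atlas):
    """Sketch of the proposed lookup order: an explicit user
    configuration in [blas_opt] wins; auto-detection is only a
    fallback."""
    if 'blas_opt' in site_cfg:        # 1. user override from site.cfg
        return site_cfg['blas_opt']
    info = detect_mkl()               # 2. vendor libraries, if found
    if info:
        return info
    info = detect_atlas()
    if info:
        return info
    return site_cfg.get('blas')       # 3. generic [blas] fallback

# Illustration: an explicit [blas_opt] entry beats a detected ATLAS.
cfg = {'blas_opt': {'libraries': ['cblas', 'acml']}}
assert get_blas_info(cfg, lambda: None,
                     lambda: {'libraries': ['atlas']}) == cfg['blas_opt']
```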

This is not just a problem with ACML.  I've also experienced this when
using NumPy on some cray supercomputers where the default C compiler
automatically links a preferred BLAS/LAPACK.

I created a GitHub issue for this:
https://github.com/numpy/numpy/issues/2728.  In addition, I created a pull
request with a "works for me" solution, which needs some
wider visibility: https://github.com/numpy/numpy/pull/2751.

I'd appreciate any reviews, workarounds, or other general feedback.  If you
want to test out the library detection mechanism you can run the following
from within the NumPy source directory::

import __builtin__ as builtins
builtins.__NUMPY_SETUP__ = True
import numpy.distutils.system_info as si
print si.get_info('blas_opt')

Thanks,
Brad