[Numpy-discussion] Ternary plots anywhere?

2010-07-02 Thread V. Armando Solé
Dear all,

Perhaps this is a bit off topic for the mailing list, but this is 
probably the only mailing list that is common to users of all python 
plotting packages.

I am trying to find a python implementation of ternary/triangular plots:

http://en.wikipedia.org/wiki/Ternary_plot

but I have been unsuccessful. Is there any on-going project around?
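For what it's worth, the projection behind a ternary plot is small enough to write by hand and feed to any 2-D plotting package. A minimal numpy sketch (the function name and corner layout are my own choices, not from any existing package):

```python
import numpy as np

def ternary_to_xy(abc):
    """Map barycentric compositions (a, b, c) to 2-D triangle coordinates."""
    abc = np.asarray(abc, dtype=float)
    abc = abc / abc.sum(axis=-1, keepdims=True)  # normalize so a + b + c = 1
    a, b, c = abc[..., 0], abc[..., 1], abc[..., 2]
    x = 0.5 * (2.0 * b + c)        # pure b sits at (1, 0)
    y = (np.sqrt(3.0) / 2.0) * c   # pure c sits at the apex
    return np.stack([x, y], axis=-1)

# the three pure components land on the corners of an equilateral triangle
corners = ternary_to_xy([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
print(corners)
```

The resulting (x, y) points can then be scattered or contoured with any 2-D plotting library.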

Thanks for your time.

Best regards,

Armando

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] [ANN] Bento (ex-toydist) 0.0.3

2010-07-02 Thread Dag Sverre Seljebotn
David Cournapeau wrote:
 On Fri, Jul 2, 2010 at 1:56 PM, Robert Pyle rp...@post.harvard.edu wrote:
   
 Hi,

 While I agree that toydist needs a new name, Bento might not be a good
 choice.  It's already the name of a database system for Macintosh from
 Filemaker, an Apple subsidiary.  I'd be *very* surprised if the name
 Bento is not copyrighted.
 

 Can you copyright a word? I thought this was the trademark part of
 the law. For example, linux is a trademark owned by Linus Torvalds.
 Also, well-known packages use words which are at least as common as
 bento in English (sphinx, twisted, etc...), and as likely to be
 trademarked. But IANAL...
   
There's been lots of discussion about this on the Sage list, since 
there's lots of software called Sage. It seems that the consensus of 
IANAL advice on that list is that as long as they're not competing in 
the same market they're OK. For instance, there's been some talk about 
whether it's OK to include economics utilities in Sage since there's 
accounting software (?) called Sage -- that sort of thing.

Thanks for your work David, I'll make sure to check it out soon!

Dag Sverre


Re: [Numpy-discussion] [ANN] Bento (ex-toydist) 0.0.3

2010-07-02 Thread David
On 07/02/2010 05:05 PM, Dag Sverre Seljebotn wrote:

Thanks, that's useful to know.

 Thanks for your work David, I'll make sure to check it out soon!

Note that cython's setup.py can be automatically converted - there is a 
small issue with the setup docstring, which contains reST syntax 
incompatible with the bento.info format (when empty lines have a different 
amount of whitespace than the current indentation). But once you manually 
edit those, you can build an egg or a windows installer and install cython. 
In particular, the cython script is turned into an executable the way 
setuptools does it, so cython is a bit more practical to use on windows.

cheers,

David


[Numpy-discussion] memory leak using numpy and cvxopt

2010-07-02 Thread Tillmann Falck
Hi all,

I am hitting a memory leak with the combination of numpy and cvxopt.matrix. As 
I am not sure where it occurs, I am cross-posting.

On my machine (Fedora 13, x86_64) this example quickly eats up all my memory.

---
from cvxopt import matrix
import numpy as np

N = 2000

X = np.ones((N, N))
Y = matrix(0.0, (N, N))

while True:
    Y[:N, :N] = X
---

I don't hit the leak if I copy blocks of 1-d arrays.

Regards,
Tillmann


Re: [Numpy-discussion] [ANN] Bento (ex-toydist) 0.0.3

2010-07-02 Thread Robert Pyle

On Jul 2, 2010, at 1:11 AM, David Cournapeau wrote:

 Can you copyright a word ? I thought this was the trademark part of
 the law. For example, linux is a trademark owned by Linus Torvald.
 Also, well known packages use words which are at least as common as
 bento in English (sphinx, twisted, etc...), and as likely to be
 trademarked. But IANAL...


It was very late last night when I wrote.  I meant to say 'trademark'  
rather than 'copyright'.

But IANAL, also.

Bob




Re: [Numpy-discussion] [ANN] Bento (ex-toydist) 0.0.3

2010-07-02 Thread Matthew Brett
Hi,


I got roundly panned for doing this before, but...

If you have a look at - to reduce controversy - :

http://cyber.law.harvard.edu/metaschool/fisher/domain/tm.htm#7

you'll see a summary of the criteria used. I read this stuff as
meaning that, if you're doing something that has a low 'likelihood of
confusion' with the other guy / gal doing 'Bento', and the other
'Bento' trademark is not 'famous', you're probably, but not certainly,
safe from successful prosecution.

See you,

Matthew


Re: [Numpy-discussion] memory leak using numpy and cvxopt

2010-07-02 Thread Pauli Virtanen
Fri, 02 Jul 2010 14:56:47 +0200, Tillmann Falck wrote:
 I am hitting a memory leak with the combination of numpy and
 cvxopt.matrix. As I am not sure where it occurs, I am cross-posting.

Probably a bug in cvxopt, as also the following leaks memory:


from cvxopt import matrix

N = 2000

X = [0]*N
Y = matrix(0.0, (N, N))

while True:
    Y[:N, :1] = X


-- 
Pauli Virtanen



[Numpy-discussion] OT: request help building pymex win64

2010-07-02 Thread Robin
Hi,

Sorry for the offtopic post but I wondered if any Windows experts who
are familiar with topics like linking python on windows and visual
studio runtimes etc. might be able to help.

I'm on a bit of a mission to get pymex built for 64 bit windows. Pymex
( http://github.com/kw/pymex ) is a matlab package that embeds the
Python interpreter in a mex file and provides a very elegant interface
for manipulating python objects from matlab, as well as converting
between data types when necessary. It builds easily on mac, linux and
win32 with mingw, but I really need it also for 64 bit windows. (It
works very well with numpy as well, so not completely OT.)

I have looked at trying to get a 64-bit mingw working to build mex
files, but that seemed quite difficult, so instead I am trying to
build with VS 2008 Express Edition + Windows 7 SDK (for 64 bit
support). As far as I can tell this is installed OK, as I can build the
example mex64 files OK.

I have made some modifications to pymex to get it to build under vs
2008 ( http://github.com/robince/pymex/tree/win64 ).

And I can get it to build and link (I believe using the implicit dll
method of linking against C:\Python26\libs\python26.lib of the amd64
python.org python) without errors, but when I run it, it seems to
segfault whenever a pointer is passed between the mex side and
python26.dll.

I asked this stackoverflow question which has some more details (build log)
http://stackoverflow.com/questions/3167134/trying-to-embed-python-into-matlab-mex-win64

Anyway I'm completely in the dark but wondered if some of the experts
on here would be able to spot something (perhaps to do with
incompatible C runtimes - I am not sure what runtime Python is built
with but I thought it was VS 2008).

Cheers

Robin


Re: [Numpy-discussion] OT: request help building pymex win64

2010-07-02 Thread David Cournapeau
On Sat, Jul 3, 2010 at 1:37 AM, Robin robi...@gmail.com wrote:

The problem may be that matlab is built with one runtime, and Python
with another. Unless your matlab is very recent, it is actually
quite likely to be compiled with VS 2005, which means you should use
python 2.5 instead (or build python2.6 with VS 2005, but I am not sure
it is even possible without herculean effort).

David


Re: [Numpy-discussion] OT: request help building pymex win64

2010-07-02 Thread Robin
On Fri, Jul 2, 2010 at 5:47 PM, David Cournapeau courn...@gmail.com wrote:

 The problem may be that matlab is built with one runtime, and Python
 with another. Unless your matlab is very recent, it is actually
 quite likely to be compiled with VS 2005, which means you should use
 python 2.5 instead (or build python2.6 with VS 2005, but I am not sure
 it is even possible without herculean effort).

Thanks for your help!

I thought of that, but then VS 2008 is an officially supported
compiler for the version of matlab I am using (2009a).
http://www.mathworks.com/support/compilers/release2009a/win64.html

So I thought on the matlab/mex side 2008 should be fine, and I thought
since Python is built with 2008 that should also be OK.  But obviously
something isn't!

Cheers

Robin


Re: [Numpy-discussion] OT: request help building pymex win64

2010-07-02 Thread Ken Watford
That's an excellent point.
I've noticed on my (Linux) workstation that pymex works fine, but
PyCUDA fails to import properly, because PyCUDA is a Boost::Python
project and expects a different libstdc++ than the one that MATLAB
jams into its LD_LIBRARY_PATH. (I got around this using an evil
LD_PRELOAD, but that's another story)

So yeah. Robin has been converting my C99-isms to C++-isms to get the
Visual Studio compiler to accept it - and in the process, I suppose,
adding a libstdc++ dependency that wasn't there to begin with and that
MATLAB doesn't like.

Anyone know if there's a switch somewhere to get VS 2008 to accept
some semblance of C99 source? Otherwise you might need to convert
those bits into valid C89. I really need to convert this thing into
Cython some day.



Re: [Numpy-discussion] OT: request help building pymex win64

2010-07-02 Thread David Cournapeau
On Sat, Jul 3, 2010 at 1:58 AM, Robin robi...@gmail.com wrote:

 Thanks for your help!

 I thought of that, but then VS 2008 is an officially supported
 compiler for the version of matlab I am using (2009a).
 http://www.mathworks.com/support/compilers/release2009a/win64.html

What mathworks means by supported may not include what you are doing,
though. Generally, on Windows, people design APIs to be independent of
runtimes (because you more or less have to), and do not use much of
the standard C library anyway. This is not true for python. IOW,
supporting VS 2008 does not mean built with 2008; that's a limitation
of python (and also caused by the desire of MS to completely screw up
the C library, but that's another story).

Also, matlab could be built with 2008 and not use the same runtime as
python. Even if the same version is used in both python and matlab,
but the process uses two copies, the issues remain the same. You
should use depends.exe to check for this (and maybe the MS debugger as
well).

Also, I would double-check that the issue is not something else altogether.

David


Re: [Numpy-discussion] [Matplotlib-users] Vectorization

2010-07-02 Thread Benjamin Root
I am moving this over to numpy-discussion maillist...

I don't have a firm answer for you, but I did notice one issue in your
code.  You call arange(len(dx) - 1) for your loops, but you probably really
need arange(1, len(dx) - 1) because you are accessing elements both after
*and* before the current index.  An index of -1 is actually valid because
that means the last element of the array, and may not be what you intended.
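A two-line illustration of the wrap-around Ben describes:

```python
import numpy as np

a = np.array([10, 20, 30])
# index -1 does not fail: it silently wraps around to the last element,
# so phi[k, j, i-1] with i == 0 reads the far edge of the grid
print(a[0 - 1])  # → 30
```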

Ben Root

On Fri, Jul 2, 2010 at 1:15 PM, Nicolas Bigaouette nbigaoue...@gmail.comwrote:

 Hi all,

 I don't really know where to ask, so here it is.

 I was able to vectorize the normalization calculation in quantum mechanics:
 <phi|phi>. Basically it's a volume integral of a scalar field. Using:

 norm = 0.0
 for i in numpy.arange(len(dx)-1):
     for j in numpy.arange(len(dy)-1):
         for k in numpy.arange(len(dz)-1):
             norm += psi[k,j,i]**2 * dx[i] * dy[j] * dz[k]

 is dead slow. I replaced that with:

 norm = (psi**2 *
 dx*dy[:,numpy.newaxis]*dz[:,numpy.newaxis,numpy.newaxis]).sum()

 which is almost instantaneous.

 I want to do the same for the calculation of the kinetic energy:
 <phi|p^2|phi>/2m. There is a laplacian in the volume integral which
 complicates things:

 K = 0.0
 for i in numpy.arange(len(dx)-1):
     for j in numpy.arange(len(dy)-1):
         for k in numpy.arange(len(dz)-1):
             K += -0.5 * m * phi[k,j,i] * (
                   (phi[k,j,i-1] - 2.0*phi[k,j,i] + phi[k,j,i+1]) / dx[i]**2
                 + (phi[k,j-1,i] - 2.0*phi[k,j,i] + phi[k,j+1,i]) / dy[j]**2
                 + (phi[k-1,j,i] - 2.0*phi[k,j,i] + phi[k+1,j,i]) / dz[k]**2
             )


 My question is, how would I vectorize such loops? I don't know how I would
 manage the numpy.newaxis code-foo with neighbours dependency... Any idea?

 Thanx!


 ___
 Matplotlib-users mailing list
 matplotlib-us...@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/matplotlib-users




Re: [Numpy-discussion] [Matplotlib-users] Vectorization

2010-07-02 Thread Keith Goodman
On Fri, Jul 2, 2010 at 11:33 AM, Benjamin Root ben.r...@ou.edu wrote:

If no one knows how to vectorize it then one way to go is cython. If
you convert your arrays to lists then it is very easy to convert the
loop to cython. Fast too.


Re: [Numpy-discussion] [Matplotlib-users] Vectorization

2010-07-02 Thread Keith Goodman
On Fri, Jul 2, 2010 at 11:45 AM, Keith Goodman kwgood...@gmail.com wrote:

Some more thoughts: you pull phi[k,j,i] four times per loop. Setting
phikji = phi[k,j,i] might give a little more speed. It might also help to
do stuff like phi_i = phi[:,:,i], phi_ip1 = phi[:,:,i+1], etc. right
after the for i in range() line. Also, everything gets multiplied by
-0.5 * m * phi, so you should do that outside the loop. You can square
the d's (dx, dy, and dz) outside the loop. Doing all these and then using
cython should make things very fast.
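A rough sketch of those hoists on a small random grid (variable names and sizes are mine, for illustration only; the loop itself would still move to cython for real speed):

```python
import numpy as np

np.random.seed(0)
n, m = 6, 1.0
phi = np.random.rand(n, n, n)       # indexed phi[k, j, i]
dx = dy = dz = np.full(n, 0.1)

# hoist invariants: square the grid spacings once, outside all loops
dx2, dy2, dz2 = dx ** 2, dy ** 2, dz ** 2

K = 0.0
for i in range(1, n - 1):
    # fetch slice views once per i, as suggested above
    phi_i, phi_im1, phi_ip1 = phi[:, :, i], phi[:, :, i - 1], phi[:, :, i + 1]
    for j in range(1, n - 1):
        for k in range(1, n - 1):
            p = phi_i[k, j]         # pull the centre value once per iteration
            K += p * (
                  (phi_im1[k, j] - 2.0 * p + phi_ip1[k, j]) / dx2[i]
                + (phi_i[k, j - 1] - 2.0 * p + phi_i[k, j + 1]) / dy2[j]
                + (phi_i[k - 1, j] - 2.0 * p + phi_i[k + 1, j]) / dz2[k]
            )
K *= -0.5 * m                       # constant factor applied once at the end
print(K)
```

The result matches the original per-element formula; only the order of the arithmetic changes.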


Re: [Numpy-discussion] [Matplotlib-users] Vectorization

2010-07-02 Thread Bruce Southey
On 07/02/2010 01:45 PM, Keith Goodman wrote:

Since things do not depend on previous results, can you, without thinking
too hard about it, just replace all your phi[] references with phi[].sum()?

Probably wrong, but hopefully you can figure out what I mean.

My reasoning is that, over the three loops,

K = 0.0
for i in numpy.arange(len(dx)-1):
    for j in numpy.arange(len(dy)-1):
        for k in numpy.arange(len(dz)-1):
            K += -0.5 * m * phi[k,j,i]

is just the sum of all the entries of phi except the last one along each 
axis, i.e.

K = -0.5 * m * phi[0:len(dz)-1, 0:len(dy)-1, 0:len(dx)-1].sum()

Similarly, for the remaining places you have to sum the appropriate sections of 
the phi array. So phi[k,j,i-1] becomes something like phi[:,:,0:i-1].sum(), 
except you have to address the division. Therefore you have to sum over the 
appropriate axis:

(phi[:,:,0:i-1].sum(axis=2) / dx[0:i-1]**2).sum()  # or
numpy.dot(phi[:,:,0:i-1].sum(axis=2), 1/(dx[0:i-1]**2))

Then I got confused when, say, i=0, because the reference phi[k,j,i-1] becomes 
phi[k,j,-1] - which is the last index of that axis. Is that what you meant to 
happen?


Bruce



Re: [Numpy-discussion] [Matplotlib-users] Vectorization

2010-07-02 Thread Geoffrey Ely
On Jul 2, 2010, at 11:33 AM, Benjamin Root wrote:
 I want to do the same for the calculation of the kinetic energy:
 <phi|p^2|phi>/2m. There is a laplacian in the volume integral which
 complicates things:

 K = 0.0
 for i in numpy.arange(len(dx)-1):
     for j in numpy.arange(len(dy)-1):
         for k in numpy.arange(len(dz)-1):
             K += -0.5 * m * phi[k,j,i] * (
                   (phi[k,j,i-1] - 2.0*phi[k,j,i] + phi[k,j,i+1]) / dx[i]**2
                 + (phi[k,j-1,i] - 2.0*phi[k,j,i] + phi[k,j+1,i]) / dy[j]**2
                 + (phi[k-1,j,i] - 2.0*phi[k,j,i] + phi[k+1,j,i]) / dz[k]**2
             )
 
 My question is, how would I vectorize such loops? I don't know how I would 
 manage the numpy.newaxis code-foo with neighbours dependency... Any idea?

How about:

K = -0.5 * m * (
phi[1:-1,1:-1,1:-1] * (
  np.diff(phi[1:-1,1:-1,:], 2, 2) / dx[None,None,1:-1]**2
+ np.diff(phi[1:-1,:,1:-1], 2, 1) / dy[None,1:-1,None]**2
+ np.diff(phi[:,1:-1,1:-1], 2, 0) / dz[1:-1,None,None]**2
)
).sum()

(Not tested)
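Since the expression above is marked as not tested, here is one way to check it numerically against the explicit loop on a small random grid (sizes and variable names here are arbitrary, chosen only for the check):

```python
import numpy as np

np.random.seed(1)
n, m = 7, 2.0
phi = np.random.rand(n, n, n)            # indexed phi[k, j, i]
dx, dy, dz = np.random.rand(3, n) + 1.0  # arbitrary positive grid spacings

# vectorized form from the post above
K_vec = -0.5 * m * (
    phi[1:-1, 1:-1, 1:-1] * (
          np.diff(phi[1:-1, 1:-1, :], 2, 2) / dx[None, None, 1:-1] ** 2
        + np.diff(phi[1:-1, :, 1:-1], 2, 1) / dy[None, 1:-1, None] ** 2
        + np.diff(phi[:, 1:-1, 1:-1], 2, 0) / dz[1:-1, None, None] ** 2
    )
).sum()

# explicit triple loop over the interior points, for comparison
K_loop = 0.0
for i in range(1, n - 1):
    for j in range(1, n - 1):
        for k in range(1, n - 1):
            K_loop += -0.5 * m * phi[k, j, i] * (
                  (phi[k, j, i - 1] - 2.0 * phi[k, j, i] + phi[k, j, i + 1]) / dx[i] ** 2
                + (phi[k, j - 1, i] - 2.0 * phi[k, j, i] + phi[k, j + 1, i]) / dy[j] ** 2
                + (phi[k - 1, j, i] - 2.0 * phi[k, j, i] + phi[k + 1, j, i]) / dz[k] ** 2
            )

print(abs(K_vec - K_loop))  # agrees to rounding error
```

Note the vectorized version covers interior points only (indices 1..n-2), which is also what avoids the wrap-around at index -1 mentioned earlier in the thread.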


[Numpy-discussion] cython and f2py

2010-07-02 Thread Geoffrey Ely
Hi All,

Sorry if this has been documented or discussed already, but my searches have 
come up short. Can someone please recommend a way to set up both Cython and 
Fortran extensions in a single package with numpy.distutils (or something 
else)? E.g.:

from numpy.distutils.core import setup, Extension
ext_modules = [
Extension( 'cext', ['cext.pyx'] ),
Extension( 'fext', ['fext.f90'] ),
]
setup(
name = 'test',
ext_modules = ext_modules,
)

Can numpy.distutils be directed to process *.pyx with Cython rather than Pyrex?

Is it kosher to call setup twice in the same script, once for Fortran and once 
for Cython using Cython.Distutils.build_ext, or would that do bad things?

I guess I could just pre-process the Cython stuff, and have distutils work with 
the generated C, but I don't like that as much.

I am using numpy version 1.4.0, but can update to the development version if 
that helps.

Thanks,
Geoff



Re: [Numpy-discussion] cython and f2py

2010-07-02 Thread Keith Goodman
On Fri, Jul 2, 2010 at 12:53 PM, Geoffrey Ely g...@usc.edu wrote:
 I guess I could just pre-process the Cython stuff, and have distutils work 
 with the generated C, but I don't like that as much.

That's what I do. I think it is easier for users (some might not have
cython). If the C extension doesn't compile, I fall back to the python
versions of the functions.

http://github.com/kwgoodman/la/blob/master/setup.py
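The fallback Keith mentions is usually an import-time try/except in the package itself. A generic sketch (the module names here are made up; `math` merely stands in for a pure-Python fallback module so the snippet runs anywhere):

```python
def load_impl():
    """Prefer the compiled extension, fall back to the pure-Python module."""
    try:
        # hypothetical Cython-generated extension; absent if compilation failed
        import _la_cext as impl
    except ImportError:
        import math as impl  # stand-in for the pure-Python fallback module
    return impl

impl = load_impl()
print(impl.__name__)
```

Callers then use the same names from `impl` either way, so a failed compile degrades performance rather than breaking the install.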


Re: [Numpy-discussion] cython and f2py

2010-07-02 Thread Matthew Brett
Hi,

 Can numpy.distutils be directed to process *.pyx with Cython rather than 
 Pyrex?

Yes, but at the moment I believe you have to monkey-patch
numpy.distutils: see the top of

http://github.com/matthew-brett/nipy/blob/master/setup.py

and generate_a_pyrex_source around line 289 of:

http://github.com/matthew-brett/nipy/blob/master/build_helpers.py

for how we've done it - there may be a better way - please post if you find it!

Best,

Matthew


[Numpy-discussion] PATCH: reference leaks for 'numpy.core._internal' module object

2010-07-02 Thread Lisandro Dalcin
The simple test below shows the issue.

import sys
import numpy as np
from numpy.core import _internal

def f(a = np.zeros(4)):
    a = np.zeros(4)
    b = memoryview(a)
    c = np.asarray(b)
    print sys.getrefcount(_internal)

while 1:
    f()

The patch is trivial (I've added a little extra, unrelated NULL check):

Index: numpy/core/src/multiarray/buffer.c
===================================================================
--- numpy/core/src/multiarray/buffer.c  (revision 8468)
+++ numpy/core/src/multiarray/buffer.c  (working copy)
@@ -747,9 +747,13 @@
     }
     str = PyUString_FromStringAndSize(buf, strlen(buf));
     free(buf);
+    if (str == NULL) {
+        return NULL;
+    }
     descr = (PyArray_Descr*)PyObject_CallMethod(
         _numpy_internal, "_dtype_from_pep3118", "O", str);
     Py_DECREF(str);
+    Py_DECREF(_numpy_internal);
     if (descr == NULL) {
         PyErr_Format(PyExc_ValueError,
             "'%s' is not a valid PEP 3118 buffer format string", buf);


PS: I think that such an implementation should at least handle the very
simple one/two-character formats (e.g., 'i', 'f', 'd', 'Zf', 'Zd', etc.)
without calling Python code... Of course, complaints without patches
should not be taken too seriously ;-)


-- 
Lisandro Dalcin
---
CIMEC (INTEC/CONICET-UNL)
Predio CONICET-Santa Fe
Colectora RN 168 Km 472, Paraje El Pozo
Tel: +54-342-4511594 (ext 1011)
Tel/Fax: +54-342-4511169