Re: [Numpy-discussion] NumPy 1.8.0rc2 release

2013-10-25 Thread Andrew Straw
Yeah, sorry for the confusion. I thought my email didn't go through. I
created a bug report on github instead. Now I find the email was indeed
sent. Sorry about that. Looking at the github activity (see
https://github.com/numpy/numpy/issues/3977 ), I think a fix is almost at
hand thanks to Julian Taylor.

Best,
Andrew


On Fri, Oct 25, 2013 at 7:52 PM, Nathaniel Smith  wrote:

> It's py3 only, see the discussion in #3977.
> On 25 Oct 2013 17:45, "Charles R Harris" 
> wrote:
>
>>
>>
>>
>> On Fri, Oct 25, 2013 at 7:07 AM, Andrew Straw 
>> wrote:
>>
>>> Hi,
>>>
>>> I found an unexpected difference between numpy 1.7.1 and 1.8.0rc2 with
>>> Python 3.3.2 on Ubuntu 12.04 (amd64). Here is the test program:
>>>
>>> import numpy as np
>>> print(np.__version__)
>>> K = np.array([[ 0.,  0.,  0.,  0.],
>>>   [-0.,  0.,  0.,  0.],
>>>   [ 0., -0.,  0.,  0.],
>>>   [ 0., -0.,  0.,  0.]])
>>> w, V = np.linalg.eigh(K)
>>> print('w')
>>> print(w)
>>>
>>> with numpy 1.7.1:
>>>
>>> 1.7.1
>>> w
>>> [-0. -0. -0.  0.]
>>>
>>> with numpy 1.8.0rc2:
>>>
>>> 1.8.0rc2
>>> w
>>> [ 0.  0.  0.  0.]
>>>
>>
>> This appears to have gotten fixed since rc2:
>>
>> In [1]: K = np.array([[ 0.,  0.,  0.,  0.],
>>   [-0.,  0.,  0.,  0.],
>>   [ 0., -0.,  0.,  0.],
>>   [ 0., -0.,  0.,  0.]])
>>
>> In [2]: eigh(K)
>> Out[2]:
>> (array([-0., -0., -0.,  0.]),
>>  array([[-0.78251031,  0.37104402,  0.00198815,  0.5   ],
>>[-0.29974269,  0.03728557,  0.81164285, -0.5   ],
>>[ 0.54246491,  0.46764373,  0.48686874,  0.5   ],
>>[-0.05969728, -0.80140218,  0.32278596,  0.5   ]]))
>>
>> In [3]: np.__version__
>> Out[3]: '1.8.0.dev-ced0a94'
>>
>> Could you try the current 1.8.x version on github and see if the problem
>> persists?
>>
>> Chuck
>>
>>
>> ___
>> NumPy-Discussion mailing list
>> NumPy-Discussion@scipy.org
>> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>>
>>


Re: [Numpy-discussion] NumPy 1.8.0rc2 release

2013-10-25 Thread Andrew Straw
Hi,

I found an unexpected difference between numpy 1.7.1 and 1.8.0rc2 with Python
3.3.2 on Ubuntu 12.04 (amd64). Here is the test program:

import numpy as np
print(np.__version__)
K = np.array([[ 0.,  0.,  0.,  0.],
  [-0.,  0.,  0.,  0.],
  [ 0., -0.,  0.,  0.],
  [ 0., -0.,  0.,  0.]])
w, V = np.linalg.eigh(K)
print('w')
print(w)

with numpy 1.7.1:

1.7.1
w
[-0. -0. -0.  0.]

with numpy 1.8.0rc2:

1.8.0rc2
w
[ 0.  0.  0.  0.]

Apologies if this is my mistake!
-Andrew
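For anyone reproducing this report: IEEE 754 defines -0.0 == 0.0 as true, so the two eigenvalue vectors above compare equal under ==, and np.signbit is needed to see the difference. A minimal check (a sketch, not part of the original report):

```python
import numpy as np

# The eigenvalues printed by 1.7.1 and 1.8.0rc2 differ only in the sign
# of zero, which a plain comparison cannot see.
w_old = np.array([-0., -0., -0., 0.])   # numpy 1.7.1 result
w_new = np.array([0., 0., 0., 0.])      # numpy 1.8.0rc2 result

print(np.array_equal(w_old, w_new))  # True: == treats -0.0 and 0.0 as equal
print(np.signbit(w_old))             # [ True  True  True False]
print(np.signbit(w_new))             # [False False False False]
```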


Re: [Numpy-discussion] Compiling for free on Windows32

2009-04-15 Thread Andrew Straw
Fadhley Salim wrote:
> Thomasm,
> 
> What I want is the current latest Numpy as a win32 .egg file for Python
> 2.4.4. I'm not bothered how I get there. We've been able to compile

> * Don't care how I make it as long as it works!

Are you aware that the reason numpy is not distributed as an .egg is so
that the installer can determine the correct architecture (no SSE, SSE,
SSE2) and install the appropriate files? This is not possible with an
.egg, so you'll either have to ship the no-SSE version or ensure that it
will be installed only on computers with the appropriate SIMD support.

-Andrew


Re: [Numpy-discussion] DVCS at PyCon

2009-04-12 Thread Andrew Straw
Eric Firing wrote:
> Sure enough, that is what I was looking for.  (gitweb doesn't seem to 
> have the annotate [or blame, in git-speak] option, or the graph.)
>   
gitweb does, but you have to turn it on by adding this to your gitweb.conf:

$feature{'blame'}{'default'} = [1];
$feature{'blame'}{'override'} = 1;

I also find pickaxe and snapshot useful:

$feature{'pickaxe'}{'default'} = [1];
$feature{'pickaxe'}{'override'} = 1;

$feature{'snapshot'}{'default'} = ['zip', 'tgz'];
$feature{'snapshot'}{'override'} = 1;

I don't know about the graph. (You mean the "gitk --all" kind of view? I
saw one JavaScript-y web app that did that, but it was slow and ugly for
any non-trivial repo.)


Re: [Numpy-discussion] DVCS at PyCon

2009-04-09 Thread Andrew Straw
Matthieu Brucher wrote:
>> One thing about git-svn is that this is not really needed if you just
>> use git and I installed git from source on many linuxes and clusters
>> and it just works, as it is just pure C. I usually just use git-svn on
>> my laptop/workstation, where I install the Debian/Ubuntu packages, and
>> I create the git repository, upload to github.com or somewhere else
>> and just work with the git repository.
>>
>> But I agree that if it installs git-svn and it doesn't just work, it's
>> a big problem.
> 
> I was trying out git against one of our internal svn
> repositories, just to get a feel for it :(

My opinion is that attempting to use git-svn to get a feel for git is
not a good idea. There's too much of svn's slowness involved, too much
pain in trying to learn git while also learning git-svn (which has
corner cases of its own that pure git doesn't), and there's no
bidirectional 1:1 mapping between branches (that I've found), which
eliminates a huge part of the joy of git -- cheap branches.

Better to start developing on a pure git project to get a feel. You
can't go wrong with sympy, for example. :)

-Andrew


Re: [Numpy-discussion] N-D array interface page is out of date

2009-03-08 Thread Andrew Straw
Gael Varoquaux wrote:
> On Sun, Mar 08, 2009 at 10:18:11PM -0700, Andrew Straw wrote:
>> OK, I now have a password (thanks Gaël), but I don't have edit
>> permissions on that page. So I'm attaching a patch against that page
>> source that incorporates the stuff that was on the old page that's not
>> in the new page.
> 
>> I'm happy to apply this myself if someone gives me edit permissions. I
>> wasn't able to check out all the ReST formatting with the online editor
>> because I don't have edit permissions.
> 
> You don't have permissions to the corresonding page in the doc wiki:
> http://docs.scipy.org/numpy/docs/numpy-docs/reference/arrays.interface.rst/
> ?
> 
> I am a bit lost, if this is the case.

OK, thanks for the pointer. Somehow I had navigated to a view of that
page that I could not edit. I have now uploaded my changes, including
bits that hadn't yet made it into the Sphinx-based documentation from
the original page on numpy.scipy.org.

-Andrew


Re: [Numpy-discussion] N-D array interface page is out of date

2009-03-08 Thread Andrew Straw
Andrew Straw wrote:
> Pauli Virtanen wrote:
>> Hi,
>>
>> Fri, 06 Mar 2009 09:16:33 -0800, Andrew Straw wrote:
>>> I have updated http://numpy.scipy.org/array_interface.shtml to have a
>>> giant warning first paragraph describing how that information is
>>> outdated. Additionally, I have updated http://numpy.scipy.org/ to point
>>> people to the buffer interface described in PEP 3118 and implemented in
>>> Python 2.6/3.0. Furthermore, I have suggested Cython has a way to write
>>> code for older Pythons that will automatically support the buffer
>>> interface in newer Pythons.
>>>
>>> If you have knowledge about these matters (Travis O. and Dag,
>>> especially), I'd appreciate it if you could read over the pages to
>>> ensure everything is actually correct.
>> I wonder if it would make sense to redirect the page here:
>>
>>  http://docs.scipy.org/doc/numpy/reference/arrays.interface.html
>>
>> so that it would be easier to edit etc. in the future?
>>
> 
> 
> Yes, great idea. I just updated the page to point to the page you linked
> (which I didn't know existed -- thanks for pointing it out).
> 
> Also, I have made several changes to arrays.interface.rst which I will
> upload once my password situation gets resolved.

OK, I now have a password (thanks Gaël), but I don't have edit
permissions on that page. So I'm attaching a patch against that page
source that incorporates the stuff that was on the old page that's not
in the new page.

I'm happy to apply this myself if someone gives me edit permissions. I
wasn't able to check out all the ReST formatting with the online editor
because I don't have edit permissions.

-Andrew
diff --git a/scipy-website/numpy-doc/arrays.interface.rst b/scipy-website/numpy-doc/arrays.interface.rst
index e3b12ce..4894dd6 100644
--- a/scipy-website/numpy-doc/arrays.interface.rst
+++ b/scipy-website/numpy-doc/arrays.interface.rst
@@ -206,6 +206,33 @@ array using only one attribute lookup and a well-defined C-structure.
must also not reallocate their memory if other objects are
referencing them.
 
+The PyArrayInterface structure is defined in ``numpy/ndarrayobject.h``
+as::
+
+  typedef struct {
+int two;  /* contains the integer 2 -- simple sanity check */
+int nd;   /* number of dimensions */
+char typekind;/* kind in array --- character code of typestr */
+int itemsize; /* size of each element */
+int flags;/* flags indicating how the data should be interpreted */
+  /*   must set ARR_HAS_DESCR bit to validate descr */
+Py_intptr_t *shape;   /* A length-nd array of shape information */
+Py_intptr_t *strides; /* A length-nd array of stride information */
+void *data;   /* A pointer to the first element of the array */
+PyObject *descr;  /* NULL or data-description (same as descr key
+  of __array_interface__) -- must set ARR_HAS_DESCR
+  flag or this will be ignored. */
+  } PyArrayInterface;
+
+The flags member may consist of 5 bits showing how the data should be
+interpreted and one bit showing how the Interface should be
+interpreted.  The data-bits are :const:`CONTIGUOUS` (0x1),
+:const:`FORTRAN` (0x2), :const:`ALIGNED` (0x100), :const:`NOTSWAPPED`
+(0x200), and :const:`WRITEABLE` (0x400).  A final flag
+:const:`ARR_HAS_DESCR` (0x800) indicates whether or not this structure
+has the arrdescr field.  The field should not be accessed unless this
+flag is present.
+
 .. admonition:: New since June 16, 2006:
 
In the past most implementations used the "desc" member of the
@@ -216,3 +243,92 @@ array using only one attribute lookup and a well-defined C-structure.
reference to the object when the :ctype:`PyCObject` is created using
:ctype:`PyCObject_FromVoidPtrAndDesc`.
 
+
+Type description examples
+=========================
+
+For clarity it is useful to provide some examples of the type
+description and corresponding :data:`__array_interface__` 'descr'
+entries.  Thanks to Scott Gilbert for these examples:
+
+In every case, the 'descr' key is optional, but of course provides
+more information which may be important for various applications::
+
+ * Float data
+ typestr == '>f4'
+ descr == [('','>f4')]
+
+ * Complex double
+ typestr == '>c8'
+ descr == [('real','>f4'), ('imag','>f4')]
+
+ * RGB Pixel data
+ typestr == '|V3'
+ descr == [('r','|u1'), ('g','|u1'), ('b','|u1')]
+
+ * Mixed endian (weird b

[Numpy-discussion] numpy.scipy.org

2009-03-08 Thread Andrew Straw
Hi all,

I have been doing some editing of http://numpy.scipy.org . In general,
however, lots of this page is redundant and outdated compared to lots of
other documentation that has now sprung up. Shall we kill this page off,
redirect it to another page, or continue updating it? (For this latter
option, patches are welcome.)

-Andrew


Re: [Numpy-discussion] N-D array interface page is out of date

2009-03-08 Thread Andrew Straw
Pauli Virtanen wrote:
> Hi,
> 
> Fri, 06 Mar 2009 09:16:33 -0800, Andrew Straw wrote:
>> I have updated http://numpy.scipy.org/array_interface.shtml to have a
>> giant warning first paragraph describing how that information is
>> outdated. Additionally, I have updated http://numpy.scipy.org/ to point
>> people to the buffer interface described in PEP 3118 and implemented in
>> Python 2.6/3.0. Furthermore, I have suggested Cython has a way to write
>> code for older Pythons that will automatically support the buffer
>> interface in newer Pythons.
>>
>> If you have knowledge about these matters (Travis O. and Dag,
>> especially), I'd appreciate it if you could read over the pages to
>> ensure everything is actually correct.
> 
> I wonder if it would make sense to redirect the page here:
> 
>   http://docs.scipy.org/doc/numpy/reference/arrays.interface.html
> 
> so that it would be easier to edit etc. in the future?
> 


Yes, great idea. I just updated the page to point to the page you linked
(which I didn't know existed -- thanks for pointing it out).

Also, I have made several changes to arrays.interface.rst which I will
upload once my password situation gets resolved.


[Numpy-discussion] numpy documentation editor - retrieve password?

2009-03-08 Thread Andrew Straw
Hi,

I created a login for the numpy documentation editor but cannot remember
my password. Would it be possible to have it sent to me or a new one
generated? It would be great to have a button on the website so that I
could do this myself, but if that's too much pain, my username is
AndrewStraw.

Thanks,
Andrew


Re: [Numpy-discussion] N-D array interface page is out of date

2009-03-06 Thread Andrew Straw
Hi,

I have updated http://numpy.scipy.org/array_interface.shtml to have a 
giant warning first paragraph describing how that information is 
outdated. Additionally, I have updated http://numpy.scipy.org/ to point 
people to the buffer interface described in PEP 3118 and implemented in 
Python 2.6/3.0. Furthermore, I have noted that Cython offers a way to write 
code for older Pythons that will automatically support the buffer 
interface in newer Pythons.

If you have knowledge about these matters (Travis O. and Dag, 
especially), I'd appreciate it if you could read over the pages to 
ensure everything is actually correct.

Thanks,
Andrew

Stéfan van der Walt wrote:
> 2009/2/3 Andrew Straw :
>   
>> Can someone with appropriate permissions fix the page or give me the
>> appropriate permissions so I can do it? I think even deleting the page
>> is better than keeping it as-is.
>> 
>
> Who all has editing access to this page?  Is it hosted on scipy.org?
>
> Stéfan



Re: [Numpy-discussion] Cython numerical syntax revisited

2009-03-04 Thread Andrew Straw
Dag Sverre Seljebotn wrote:
> This is NOT yet discussed on the Cython list; I wanted to check with 
> more numerical users to see if the issue should even be brought up there.
> 
> The idea behind the current syntax was to keep things as close as 
> possible to Python/NumPy, and only provide some "hints" to Cython for 
> optimization. My problem with this now is that a) it's too easy to get 
> non-optimized code without a warning by letting in untyped indices, b) I 
> think the whole thing is a bit too "magic" and that it is too unclear 
> what is going on to newcomers (though I'm guessing there).

These may be issues, but I think keeping "cython -a my_module.pyx" in 
one's development cycle and inspecting the output will lead to great 
enlightenment on the part of the Cython user. Perhaps this should be 
advertised more prominently? I always do this with any Cython-generated 
code, and it works wonders.

> My proposal: Introduce an explicit "buffer syntax":
> 
> arr = np.zeros(..)
> cdef int[:,:] buf = arr # 2D buffer

My initial reaction is that it seems to be a second implementation of 
buffer interaction in Cython, and therefore yet another thing to keep in 
mind. It's unclear to me how different it would be from the 
"traditional" Cython numpy ndarray behavior, and how the behavior of the 
two approaches might differ, perhaps in subtle ways. So that's a 
disadvantage from my perspective. I agree that some of your ideas are 
advantages, however. Also, it seems it would allow one to (more easily) 
interact with buffer objects in sophisticated ways without needing the 
GIL, which is another advantage.

Could some or all of this be added to the current numpy buffer 
implementation, or does it really need the new syntax?

Also, is there anything possible with buffer objects that would be 
limited by the choice of syntax you propose? I imagine this might not 
work with structured data types (then again, it might...).


Re: [Numpy-discussion] is there a faster way to get a buffer interface than ndarray.tostring()?

2009-02-24 Thread Andrew Straw
Given what you're doing, may I also suggest having a look at
http://code.astraw.com/projects/motmot/wxglvideo.html

-Andrew

Chris Colbert wrote:
> As an update for any future googlers: 
> 
> the problem was with revpixels = pixeldata[::-1,:,::-1], which apparently
> returns an array that is non-contiguous in memory. What Lisandro
> suggested worked: pixeldata[::-1,:,::-1].copy() returns a contiguous
> array object which natively implements a single-segment buffer
> interface, i.e. no buffer(revpixels) is needed; just revpixels.
> 
> array.copy() is also 50% faster than array.tostring() on my machine. 
> 
> Chris
> 
> On Tue, Feb 24, 2009 at 9:27 PM, Chris Colbert  > wrote:
> 
> thanks for both answers! 
> 
> 
> Lisandro, you're right, I should have declared the array outside the
> loop. Thanks for catching that!
> 
> Robert, as always, thanks for the answer. Quick and to the point!
> You've helped me more than once on the enthought list :)
> 
> 
> 
> On Tue, Feb 24, 2009 at 8:06 PM, Lisandro Dalcin  > wrote:
> 
> When you do pixeldata[::-1,:,::-1], you just got a new array with
> different strides, but now non-contiguous... So I believe you really
> need a fresh copy of the data... tostring() copies, but could be
> slow... try to use
> 
> revpixels = pixeldata[::-1,:,::-1].copy()
> 
> ...
> 
> rgbBMP = wx.BitmapFromBuffer(640, 480, buffer(revpixels))
> 
> 
> Perhaps you could optimize this by declaring revpixels outside the
> loop, and then inside the loop doing
> 
> revpixels[...] = pixeldata[::-1,:,::-1]
> 
> 
> This way you will save the mem allocation of revpixels at each
> step of the loop.
> 
> 
> 
> 
> 
> On Tue, Feb 24, 2009 at 9:15 PM, Chris Colbert
> mailto:sccolb...@gmail.com>> wrote:
> > Hi all,
> >
> >  I'm new to mailing list and relatively new (~1 year) to
> python/numpy. I
> > would appreciate any insight any of you may have here. The
> last 8 hours of
> > digging through the docs has left me, finally, stuck.
> >
> > I am making a wxPython program that includes webcam
> functionality. The
> > script below just brings up a simple display window that shows
> the webcam
> > feed. The problem I am running into is in the
> WebCamWorker.run() method.
> > This method reads the raw pixel data (in string form) from the
> webcam
> > buffer. This data (thank you microsoft!) comes in BGR and
> bottom-to-top. I
> > feed this buffer to the array constructor and then swap the
> pixel order to
> > RGB and top-to-bottom. This all happens very fast, ~1ms on a
> Quad Core, and
> > the majority of that time is spent constructing the array (the
> numpy pixel
> > swapping is on the order of E-5s !). The tostring() method,
> however, takes a
> > whopping 10ms to execute. Unfortunately, the
> wx.BitmapFromBuffer needs a
> > single segment buffer interface as an argument. Is there a way
> to expose my
> > numpy array as a buffer to cut down on this conversion time?
> >
> > I have tried sending the array to the PIL Image.fromarray(),
> but that
> > eventually requires me to do a Image.tostring() anyway, which
> negates any
> > benefit.
> >
> > Thanks in advance for the help!
> >
> > S. Chris Colbert
> > Rehabilitation Robotics Laboratory
> > University of South Florida
> >
> >
> > ### Code ###
> >
> > import VideoCapture
> > import wx
> > import time
> > import threading
> > import numpy as np
> > import Image
> >
> >
> > class WebCamWorker(threading.Thread):
> > _abort = 0
> >
> > def __init__(self, parent):
> > super(WebCamWorker, self).__init__()
> >
> > self._parent = parent
> > self.panel = wx.Panel(parent)
> >
> > def run(self):
> >
> > while not self._abort:
> >
> > #numpy arrays reverse pixel data that comes in
> from CCD as BGR
> > and bottom to top
> >
> > pixeldata = np.ndarray((480,640,3),
> > buffer=webcam.getBuffer()[0], dtype='u1')
> > revpixels = pixeldata[::-1,:,::-1].tostring() 
> #tostring is
> > an order of magnitude slower than the entire array
> manipulation. need a
> > faster method.
> >
> > rgbBMP = wx.BitmapFromBuffer(640, 480, revpixels)
> >   
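A sketch of the contiguity point discussed in this thread (the wx call itself is omitted; the shapes match the webcam example above): a reversed slice is a strided view, so it cannot expose a single-segment buffer, while .copy() materializes a fresh contiguous array that can.

```python
import numpy as np

# Stand-in for the 480x640 BGR frame read from the webcam buffer.
pixeldata = np.zeros((480, 640, 3), dtype='u1')

# Reversing rows and channels yields a view with negative strides,
# which is not C-contiguous and has no single-segment buffer.
view = pixeldata[::-1, :, ::-1]
print(view.flags['C_CONTIGUOUS'])       # False

# .copy() allocates a new C-contiguous array; something like
# wx.BitmapFromBuffer(640, 480, revpixels) can then consume it directly.
revpixels = view.copy()
print(revpixels.flags['C_CONTIGUOUS'])  # True
```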

Re: [Numpy-discussion] small suggestion for numpy.testing utils

2009-02-22 Thread Andrew Straw
Darren,

What's the difference between asanyarray(y) and array(y, copy=False, 
subok=True)? I thought asanyarray would also do what you want.

-Andrew
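A quick empirical check (a sketch; MyArray is a hypothetical subclass used only for illustration) suggests the two spellings do behave the same for subclass pass-through, whereas plain asarray strips the subclass:

```python
import numpy as np

class MyArray(np.ndarray):
    """Trivial ndarray subclass for illustration only."""
    pass

a = np.arange(3).view(MyArray)

# Both spellings pass ndarray subclasses through untouched.
print(type(np.asanyarray(a)) is MyArray)                     # True
print(type(np.array(a, copy=False, subok=True)) is MyArray)  # True

# asarray (subok=False) always returns a base ndarray.
print(type(np.asarray(a)) is np.ndarray)                     # True
```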

Darren Dale wrote:
> On Sun, Feb 22, 2009 at 3:22 PM, Darren Dale  > wrote:
>
> On Sun, Feb 22, 2009 at 3:17 PM, Darren Dale  > wrote:
>
> Hello,
>
> I am using numpy's assert_array_equal and
> assert_array_almost_equal to unit test my physical quantities
> package. I made a single minor change to assert_array_compare
> that I think might make these functions more useful to ndarray
> subclasses, and thought maybe they could be useful to numpy
> itself. I tried applying this diff to numpy and running the
> test suite, and instead of 9 known failures I got 1 known
> failure, 11 skips, 2 errors and 2 failures. Perhaps it is
> possible that by not forcing the input arrays to be ndarray
> instances, some additional numpy features are exposed.
>
> Thanks,
> Darren
>
> $ svn diff
> Index: numpy/testing/utils.py
> ===================================================================
> --- numpy/testing/utils.py  (revision 6370)
> +++ numpy/testing/utils.py  (working copy)
> @@ -240,9 +240,9 @@
>
>  def assert_array_compare(comparison, x, y, err_msg='',
> verbose=True,
>   header=''):
> -from numpy.core import asarray, isnan, any
> -x = asarray(x)
> -y = asarray(y)
> +from numpy.core import array, isnan, any
> +x = array(x, copy=False, subok=True)
> +y = array(y, copy=False, subok=True)
>
>  def isnumber(x):
>  return x.dtype.char in '?bhilqpBHILQPfdgFDG'
>
>
> Actually, my svn checkout was not up to date. With this patch
> applied, I get 1 known failure and 11 skips.
>
>
> I just double checked and I think I get the same results running the 
> svn 6456 test suite with and without this patch applied. I tried 
> posting an enhancement request at the trac website, but I can't file 
> the ticket because I get "500 Internal Server Error", so I'm posting 
> it here.
> 
>



Re: [Numpy-discussion] parallel compilation of numpy

2009-02-18 Thread Andrew Straw
David Cournapeau wrote:
> On Thu, Feb 19, 2009 at 3:24 PM, Andrew Straw  wrote:
>   
>> It's an interesting idea to build Python package distributions without
>> distutils. For pure Python installables, if all you seek is to be
>> better than distutils, the bar seems fairly low.
>> 
>
> :) Being better than distutils is not difficult, indeed - that is if
> you don't care about backward compatibility with distutils (which I
> don't personally - since distutils is implementation defined, I don't
> see any way to be backward compatible and be a significant improvement
> at the same time).
>   
Maybe if you need a level of backward compatibility (and really, to 
gain a decent audience for this idea, I think you do need some level of 
backward compatibility) the new tool could emit setup.py files for 
consumption by distutils as a fallback plan. I could imagine 
auto-generating a setup.py is easier than emulating distutils, 
particularly if your concept starts simple. Furthermore, if you're not 
opposed to dropping in your own distutils monkeypatches, like lots of 
other packages, you probably could do anything you wanted. For example, 
bypassing the build_ext command and injecting the built products into 
the distutils install command.

This monkeypatching idea is not even so unpalatable if one remembers 
that monkeypatching, which is quite common, makes it literally 
impossible to emulate distutils without being distutils.
> I will refrain from speaking about setuptools :) But the above problem
> is the same for distutils and setuptools, and exactly the fundamental
> issue. If distutils had been conceived as a set of loosely coupled modules
> for tools, build, and distribution, it would have been fixable. In the
> case of setuptools, it was a conscious decision to force setuptools on
> the python world,
This reminds me of Linus's criticism of svn: its goal is to be a better cvs. 
Said dripping with incredulity due to his perception of the fatal flaws 
of CVS. Well, I think (parts of) setuptools are better than distutils, 
but by being distutils+, it will always share the same flawed genetic 
material...


Re: [Numpy-discussion] parallel compilation of numpy

2009-02-18 Thread Andrew Straw
David Cournapeau wrote:
>> * Integration with setuptools and eggs, which enables things like
>> namespace packages.
>>   
> 
> This is not. eggs are not specified, and totally implementation defined.
> I tried some time ago to  add an egg builder to scons, but I gave up.
> And I don't think you can reuse the setuptools code, as everything is
> coupled.

It's an interesting idea to build Python package distributions without 
distutils. For pure Python installables, if all you seek is to be
better than distutils, the bar seems fairly low. For compiled stuff, it still 
doesn't seem too bad. Of course, it is easy to say this without having 
tried...

So, what do you mean by an "egg", in the context of it being hard to 
produce? An .egg zip file, an .egg directory, and/or a normal distutils 
package with an .egg-info/ sibling? Since .egg-info/ is now part of 
distutils, this should now be specified... or is it?

[This, though, does point out a conceptual problem with setuptools for 
me -- it does a zillion things some of which I like a lot (e.g. 
installing console scripts and gui scripts, a simple plugin 
architecture) and others I don't care about as long as they don't break 
things for me (e.g. installing multiple version of packages 
side-by-side, a problem which is much more sanely solved by setting 
PYTHONPATH, or its sophisticated cousin, virtualenv). And all of these 
things go by one name "setuptools" and sometimes even "eggs", even 
though people often use those words to discuss totally different 
features. Hence my question above.]

-Andrew


Re: [Numpy-discussion] [SciPy-user] Numpy 1.2.1 and Scipy 0.7.0; Ubuntu packages

2009-02-12 Thread Andrew Straw
OK, I think you're concerned about compatibility of Python extensions
using fortran. We don't use any (that I know of), so I'm going to stop
worrying about this and upload .debs from your .dsc (or very close) to
my repository...

...except for one last question: If Hardy uses the g77 ABI but I'm
building scipy with gfortran, shouldn't there be an ABI issue with my
ATLAS? Shouldn't I get lots of test failures with scipy? I don't.

David Cournapeau wrote:
> My main rationale to provide PPA is to avoid the never-ending queue of
> emails about missing symbols, etc... because I am tired of it, because
> it gives a bad mouth taste to users, because I would like to deal with
> more interesting issues.
>   
Well, I appreciate you doing that. Packaging is a thankless job... and
when everything works on your own computer, it's hard to work up the
motivation to make it work on others'.
>> To understand your statement about identical, I will operationally
>> define "identical" for .debs to mean that they were built from the same
>> .dsc.
>> 
>
> I has an even larger definition of identical: same control/rules/patches
> is identical for me.
>   
This is the only other point I want to make: if you're building from the
same .dsc, it means you're using the same control/rules/patches. (That's
why I brought up the checksums and the signatures.)



Re: [Numpy-discussion] [SciPy-user] Numpy 1.2.1 and Scipy 0.7.0; Ubuntu packages

2009-02-12 Thread Andrew Straw
David Cournapeau wrote:
> Andrew Straw wrote:
>   
>> Fernando Perez wrote:
>>   
>> 
>>> On Wed, Feb 11, 2009 at 6:17 PM, David Cournapeau  
>>> wrote:
>>>
>>> 
>>>   
>>>> Unfortunately, it does require some work, because hardy uses g77
>>>> instead of gfortran, so the source package has to be different (once
>>>> hardy is done, all the one below would be easy, though). I am not sure
>>>> how to do that with PPA (the doc is not great).
>>>>   
>>>> 
>>> OK, thanks for the info.  This is already very useful.
>>> 
>>>   
>> What exactly is the expected problem and how would I verify that I'm not
>> getting hit by it?
>>   
>> 
>
> I want to follow as closely as possible the official debian/ubuntu
> packages. Ideally, any package produced on the PPA superseded by an
> official package (from 1.2.1-1~ppaN to 1.2.1-1) should be identical to
> the superseding package. Hardy default fortran ABI is g77, not gfortran,
> so I have to use g77 for hardy - and the ppa package, limited to
> intrepid, use gfortran (since intrepid ABI is gfortran's).
>   
(Warning: this email goes rather deep into packaging details. Believe
it or not, I'm not discussing the details of Debian packaging for fun;
my questions have practical importance to me -- I don't want to break
all my lab's scipy installations. :)


This doesn't make sense to me. I built the .deb on a clean, minimal
sbuild (a chroot with only a very few basic packages installed, somewhat
mimicking Ubuntu's PPA builder). It built from your unmodified .dsc,
which auto-downloads the declared dependencies (and nothing else). It
passes the tests. To be very explicit -- I didn't specify to use g77 at
any point. (As implied by my previous statement of using your unmodified
.dsc, I used only the debian/rules and debian/control in your package.)
To understand your statement about identical, I will operationally
define "identical" for .debs to mean that they were built from the same
.dsc. Of course, in the case you describe above, it can't be _exactly_
the same .dsc because the version numbers in debian/changelog must
change and consequently so must the checksums and GPG signature in the
.dsc file, and presumably a different person will sign it. Also, there
will be timestamp differences and such for two .debs built from the
exact same .dsc, but we can ignore those. In this case, I don't see why
an "official" package, which meets this operational definition of
identical, wouldn't work on Hardy, as it would be built from a nearly
identical .dsc in a nearly identical clean build environment. (Of
course, there will never be an official package of this for Hardy, but
that's not the point.)

> In your case, you built a package which uses gfortran ABI on Hardy - it
> works, but is not really acceptable for an official package - and thus,
> when an upgrade from ppa to official happens, you won't get the same
> package.
Why is it "not really acceptable"? As long as it builds and works and
doesn't break anything, why would Ubuntu maintainers care if it uses the
gfortran ABI?

Also, in practical terms, what upgrade? A) Hardy will not upgrade
python-scipy. It's against policy for a released distribution to upgrade
software without a security reason. B) Imagining for a moment that there
would be an upgrade, why do you think it would break?
>  In the rpm world, you can use conditionals on distribution
> version/type in the spec file (which is the control + changelog +
> rules in one file), but AFAIK, you can't do that with debian, or at
> least I have not found the relevant doc.
>   
I don't understand what you're saying. My understanding is that at the
beginning of each distribution (Hardy, Intrepid, Lenny), the maintainers
decide on a C++ (and I guess Fortran, but I'm not sure) ABI and fix the
toolchain to build this ABI. From then on, everything is built with this
ABI. And the point of a signed .dsc file and a clean sbuild/pbuilder is
that any .deb that gets built will be contingent on the files in
debian/* because that's cryptographically signed in the .dsc file. So,
if you trust the archive master and his computer (by trusting his keys
in your apt keyring), you trust that the .deb was built from the .dsc.
And the .dsc is signed by the maintainer. So there's a cryptographic
chain of trust to those control + changelog + rules files.

I'm still not sure whether I'm missing your worry, though...

Thanks,
Andrew
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] [SciPy-user] Numpy 1.2.1 and Scipy 0.7.0; Ubuntu packages

2009-02-12 Thread Andrew Straw
Fernando Perez wrote:
> On Wed, Feb 11, 2009 at 6:17 PM, David Cournapeau  wrote:
> 
>> Unfortunately, it does require some work, because hardy uses g77
>> instead of gfortran, so the source package has to be different (once
>> hardy is done, all the one below would be easy, though). I am not sure
>> how to do that with PPA (the doc is not great).
> 
> OK, thanks for the info.  This is already very useful.

What exactly is the expected problem and how would I verify that I'm not
getting hit by it?

The context is that I have built your packages on Hardy and got no FTBFS
errors in a clean sbuild machine and was about to upload this to my
lab's repo but saw your email. (Also, I have been using scipy
0.7.0rc2-ish apparently descended from the same Debian source package as
yours for a couple weeks and have noted no problems of late.) "nosetests
scipy" on an amd64 with lots of installed packages shows:

Ran 3211 tests in 356.572s

FAILED (SKIP=13, errors=9)

Where the errors are all known failures.


The same test on the i386 sbuild shows:

Ran 3211 tests in 312.464s

FAILED (SKIP=16, errors=204)

Most of the failures appear to be from weave/scxx and I suspect have to
do with some package not being installed -- if that's true, it's
probably a packaging bug and not a scipy bug.


Re: [Numpy-discussion] N-D array interface page is out of date

2009-02-03 Thread Andrew Straw
Regarding http://numpy.scipy.org/array_interface.shtml :

I just noticed that this out of date page is now featuring in recent
discussions about the future of Numpy in Ubuntu:
https://bugs.launchpad.net/ubuntu/+source/python-numpy/+bug/309215

Can someone with appropriate permissions fix the page or give me the
appropriate permissions so I can do it? I think even deleting the page
is better than keeping it as-is.

-Andrew

Andrew Straw wrote:
> Hi, I just noticed that the N-D array interface page is outdated and
> doesn't mention the buffer interface that is standard with Python 2.6
> and Python 3.0:
> 
> http://numpy.scipy.org/array_interface.shtml
> 
> This page is linked to from http://numpy.scipy.org/
> 
> I suggest, at the minimum, modifying the page with really annoying
> blinking red letters at the top (or other suitable warning) that this is
> deprecated and that people should use
> http://www.python.org/dev/peps/pep-3118/ instead.


[Numpy-discussion] N-D array interface page is out of date

2009-01-22 Thread Andrew Straw
Hi, I just noticed that the N-D array interface page is outdated and
doesn't mention the buffer interface that is standard with Python 2.6
and Python 3.0:

http://numpy.scipy.org/array_interface.shtml

This page is linked to from http://numpy.scipy.org/

I suggest, at the minimum, modifying the page with really annoying
blinking red letters at the top (or other suitable warning) that this is
deprecated and that people should use
http://www.python.org/dev/peps/pep-3118/ instead.


Re: [Numpy-discussion] Singular Matrix problem with Matplitlib in Numpy (Windows - AMD64)

2009-01-16 Thread Andrew Straw
John Hunter wrote:
> Andrew, since you are the original author of the isnan port, could you
> patch the branch and the trunk to take care of this?

Done in r6791 and r6792.

Sorry for the trouble.

Now I just hope we don't get a problem with "long long", although now if
_ISOC99_SOURCE is defined, we'll preferentially use "int64_t" out of
<stdint.h>, so I should think this is more portable on sane platforms.

This is one of many reasons why I stick to Python...
-Andrew

> 
> JDH
> 
> On Fri, Jan 16, 2009 at 8:07 AM, George  wrote:
>> Hello.
>>
>> I am terribly sorry. I was mistaken last night. I had the latest Matplotlib
>> version 0.98.5.2 and I thought the bug was fixed but it hasn't. Let me 
>> explain.
>>
>> In the file MPL_isnan.h line 26 there is a declaration:
>>
>> typedef long int MPL_Int64
>>
>> This is fine for Linux 64-bit, but NOT for Windows XP 64-bit! For Windows the
>> declaration should be:
>>
>> typedef long long MPL_Int64
>>
>> This bug has caused me a LOT of late nights and last night was one of
>> them. The declaration is correct for Linux 64-bit and I guess Matplotlib
>> was developed on Linux because of this declaration. That is also why I
>> thought the bug was fixed but this morning I realised that I was looking
>> at the wrong console.
>>
>> So, in summary: for Matplotlib 0.98.5.2 and Numpy 1.2.1 to work without any
>> problems -- that is, compiling and using Numpy and Matplotlib on Windows XP
>> 64-bit using the AMD 64-bit compile environment -- change line 26 in the file
>> MPL_isnan.h from long int to long long.
>>
>> I also previously suggested switching MKL and ACML etc. but with this change
>> everything is fine. One can choose any math library and it works.
>>
>> Writing a small test application using sizeof on different platforms
>> highlights the problem.
>>
>> Thanks.
>>
>> George.
>>
>>


Re: [Numpy-discussion] Efficient removal of duplicates

2008-12-15 Thread Andrew Straw
Hanno Klemm wrote:
> Hi,
> 
> I have the following problem: I have a relatively long array of points
> [(x0,y0), (x1,y1), ...]. Apparently, I have some duplicate entries, which
> prevents the Delaunay triangulation algorithm from completing its task.
> 
> Question, is there an efficent way, of getting rid of the duplicate
> entries?
> All I can think of involves loops. 
> 
> Thanks and regards,
> Hanno
> 
> 

One idea is to create a view of the original array with a shape of (N,)
and elements with a dtype that encompasses both xn and yn. Then use
numpy.unique() to find the unique entries, and create a view of that
array with your original dtype.

-Andrew
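A sketch of that idea, assuming a plain (N, 2) float array of points (the field names here are arbitrary):

```python
import numpy as np

# Points with a duplicate row, as in the situation described above.
pts = np.array([[0.0, 1.0],
                [2.0, 3.0],
                [0.0, 1.0],
                [4.0, 5.0]])

# View each (x, y) row as one structured element so np.unique()
# compares whole rows at once, then view the result back as floats.
row_dtype = [('x', pts.dtype), ('y', pts.dtype)]
flat = pts.view(row_dtype).ravel()
unique_pts = np.unique(flat).view(pts.dtype).reshape(-1, 2)
print(unique_pts)  # three unique points remain
```

Note that np.unique() sorts its input, so duplicates are removed but the original point order is not preserved.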


Re: [Numpy-discussion] Python equivalent to Matlab control systems/ LTI/state space model manipulation tools

2008-10-14 Thread Andrew Straw
Ryan Krauss has been working on something, although I have not had a
chance to try it.

http://www.siue.edu/~rkrauss/python_intro.html

Scott Askey wrote:
> Where is a good place to look for python functions similar to Matlab's
> ss, tf, ss2tf as used for transforming a linear time invariant (LTI) model
> into a state space model.
> 
> V/R
> 
> Scott
> 
> http://www.mathworks.com/access/helpdesk/help/toolbox/control/
> 
> 
>   


[Numpy-discussion] test results of 1.2.0rc1

2008-09-04 Thread Andrew Straw
Hi, with numpy 1.2.0rc1 running 'python -c "import numpy; numpy.test()"'
on my Ubuntu Hardy amd64 machine results in 1721 tests being run and 1
skipped. So far, so good.

However, if I run numpy.test(10,10,all=True), I get 1846 tests with
the message "FAILED (SKIP=1, errors=8, failures=68)". Furthermore, there
are several matplotlib windows that pop up, many of which are
non-reassuringly blank: Bartlett window frequency response (twice -- I
guess the 2nd is actually for the Blackman window), Hamming window
frequency response, Kaiser window, Kaiser window frequency response,
sinc function. Additionally, the linspace and logspace tests each
generate a plot with green and blue dots at Y values of 0.0 and 0.5,
but it would be nice to have an axes title.

Should I be concerned that there are so many errors and failures with
the numpy test suite? Or am I just running it with unintended settings?
If these tests should pass, I will attempt to find time to generate bug
reports for them, although I don't think there's anything particularly
weird about my setup.

-Andrew


Re: [Numpy-discussion] Advice on converting iterator into array efficiently

2008-08-28 Thread Andrew Straw
Alan Jackson wrote:
> Looking for advice on a good way to handle this problem.
>
> I'm dealing with large tables (Gigabyte large). I would like to 
> efficiently subset values from one column based on the values in
> another column, and get arrays out of the operation. For example,
> say I have 2 columns, "energy" and "collection". Collection is
> basically an index that flags values that go together, so all the
> energy values with a collection value of 18 belong together. I'd
> like to be able to set up an iterator on collection that would
> hand me an array of energy on each iteration :
>
> if table is all my data, then something like
>
> for c in table['collection'] :
> e = c['energy']
> ... do array operations on e
>
> I've been playing with pytables, and they help, but I can't quite
> seem to get there. I can get an iterator for energy within a collection,
> but I can't figure out an efficient way to get an array out.
>
> What I have so far is 
>
> for c in np.unique(table.col('collection')) :
>     rows = table.where('collection == c')
>     for row in rows :
>         print c, ' : ', row['energy']
>
> but I really want to convert rows['energy'] to an array.
>
> I've thought about building a nasty set of pointers and whatnot -
> I did it once in perl - but I'm hoping to avoid that.
>
>   

I do stuff like this all the time:

t = table[:] # convert the pytables table to a numpy structured array
collections = np.unique(t['collection'])
for collection in collections:
    cond = t['collection'] == collection
    energy_this_collection = t['energy'][cond]

HTH,
Andrew
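A self-contained version of the same pattern, with a small synthetic structured array standing in for the pytables table (the values are made up for illustration):

```python
import numpy as np

# Synthetic stand-in for t = table[:], with the two columns from the
# question: a "collection" index and an "energy" value per row.
t = np.array([(18, 1.0), (18, 2.5), (19, 0.5), (18, 3.0), (19, 1.5)],
             dtype=[('collection', np.int64), ('energy', np.float64)])

for collection in np.unique(t['collection']):
    cond = t['collection'] == collection
    energy_this_collection = t['energy'][cond]  # a real ndarray, not an iterator
    print(collection, energy_this_collection)
```

Each iteration hands you a contiguous array of energies for one collection, ready for vectorized operations.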


Re: [Numpy-discussion] rc1 update

2008-08-28 Thread Andrew Straw
Jarrod Millman wrote:
> On Thu, Aug 28, 2008 at 7:48 AM, Travis E. Oliphant
> <[EMAIL PROTECTED]> wrote:
>   
>> +1 on what Andrew said.
>> 
>
> I don't really care that much, but I do think API is better than
> FEATURE.  I would think that there may be times when we change the API
> but not the features (e.g., renaming something).  I won't bring this
> up again, so if no one else finds this compelling consider it dropped.
>
>   
As I said, I'm +0 on that aspect. So I guess that makes Travis +0, too. ;)


Re: [Numpy-discussion] C-API change for 1.2

2008-08-19 Thread Andrew Straw
Andrew Straw wrote:
> Robert Kern wrote:
>> On Sat, Aug 16, 2008 at 04:34, Jon Wright <[EMAIL PROTECTED]> wrote:
>>   
>>> Travis, Stéfan,
>>>
>>> I missed Travis mail previously. Are you *really* sure you want force
>>> all C code which uses numpy arrays to be recompiled? If you mean that
>>> all your matplotlib/PIL/pyopengl/etc users are going to have to make a
>>> co-ordinated upgrade, then this seems to be a grave mistake.
>>> 
>> FWIW, neither PIL nor PyOpenGL have C code which uses numpy arrays, so
>> they are entirely unaffected. And this does not require an *upgrade*
>> of the actually affected packages, just a rebuild of the binary.
>>
>>   
> I'll also point out that PEP 3118 will make this unnecessary in the 
> future for many applications. http://www.python.org/dev/peps/pep-3118/
> 
> From what I can tell ( http://svn.python.org/view?rev=61491&view=rev ), 
> this is already in Python 2.6.

I just tried to make use of this functionality with the Python 2.6 beta 
2 release and numpy svn trunk. Everything is present in Python itself, 
but numpy doesn't yet support the PEP.
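For reference, once the support landed, exercising the PEP from Python looks roughly like this (a sketch written against a NumPy recent enough to implement PEP 3118):

```python
import numpy as np

a = np.arange(6, dtype=np.int32).reshape(2, 3)

# memoryview() goes through the PEP 3118 buffer protocol.
m = memoryview(a)
print(m.format, m.shape, m.strides)  # e.g. 'i' (2, 3) (12, 4)

# Round-trip back to an ndarray without copying the data.
b = np.asarray(m)
assert np.shares_memory(a, b)
```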

-Andrew


Re: [Numpy-discussion] C-API change for 1.2

2008-08-16 Thread Andrew Straw
Robert Kern wrote:
> On Sat, Aug 16, 2008 at 04:34, Jon Wright <[EMAIL PROTECTED]> wrote:
>   
>> Travis, Stéfan,
>>
>> I missed Travis mail previously. Are you *really* sure you want force
>> all C code which uses numpy arrays to be recompiled? If you mean that
>> all your matplotlib/PIL/pyopengl/etc users are going to have to make a
>> co-ordinated upgrade, then this seems to be a grave mistake.
>> 
>
> FWIW, neither PIL nor PyOpenGL have C code which uses numpy arrays, so
> they are entirely unaffected. And this does not require an *upgrade*
> of the actually affected packages, just a rebuild of the binary.
>
>   
I'll also point out that PEP 3118 will make this unnecessary in the 
future for many applications. http://www.python.org/dev/peps/pep-3118/

From what I can tell ( http://svn.python.org/view?rev=61491&view=rev ), 
this is already in Python 2.6.

-Andrew


Re: [Numpy-discussion] [REVIEW] Update NumPy API format to support updates that don't break binary compatibility

2008-08-16 Thread Andrew Straw
Looking at the code, but not testing it -- this looks fine to me. (I 
wrote the original NPY_VERSION stuff and sent it to Travis, who modified 
and included it.)

I have added a couple of extremely minor points to the code review tool 
-- as much as a chance to play with the tool as to comment on the code.

-Andrew

Stéfan van der Walt wrote:
> The current NumPy API number, stored as NPY_VERSION in the header files, needs
> to be incremented every time the NumPy C-API changes.  The counter tells
> developers exactly which revision of the API they are dealing with.  NumPy 
> does
> some checking to make sure that it does not run against an old version of the
> API.  Currently, we have no way of distinguishing between changes that break
> binary compatibility and those that don't.
>
> The proposed fix breaks the version number up into two counters -- one that 
> gets
> increased when binary compatibility is broken, and another when the API is
> changed without breaking compatibility.
>
> Backward compatibility with packages such as Matplotlib is maintained by
> renaming NPY_VERSION to NPY_BINARY_VERSION.
>
> Please review the proposed change at http://codereview.appspot.com/2946
>
> Regards
> Stéfan


Re: [Numpy-discussion] Instructions on building from source

2008-07-24 Thread Andrew Straw
Eric Firing wrote:
> Andrew Straw wrote:
>
>   
>> Just for reference, you can find the build dependencies of any Debian
>> source package by looking at its .dsc file. For numpy, that can be found
>> at http://packages.debian.org/sid/python-numpy
>>
>> Currently (version 1.1.0, debian version 1:1.1.0-3), that list is:
>>
>> Build-Depends: cdbs (>= 0.4.43), python-all-dev, python-all-dbg,
>> python-central (>= 0.6), gfortran (>= 4:4.2), libblas-dev [!arm !m68k],
>> liblapack-dev [!arm !m68k], debhelper (>= 5.0.38), patchutils,
>> python-docutils, libfftw3-dev
>>
>> Build-Conflicts: atlas3-base-dev, libatlas-3dnow-dev, libatlas-base-dev,
>> libatlas-headers, libatlas-sse-dev, libatlas-sse2-dev
>> 
>
> Do you know why atlas is not used, and is even listed as a conflict?  I 
> have libatlas-sse2 etc. installed on ubuntu hardy, and I routinely build 
> numpy from source.  Maybe the debian specification is for 
> lowest-common-denominator hardware?

The way it's supposed to work, as far as I understand it, is that atlas
is not required at build time but, when installed later, automatically
speeds up the blas routines. (Upon installation of libatlas3gf-sse2,
the libblas.so.3gf symlink is repointed from /usr/lib/libblas.so.3gf to
/usr/lib/sse2/atlas/libblas.so.3gf.) I have not verified that any of
this actually happens, so please take this with a grain of salt.
Especially since my answer differs from Robert's.

-Andrew


Re: [Numpy-discussion] Instructions on building from source

2008-07-23 Thread Andrew Straw
Robert Kern wrote:
> On Wed, Jul 23, 2008 at 16:56, Fernando Perez <[EMAIL PROTECTED]> wrote:
>> Howdy,
>>
>> I was just trying to explain to a new user how to build numpy from
>> source on ubuntu and I realized that there's not much info on this
>> front in the source tree.  Scipy has a nice INSTALL.txt that even
>> lists the names of the debian/ubuntu packages needed for the build
>> (which can be a useful guide on other distros).  Should we have a
>> stripped-down copy of this doc somewhere in the top-level directory of
>> numpy?
> 

Just for reference, you can find the build dependencies of any Debian
source package by looking at its .dsc file. For numpy, that can be found
at http://packages.debian.org/sid/python-numpy

Currently (version 1.1.0, debian version 1:1.1.0-3), that list is:

Build-Depends: cdbs (>= 0.4.43), python-all-dev, python-all-dbg,
python-central (>= 0.6), gfortran (>= 4:4.2), libblas-dev [!arm !m68k],
liblapack-dev [!arm !m68k], debhelper (>= 5.0.38), patchutils,
python-docutils, libfftw3-dev

Build-Conflicts: atlas3-base-dev, libatlas-3dnow-dev, libatlas-base-dev,
libatlas-headers, libatlas-sse-dev, libatlas-sse2-dev

Some of that stuff (cdbs, debhelper, patchutils) is specific to the
Debian build process and wouldn't be necessary for simply compiling
numpy itself.

And on a Debian (derivative) system, you can install those with "apt-get
build-dep python-numpy". This will only install the build dependencies
for the version of python-numpy which is listed in your apt
sources.list, but 99% of the time, that should be sufficient.

-Andrew


Re: [Numpy-discussion] FFT's & IFFT's on images

2008-07-02 Thread Andrew Straw
Mike Sarahan wrote:
> I agree that the components are very small, and in a numeric sense, I
> wouldn't worry at all about them, but the image result is simply noise,
> albeit periodic-looking noise.

Fernando Perez and John Hunter have written a nice FFT image denoising
example:
http://matplotlib.svn.sourceforge.net/viewvc/matplotlib/trunk/py4science/examples/fft_imdenoise.py?view=markup

with documentation, even:
http://matplotlib.svn.sourceforge.net/viewvc/matplotlib/trunk/py4science/workbook/fft_imdenoise.tex?view=markup
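The core of the linked example boils down to masking the 2-D spectrum and transforming back. A minimal sketch of the idea (not the linked code itself; the cutoff is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64))          # stand-in for a noisy image

F = np.fft.fft2(img)

# Build a symmetric low-pass mask in frequency space; fftfreq gives
# each row/column's frequency, so |f| <= cutoff keeps low frequencies.
cutoff = 0.125
fy = np.abs(np.fft.fftfreq(img.shape[0]))[:, None]
fx = np.abs(np.fft.fftfreq(img.shape[1]))[None, :]
mask = (fy <= cutoff) & (fx <= cutoff)

# Inverse transform; with a symmetric mask the imaginary part is
# round-off only, so .real recovers the denoised image.
denoised = np.fft.ifft2(F * mask).real
print(denoised.shape)
```

Periodic-looking noise in the output usually means the mask (or a manual spectrum edit) broke the conjugate symmetry of the spectrum, which is worth checking first.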


Re: [Numpy-discussion] Updating cython directory to pxd usage: objections?

2008-06-20 Thread Andrew Straw
Fernando Perez wrote:
> I verified further by putting the import_array() back into  the .pyx
> file and indeed:
>
> - i_a() in .pxd -> missing from .c file.
> - i_a() in .pyx -> OK in .c file.
>
> It thus seems that  we must keep the import_array call out of the .pxd
> and users still need to remember to make it themselves.
>   
It's not the worst thing in the world, either -- sometimes one wants a
bit of the C structures layout from the .pxd file to use the array
interface without actually calling any of the numpy C API. Thus,
occasionally, calling import_array() would be a (probably very minor)
waste. However, this appears to be a completely moot point now...

-Andrew


Re: [Numpy-discussion] nose changes checked in

2008-06-18 Thread Andrew Straw
Stéfan van der Walt wrote:
> 2008/6/18 Alan McIntyre <[EMAIL PROTECTED]>:
>   
>> Is "next release" referring to 1.2 or the release after that?  If it's
>> the release after 1.2, then I assume that 1.2 must still be able to
>> run all its tests without nose.
>> 
>
> Alternatively, we could distribute Nose inside of NumPy for one
> release?  I suppose the Debian guys would shoot us :)
No, I don't think they'll shoot anyone. :) It can just be disabled with
a Debian specific patch to numpy. And since nose is LGPL (and therefore
DFSG kosher), I don't think there'd be any reason to even bother
re-making the source tarball. I'm quite sure the Debian packagers have
dealt with worse.


Re: [Numpy-discussion] Buildbot dying horrible, slow death.

2008-06-09 Thread Andrew Straw
Charles R Harris wrote:
> Dear Friends,
>
> Our wonderful buildbot is in declining health. The Mac can't update
> from svn, Andrew's machines are offline, the Sparcs have lost their
> spark, and bsd_64 is suffering from tolist() syndrome:
AFAIK, I don't have any machines on the buildbot... Is there another
Andrew with offline machines?


Re: [Numpy-discussion] Switching to nose test framework (was: NumpyTest problem)

2008-06-09 Thread Andrew Straw
Alan McIntyre wrote:
> On Mon, Jun 9, 2008 at 3:41 AM, Stéfan van der Walt <[EMAIL PROTECTED]> wrote:
>   
>> I suggest we also remove ParametricTestCase now.
>> 
>
> I don't mind converting the existing uses (looks like it's only used 5
> times) to something else, it was causing trouble for me with nose
> anyway--whenever the test module imported it, nose wanted to run it
> since it inherited from TestCase.  So if nobody really, really needs
> it I'll make ParametricTestCase go away.
I'm using it in some of my code, but I'll happily switch to nose. It
will make my life easier, however, if I can see how you've converted it.
If you do this, can you indicate what svn revision made the switch?
Thanks, Andrew


Re: [Numpy-discussion] ANN: NumPy 1.1.0

2008-05-29 Thread Andrew Straw
Thanks, Jarrod.

Should I replace the old numpy 1.0.4 information at 
http://www.scipy.org/Download with the 1.1.0? It's still listing 1.0.4, 
but I wonder if there's some scipy 0.6 compatibility issue that 
should cause it to stay at 1.0.4. In either case, I think the page 
should be updated -- particularly as searching Google for "numpy 
download" results in that page as the first hit.

-Andrew

Jarrod Millman wrote:
> I'm pleased to announce the release of NumPy 1.1.0.
>
> NumPy is the fundamental package needed for scientific computing with
> Python.  It contains:
>
>   * a powerful N-dimensional array object
>   * sophisticated (broadcasting) functions
>   * basic linear algebra functions
>   * basic Fourier transforms
>   * sophisticated random number capabilities
>   * tools for integrating Fortran code.
>
> Besides its obvious scientific uses, NumPy can also be used as an
> efficient multi-dimensional container of generic data. Arbitrary
> data-types can be defined. This allows NumPy to seamlessly and
> speedily integrate with a wide-variety of databases.
>
> This is the first minor release since the 1.0 release in
> October 2006. There are a few major changes, which introduce
> some minor API breakage. In addition this release includes
> tremendous improvements in terms of bug-fixing, testing, and
> documentation.
>
> For information, please see the release notes:
> http://sourceforge.net/project/shownotes.php?release_id=602575&group_id=1369
>
> Thank you to everybody who contributed to this release.
>
> Enjoy,
>
>   



Re: [Numpy-discussion] 1.1.0rc1 tagged

2008-05-18 Thread Andrew Straw
Jarrod Millman wrote:
> Please test the release candidate:
> svn co http://svn.scipy.org/svn/numpy/tags/1.1.0rc1 1.1.0rc1
>   
Thanks, Jarrod.

I have packaged SVN trunk from r5189 and made a Debian source package
(based on a slightly old version the Debian Python Modules Team's numpy
package with all patches removed) and Ubuntu Hardy (8.04) binary
packages. These are available at:

http://debs.astraw.com/hardy/

In particular, you can just grab the .deb for your architecture for
Ubuntu Hardy:

 * i386:
http://debs.astraw.com/hardy/python-numpy_1.1.0~dev5189-0ads1_i386.deb
 * amd64:
http://debs.astraw.com/hardy/python-numpy_1.1.0~dev5189-0ads1_amd64.deb

All numpy, tests pass on both architectures in my hands, and I shall
begin testing my various codes with this release. Other Ubunteers who
don't want to bother compiling from source are also welcome to try these
packages.

(I chose the trunk rather than the RC tag because my understanding is
that fixes to the final 1.1.0 are going to the trunk, and David
Cournapeau has made a couple commits. Also, I realized after I packaged
this up that I forgot to touch the date in debian/changelog -- apologies!)

-Andrew


Re: [Numpy-discussion] searchsorted() and memory cache

2008-05-14 Thread Andrew Straw
Aha, I've found the problem -- my values were int64 and my keys were
uint64. Switching to the same data type immediately fixes the issue!
It's not a memory cache issue at all.

Perhaps searchsorted() should emit a warning if the keys require
casting... I can't believe how bad the hit was.

-Andrew
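A sketch of the fix: cast the keys to the haystack's dtype before calling searchsorted() (the array sizes and values here are arbitrary):

```python
import numpy as np

haystack = np.arange(1_000_000, dtype=np.uint64)        # sorted uint64 values
keys = np.array([1, 500_000, 999_999], dtype=np.int64)  # mismatched dtype

# Casting up front keeps the binary search on the fast, type-matched
# path instead of falling back to per-key casting comparisons.
idx = np.searchsorted(haystack, keys.astype(haystack.dtype))
print(idx)  # insertion indices 1, 500000, 999999
```

The one-time astype() copy is cheap compared to paying a conversion on every comparison inside the search loop.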

Charles R Harris wrote:
>
>
> On Wed, May 14, 2008 at 2:00 PM, Andrew Straw <[EMAIL PROTECTED]
> <mailto:[EMAIL PROTECTED]>> wrote:
>
> Charles R Harris wrote:
> >
> >
> > On Wed, May 14, 2008 at 8:09 AM, Andrew Straw
> <[EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>
> > <mailto:[EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>>> wrote:
> >
> >
> >
> > Quite a difference (a factor of about 3000)! At this point,
> I haven't
> > delved into the dataset to see what makes it so pathological --
> > performance is nowhere near this bad for the binary search
> algorithm
> > with other sets of keys.
> >
> >
> > It can't be that bad Andrew, something else is going on. And 191 MB
> > isn't *that* big, I expect it should fit in memory with no problem.
> I agree the performance difference seems beyond what one would expect
> due to cache misses alone. I'm at a loss to propose other
> explanations,
> though. Ideas?
>
>
> I just searched for  2**25/10 keys in a 2**25 array of reals. It took
> less than a second when vectorized. In a python loop it took about 7.7
> seconds. The only thing I can think of is that the search isn't
> getting any cpu cycles for some reason. How much memory is it using?
> Do you have any nans and such in the data?



Re: [Numpy-discussion] searchsorted() and memory cache

2008-05-14 Thread Andrew Straw
Charles R Harris wrote:
>
>
> On Wed, May 14, 2008 at 8:09 AM, Andrew Straw <[EMAIL PROTECTED]
> <mailto:[EMAIL PROTECTED]>> wrote:
>
>
>
> Quite a difference (a factor of about 3000)! At this point, I haven't
> delved into the dataset to see what makes it so pathological --
> performance is nowhere near this bad for the binary search algorithm
> with other sets of keys.
>
>
> It can't be that bad Andrew, something else is going on. And 191 MB
> isn't *that* big, I expect it should fit in memory with no problem.
I agree the performance difference seems beyond what one would expect
due to cache misses alone. I'm at a loss to propose other explanations,
though. Ideas?


Re: [Numpy-discussion] searchsorted() and memory cache

2008-05-14 Thread Andrew Straw
Andrew Straw wrote:
> I have uploaded it as a pytables file to http://astraw.com/framenumbers.h5 
Ahh, forgot to mention a potentially important point -- this data file
is 191 MB.


Re: [Numpy-discussion] searchsorted() and memory cache

2008-05-14 Thread Andrew Straw

> I will post any new insights as I continue to work on this...
>   
OK, I have isolated a sample of my data that illustrates the terrible
performance with the binarysearch. I have uploaded it as a pytables file
to http://astraw.com/framenumbers.h5 in case anyone wants to have a look
themselves. Here's an example of the type of benchmark I've been running:

import fastsearch.downsamp
import fastsearch.binarysearch
import tables

h5=tables.openFile('framenumbers.h5',mode='r')
framenumbers=h5.root.framenumbers.read()
keys=h5.root.keys.read()
h5.close()

def bench( implementation ):
    for key in keys:
        implementation.index( key )

downsamp = fastsearch.downsamp.DownSampledPreSearcher( framenumbers )
binary = fastsearch.binarysearch.BinarySearcher( framenumbers )

# The next two lines are IPython-specific, and the 2nd takes a looong time:

%timeit bench(downsamp)
%timeit bench(binary)



Running the above gives:

In [14]: %timeit bench(downsamp)
10 loops, best of 3: 64 ms per loop

In [15]: %timeit bench(binary)

10 loops, best of 3: 184 s per loop

Quite a difference (a factor of about 3000)! At this point, I haven't
delved into the dataset to see what makes it so pathological --
performance is nowhere near this bad for the binary search algorithm
with other sets of keys.



Re: [Numpy-discussion] searchsorted() and memory cache

2008-05-13 Thread Andrew Straw
Nathan Bell wrote:
> On Tue, May 13, 2008 at 6:59 PM, Andrew Straw <[EMAIL PROTECTED]> wrote:
>   
>>  easier and still blazingly fast compared to the binary search
>>  implemented in searchsorted() given today's cached memory architectures.
>> 
>
> Andrew, I looked at your code and I don't quite understand something.
> Why are you looking up single values?
>   
Hi Nathan,

The Python overhead was nothing compared to the speed problems I was
having... Now I'm quite sure that some optimization could go a little
further. Nevertheless, for my motivating use case, it wouldn't be
trivial to vectorize this, and a "little further" in this case is too
little to justify the investment of my time at the moment.

-Andrew


Re: [Numpy-discussion] searchsorted() and memory cache

2008-05-13 Thread Andrew Straw
Charles R Harris wrote:
>
>
> On Tue, May 13, 2008 at 5:59 PM, Andrew Straw <[EMAIL PROTECTED]
> <mailto:[EMAIL PROTECTED]>> wrote:
>
> Thanks for all the comments on my original question. I was more
> offline
> than intended after I sent it until now, so I'm sorry I wasn't
> immediately able to participate in the discussion.
>
> Anyhow, after working on this a bit more, I came up with a few
> implementations of search algorithms doing just what I needed with the
> same interface available using bazaar and launchpad at
> http://launchpad.net/~astraw/+junk/fastsearch
> <http://launchpad.net/%7Eastraw/+junk/fastsearch> (MIT license). I
> have
> attached the output of the plot_comparisons.py benchmarking script to
> this email (note that this benchmarking is pretty crude).
>
> For the problem I originally wrote about, I get a nearly
> unbelievable speedup of ~250x using the
> fastsearch.downsamp.DownSampledPreSearcher class, which is very
> similar
> in spirit to Charles' suggestion. It takes 1000 values from the
> original
> array to create a new first-level array that is itself localized in
> memory and points to a more localized region of the full original
> array.
> Also, I get a similar (though slightly slower) result using AVL trees
> using the fastsearch.avlsearch.AvlSearcher class, which uses pyavl (
> http://sourceforge.net/projects/pyavl ).
>
> Using the benchmarking code included in the bzr branch, I don't get
> anything like this speedup (e.g. the attached figure), so I'm not sure
> exactly what's going on at this point, but I'm not going to argue
> with a
> 250x speedup, so the fastsearch.downsamp code is now being put to
> use in
> one of my projects.
>
> Stefan -- I think your code simply implements the classic binary
> search
> -- I don't see how it will reduce cache misses.
>
> Anyhow, perhaps someone will find the above useful. I guess it would
> still be a substantial amount of work to make a numpy-types-aware
> implementation of AVL trees or similar algorithms. These sorts of
> binary
> search trees seem like the right way to solve this problem and thus
> there might be an interesting project in this. I imagine that a
> numpy-types-aware Cython might make such implementation significantly
> easier and still blazingly fast compared to the binary search
> implemented in searchsorted() given today's cached memory
> architectures.
>
>
> That's pretty amazing, but I don't understand the graph. The 
> DownSampled search looks like the worst. Are the curves mislabeled? Are
> the axes correct? I'm assuming smaller is better here.
The lines are labeled properly -- the graph is inconsistent with the
findings on my real data (not shown), which is what I meant with "Using
the benchmarking code included in the bzr branch, I don't get anything
like this speedup (e.g. the attached figure)". My guess is that the
BinarySearcher scales terribly under some usage pattern that isn't being
exhibited with this test. I'm really not sure yet what is the important
difference with my real data and these synthetic data. I will keep the
list posted as I find out more. Clearly, on the synthetic data for the
benchmark, the BinarySearcher does pretty well when N items is large.
This is quite contrary to my theory about cache misses being the root of
my problem with the binary search, so I don't understand it at the
moment, but certainly both the of the other searchers perform better on
my real data.

I will post any new insights as I continue to work on this...

-Andrew


Re: [Numpy-discussion] searchsorted() and memory cache

2008-05-13 Thread Andrew Straw
Thanks for all the comments on my original question. I was more offline
than intended after I sent it until now, so I'm sorry I wasn't
immediately able to participate in the discussion.

Anyhow, after working on this a bit more, I came up with a few
implementations of search algorithms doing just what I needed with the
same interface available using bazaar and launchpad at
http://launchpad.net/~astraw/+junk/fastsearch (MIT license). I have
attached the output of the plot_comparisons.py benchmarking script to
this email (note that this benchmarking is pretty crude).

For the problem I originally wrote about, I get a nearly
unbelievable speedup of ~250x using the
fastsearch.downsamp.DownSampledPreSearcher class, which is very similar
in spirit to Charles' suggestion. It takes 1000 values from the original
array to create a new first-level array that is itself localized in
memory and points to a more localized region of the full original array.
Also, I get a similar (though slightly slower) result using AVL trees
using the fastsearch.avlsearch.AvlSearcher class, which uses pyavl (
http://sourceforge.net/projects/pyavl ).
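The two-level DownSampledPreSearcher scheme described above can be sketched in a few lines of NumPy. This is my reconstruction of the idea, not the launchpad code; the class name, `step` parameter, and details are illustrative, and it assumes a strictly increasing array:

```python
import numpy as np

class TwoLevelSearcher:
    """Sketch of the down-sampled pre-search idea (assumes strictly
    increasing values; class and parameter names are mine)."""

    def __init__(self, values, step=1000):
        self.values = values
        self.step = step
        # small first-level array: stays cache-resident across queries
        self.coarse = values[::step]

    def index(self, key):
        # coarse search narrows the target down to one small block...
        block = max(int(np.searchsorted(self.coarse, key, side='right')) - 1, 0)
        lo = block * self.step
        hi = min(lo + self.step, len(self.values))
        # ...then a binary search runs over that one block only
        return lo + int(np.searchsorted(self.values[lo:hi], key))
```

Each query touches the small coarse array plus one `step`-sized slice of the big array, instead of cache lines scattered across all 25 million elements.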

Using the benchmarking code included in the bzr branch, I don't get
anything like this speedup (e.g. the attached figure), so I'm not sure
exactly what's going on at this point, but I'm not going to argue with a
250x speedup, so the fastsearch.downsamp code is now being put to use in
one of my projects.

Stefan -- I think your code simply implements the classic binary search
-- I don't see how it will reduce cache misses.

Anyhow, perhaps someone will find the above useful. I guess it would
still be a substantial amount of work to make a numpy-types-aware
implementation of AVL trees or similar algorithms. These sorts of binary
search trees seem like the right way to solve this problem and thus
there might be an interesting project in this. I imagine that a
numpy-types-aware Cython might make such implementation significantly
easier and still blazingly fast compared to the binary search
implemented in searchsorted() given today's cached memory architectures.

-Andrew


Andrew Straw wrote:
> I've got a big element array (25 million int64s) that searchsorted()
> takes a long time to grind through. After a bit of digging in the
> literature and the numpy source code, I believe that searchsorted() is
> implementing a classic binary search, which is pretty bad in terms of
> cache misses. There are several modern implementations of binary search
> which arrange items in memory such that cache misses are much more rare.
> Clearly making such an indexing arrangement would take time, but in my
> particular case, I can spare the time to create an index if searching
> was faster, since I'd make the index once but do the searching many times.
>
> Is there an implementation of such an algorithm that works easilty with
> numpy? Also, can you offer any advice, suggestions, and comments to me
> if I attempted to implement such an algorithm?
>
> Thanks,
> Andrew
>   



[Numpy-discussion] searchsorted() and memory cache

2008-05-08 Thread Andrew Straw
I've got a big element array (25 million int64s) that searchsorted()
takes a long time to grind through. After a bit of digging in the
literature and the numpy source code, I believe that searchsorted() is
implementing a classic binary search, which is pretty bad in terms of
cache misses. There are several modern implementations of binary search
which arrange items in memory such that cache misses are much more rare.
Clearly making such an indexing arrangement would take time, but in my
particular case, I can spare the time to create an index if searching
was faster, since I'd make the index once but do the searching many times.

Is there an implementation of such an algorithm that works easilty with
numpy? Also, can you offer any advice, suggestions, and comments to me
if I attempted to implement such an algorithm?
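For reference, one classic cache-friendlier arrangement is the Eytzinger (breadth-first) layout: the implicit search tree is stored level by level, so the first few comparisons of every query hit the same handful of cache lines. A sketch of the idea (mine, not from the thread):

```python
import numpy as np

def eytzinger_layout(sorted_arr):
    """Re-arrange a sorted array into breadth-first (Eytzinger) order
    by filling heap positions with an in-order traversal."""
    n = len(sorted_arr)
    out = np.empty(n, dtype=sorted_arr.dtype)
    it = iter(sorted_arr)  # yields keys in sorted (in-order) order

    def fill(i):
        if i < n:
            fill(2 * i + 1)    # left subtree holds the smaller keys
            out[i] = next(it)
            fill(2 * i + 2)    # right subtree holds the larger keys

    fill(0)
    return out

def eytzinger_contains(tree, key):
    """Membership test walking the implicit tree: children of node i
    sit at 2*i+1 and 2*i+2, so the top levels are contiguous."""
    i = 0
    while i < len(tree):
        if tree[i] == key:
            return True
        i = 2 * i + 1 if key < tree[i] else 2 * i + 2
    return False
```

Building the layout is an O(n) preprocessing step, which fits the use case above: pay once at index-creation time, then search many times with far fewer cache misses than a classic binary search over the sorted array.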

Thanks,
Andrew


Re: [Numpy-discussion] python memory use

2008-05-03 Thread Andrew Straw
Robin wrote:
> Hi,
>
> I am starting to push the limits of the available memory and I'd like
> to understand a bit better how Python handles memory...
>   
This is why I switched to 64 bit linux and never looked back.
> If I try to allocate something too big for the available memory I
> often get a MemoryError exception. However, in other situations,
> Python memory use continues to grow until the machine falls over. I
> was hoping to understand the difference between those cases.
I don't know what "falls over" means. It could be that you're getting
swap death -- the kernel starts attempting to use virtual memory (hard
disk) for some of the RAM. This would be characterized by your CPU use
dropping to near-zero, your hard disk grinding away, and your swap space
use increasing.

The MemoryError simply means that Python made a request for memory that
the kernel didn't grant.

There's something else you might run into -- the maximum memory size of
a process before the kernel kills that process. On linux i686, IIRC this
limit is 3 GB.

I'm not sure why you get different behavior on different runs.

FWIW, with 64 bit linux the worst that happens to me now is swap death,
which can be forestalled by adding lots of RAM.
>  From what
> I've read Python never returns memory to the OS (is this right?) so
> the second case, python is holding on to memory that it isn't really
> using (for objects that have been destroyed). I guess my question is
> why doesn't it reuse the memory freed from object deletions instead of
> requesting more - and even then when requesting more, why does it
> continue until the machine falls over and not cause a MemoryError?
>   
It's hard to say without knowing what your code does. A first guess is
that you're allocating lots of memory without allowing it to be freed.
Specifically, you may have references to objects which you no longer
need, and you should eliminate those references and allow them to be
garbage collected. In some cases, circular references can be hard for
python to detect, so you might want to play around with the gc module
and judicious use of the del statement. Note also that IPython keeps
references to past results by default (the history).
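The circular-reference point can be illustrated in a few lines (an illustrative sketch only; the `Node` class is made up):

```python
import gc

class Node:
    def __init__(self):
        self.other = None

# build a reference cycle: a -> b -> a
a, b = Node(), Node()
a.other, b.other = b, a

# drop our names; the cycle keeps both objects (and their __dict__s)
# alive until the cycle detector runs
del a, b
n_unreachable = gc.collect()  # force a full collection pass
```

In long-running numerical code, a stray cycle holding a large array can look exactly like the "memory never comes back" behavior described above; `gc.collect()` plus judicious use of `del` is the first thing to try.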

> While investigating this I found this script:
> http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/511474
> which does wonders for my code. I was wondering if this function
> should be included in Numpy as it seems to provide an important
> feature, or perhaps an entry on the wiki (in Cookbook section?)
>   
I don't think it belongs in numpy per se, and I'm not sure of the
necessity of a spot on the scipy cookbook given that it's in the python
cookbook. Perhaps more useful would be starting a page called
"MemoryIssues" on the scipy wiki -- I imagine this subject, as a whole,
is of particular interest for many in the numpy/scipy crowd. Certainly
adding a link and description to that recipe would be useful in that
context. But please, feel free to add to or edit the wiki as you see fit
-- if you think something will be useful, by all means, go ahead and do
it. I think there are enough eyes on the wiki that it's fairly
self-regulating.

-Andrew


Re: [Numpy-discussion] Fast histogram

2008-04-17 Thread Andrew Straw
Hi Zach,

I have a similar loop which I wrote using scipy.weave. This was my first
foray into weave, and I had to dig through the intermediate C sources to
find the macros that did the indexing in the way I make use of here, but
this snippet may get you started. There are 2 functions, which each do
the same thing, but one is in python, the other is in C. Note that this
is for a 3D histogram -- presumably you could remove B and C from this
example.

I'm sure there are better (more documented) ways to do this using weave
-- but I had this code written, it works, and it appears it may be
useful to you... (Sorry it's not documented, however.)

-Andrew

Zachary Pincus wrote:
> Hi,
>
>   
>> How about a combination of sort, followed by searchsorted right/left  
>> using the bin boundaries as keys? The difference of the two  
>> resulting vectors is the bin value. Something like:
>>
>> In [1]: data = arange(100)
>>
>> In [2]: bins = [0,10,50,70,100]
>>
>> In [3]: lind = data.searchsorted(bins)
>>
>> In [4]: print lind[1:] - lind[:-1]
>> [10 40 20 30]
>>
>> This won't be as fast as a c implementation, but at least avoids the  
>> loop.
>> 
>
> This is, more or less, what the current numpy.histogram does, no? I  
> was hoping to avoid the O(n log n) sorting, because the image arrays  
> are pretty big, and numpy.histogram doesn't get close to video rate  
> for me...
>
> Perhaps, though, some of the slow-down from numpy.histogram is from  
> other overhead, and not the sorting. I'll try this, but I think I'll  
> probably just have to write the c loop...
>
> Zach

from scipy import weave
import numpy

def increment_slow(final_array_shape, A, B, C):
    counts = numpy.zeros(final_array_shape, dtype=numpy.uint64)
    for i in range(len(A)):
        a = A[i]; b = B[i]; c = C[i]
        counts[a, b, c] += 1
    return counts

def increment_fast(final_array_shape, idxa, idxb, idxc):
    counts = numpy.zeros(final_array_shape, dtype=numpy.uint64)
    assert len(idxa.shape) == 1
    assert len(idxa) == len(idxb)
    assert len(idxa) == len(idxc)

    # NB: the C body below is a reconstruction -- the archive truncated
    # the original at the first "<". It uses weave's generated index
    # macros (COUNTS3, IDXA1, ...) mentioned above.
    code = r"""
    for (int i=0; i<Nidxa[0]; i++) {
        COUNTS3(IDXA1(i), IDXB1(i), IDXC1(i))++;
    }
    """
    weave.inline(code, ['counts', 'idxa', 'idxb', 'idxc'])
    return counts
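For comparison, the same 3-D counting can be vectorized in pure NumPy without weave (a sketch, not from the original message): flatten the index triples into linear indices, count them with bincount, and reshape.

```python
import numpy as np

def increment_vectorized(final_array_shape, idxa, idxb, idxc):
    # flatten the (a, b, c) triples into linear indices into the 3-D
    # counts array, count them in one pass, then restore the shape
    flat = np.ravel_multi_index((idxa, idxb, idxc), final_array_shape)
    counts = np.bincount(flat, minlength=int(np.prod(final_array_shape)))
    return counts.reshape(final_array_shape).astype(np.uint64)
```

This avoids both the per-element Python loop of `increment_slow` and the compiled extension entirely.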


Re: [Numpy-discussion] F2PY future

2008-04-11 Thread Andrew Straw
Pearu Peterson wrote:
> Use Google Code. Pros: it provides necessary infrastructure to develop 
> software projects and I am used to it. Cons: in my experience Google 
> Code has been too many times broken (at least three times in half a 
> year), though this may improve in future. Also, Google Code provides 
> only SVN, no hg.
>   
Another option: the IPython people have been using launchpad.net ( 
https://launchpad.net/ipython ) -- it supports bzr. I'm not sure how 
happy they are with it, but I think happy enough to stick with it rather 
than attempt to get a server with hg set up. IIRC, they did initially 
marginally prefer hg over bzr but the immediate availability of bzr 
hosting swung them over.

-Andrew


Re: [Numpy-discussion] numpyx.pyx (recent svn) works?

2008-04-08 Thread Andrew Straw
This is off-topic and should be directed to the pyrex/cython list, but
since we're on the subject:

I suppose the following is true, but let me ask, since I have not used
Cython. Please correct me if I'm wrong.

I have a bunch of pyrex compiled .pyx code. If I start adding some
Cython compiled code, all the extensions will live happily without
colliding, even if I don't re-compile my old .pyx files with Cython.

(I can't imagine how they'd conflict, since they both translate to .c
extensions without anything to collide. Or are there linker-level
symbols that collide or something difficult?)

-Andrew

Fernando Perez wrote:
> On Tue, Apr 8, 2008 at 4:38 PM, Travis E. Oliphant
> <[EMAIL PROTECTED]> wrote:
>
>   
>>  I say just add it.  We should move forward with Cython.   More important
>>  is to see if random actually builds with Cython right now.  There was an
>>  issue that I recall from a few weeks ago that Cython could not build the
>>  pyrex extension in NumPy.
>> 
>
> OK, I'll play with random for a bit.
>
> BTW, the original 'bug' that started this thread is due to a change in
> Cython's casting behavior explained here:
>
> http://wiki.cython.org/DifferencesFromPyrex
>
> it's fixed with a simple extra (void *) cast as shown here:
>
> print 'Printing array info for ndarray at 0x%0lx'% \
>   (arr,)
>
>
> I just committed that code to the bzr branch.
>
> In summary:
>
> 1. Do you want me to obliterate the old numpy/doc/pyrex and replace it
> with numpy/doc/cython? That would make it clear what the tool to use
> is...
>
> 2. I'll work on random and report shortly.
>
> Cheers,
>
> f



Re: [Numpy-discussion] Compile Numpy in VC++8

2008-04-03 Thread Andrew Straw
Matthieu Brucher wrote:
>
>
> 2008/4/3, Chris Barker <[EMAIL PROTECTED]
> >:
>
> Robert Kern wrote:
>
> Just since that has been discussed a LOT, for years, I want to be
> clear:
>
>
> > Different versions of Microsoft's compiler use different
> libraries for
> > the standard C library. Some simple Python extension modules
> compiled
> > with a different compiler than the Python interpreter will usually
> > work.
>
>
> Do SOME modules USUALLY work just because of what routines in the
> standard library they happen to call? So it's just a matter of
> luck that
> a given module may not trigger a conflict between the two runtime
> libs?
>
>
> Some parts of the C interface are fixed by the standard. Rely on them
> and you should be OK (safe perhaps for some memory functions, but I
> never saw a problem there). The other parts should never be trusted
> (as the I/O stuff).
> These are the rules I follow.
I have also had success compiling external code that required VS 2008
into a shared library (.dll) and then calling that using ctypes. I'm not
sure if I was just lucky that this worked in my case, or if the windows
linker can deal with the issues resulting from mixing C runtimes when
using .dlls.

-Andrew


Re: [Numpy-discussion] image to array doubt

2008-02-29 Thread Andrew Straw
[EMAIL PROTECTED] wrote:
>> Robin wrote
>> I'm not sure why they would be doing this - to me it looks they might
>> be using Image as a convenient way to store some other kind of data...
> 
> thanks Robin,
> I am wondering if there is a more straightforward way to do these..
> especially the vector to image function
> D

Check out scipy.misc.pilutil.imread() and imsave()


Re: [Numpy-discussion] load movie frames in python?

2008-01-29 Thread Andrew Straw
I'm pretty sure there's code floating around the pyglet mailing list.
I'd be happy to add it to
http://code.astraw.com/projects/motmot/wiki/pygarrayimage if it seems
reasonable. (pygarrayimage goes from numpy array to pyglet texture).

Brian Blais wrote:
> On Jan 29, 2008, at 8:24 PM, Andrew Straw wrote:
>
>> I'd suggest pyglet's avbin library.
>>
>
>
> great suggestion!  I never would have thought to do that.  Do you
> happen to know how to convert a player.texture into a numpy.array?
>
> there is ImageData, but I can't seem to figure out how to do the
> conversion.
>
>
> bb
> -- 
> Brian Blais
> [EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>
> http://web.bryant.edu/~bblais <http://web.bryant.edu/%7Ebblais>
>
>
>



Re: [Numpy-discussion] load movie frames in python?

2008-01-29 Thread Andrew Straw
I'd suggest pyglet's avbin library.

Hans Meine wrote:
> On Dienstag 29 Januar 2008, Brian Blais wrote:
>   
>> Is there a way to read frames of a movie in python?  Ideally,
>> something as simple as:
>>
>> for frame in movie('mymovie.mov'):
>>  pass
>>
>>
>> where frame is either a 2-D list, or a numpy array?  The movie format
>> can be anything, because I can probably convert things, but most
>> convenient would be avi, mov, and flv (for youtube videos).
>> 
>
> I'd look for Gstreamer python bindings.  Or run mplayer/mencoder with a "raw" 
> format (e.g. yuv4mpeg or -ovc raw -vf format=bgr24/rgb15/...) using a pipe 
> and processing the frames one by one. OK, that's no iterator-based interface 
> yet, but one could probably hack that together in an hour or so.
>
> Ciao, /  /.o.
>  /--/ ..o
> /  / ANS  ooo
>   



Re: [Numpy-discussion] Median again

2008-01-29 Thread Andrew Straw
Considering that many of the statistical functions (mean, std, median)
must iterate over all the data and that people (or at least myself)
typically call them sequentially on the same data, it may make sense to
make a super-function with less repetition.

Instead of:
x_mean = np.mean(x)
x_median = np.median(x)
x_std = np.std(x)
x_min = np.min(x)
x_max = np.max(x)

We do:
x_stats = np.get_descriptive_stats(x,
    stats=['mean','median','std','min','max'], axis=-1)
And x_stats is a dictionary with 'mean', 'median', 'std', 'min', 'max' keys.

The implementation could reduce the number of iterations over the data
in this case. The implementation wouldn't have to be optimized
initially, but could be gradually sped up once the interface is in
place. I bring this up now to suggest such an idea as a more-general
alternative to the "medianwithaxis" function proposed. What do you
think? (Perhaps something like this already exists?) And, finally, this
all surely belongs in scipy, but we already have stuff in numpy that
can't be removed without seriously breaking backwards compatibility...
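A minimal sketch of the proposed interface (the function name and dict-based return value are illustrative, mirroring the proposal above; this is not an existing NumPy API):

```python
import numpy as np

def get_descriptive_stats(x, stats=('mean', 'median', 'std', 'min', 'max'),
                          axis=None):
    # naive version: one pass over the data per statistic; an optimized
    # version could share a single pass (and a single sort for the median)
    funcs = {'mean': np.mean, 'median': np.median, 'std': np.std,
             'min': np.min, 'max': np.max}
    return {name: funcs[name](x, axis=axis) for name in stats}
```

The interface is the point here: once callers go through one entry point, the implementation is free to fuse the iterations later without any API change.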

-Andrew

Matthew Brett wrote:
> Hi,
>   
>>>> median moved mediandim0
>>>> implementation of medianwithaxis or similar, with same call
>>>> signature as mean.
>>>>
>>>> Deprecation warning for use of median, and return of mediandim0 for
>>>> now.  Eventual move of median to return medianwithaxis.
>>>>
>>> This would confuse people even more, I'm afraid. First they're said
>>> that median() is deprecated, and then later on it becomes the standard
>>> function to use. I would actually prefer a short pain rather than a
>>> long one.
>>>   
>
> I was thinking the warning could be something like:
>
> "The current and previous version of numpy use a version of median
> that is not consistent with other summary functions such as mean.  The
> calling convention of median will change in a future version of numpy
> to match that of the other summary functions.  This compatible future
> version is implemented as medianwithaxis, and will become the default
> implementation of median.  Please change any code using median to call
> medianwithaxis specifically, to maintain compatibility with future
> numpy APIs."
>
>   
>> I would certainly like median to take the axis keyword. The axis
>> keyword (and its friends) could be added to 1.0.5 with the default
>> being 1 instead of None, so that it keeps compatibility with the 1.0
>> API. Then, with 1.1 (an API-breaking release) the default can be
>> changed to None to restore consistency with mean, etc.
>> 
>
> But that would be very surprising to a new user, and might lead to
> some hard to track down silent bugs at a later date.
>
> Matthew



Re: [Numpy-discussion] parallel numpy (by Brian Granger) - any info?

2008-01-07 Thread Andrew Straw
dmitrey wrote:
> The only one thing I'm very interested in for now - why the most 
> simplest matrix operations are not implemented to be parallel in numpy 
> yet (for several-CPU computers, like my AMD Athlon X2).
For what it's worth, sometimes I *want* my numpy operations to happen 
only on one core. (So that I can do more important stuff with the others.)


Re: [Numpy-discussion] Is __array_interface__ supposed to work on numpy scalars?

2008-01-07 Thread Andrew Straw
Travis E. Oliphant wrote:
> Andrew Straw wrote:
>> Hi,
>>
>> I'm forwarding a bug from PyOpenGL. The developer, Mike Fletcher, is
>> having troubles accessing a numpy scalar with the __array_interface__.
>> Is this supposed to work? Or should __array_interface__ trigger an
>> AttributeError on a numpy scalar? Note that I haven't done any digging
>> on this myself...
>>   
> 
> Yes, the __array_interface__ approach works (but read-only).  In fact, 
> what happens is that a 0-d array is created and you are given the 
> information for it.
> 

Thanks. How is the array reference kept alive? Maybe that's Mike's issue?
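Travis's explanation is easy to confirm interactively (a sketch; exact dict contents vary between NumPy versions and platforms):

```python
import numpy as np

x = np.float64(3.0)
ai = x.__array_interface__      # works: describes a temporary 0-d array
print(ai['shape'], ai['typestr'])   # shape is (), typestr e.g. '<f8'

# Each attribute access can build a fresh temporary 0-d array, so the
# 'data' pointer need not be stable between accesses -- plausibly the
# "randomly generated" pointer Mike observed.
```

This also hints at the lifetime question above: the interface data is only valid as long as the temporary array behind it is kept alive.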


[Numpy-discussion] Is __array_interface__ supposed to work on numpy scalars?

2008-01-07 Thread Andrew Straw
Hi,

I'm forwarding a bug from PyOpenGL. The developer, Mike Fletcher, is
having troubles accessing a numpy scalar with the __array_interface__.
Is this supposed to work? Or should __array_interface__ trigger an
AttributeError on a numpy scalar? Note that I haven't done any digging
on this myself...

-Andrew
--- Begin Message ---
Bugs item #1827190, was opened at 2007-11-06 17:35
Message generated for change (Comment added) made by mcfletch
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105988&aid=1827190&group_id=5988

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
>Category: GL
Group: v3.0.0
>Status: Pending
Resolution: None
Priority: 5
Private: No
Submitted By: Chris Waters (crwaters)
>Assigned to: Mike C. Fletcher (mcfletch)
Summary: Default (Numpy) array return values not accepted

Initial Comment:
The values returned by the default array type, numpy, are not accepted by 
OpenGL; more importantly, they are not accepted by the respective set/bind 
functions.

This output is from the attached test file, run with the latest CVS revision:

glGenTextures(1) -> 1 (<type 'long'>)
glGenTextures(2) -> [2 3] (list: <type 'numpy.ndarray'>, elements: <type 'numpy.uint32'>)
Calling: glBindTexture(GL_TEXTURE_2D, 1)
 (created from glGenTextures(1))
No Exceptions
Calling: glBindTexture(GL_TEXTURE_2D, 2)
 (created from glGenTextures(2), element 0)
Exception Caught: argument 2: <type 'exceptions.TypeError'>: wrong type

The returned type of the array is numpy.ndarray, with each element having the 
type numpy.uint32. This element type is also not immediately convertable to a 
function argument type such as GLuint.

The return type of glGenTextures(1), however, is of the type long due to the 
special-case functionality. This is not the case for functions that do not 
handle special cases similar to this, such as 
OpenGL.GL.EXT.framebuffer_object.glGenFramebuffersEXT

A quick global work-around is to change the array type to ctypes after 
importing OpenGL:
 from OpenGL.arrays import formathandler
 formathandler.FormatHandler.chooseOutput( 'ctypesarrays' )

--

>Comment By: Mike C. Fletcher (mcfletch)
Date: 2008-01-07 13:32

Message:
Logged In: YES 
user_id=34901
Originator: NO

I've added an optional flag to the top-level module which allows for using
numpy scalars. ALLOW_NUMPY_SCALARS which when true allows your test case to
succeed.

Coded in Python, however, it's rather slow (and rather poorly
implemented).  I looked into implementing this using the
__array_interface__ on scalars, but the data-pointer there appears to be
randomly generated.  Without that, a conversion at the Python level and
then passing onto the original function seems the only solution.

I doubt we'll get a *good* solution to this in the near term.

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105988&aid=1827190&group_id=5988

___
PyOpenGL Homepage
http://pyopengl.sourceforge.net
___
PyOpenGL-Devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/pyopengl-devel
--- End Message ---


Re: [Numpy-discussion] unexpected behavior with allclose( scalar, empty array)

2008-01-04 Thread Andrew Straw
Thanks, I updated the page.

Charles R Harris wrote:
>
>
> On Jan 4, 2008 12:27 PM, Andrew Straw <[EMAIL PROTECTED]
> <mailto:[EMAIL PROTECTED]>> wrote:
>
> I have added a page to the wiki describing this issue:
>
> http://scipy.org/numpy_warts_and_gotchas
>
> I'll link it into the main documentation pages over the next few
> days,
> but I ask for a review of the following text for correctness and clarity:
> (You can simply edit the wiki page or post your reply here and
> I'll do it.)
>
> Like most of numpy, allclose() uses the broadcasting rules when
> performing its operation. This leads to the following behavior:
>
> >>> a=32
> >>> b=numpy.array([])
> >>> numpy.allclose(a,b)
> True
>
> Upon closer inspection, we can see that the broadcasting rules
> cause a
> to become a zero-dimensional array like b. The default truth value
> of a
>
>  
> It is not the dimensions, it's the fact that the array is empty, so
that anything said about its non-existent elements is true, i.e., x
> in empty -> anything is always true because x in empty is always
> false. So we also have the following:
>  
> In [1]: allclose(32,[[]])
> Out[1]: True
>
> The other reason is that because of broadcasting, the shape of the
> arrays may be immaterial.
>
> Chuck
>
> 
>



Re: [Numpy-discussion] unexpected behavior with allclose( scalar, empty array)

2008-01-04 Thread Andrew Straw
I have added a page to the wiki describing this issue:

http://scipy.org/numpy_warts_and_gotchas

I'll link it into the main documentation pages over the next few days,
but I ask for a review of the following text for correctness and clarity:
(You can simply edit the wiki page or post your reply here and I'll do it.)

Like most of numpy, allclose() uses the broadcasting rules when
performing its operation. This leads to the following behavior:

>>> a=32
>>> b=numpy.array([])
>>> numpy.allclose(a,b)
True

Upon closer inspection, we can see that the broadcasting rules cause a
to become a zero-dimensional array like b. The default truth value of a
zero-dimensional array is True, so the following holds and illustrates
how the above result is consistent with numpy's rules.
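A minimal runnable sketch of the behaviour under discussion (output noted as observed with recent numpy versions):

```python
import numpy as np

a = 32
b = np.array([])

# Broadcasting makes the scalar compatible with the empty array; the
# comparison then produces an empty result, and "all elements of an
# empty set satisfy the test" is vacuously True.
print(np.allclose(a, b))     # True
print(bool(np.all(a == b)))  # True: no element fails the comparison
```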



Andrew Straw wrote:
> Apologies if I've missed the discussion of this, but I was recently
> surprised by the following behavior (in svn trunk 4673). The following
> code runs without triggering the assertion.
> 
> import numpy as np
> print np.__version__
> a=np.int32(42)
> b=np.array([],dtype=np.int32)
> assert np.allclose(a,b)
> 
> Is this expected behavior of numpy or is this a bug I should report?
> 
> Thanks,
> Andrew


Re: [Numpy-discussion] unexpected behavior with allclose( scalar, empty array)

2008-01-03 Thread Andrew Straw
Matthew Brett wrote:
>>> So, currently we have all and allclose giving the same answer:
>>>
>>> In [19]: a = array([])
>>>
>>> In [20]: b = array([1])
>>>
>>> In [21]: all(a == b)
>>> Out[21]: True
>>>
>>> In [22]: allclose(a, b)
>>> Out[22]: True
>>>
>>> Would we want the answers to be different?
>>>   
>> No. I wasn't thinking correctly, previously.
>> 
>
> It's unfortunate that all of us immediately think the given answer is wrong.
Maybe we need "allclose_sameshape()" (my ability to name stuff is
terrible, but you get the idea). Regardless of the name issue, I have no
idea how this is viewed against the no-namespace-bloat principle.
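A sketch of what such a helper might look like. The name and the exact semantics (refusing to compare arrays whose shapes differ, instead of broadcasting) are my assumption, not an agreed API:

```python
import numpy as np

def allclose_sameshape(a, b, rtol=1e-5, atol=1e-8):
    """Hypothetical shape-checking variant of allclose: arrays with
    differing shapes compare unequal rather than being broadcast."""
    a, b = np.asarray(a), np.asarray(b)
    if a.shape != b.shape:
        return False
    return np.allclose(a, b, rtol=rtol, atol=atol)

print(allclose_sameshape(32, np.array([])))                  # False: shapes differ
print(allclose_sameshape(np.array([1.0]), np.array([1.0])))  # True
```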


[Numpy-discussion] unexpected behavior with allclose( scalar, empty array)

2008-01-03 Thread Andrew Straw
Apologies if I've missed the discussion of this, but I was recently
surprised by the following behavior (in svn trunk 4673). The following
code runs without triggering the assertion.

import numpy as np
print np.__version__
a=np.int32(42)
b=np.array([],dtype=np.int32)
assert np.allclose(a,b)

Is this expected behavior of numpy or is this a bug I should report?

Thanks,
Andrew


Re: [Numpy-discussion] where construct

2007-12-16 Thread Andrew Straw
"or" is logical or. You want "|" which is bitwise/elementwise or. Also, 
watch the order of operations -- | has higher precedence than <.

Thus, you want

where( (a<1) | (b<3), b,c)
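Putting the correction together as a runnable sketch:

```python
import numpy as np

a = np.array((0, 1, 1, 0))
b = np.array((4, 3, 2, 1))
c = np.array((1, 2, 3, 4))

# "or" would try to collapse each array to a single truth value and
# raise ValueError; "|" works elementwise.  The parentheses matter
# because | binds more tightly than <.
result = np.where((a < 1) | (b < 3), b, c)
print(result)  # [4 2 2 1]
```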

Ross Harder wrote:
> What's the correct way to do something like this?
>
> a=array( (0,1,1,0) )
> b=array( (4,3,2,1) )
> c=array( (1,2,3,4) )
>
> where( (a<1 or b<3), b,c)
>
> Python throws a ValueError
> I would expect to get an array that looks like
> [4,2,2,1] I think
>
>
> Thanks,
> Ross
>
>
>   
> 
> Never miss a thing.  Make Yahoo your home page. 
> http://www.yahoo.com/r/hs


Re: [Numpy-discussion] Changing the distributed binary for numpy 1.0.4 for windows ?

2007-12-10 Thread Andrew Straw
According to the QEMU website, QEMU does not (yet) emulate SSE on x86
target, so a Windows installation on a QEMU virtual machine may be a
good way to build binaries free of these issues.
http://fabrice.bellard.free.fr/qemu/qemu-tech.html

-Andrew

Travis E. Oliphant wrote:
> Fernando Perez wrote:
>> On Dec 10, 2007 4:41 PM, Robert Kern <[EMAIL PROTECTED]> wrote:
>>
>>   
>>> The current situation is untenable. I will gladly accept a slow BLAS for an
>>> official binary that won't segfault anywhere. We can look for a faster BLAS 
>>> later.
>>> 
>> Just to add a note to this: John Hunter and I just finished teaching a
>> python workshop here in Boulder, and one attendee had a recurring
>> all-out crash on WinXP.  Eventually John was able to track it to a bad
>> BLAS call, but the death was an 'illegal instruction'. We then noticed
>> that this was on an older Pentium III laptop, and I'd be willing to
>> bet that the problem is an ATLAS compiled with SSE2 support.  The PIII
>> chip only has plain SSE, not SSE2, and that's the kind of crash I've
>> seen when  accidentally running code compiled in my office machine (a
>> P4) on my laptop (a similarly old PIII).
>>
>> It may very well be that it's OK to ship binaries with ATLAS, but just
>> to build them without any fancy instruction support (no SSE, SSE2 or
>> anything else of that kind, just plain x87 code).
>>   
> 
> I think this is what the default should be (but plain SSE allowed).  
> However, since I have moved, the machine I was using to build "official" 
> binaries has switched and that is probably at the core of the problem.
> 
> Also,  I've tried to build ATLAS 3.8.0 without SSE without success (when 
> I'm on a machine that has it).
> 
> It would be useful to track which binaries are giving people problems as 
> I built the most recent ones on a VM against an old version of ATLAS 
> (3.6.0) that has been compiled on windows for a long time.
> 
> I'm happy to upload a better binary of NumPy (if I can figure out which 
> one is giving people grief and how to create a decent one).
> 
> -Travis O.
> 


Re: [Numpy-discussion] Changing the distributed binary for numpy 1.0.4 for windows ?

2007-12-10 Thread Andrew Straw
An idea that occurred to me after reading Fernando's email: a function
could be called at numpy import time that specifically checks the
instruction set of the CPU it is running on and makes sure it completely
covers the instruction set used by the various compiled components,
including BLAS. If this kind of check were added, numpy could fail
with a loud warning rather than dying with mysterious errors later on.
The trouble is that you can switch your BLAS shared library
without re-compiling numpy, so numpy would have to do a run-time query
of ATLAS, etc. for compilation details. That is likely
library-dependent, and furthermore, not having looked into BLAS
implementations, I'm not sure that (m)any of them provide such
information. Do they? Is this idea technically possible?
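The CPU half of this idea is the easy part. Here is a hedged, Linux-only sketch (it reads /proc/cpuinfo; `has_cpu_flag` is a hypothetical name, and this says nothing about what the BLAS itself was compiled for):

```python
def has_cpu_flag(flag):
    """Report whether the running CPU advertises `flag` (e.g. 'sse',
    'sse2') in /proc/cpuinfo.  Returns False when the file is
    unavailable, i.e. on non-Linux platforms."""
    try:
        with open('/proc/cpuinfo') as f:
            for line in f:
                if line.startswith('flags'):
                    return flag in line.split()
    except OSError:
        pass
    return False

# A loud import-time failure could then look like:
# if not has_cpu_flag('sse2'):
#     raise ImportError("this binary requires SSE2, which this CPU lacks")
```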

-Andrew

Fernando Perez wrote:
> On Dec 10, 2007 4:41 PM, Robert Kern <[EMAIL PROTECTED]> wrote:
> 
>> The current situation is untenable. I will gladly accept a slow BLAS for an
>> official binary that won't segfault anywhere. We can look for a faster BLAS 
>> later.
> 
> Just to add a note to this: John Hunter and I just finished teaching a
> python workshop here in Boulder, and one attendee had a recurring
> all-out crash on WinXP.  Eventually John was able to track it to a bad
> BLAS call, but the death was an 'illegal instruction'. We then noticed
> that this was on an older Pentium III laptop, and I'd be willing to
> bet that the problem is an ATLAS compiled with SSE2 support.  The PIII
> chip only has plain SSE, not SSE2, and that's the kind of crash I've
> seen when  accidentally running code compiled in my office machine (a
> P4) on my laptop (a similarly old PIII).
> 
> It may very well be that it's OK to ship binaries with ATLAS, but just
> to build them without any fancy instruction support (no SSE, SSE2 or
> anything else of that kind, just plain x87 code).
> 
> 
> Cheers,
> 
> f


Re: [Numpy-discussion] Loading a > GB file into array

2007-12-04 Thread Andrew Straw
Hi all,

I haven't done any serious testing in the past couple years, but for 
this particular task -- drawing frames using OpenGL without ever 
skipping a video update -- it is my impression that as of a few Ubuntu 
releases ago (Edgy?) Windows still beat linux.

Just now, I have investigated on 2.6.22-14-generic x86_64 as packaged by 
Ubuntu 7.10, and I didn't skip a frame out of 1500 at 60 Hz. That's not 
much testing, but it is certainly better performance than I've seen in 
the recent past, so I'll certainly be doing some more testing soon. Oh, 
how I'd love to never be forced to use Windows again.

Leaving my computer displaying moving images overnight, (and tomorrow at 
lab on a 200 Hz display),
Andrew

Gael Varoquaux wrote:
> On Tue, Dec 04, 2007 at 02:13:53PM +0900, David Cournapeau wrote:
>   
>> With recent kernels, you can get really good latency if you do it right 
>> (around 1-2 ms worst case under high load, including high IO pressure). 
>> 
>
> As you can see on my page, I indeed measured less than 1ms latency on
> Linux under load with kernel more than a year old. These things have
> gotten much better recently and with a premptible kernel you should be
> able to get 1ms easily. Going below 0.5ms without using a realtime OS (ie
> a realtime kernel, under linux) is really pushing it.
>
> Cheers,
>
> Gaël


Re: [Numpy-discussion] OT: A Way to Approximate and Compress a 3DSurface

2007-11-21 Thread Andrew Straw
Christopher Barker wrote:
>> For data interpolation: 2D-Delaunay triangulation based method (I think you 
>> can find one in the scipy cookbook).
> 
> yup -- but  then you need the decimation to remove the "unneeded" 
> points. I don't think Scipy has that.

The sandbox does, thanks to Robert Kern. (And I should really submit a
patch to move this into the main scipy.)
http://www.scipy.org/Cookbook/Matplotlib/Gridding_irregularly_spaced_data


Re: [Numpy-discussion] C-API Documentation?

2007-09-24 Thread Andrew Straw
Thomas Schreiner wrote:
> Am I doing anything wrong in this program? It's crashing immediately 
> after the "before" line, using Borland C++ Builder 6 and 
> numpy-1.0.3.1.win32-py2.4.
You have to call import_array() before using the C API.


Re: [Numpy-discussion] numpy arrays, data allocation and SIMD alignement

2007-08-03 Thread Andrew Straw
Dear David,

Both ideas, particularly the 2nd, would be excellent additions to numpy. 
I often use the Intel IPP (Integrated Performance Primitives) Library 
together with numpy, but I have to do all my memory allocation with the 
IPP to ensure fastest operation. I then create numpy views of the data. 
All this works brilliantly, but it would be really nice if I could 
allocate the memory directly in numpy.

IPP allocates, and says it wants, 32 byte aligned memory (see, e.g. 
http://www.intel.com/support/performancetools/sb/CS-021418.htm ). Given 
that fftw3 apparently wants 16 byte aligned memory, my feeling is that, 
if the effort is made, the alignment width should be specified at 
run-time, rather than hard-coded.

In terms of implementation of your 1st point, I'm not aware of how much 
effort your idea would take (and it does sound nice), but some benefit 
would be had just from a simple function numpy.is_mem_aligned( ndarray, 
width=16 ) which returns a bool.
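As a sketch of both ideas -- the alignment-checking helper suggested above, and a user-space way to get aligned buffers by over-allocating -- assuming nothing about numpy's internal allocator (`is_mem_aligned` and `aligned_empty` are hypothetical names):

```python
import numpy as np

def is_mem_aligned(arr, width=16):
    """Report whether the array's data buffer starts on a
    `width`-byte boundary."""
    return arr.ctypes.data % width == 0

def aligned_empty(shape, dtype=float, width=16):
    """Over-allocate raw bytes, then slice off the misaligned prefix
    and reinterpret -- aligned buffers without a custom allocator."""
    dtype = np.dtype(dtype)
    nbytes = int(np.prod(shape)) * dtype.itemsize
    raw = np.empty(nbytes + width, dtype=np.uint8)
    offset = (-raw.ctypes.data) % width
    return raw[offset:offset + nbytes].view(dtype).reshape(shape)

a = aligned_empty((4, 4), width=16)
print(is_mem_aligned(a, 16))  # True
```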

Cheers!
Andrew

David Cournapeau wrote:
> Hi,
> 
>Following an ongoing discussion with S. Johnson, one of the developer 
> of fftw3, I would be interested in what people think about adding 
> infrastructure in numpy related to SIMD alignement (that is 16 bytes 
> alignement for SSE/ALTIVEC, I don't know anything about other archs). 
> The problem is that right now, it is difficult to get information for 
> alignement in numpy (by alignement here, I mean something different than 
> what is normally meant in numpy context; whereas, in my understanding, 
> NPY_ALIGNED refers to a pointer which is aligned wrt its type, here, I 
> am talking about arbitrary alignement).
>   For example, for fftw3, we need to know whether a given data buffer is 
> 16 bytes aligned to get optimal performances; generally, SSE needs 16 
> byte alignement for optimal performances, as well as altivec. I think it 
> would be nice to get some infrastructure to help developers to get those 
> kind of information, and maybe to be able to request 16 aligned buffers.
>Here is what I can think of:
>   - adding an API to know whether a given PyArrayObject has its data 
> buffer 16 bytes aligned, and requesting a 16 bytes aligned 
> PyArrayObject. Something like NPY_ALIGNED, basically.
>   - forcing data allocation to be 16 bytes aligned in numpy (eg 
> define PyDataMem_Mem to a 16 bytes aligned allocator instead of malloc). 
> This would mean that many arrays would be "naturally" 16 bytes aligned 
> without effort.
> 
> Point 2 is really easy to implement I think: actually, on some platforms 
> (Mac OS X and FreeBSD), malloc returns 16 byte aligned buffers 
> anyway, so I don't think the wasted space is a real problem. Linux with 
> glibc is 8 bytes aligned, I don't know about windows. Implementing our 
> own 16 bytes aligned memory allocator for cross platform compatibility 
> should be relatively easy. I don't see any drawback, but I guess other 
> people will.
> 
> Point 1 is more tricky, as this requires much more changes in the code.
> 
> Do main developers of numpy have an opinion on this ?
> 
>cheers,
> 
>David
> 


Re: [Numpy-discussion] Pickle, pytables, and sqlite - loading and saving recarray's

2007-07-20 Thread Andrew Straw
Gael Varoquaux wrote:
> On Thu, Jul 19, 2007 at 09:42:42PM -0500, Vincent Nijs wrote:
>>I'd luv to hear from people using sqlite, pytables, and cPickle about
>>their experiences.
> 
> I was about to point you to this discussion:
> http://projects.scipy.org/pipermail/scipy-user/2007-April/011724.html
> 
> but I see that you participated in it.
> 
> I store data from each of my experimental run with pytables. What I like
> about it is the hierarchical organization of the data which allows me to
> save a complete description of the experiment, with strings, and
> extensible data structures. Another thing I like is that I can load this
> in Matlab (I can provide enhanced script for hdf5, if somebody wants
> them), and I think it is possible to read hdf5 in Origin. I don't use
> these software, but some colleagues do.

I want that Matlab script! I have colleagues with whom the least common 
denominator is currently .mat files. I'd be much happier if it was hdf5 
files. Can you post it on the scipy wiki cookbook? (Or the pytables wiki?)

Cheers!
Andrew


Re: [Numpy-discussion] Unhandled floating point exception running test in numpy-1.0.3 and svn 3875

2007-06-28 Thread Andrew Straw
John,

There was a bug that made it into Debian sarge whereby a SIGFPE wasn't
trapped in the appropriate place and ended up causing problems similar
to what you describe. The difficulty in debugging is that you're after
whatever triggers the FPE in the first place (or the bug that lets it go
untrapped), but you only get notified when the kernel kills your program
for not trapping it. I wrote a little page about the Debian sarge issue:
http://code.astraw.com/debian_sarge_libc.html

John Ollinger wrote:
> rex  nosyntax.com> writes:
> 
> 
>> There doesn't appear to be a problem with recent versions of the
>> software. In particular, ATLAS 3.7.33 does not cause an error.
>>
>> Is there some reason for you to use such old software? (gcc 3.3.1 &
>> kernel 2.4.21)? What platform are you building for?
>>
>> -rex
>>
> 
> 
> I think I have resolved this problem.  First off, I am building
> on such an old system because the applications I write are 
> used at a number of labs across the campus and most of these
> labs are in nontechnical departments with minimal linux support.
> As a result, they usually run the version that was current 
> when the machine was purchased.  If I want people to use my 
> code, I have to supply them with a tar file that contains 
> everything they need, including the dependencies for numpy, 
> scipy, wxPython, vtk and itk.  I try to make a general
> build on the oldest version I have access to in case I miss 
> something.  The motherboard on my desktop died last week, 
> so I was forced to use the older system for a couple of weeks, 
> which is what prompted me to update numpy from numeric.
> 
> The floating exception is definitely not caused by the numpy 
> or scipy builds. The same builds run correctly on one of our 
> newer systems (2.6.9). I rebuilt everything on my desktop 
> (including gcc. The new box on my desk is now running
> 2.6.28 with gcc 4.1, so I had to build gcc 3.6 anyway).  
> The new build has the Floating point exception, but in a 
> different, later test (three tests after the matvec test).  
> Then I rebuilt a new version of gcc (3.6 rather than 3.3 and
> built numpy again. The floating point exception still 
> occurred but this time at the cdouble test, the third from last. 
> The fact that the build runs fine with non-optimized LAPACK 
> libraries made me wonder about the threading support.  I 
> found an article at
> http://www.ibm.com/developerworks/eserver/library/es-033104.html 
> which said that SUSE 2.4.21 used a backported version of the 
> new threading package in version 2.6.  The exceptions always 
> occur on complex operations, so it isn't a stretch
> to assume that threads are in play when they occur. 
> 
> John
> 
> p.s. My desktop is now running selinux, which is denying access 
> to the numpy shared libraries.  The error message is "cannot 
> restore segment prot after reloc".  The numpy setup script 
> should probably set the context for the libraries.  I am going to 
> post this on a separate thread since other people will
> probably be encountering it. For those googling this, the command
> is "chcon -t texrel_shlib_t "
> 


Re: [Numpy-discussion] annoying numpy string to float conversion behaviour

2007-06-26 Thread Andrew Straw
Torgil Svensson wrote:

> This seems to indicate that float('nan') works on some platforms but
> str(nan) isn't. Is this true on Linux? Could anyone confirm this? What
> about float('inf') and repr(inf) on Linux?

On Ubuntu Feisty (amd64) Linux (but this behavior has been the same for 
at least the 6 years I can remember.):

$ python
Python 2.5.1 (r251:54863, May  2 2007, 16:27:44)
[GCC 4.1.2 (Ubuntu 4.1.2-0ubuntu4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
 >>> float('nan')
nan
 >>> float('inf')
inf
 >>> import numpy
 >>> repr(numpy.inf)
'inf'
 >>> repr(numpy.nan)
'nan'


Re: [Numpy-discussion] annoying numpy string to float conversion behaviour

2007-06-25 Thread Andrew Straw
Torgil Svensson wrote:

> OS-specific routines (probably the c-library, haven't looked).  I
> think python should be consistent regarding this across platforms but
> I don't know if different c-libraries generates different strings for
> special numbers. Anyone? 

Windows and Linux certainly generate different strings for special
numbers from current Python, and I guess the origin is the libc on those
platforms. But, as Python is moving away from the libc for file IO in
Python 3K, perhaps string representation of floats would be considered,
too. (In fact for all I know, perhaps it has already been considered.)
Maybe you should email the python-3k-dev list?

-Andrew


Re: [Numpy-discussion] average of array containing NaN

2007-06-25 Thread Andrew Straw
Giorgio F. Gilestro wrote:
> I find myself in a situation where an array may contain not-Numbers
> that I set as NaN.
> Yet, whatever operation I do on that array (average, sum...) will
> treat the NaNs as infinite values rather than ignoring them as I'd
> like it to do.
> 
> Am I missing something? Is this a bug or a feature? :-)

You may be interested in masked arrays.
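A minimal sketch of the masked-array approach (using `np.ma.masked_invalid`, assuming a numpy recent enough to provide it):

```python
import numpy as np

a = np.array([1.0, np.nan, 3.0])

# A masked array hides the invalid entries, so reductions ignore them
# instead of propagating NaN.
m = np.ma.masked_invalid(a)
print(m.mean())  # 2.0
print(m.sum())   # 4.0
```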


[Numpy-discussion] VMWare Virtual Appliance of Ubuntu with numpy, scipy, matplotlib, and ipython available

2007-06-09 Thread Andrew Straw
This is a note to announce the availability of a VMWare Virtual 
Appliance with Ubuntu linux with numpy, scipy, matplotlib, and ipython 
installed.

This should make it relatively easy to try out the software. The VMWare 
Player and VMWare Server are available for no cost from 
http://www.vmware.com/products/player/ and 
http://www.vmware.com/products/server/

The download URL is:
http://mosca.caltech.edu/outgoing/Ubuntu%207.04%20for%20scientific%20computing%20in%20Python.zip

The username is "ubuntu" and the password is "abc123". The network will 
share the host's interface using NAT. The md5sum is 
4191e13abda1154c94e685ffdc0f829b.

I have updated http://scipy.org/Download with this information.



Re: [Numpy-discussion] Vista installer?

2007-05-29 Thread Andrew Straw
Andrew Straw wrote:
 > OK, I have placed an Ubuntu 7.04 image with stock numpy, scipy, 
matplotlib, and ipython at 
http://mosca.caltech.edu/outgoing/Ubuntu%207.04%20for%20scientific%20computing%20in%20Python.zip
 >
 > The md5sum is 4191e13abda1154c94e685ffdc0f829b.
 >
 > Note: I haven't tested this at all on any computer other than the one 
which I created the virtual appliance on. I'm posting now because it 
will take me a while to download the file (it's 1 GB) in order to test.
 >
After downloading the 1GB, it does seems to work on my laptop. (On a Mac 
running VMware Fusion, even! Sweet! I thought this job would require 
booting my desktop.)

The username is "ubuntu" and the password is "abc123". I set the network 
up to share the host's interface with NAT.

Hmm, as David suggests, this might be a pretty good way to make an 
easy-to-try scipy environment, particularly for Windows users. I'm happy 
to continue to (allow Caltech to) host this, but also happy if we move 
it, or an improved version, somewhere else. For example, I could imagine 
wanting auto login turned on and some kind of icon on the desktop that 
says "click here for interactive python prompt".

-Andrew

 > -Andrew
 >
 > Andrew Straw wrote:
 >
 >> Hi Ryan,
 >>
 >> I use VMware server on my linux box to host several more linux 
images. I will see if I can whip you up a Ubuntu Feisty i386 image with 
the "big 4" - numpy/scipy/matplotlib/ipython. If I understand their docs 
correctly, I have "virtual appliances" for previously existing images 
already... If it's that easy, this should take just a few minutes. I'll 
see what I can do and post the results.
 >>
 >> -Andrew
 >>
 >> Ryan Krauss wrote:
 >>
 >>> I need to plot things using matplotlib, so I don't think it works for
 >>> me without X.
 >>>
 >>> Thanks though.
 >>>
 >>> Ryan
 >>>
 >>> On 5/28/07, David Cournapeau <[EMAIL PROTECTED]> wrote:
 >>>
 >>>> Ryan Krauss wrote:
 >>>>
 >>>>> I have this more or less working, the only problem is that my guest
 >>>>> Ubuntu OS doesn't have Scipy/Numpy/IPython and I can't get basic
 >>>>> networking.  So, I can't install anything.  Does anyone have a VMWare
 >>>>> virtual appliance with Scipy/Numpy/IPython/Matplotlib already
 >>>>> installed?
 >>>>>
 >>>>> I posted a question to the VMWare forum about networking, but will
 >>>>> welcome any help from here.
 >>>>>
 >>>>>
 >>>> I use vmware to test various packages: it is a barebone version (only
 >>>> command line), dunno if this is of any interest for you ?
 >>>>
 >>>> David


Re: [Numpy-discussion] Vista installer?

2007-05-28 Thread Andrew Straw
OK, I have placed an Ubuntu 7.04 image with stock numpy, scipy, 
matplotlib, and ipython at 
http://mosca.caltech.edu/outgoing/Ubuntu%207.04%20for%20scientific%20computing%20in%20Python.zip

The md5sum is 4191e13abda1154c94e685ffdc0f829b.

Note: I haven't tested this at all on any computer other than the one 
which I created the virtual appliance on. I'm posting now because it 
will take me a while to download the file (it's 1 GB) in order to test.

-Andrew

Andrew Straw wrote:
> Hi Ryan,
> 
> I use VMware server on my linux box to host several more linux images. I 
> will see if I can whip you up a Ubuntu Feisty i386 image with the "big 
> 4" - numpy/scipy/matplotlib/ipython. If I understand their docs 
> correctly, I have "virtual appliances" for previously existing images 
> already... If it's that easy, this should take just a few minutes. I'll 
> see what I can do and post the results.
> 
> -Andrew
> 
> Ryan Krauss wrote:
>> I need to plot things using matplotlib, so I don't think it works for
>> me without X.
>>
>> Thanks though.
>>
>> Ryan
>>
>> On 5/28/07, David Cournapeau <[EMAIL PROTECTED]> wrote:
>>   
>>> Ryan Krauss wrote:
>>> 
>>>> I have this more or less working, the only problem is that my guest
>>>> Ubuntu OS doesn't have Scipy/Numpy/IPython and I can't get basic
>>>> networking.  So, I can't install anything.  Does anyone have a VMWare
>>>> virtual appliance with Scipy/Numpy/IPython/Matplotlib already
>>>> installed?
>>>>
>>>> I posted a question to the VMWare forum about networking, but will
>>>> welcome any help from here.
>>>>
>>>>   
>>> I use vmware to test various packages: it is a barebone version (only
>>> command line), dunno if this is of any interest for you ?
>>>
>>> David


Re: [Numpy-discussion] Vista installer?

2007-05-28 Thread Andrew Straw
Hi Ryan,

I use VMware server on my linux box to host several more linux images. I 
will see if I can whip you up a Ubuntu Feisty i386 image with the "big 
4" - numpy/scipy/matplotlib/ipython. If I understand their docs 
correctly, I have "virtual appliances" for previously existing images 
already... If it's that easy, this should take just a few minutes. I'll 
see what I can do and post the results.

-Andrew

Ryan Krauss wrote:
> I need to plot things using matplotlib, so I don't think it works for
> me without X.
>
> Thanks though.
>
> Ryan
>
> On 5/28/07, David Cournapeau <[EMAIL PROTECTED]> wrote:
>   
>> Ryan Krauss wrote:
>> 
>>> I have this more or less working, the only problem is that my guest
>>> Ubuntu OS doesn't have Scipy/Numpy/IPython and I can't get basic
>>> networking.  So, I can't install anything.  Does anyone have a VMWare
>>> virtual appliance with Scipy/Numpy/IPython/Matplotlib already
>>> installed?
>>>
>>> I posted a question to the VMWare forum about networking, but will
>>> welcome any help from here.
>>>
>>>   
>> I use vmware to test various packages: it is a barebone version (only
>> command line), dunno if this is of any interest for you ?
>>
>> David


Re: [Numpy-discussion] Vista installer?

2007-05-27 Thread Andrew Straw
Charles R Harris wrote:
>
>
> On 5/24/07, *Ryan Krauss* <[EMAIL PROTECTED] 
> > wrote:
>
> I am trying to use Numpy/Scipy for a class I am teaching this summer.
> I have one student running Vista.  Is there an installer that works
> for Vista?  Running the exe file from webpage gives errors about not
> being able to create various folders and files.  I think this is from
> Vista being very restrictive about which files and folders are
> writable.  Is anyone out there running Numpy/Scipy in Vista?  If so,
> how did you get it to work?
>
>
> Install Ubuntu? ;) I've heard nothing but complaints and nasty words 
> from co-workers stuck with new computers and trying to use Vista as a 
> development platform for scientific work.
Just a follow-up: VMware is now given away free of charge and Ubuntu has 
always been free, and I guess most computers with Vista have native 
support for virtualization. So this way you could run Ubuntu and Vista 
simultaneously without a speed loss on either, and without partitioning 
the drive. Of course, if it were me I'd make Ubuntu the host OS and 
probably never boot the guest OS. ;)


Re: [Numpy-discussion] numpy array sharing between processes?

2007-05-12 Thread Andrew Straw
Charles R Harris wrote:
>
> I'll pitch in a few donuts (and my eternal gratitude) for an
> example of
> shared memory use using numpy arrays that is cross platform, or at
> least
> works in linux, mac, and windows.
>
>
> I wonder if you could mmap a file and use it as common memory?
Yes, that's the basic idea. Now for the example that works on those
platforms...
> Forking in python under linux leads to copies because anything that
> accesses an object changes its reference count.
I'm not sure what you're trying to say here. If it's shared memory, it's
not copied -- that's the whole point. I don't really care how I spawn
the multiple processes, and indeed forking is one way.
> Pipes are easy and could be used for synchronization.
True. But they're not going to be very fast. (I'd like to send streams
of realtime images between different processes.)
> Would python threading work for you?
That's what I use now and what I'd like to get away from because 1) the
GIL sucks and 2) (bug-free) threading is hard.

-Andrew


Re: [Numpy-discussion] numpy array sharing between processes?

2007-05-12 Thread Andrew Straw
Ray Schumacher wrote:
>
> After Googling for examples on this, in the Cookbook
> http://www.scipy.org/Cookbook/Multithreading
> MPI and POSH (dead?), I don't think I know the answer...
> We have a data collection app running on dual core processors; I start 
> one thread collecting/writing new data directly into a numpy circular 
> buffer, another thread does correlation on the newest data and 
> occasional FFTs, both now use 50% CPU, total.
> The threads never need to access the same buffer slices.
> I'd prefer to have two processes, forking the FFT process off and 
> utilizing the second core. The processes would only need to share two 
> variables (buffer insert position and a short_integer result from the 
> FFT process, each process would only read or write), in addition to 
> the numpy array itself.
>
> Should I pass the numpy address to the second process and just create 
> an identical array there, as in
> http://projects.scipy.org/pipermail/numpy-discussion/2006-October/023647.html 
> ?
>
> Use a file-like object to share the other variables? mmap? 
> http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/413807
>
> I also thought ctypes
> ctypes.string_at(address[, size])
> might do both easily enough, although would mean a copy. We already 
> use it for the collection thread.
> Does anyone have a lightweight solution to this relatively simple sort 
> of problem?
I'll pitch in a few donuts (and my eternal gratitude) for an example of 
shared memory use using numpy arrays that is cross platform, or at least 
works in linux, mac, and windows.

-Andrew


Re: [Numpy-discussion] bug in http://www.scipy.org/NumPy_for_Matlab_Users

2007-04-29 Thread Andrew Straw
No, the nth index of a Python sequence is a[n], where n starts from 
zero. Thus, if I want the nth dimension of array a, I want a.shape[n].

I reverted the page to its original form and added a couple explanatory 
comments about zero vs one based indexing.

dmitrey wrote:
> now there is
> MATLAB       NDArray        Matrix
> size(a,n)    a.shape[n]     a.shape[n]
> 
> but it should be
> size(a,n)    a.shape[n-1]   a.shape[n-1]
> 
> WBR, D.


[Numpy-discussion] Python issue of Computing in Science and Engineering available

2007-04-25 Thread Andrew Straw
The May/June issue of Computing in Science and Engineering 
(http://computer.org/cise) is out and has a Python theme. Many folks we 
know and love from the community and mailing lists contributed to the 
issue. Read articles by Paul Dubois and Travis Oliphant for free online.


Re: [Numpy-discussion] building numpy with atlas on ubuntu edgy

2007-04-18 Thread Andrew Straw
rex wrote:
> Keith Goodman <[EMAIL PROTECTED]> [2007-04-18 10:49]:
>   
>> I'd like to compile atlas so that I can take full advantage of my core
>> 2 duo. 
>> 
>
> If your use is entirely non-commercial you can use Intel's MKL with
> built-in optimized BLAS and LAPACK and avoid the need for ATLAS.
>   
Just to clarify, my understanding is that if you buy a developer's
license, you can also use it for commercial use, including distributing
binaries. (Otherwise it would seem kind of silly for Intel to invest so
much in their performance libraries and compilers.)


Re: [Numpy-discussion] building numpy with atlas on ubuntu edgy

2007-04-17 Thread Andrew Straw
Christian K wrote:
> David Cournapeau wrote:
>   
>> On Ubuntu and debian, you do NOT need any site.cfg to compile numpy with 
>> atlas support. Just install the package atlas3-base-dev, and you are 
>> done. The reason is that when *compiling* a software which needs atlas, 
>> the linker will try to find libblas.so in /usr/lib, not in 
>> /usr/lib/sse2. If you install atlas3-base-dev, the package will install 
>> those at the correct locations. I have updated the instructions for 
>> Ubuntu (also works for debian) on the wiki a few days ago:
>> 
>
> Indeed, installing atlas3-base-dev helps. I only had atlas3-base, atlas3-sse 
> and
> atlas3-sse2-dev installed. Sorry for the noise.
>   
Hmm, atlas3-sse2-dev doesn't depend on atlas3-base-dev? That sounds like 
a bug...

Off to investigate and possibly file a bug report with Debian or Ubuntu,
Andrew


Re: [Numpy-discussion] Big list of Numpy & Scipy users

2007-04-05 Thread Andrew Straw
Bill Baxter wrote:
> On 4/4/07, Robert Kern <[EMAIL PROTECTED]> wrote:
>   
>> Bill Baxter wrote:
>> 
>>> Is there any place on the Wiki that lists all the known software that
>>> uses Numpy in some way?
>>>
>>>   
 It would be nice to start collecting such a list if there isn't one
 
>>> already.  Screenshots would be nice too.
>>>   
>> There is no such list that I know of, but you may start one on the wiki if 
>> you like.
>> 
>
> Ok, I made a start:  http://www.scipy.org/Scipy_Projects
>   
Great idea. I renamed the page to http://www.scipy.org/Projects so
Numpy-only users wouldn't feel excluded.


Re: [Numpy-discussion] Nonblocking Plots with Matplotlib

2007-03-15 Thread Andrew Straw
Bill Baxter wrote:
> On 3/15/07, Bill Baxter <[EMAIL PROTECTED]> wrote:
>   
>> Thanks, Sebastian.  I'll take a look at Pyro.  Hadn't heard of it.
>> I'm using just xmlrpclib with pickle right now.
>> 
>
> I took a look at Pyro -- it looks nice.
> The only thing I couldn't find, though, is how to decouple the wx GUI on
> the server side from the Pyro remote call handler.  Both wx and Pyro
> want to run a main loop.
>
> With the XML-RPC, I could use twisted and its wxreactor class.  That
> does all the necessary magic under the hood to run both loops.
> Basically all you have to do to make it  work is:
>
>class MainApp(wx.App, twisted.web.xmlrpc.XMLRPC):
> ...
>
> twisted.internet.wxreactor.install()
> app = MainApp()
> twisted.internet.reactor.registerWxApp(app)
> twisted.internet.reactor.run()
>
> And then you're good to go.  reactor.run() takes care of both main
> loops somehow.
>
> Do you know of any good examples showing how to do that sort of thing
> with Pyro?  It must be possible.  I mean it's the exact same sort of
> thing you'd need if you're writing a simple GUI internet chat program.
>  My googling has turned up nothing, though.

It is possible to do this with Pyro.

I think Pyro can "auto-background" (my terminology, not theirs) if you 
allow it to do threading. Otherwise, you could handle requests by 1) 
putting Pyro in a thread you manage or 2) having a GUI timer call Pyro's 
daemon.handleRequests(...) fairly often.

But I hear nothing but good things about Twisted, and your code works, 
so I say find some bigger fish to fry (especially since I can see 
potential issues with each of the above options...).


Re: [Numpy-discussion] [Matplotlib-users] Nonblocking Plots with Matplotlib

2007-03-15 Thread Andrew Straw
Bill, very cool. Also, thanks for showing me how Twisted can be used 
like Pyro, more-or-less, I think. (If I understand your code from my 1 
minute perusal.)

On Mac OS X, there's one issue I don't have time to follow any further: 
sys.executable points to  
/Library/Frameworks/Python.framework/Versions/2.4/Resources/Python.app/Contents/MacOS/Python
whereas /Library/Frameworks/Python.framework/Versions/Current/bin/python 
is the file actually on my path. For some reason, when I run the latter, 
ezplot is found; with the former, it is not. Thus, your auto-spawning of 
a plotserver instance fails on my installation.

Other than that, the example you gave works as advertised and looks 
great. (Ohh, those anti-aliased lines look better and better the more I 
suffer through my colleagues' aliased plots...)

Bill Baxter wrote:
> Howdy Folks,
>
> I was missing the good ole days of using Matlab back at the Uni when I
> could debug my code, stop at breakpoints and plot various data without
> fear of blocking the interpreter process.
>
> Using "ipython -pylab" is what has been suggested to me in the past,
> but the problem is I don't do my debugging from ipython.  I have a
> very nice IDE that works very well, and it has a lovely interactive
> debugging prompt that I can use to probe my code when stopped at a
> breakpoint.  It's great except I can't really use matplotlib for
> debugging there because it causes things to freeze up.
>
> So I've come up with a decent (though not perfect) solution for
> quickie interactive plots which is to run matplotlib in a separate
> process.  I call the result 'ezplot'.  The first alpha version of
> this is now available at the Cheeseshop.  (I made an egg too, so if
> you have setuptools you can do "easy_install ezplot".)
>
> The basic usage is like so:
>
>  In [1]: import ezplot
>  In [2]: p = ezplot.Plotter()
>  In [3]: p.plot([1,2,3],[1,4,9],marker='o')
>  Connecting to server... waiting...
>  connected to plotserver 0.1.0a1 on http://localhost:8397
>  Out[3]: True
>  In [4]: from numpy import *
>  In [5]: x = linspace(-5,5,20)
>  In [13]: p.clf()
>  Out[13]: True
>  In [14]: p.plot(x, x*x*log(x*x+0.01))
>
> (Imagine lovely plots popping up on your screen as these commands are typed.)
>
> The only return values you get back are True (success...probably) or
> False (failure...for sure).  So no fancy plot object manipulation is
> possible.  But you can do basic plots no problem.
>
> The nice part is that this (unlike ipython's built-in -pylab threading
> mojo) should work just as well from wherever you're using python.
> Whether it's ipython (no -pylab) or Idle, or a plain MS-DOS console,
> or WingIDE's debug probe, or SPE, or a PyCrust shell or whatever.  It
> doesn't matter because all the client is doing is packing up data and
> shipping over a socket.  All the GUI plotting mojo happens in a
> completely separate process.
>
> There are plenty of ways this could be made better, but for me, for
> now, this probably does pretty much all I need, so it's back to Real
> Work.  But if anyone is interested in making improvements to this, let
> me know.
>
> Here's a short list of things that could be improved:
> * Right now I assume use of the wxAGG backend for matplotlib.  Don't
> know how much work it would be to support other back ends (or how to
> go about it, really).   wxAGG is what I always use.
> * Returning more error/exception info from the server would be nice
> * Returning full fledged proxy plot objects would be nice too, but I
> suspect that's a huge effort
> * SOAP may be better for this than xmlrpclib but I just couldn't get
> it to work (SOAPpy + Twisted).
> * A little more safety would be nice.  Anyone know how to make a
> Twisted xmlrpc server not accept connections from anywhere except
> localhost?
> * There's a little glitch in that the spawned plot server dies with
> the parent that created it.  Maybe there's a flag to subprocess.Popen
> to fix that?
> * Sometimes when you click on "Exit Server", if there are plot windows
> open it hangs while shutting down.
>
>
> Only tested on Win32 but there's nothing much platform specific in there.
>
> Give it a try and let me know what you think!
>
> --bb
>



Re: [Numpy-discussion] [SciPy-user] NumPy in Teaching

2007-03-01 Thread Andrew Straw
Perry Greenfield wrote:
> On Feb 28, 2007, at 7:32 PM, Joe Harrington wrote:
>
>   
>> Hi Steve,
>>
>> I have taught Astronomical Data Analysis twice at Cornell using IDL,
>> and I will be teaching it next Fall at UCF using NumPy.  Though I've
>> been active here in the recent past, I'm actually not a regular NumPy
>> user myself yet (I used Numeric experimentally for about 6 months in
>> 1997), so I'm a bit nervous.  There isn't the kind of documentation
>> and how-to support for Numpy that there is for IDL, though our web
>> site is a start in that direction.  One thought I've had in making the
>> transition easier is to put up a syntax and function concordance,
>> similar to that available for MATLAB.  I thought this existed.  Maybe
>> Perry can point me to it.  Just adding a column to the MATLAB one
>> would be fine.
>> 
>
> I made one for IDL, but I don't recall one for matlab. If anyone has  
> done one, I sure would like to incorporate it into the tutorial I'm  
> revising if possible.
>
>   
I believe the reference is to this: http://scipy.org/NumPy_for_Matlab_Users


Re: [Numpy-discussion] Fortran order arrays to and from numpy arrays

2007-02-24 Thread Andrew Straw
Alexander Schmolck wrote:
> 2. Despite this overhead, copying around large arrays (e.g. >=1e5 elements) in
>    the above way causes notable additional overhead. Whilst I don't think there's
>    a sane way to avoid copying by sharing data between numpy and matlab, the
>    copying could likely be done better.
>   
Alex, what do you think about "hybrid arrays"?

http://www.mail-archive.com/numpy-discussion@lists.sourceforge.net/msg03748.html


Re: [Numpy-discussion] Greek Letters

2007-02-20 Thread Andrew Straw
Robert Kern wrote:
> On Windows, you may be out of luck. I don't know of any
> fully-Unicode-capable terminal.
The lack of a decent console application is one of the most problematic 
issues I face whenever attempting to do serious programming in Windows. 
I wish I knew of a better terminal program. Here's one that seems like 
it might work, but I haven't tried it yet: 
http://software.jessies.org/terminator




Re: [Numpy-discussion] Profiling numpy ? (parts written in C)

2006-12-20 Thread Andrew Straw
I added a ticket for Francesc's enhancement:
http://projects.scipy.org/scipy/numpy/ticket/403