Re: [Numpy-discussion] Error building docs
Fri, 25 Sep 2009 12:29:30 -0400, Michael Droettboom wrote:
> Anybody know why I might be seeing this?
[clip]
> Exception occurred: [ 0%] reference/arrays.classes
>   File "/home/mdroe/usr/lib/python2.5/site-packages/docutils/nodes.py", line 471, in __getitem__
>     return self.attributes[key]
> KeyError: 'numbered'

No ideas. I think I've seen KeyErrors before, but not from inside docutils. My only guess would be to remove the build directory and try again. If that does not help, I'd look into whether there's some known issue with the current version of Sphinx's hg branch.

-- Pauli Virtanen

___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
[Numpy-discussion] numpy.numarray.transpose()
Hello list,

I just thought I'd point out a difference between 'import numarray' and 'import numpy.numarray'. Consider the following:

In [1]: from numpy.numarray import *
In [2]: d = array((1,2,3,4))
In [3]: f = reshape(d,(2,2))
In [4]: print f
[[1 2]
 [3 4]]
In [5]: f.transpose()
Out[5]:
array([[1, 3],
       [2, 4]])
In [6]: print f
[[1 2]
 [3 4]]

Now in pure numarray, f would have changed to the transposed form, so that the output from [5] and [6] would match, and be different from that of [4]. (I don't have numarray installed myself, but it is on a workmate's computer, and examples on the web have the usage f.transpose().) Now,

In [7]: f = f.transpose()
In [8]: print f
[[1 3]
 [2 4]]

as expected. I mention this because I think that it is worth knowing, having lost a LOT of time to it. Is it worth filing as a bug report?

Michael Walker
Plant Modelling Group
CIRAD, Montpellier
04 67 61 57 27
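For readers hitting the same surprise: in numpy, ndarray.transpose() returns a transposed *view* and leaves the original binding untouched, so the result must be assigned back. A minimal sketch in plain numpy (numpy.numarray was only a compatibility layer, so this uses the core API instead):

```python
import numpy as np

f = np.array([[1, 2], [3, 4]])

t = f.transpose()   # a transposed view; f itself is unchanged
assert (f == [[1, 2], [3, 4]]).all()
assert (t == [[1, 3], [2, 4]]).all()

f = f.transpose()   # rebinding is needed to get numarray-like behavior
assert (f == [[1, 3], [2, 4]]).all()
```

Note that the view shares memory with the original, so writing into `t` above would also change `f`.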
[Numpy-discussion] new array type in cython?
Has anyone attempted a new array type in cython? Any hints?
[Numpy-discussion] glumpy: fast OpenGL numpy visualization + matplotlib integration
Hi all,

glumpy is a fast OpenGL visualization tool for numpy arrays coded on top of pyglet (http://www.pyglet.org/). The package contains many demos showing basic usage as well as integration with matplotlib. As a reference, the animation script available from the matplotlib distribution runs at around 500 fps using glumpy instead of 30 fps on my machine. Package/screenshots/explanations at: http://www.loria.fr/~rougier/coding/glumpy.html (it does not require installation so you can run demos from within the glumpy directory).

Nicolas
Re: [Numpy-discussion] numpy.numarray.transpose()
On 09/28/2009 03:15 AM, Pauli Virtanen wrote:
> Mon, 28 Sep 2009 10:07:47 +0200, Michael.Walker wrote:
> [clip]
>> In [7]: f = f.transpose()
>> In [8]: print f
>> [[1 3]
>>  [2 4]]
>> as expected. I mention this because I think that it is worth knowing, having lost a LOT of time to it. Is it worth filing as a bug report?
> Yes. It indeed seems that in numarray, transpose() transposes the array in-place. This could maybe be fixed by a new numarray-emulating ndarray subclass. The tricky problem then is that some functions don't, IIRC, preserve subclasses, which may lead to surprises. (Anyway, these should be fixed at some point...) At the least, we should write a well-visible "differences from numarray" document that explains all differences and known bugs.

This is not a bug! This specific difference between numpy and numarray is documented on the 'Converting from numarray' page: http://www.scipy.org/Converting_from_numarray

What actually is incorrect is that numpy.numarray.transpose has the same docstring as numpy.transpose. So it would be very helpful to first correct the numpy.numarray.transpose documentation.

A larger goal would be to correctly document all the numpy.numarray and numpy.numeric functions, as these should not be linked to the similar numpy functions. If these are identical, the documentation should state that; otherwise it should state what differences exist and then refer to the equivalent numpy page -- for example, numpy.numarray.matrixmultiply and numpy.dot. Also, the documentation for these numpy.numarray and numpy.numeric functions should state that they are mainly included for compatibility reasons and may be removed at a future date.

Bruce
Re: [Numpy-discussion] numpy.numarray.transpose()
Mon, 28 Sep 2009 09:29:30 -0500, Bruce Southey wrote:
[clip]
> This is not a bug! This specific difference between numpy and numarray is documented on the 'Converting from numarray' page: http://www.scipy.org/Converting_from_numarray

Oh. I completely missed that page. Now, it should just be transferred to the main documentation. Also, it might be possible to make numpy.numarray.ndarray different from numpy.ndarray. But I doubt this is high priority -- it may be more efficient just to document the fact.

> What actually is incorrect is that numpy.numarray.transpose has the same docstring as numpy.transpose. So it would be very helpful to first correct the numpy.numarray.transpose documentation.

numpy.numarray.transpose is numpy.transpose, so fixing this would involve implementing the numarray-style transpose, too.

-- Pauli Virtanen
Re: [Numpy-discussion] new array type in cython?
On Mon, Sep 28, 2009 at 06:23, Neal Becker ndbeck...@gmail.com wrote:
> Has anyone attempted a new array type in cython? Any hints?

Are you having problems?

-- Robert Kern

I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth. -- Umberto Eco
Re: [Numpy-discussion] new array type in cython?
Robert Kern wrote:
> On Mon, Sep 28, 2009 at 06:23, Neal Becker ndbeck...@gmail.com wrote:
>> Has anyone attempted a new array type in cython? Any hints?
> Are you having problems?

No, haven't tried using cython for this yet. Wondering if there are any examples. So far my experiences have been with boost::python.
Re: [Numpy-discussion] new array type in cython?
On Mon, Sep 28, 2009 at 10:36, Neal Becker ndbeck...@gmail.com wrote:
> No, haven't tried using cython for this yet. Wondering if there are any examples.

Have you read the documentation? http://wiki.cython.org/tutorials/numpy

-- Robert Kern
Re: [Numpy-discussion] [Matplotlib-users] glumpy: fast OpenGL numpy visualization + matplotlib integration
Well, I've started working on a pyglet backend, but it is currently painfully slow, mainly because I do not know enough of the matplotlib internal machinery to really benefit from it. In the case of glumpy, the use of texture objects for representing 2d arrays is a real speed boost, since interpolation/colormap/heightmap is done on the GPU.

Concerning the matplotlib examples, the use of glumpy should actually be two lines of code:

from pylab import *
from glumpy import imshow, show

but I did not package it this way yet (that is easy, however). I guess the main question is whether people are interested in glumpy as a quick-and-dirty debug tool on top of matplotlib, or whether they prefer a full-fledged and fast pyglet/OpenGL backend (which is really harder).

Nicolas

On 28 Sep, 2009, at 18:05, Gökhan Sever wrote:
> On Mon, Sep 28, 2009 at 9:06 AM, Nicolas Rougier nicolas.roug...@loria.fr wrote:
>> Hi all, glumpy is a fast OpenGL visualization tool for numpy arrays coded on top of pyglet (http://www.pyglet.org/). [clip]
>
> Hi Nicolas,
>
> This is technically called an OpenGL backend, isn't it? It is nice that it integrates with matplotlib; however, 300 lines of code is indeed a lot of lines for an ordinary user. Do you think this could be further integrated into matplotlib with a wrapper to simplify its usage?
>
> -- Gökhan
Re: [Numpy-discussion] unpacking bytes directly in numpy
David Cournapeau wrote:
> However, is there a more direct way of directly transforming bytes into a np.int32 type without the intermediate 'struct.unpack' step?

Assuming you have an array of bytes, you could just use view:

# x is an array of bytes, whose length is a multiple of 4
x.view(np.int32)

and if you don't have an array, you can use one of:

np.fromstring
np.frombuffer
np.fromfile

They all take a dtype as a parameter. For your example:

bytes = f.read(4)
i = struct.unpack('i', bytes)[0]

becomes:

i = np.fromfile(f, dtype=np.int32, count=1)

and, of course, you could read a lot more than one number in at once.

-Chris

-- Christopher Barker, Ph.D. Oceanographer, Emergency Response Division NOAA/NOS/ORR (206) 526-6959 voice, 7600 Sand Point Way NE, Seattle, WA 98115, chris.bar...@noaa.gov
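To make the suggestion concrete, here is a small self-contained sketch. It uses an in-memory byte string in place of a real file, which is an assumption for illustration; the frombuffer/view calls behave the same on file contents:

```python
import struct
import numpy as np

# four little-endian 32-bit ints packed as raw bytes
raw = struct.pack('<4i', 1, 2, 3, 4)

# struct.unpack route: one value at a time
first = struct.unpack('<i', raw[:4])[0]
assert first == 1

# numpy route: reinterpret the whole buffer at once
values = np.frombuffer(raw, dtype='<i4')
assert values.tolist() == [1, 2, 3, 4]

# an existing byte array can likewise be reinterpreted with view()
x = np.frombuffer(raw, dtype=np.uint8)   # length is a multiple of 4
assert x.view('<i4').tolist() == [1, 2, 3, 4]
```

Both frombuffer and view are zero-copy: the int32 result shares memory with the underlying bytes.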
Re: [Numpy-discussion] Question about improving genfromtxt errors
Skipper Seabold wrote:
> FWIW, I have a script that creates and savez arrays from several text files, in total about 1.5 GB of text.
> without the incrementing in genfromtxt: Run time: 122.043943 seconds
> with the incrementing in genfromtxt: Run time: 131.698873 seconds
> If we just want to always keep track of things, I would be willing to take a poorly measured 8% slowdown,

I also think 8% is worth it, but I'm still surprised it's that much. What additional code is inside the inner loop? (or, I guess, the each-line loop...)

-Chris
Re: [Numpy-discussion] Question about improving genfromtxt errors
On Mon, Sep 28, 2009 at 12:41 PM, Christopher Barker chris.bar...@noaa.gov wrote:
> I also think 8% is worth it, but I'm still surprised it's that much. What additional code is inside the inner loop? (or, I guess, the each-line loop...)

This was probably due to the way that I timed it, honestly. I only did it once. The only differences I made for that part were in the first post of the thread: two incremented scalars for line numbers and column numbers, and a try/except block. I'm really not against a debug mode if someone wants to do it and it's deemed necessary. If it could be made to log all of the errors, that would be extremely helpful. I still need to post some of my use cases, though. Anything to help make data cleaning less of a chore...

Skipper
Re: [Numpy-discussion] [Matplotlib-users] glumpy: fast OpenGL numpy visualization + matplotlib integration
This is good. I have been looking forward to seeing something like this for a while. It'd be cool, however, to dump a *real* python function into a vertex shader and let it do real mesh deformations. I know, it would be hard to validate that it wasn't doing some crazy stuff. Of course, with the new (i.e. soon-to-be-introduced) tessellation extensions to OpenGL, the mesh itself could be generated on the GPU.

On Mon, Sep 28, 2009 at 10:07 PM, Nicolas Rougier nicolas.roug...@loria.fr wrote:
> Well, I've started working on a pyglet backend, but it is currently painfully slow, mainly because I do not know enough of the matplotlib internal machinery to really benefit from it. In the case of glumpy, the use of texture objects for representing 2d arrays is a real speed boost, since interpolation/colormap/heightmap is done on the GPU. [clip]

-- Rohit Garg http://rpg-314.blogspot.com/ Senior Undergraduate, Department of Physics, Indian Institute of Technology Bombay
Re: [Numpy-discussion] Question about improving genfromtxt errors
On Sep 28, 2009, at 12:51 PM, Skipper Seabold wrote:
> This was probably due to the way that I timed it, honestly. I only did it once. [clip] If it could be made to log all of the errors, that would be extremely helpful. I still need to post some of my use cases, though. Anything to help make data cleaning less of a chore...

I was thinking about something this week-end: we could create a second list when looping on the rows, where we would store the length of each split row. After the loop, we can find where these values don't match the expected number of columns `nbcols`. Then, we can decide to strip the `rows` list of its invalid values (that corresponds to skipping) or raise an exception, but in both cases we know where the problem is. My only concern is that we'd be creating yet another list of integers, which would increase memory usage. Would it be a problem? In other news, I should eventually be able to tackle that this week...
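The idea above can be sketched in a few lines. This is only an illustration of the bookkeeping, not the actual genfromtxt implementation; the rows and `nbcols` here are made up, and the real parser splits on a configurable delimiter:

```python
# toy stand-in for the rows loop in genfromtxt
rows = ["1, 2, 3", "4, 5", "6, 7, 8", "9"]
nbcols = 3

# second list: length of each split row, built while looping
lengths = [len(line.split(',')) for line in rows]

# after the loop, locate the offending line numbers in one pass
bad = [i for i, n in enumerate(lengths) if n != nbcols]
assert bad == [1, 3]

# either strip the invalid rows (i.e. skip them)...
kept = [row for i, row in enumerate(rows) if i not in bad]
assert kept == ["1, 2, 3", "6, 7, 8"]
# ...or raise an exception, now knowing exactly where the problems are
```

Either way, the error report can name every bad line at once instead of failing on the first mismatch.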
Re: [Numpy-discussion] Question about improving genfromtxt errors
On Mon, Sep 28, 2009 at 1:36 PM, Pierre GM pgmdevl...@gmail.com wrote:
> I was thinking about something this week-end: we could create a second list when looping on the rows, where we would store the length of each split row. [clip] My only concern is that we'd be creating yet another list of integers, which would increase memory usage. Would it be a problem?

I don't think it would be prohibitively large. One of the datasets I was working with was about a million lines with about 500 columns in each. So if this is how you actually do this, then you have:

L = [500] * 1201798
import sys
sys.getsizeof(L)/(10.**6), "MB"
# (9.61445606, 'MB')

I can't think of a case where I would want to just skip bad rows. Also, I'd definitely like to know about each line that had problems in an error log if we're going to go through the whole file anyway. No hurry on this, just getting my thoughts out there after my experience. I will post some test cases tonight, probably.

Skipper
Re: [Numpy-discussion] new array type in cython?
On Mon, Sep 28, 2009 at 10:47 AM, Neal Becker ndbeck...@gmail.com wrote:
> Robert Kern wrote:
>> Have you read the documentation? http://wiki.cython.org/tutorials/numpy
> Yes, I didn't notice anything about adding user-defined datatypes to numpy, did I miss something?

This is for a fixed-point type, no? Why not write a class based around something like int32, override some of the methods, and use object arrays? It's quick and easy, and unless you really, really need speed it should do the trick.

Chuck
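A minimal sketch of that suggestion. The `Fixed` class here is hypothetical (a toy Q-format fixed-point number with 8 fractional bits), but it shows how a plain Python class dropped into an object array gets elementwise arithmetic for free:

```python
import numpy as np

class Fixed(object):
    """Toy fixed-point number with 8 fractional bits (hypothetical)."""
    FRAC = 8

    def __init__(self, value, raw=False):
        # store the scaled integer representation
        self.raw = int(value) if raw else int(round(value * (1 << self.FRAC)))

    def __add__(self, other):
        return Fixed(self.raw + other.raw, raw=True)

    def __mul__(self, other):
        # multiply and rescale, truncating extra fractional bits
        return Fixed((self.raw * other.raw) >> self.FRAC, raw=True)

    def __float__(self):
        return self.raw / float(1 << self.FRAC)

    def __repr__(self):
        return "Fixed(%g)" % float(self)

a = np.array([Fixed(1.5), Fixed(2.25)], dtype=object)
b = a + a   # elementwise; numpy dispatches to Fixed.__add__
assert float(b[0]) == 3.0 and float(b[1]) == 4.5
```

Every operation goes through Python-level dispatch, so this is far slower than a native dtype, which is the trade-off Chuck alludes to.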
[Numpy-discussion] setup.py does not use the correct library
I use the following command to build numpy-1.3.0rc2. But it seems not able to find the appropriate library files. Can somebody let me know how to make it use the correct ones?

#command
export LD_LBRARY_PATH=
export CPPFLAGS=-I$HOME/utility/linux/opt/Python-2.6.2/include/python2.6
export LDFLAGS=-L$HOME/utility/linux/opt/Python-2.6.2/lib -lpython2.6
export PATH=$HOME/utility/linux/opt/Python-2.6.2/bin:/usr/local/bin:/usr/bin:/bin
python setup.py build --fcompiler=gnu95

###error
/home/pengy/download/linux/python/numpy-1.3.0rc2/build/src.linux-x86_64-2.6/numpy/core/include/numpy/__multiarray_api.h:996: undefined reference to `PyErr_Format'
build/temp.linux-x86_64-2.6/numpy/linalg/lapack_litemodule.o: In function `initlapack_lite':
/home/pengy/download/linux/python/numpy-1.3.0rc2/numpy/linalg/lapack_litemodule.c:833: undefined reference to `PyDict_SetItemString'
/home/pengy/download/linux/python/numpy-1.3.0rc2/numpy/linalg/lapack_litemodule.c:830: undefined reference to `PyErr_SetString'
build/temp.linux-x86_64-2.6/numpy/linalg/python_xerbla.o: In function `xerbla_':
/home/pengy/download/linux/python/numpy-1.3.0rc2/numpy/linalg/python_xerbla.c:35: undefined reference to `PyExc_ValueError'
/home/pengy/download/linux/python/numpy-1.3.0rc2/numpy/linalg/python_xerbla.c:35: undefined reference to `PyErr_SetString'
/usr/lib/gcc/x86_64-redhat-linux/4.1.2/libgfortranbegin.a(fmain.o): In function `main':
(.text+0xa): undefined reference to `MAIN__'
collect2: ld returned 1 exit status
Re: [Numpy-discussion] setup.py does not use the correct library
On Mon, Sep 28, 2009 at 14:31, Peng Yu pengyu...@gmail.com wrote:
> I use the following command to build numpy-1.3.0rc2. But it seems not able to find the appropriate library files. Can somebody let me know how to make it use the correct ones?
> export CPPFLAGS=-I$HOME/utility/linux/opt/Python-2.6.2/include/python2.6
> export LDFLAGS=-L$HOME/utility/linux/opt/Python-2.6.2/lib -lpython2.6

When compiling Fortran extensions, $LDFLAGS replaces every linker flag, including things like -shared. Be sure you know what you are doing when using this. But you really shouldn't have to be using $LDFLAGS or $CPPFLAGS like this. Python should know where its libraries and include directories are.

-- Robert Kern
[Numpy-discussion] numpy.reshape() bug?
I am trying to collapse two dimensions of a 3-D array, using reshape:

(Pdb) dims = np.shape(trace)
(Pdb) dims
(1000, 4, 3)
(Pdb) newdims = (dims[0], sum(dims[1:]))
(Pdb) newdims
(1000, 7)

However, reshape seems to think I am missing something:

(Pdb) np.reshape(trace, newdims)
*** ValueError: total size of new array must be unchanged

Clearly the total size of the new array *is* unchanged.
[Numpy-discussion] Deprecate poly1d and replace with Poly1d ?
Because poly1d exports the __array__ interface -- a design error, IMHO -- it plays badly with the prospective chebyshev module. For example, the following should convert from a Chebyshev series to a power series:

chebval([1,0,0], poly1d(1,0))

and it does, if I make sure to pass the poly1d as an object in an array; but to do that I have to check that it is an instance of poly1d and take special measures. I shouldn't have to do that; it violates duck typing. The more basic problem here is making poly1d look like an array, which it isn't. The array bit is an implementation detail and would be private in C++, with an as_array method to retrieve the details if wanted.

Chuck
Re: [Numpy-discussion] numpy.reshape() bug?
On 2009-09-28 15:39, Chris wrote:
> (Pdb) newdims = (dims[0], sum(dims[1:]))
> (Pdb) newdims
> (1000, 7)
> (Pdb) np.reshape(trace, newdims)
> *** ValueError: total size of new array must be unchanged
> Clearly the total size of the new array *is* unchanged.

I think you meant prod(dims[1:]). A 4 x 3 sub-array has 12 elements, not 7. (Whence the curse of dimensionality...)

-Neil
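For the record, a sketch of the corrected version (using a zero array of the same shape in place of the original `trace`):

```python
import numpy as np

trace = np.zeros((1000, 4, 3))
dims = trace.shape

# collapsing the trailing dimensions needs their *product*, not their sum
newdims = (dims[0], int(np.prod(dims[1:])))   # (1000, 12), not (1000, 7)
flat = np.reshape(trace, newdims)
assert flat.shape == (1000, 12)

# equivalently, let reshape infer the collapsed size with -1
assert trace.reshape(dims[0], -1).shape == (1000, 12)
```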
Re: [Numpy-discussion] Deprecate poly1d and replace with Poly1d ?
On Mon, Sep 28, 2009 at 15:46, Charles R Harris charlesr.har...@gmail.com wrote:
> The more basic problem here is making poly1d look like an array, which it isn't. The array bit is an implementation detail and would be private in C++, with an as_array method to retrieve the details if wanted.

I'm pretty sure that it is an intentional public API and not an implementation detail. The __array__() method is not making poly1d look like an array; it is the standard name for such as_array() conversion methods.

-- Robert Kern
[Numpy-discussion] The problem with zero dimesnsional array
Dear All,

I'm facing a big problem in the following. The code snippet is:

### Compute the area indicator ###
for kT in range(leftbound, rightbound):
    # leftbound and rightbound are both indexing values
    cutlevel = sum(s[(kT-ptwin):(kT+ptwin)], 0)/(ptwin*2+1)
    corsig = s[(kT-swin+1):kT] - cutlevel
    areavalue1 = sum((corsig), 0)
    #print areavalue.size
    print leftbound, rightbound
    Tval = areavalue1[leftbound:rightbound]

Everything works fine up to areavalue1, but whenever I try to access Tval = areavalue1[leftbound:rightbound] it says "IndexError: invalid index to scalar variable". When I try to access areavalue1[0] it gives me the entire array, but areavalue1[2:8] gives the same error.

Thanx in advance..

Regards,
ymk
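What is probably happening (a guess from the snippet alone, with made-up data standing in for `s`): when `corsig` is 1-D, `sum(corsig, 0)` reduces it all the way to a zero-dimensional scalar, and scalars cannot be sliced. A small reproduction:

```python
import numpy as np

s = np.arange(10.0)

areavalue1 = np.sum(s[2:6], 0)   # reduces the 1-D slice to a scalar
try:
    areavalue1[2:8]              # slicing a scalar fails
    sliced_ok = True
except (IndexError, TypeError):
    sliced_ok = False
assert not sliced_ok

# keeping a 2-D shape makes the sum return an array that *can* be sliced
areavalue2 = np.sum(np.atleast_2d(s[2:6]), 0)
assert areavalue2[0:2].shape == (2,)
```

So the fix is to make sure `corsig` keeps an extra dimension (or to skip the sum when the result should stay an array).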
Re: [Numpy-discussion] Deprecate poly1d and replace with Poly1d ?
On Mon, Sep 28, 2009 at 2:59 PM, Robert Kern robert.k...@gmail.com wrote:
> I'm pretty sure that it is an intentional public API and not an implementation detail. The __array__() method is not making poly1d look like an array; it is the standard name for such as_array() conversion methods.

Exactly, and that is why it is a design decision error. It *shouldn't* work with as_array unless it is *an array*, which it isn't. Really:

In [19]: sin(poly1d([1,2,3]))
Out[19]: array([ 0.84147098,  0.90929743,  0.14112001])

That makes no sense. On the other hand, it is difficult to make arrays of poly1d, which does make sense because the polynomials form a commutative ring.

Chuck
Re: [Numpy-discussion] setup.py does not use the correct library
On Mon, Sep 28, 2009 at 2:53 PM, Robert Kern robert.k...@gmail.com wrote:
> But you really shouldn't have to be using $LDFLAGS or $CPPFLAGS like this. Python should know where its libraries and include directories are.

My python is compiled with gcc-3.4.4. I used gcc-4.1.2, which generated the error in my previous email. Do I have to use the same compiler to compile numpy?

Regards,
Peng
Re: [Numpy-discussion] setup.py does not use the correct library
On Mon, Sep 28, 2009 at 16:27, Peng Yu pengyu...@gmail.com wrote:
> My python is compiled with gcc-3.4.4. I used gcc-4.1.2, which generated the error in my previous email. Do I have to use the same compiler to compile numpy?

It's a good idea, particularly with the jump from 3.4 to 4.1. However, the source of the error you saw is that you have defined $LDFLAGS.

-- Robert Kern
Re: [Numpy-discussion] setup.py does not use the correct library
On Mon, Sep 28, 2009 at 16:40, Peng Yu pengyu...@gmail.com wrote:
> I attached the script that I run for build and the build output. I think that setup.py doesn't use the correct python library. But I'm not sure why. Would you please help me figure out what the problem is?

Setting $LDFLAGS to be empty is also incorrect. Simply do not set $LDFLAGS or $CPPFLAGS at all. [And please do not Cc: me. I read the list.]

-- Robert Kern
[Numpy-discussion] Convert data into rectangular grid
Hi,

Suppose I have a set of x,y,c data (something useful for
matplotlib.pyplot.plot()). Generally, this data is not rectangular at all.
Does there exist a numpy function (or set of functions) which will take
this data and construct the smallest two-dimensional arrays X, Y, C
(suitable for matplotlib.pyplot.contour())?

Essentially, I want to pass in the data and a grid step size in the x- and
y-directions. The function would average the c-values for all points which
land in any particular square. Optionally, I'd like to be able to specify a
value to use when there are no points in x,y which land in the square.

Hope this makes sense.
Re: [Numpy-discussion] setup.py does not use the correct library
On Mon, Sep 28, 2009 at 19:35, Peng Yu pengyu...@gmail.com wrote:
> On Mon, Sep 28, 2009 at 4:44 PM, Robert Kern robert.k...@gmail.com wrote:
>> On Mon, Sep 28, 2009 at 16:40, Peng Yu pengyu...@gmail.com wrote:
>>> I attached the script that I run for the build and the build output. I
>>> think that setup.py doesn't use the correct python library, but I'm not
>>> sure why. Would you please help me figure out what the problem is?
>>
>> Setting $LDFLAGS to be empty is also incorrect. Simply do not set
>> $LDFLAGS or $CPPFLAGS at all.
>>
>> [And please do not Cc: me. I read the list.]
>
> Even if I don't set $LDFLAGS in my build script, I still have LDFLAGS set
> in my ~/.bash_profile.

Then unset it:

    unset CPPFLAGS
    unset LDFLAGS

--
Robert Kern
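A minimal sketch of the clean-up step above, with a check that the
variables are truly unset and not merely empty (the `${VAR+x}` expansion
distinguishes the two cases):

```shell
# Drop the overrides so Python's distutils falls back to the compile/link
# configuration recorded when Python itself was built.
unset CPPFLAGS
unset LDFLAGS

# Verify both are genuinely unset (not just empty) before rebuilding numpy.
test -z "${CPPFLAGS+x}" && test -z "${LDFLAGS+x}" && echo "environment clean"
```

Since ~/.bash_profile re-exports LDFLAGS on every login shell, the unset
has to happen in the same shell session that runs the build.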
Re: [Numpy-discussion] Convert data into rectangular grid
On Mon, Sep 28, 2009 at 4:48 PM, josef.p...@gmail.com wrote:
> On Mon, Sep 28, 2009 at 7:19 PM, jah jah.mailingl...@gmail.com wrote:
>> Hi,
>>
>> Suppose I have a set of x,y,c data (something useful for
>> matplotlib.pyplot.plot()). Generally, this data is not rectangular at
>> all. Does there exist a numpy function (or set of functions) which will
>> take this data and construct the smallest two-dimensional arrays X, Y, C
>> (suitable for matplotlib.pyplot.contour())? Essentially, I want to pass
>> in the data and a grid step size in the x- and y-directions. The
>> function would average the c-values for all points which land in any
>> particular square. Optionally, I'd like to be able to specify a value
>> to use when there are no points in x,y which land in the square. Hope
>> this makes sense.
>
> If I understand correctly, numpy.histogram2d(x, y, ..., weights=c) might
> do what you want. There was a recent thread on its usage.

It is very close, but with normed=True it will first normalize the weights
(undesirably) and then normalize the normalized weights by dividing by the
cell area. Instead, what I want is for the cell value to be the average of
all the points that were placed in the cell. This seems like a common use
case, so I'm guessing this functionality is present already. So if 3 points
with weights [10,20,30] were placed in cell (i,j), then the cell should
have value 20 (the arithmetic mean of the points placed in the cell).

Here is the desired use case: I have a set of x,y,c values that I could
pass into matplotlib's scatter() or hexbin(). I'd like to take this same
set of points and transform them so that I can pass them into matplotlib's
contour() function. Perhaps matplotlib has a function which does this.
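[Editor's note: the mean-per-cell grid described above can be built with
two plain calls to numpy.histogram2d, without normed=True, by dividing a
weighted histogram by an unweighted one. A sketch; the sample coordinates,
bin edges, and fill value are hypothetical:]

```python
import numpy as np

# Hypothetical scattered x,y,c samples
x = np.array([0.1, 0.2, 0.8, 0.9])
y = np.array([0.1, 0.3, 0.7, 0.8])
c = np.array([10.0, 20.0, 30.0, 40.0])

# A 2x2 grid over the unit square
edges = np.array([0.0, 0.5, 1.0])
bins = [edges, edges]

# Sum of c per cell, and number of points per cell
sums, xedges, yedges = np.histogram2d(x, y, bins=bins, weights=c)
counts, _, _ = np.histogram2d(x, y, bins=bins)

# Mean per cell; empty cells get a user-chosen fill value instead of NaN
fill = 0.0
means = np.where(counts > 0, sums / np.maximum(counts, 1), fill)
# means[0, 0] -> 15.0 (mean of 10 and 20); means[1, 1] -> 35.0
```

The resulting `means` array, together with the bin centers, is directly
usable as the C argument to matplotlib.pyplot.contour().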
Re: [Numpy-discussion] Convert data into rectangular grid
On Mon, Sep 28, 2009 at 19:45, jah jah.mailingl...@gmail.com wrote:
> Here is the desired use case: I have a set of x,y,c values that I could
> pass into matplotlib's scatter() or hexbin(). I'd like to take this same
> set of points and transform them so that I can pass them into
> matplotlib's contour() function. Perhaps matplotlib has a function which
> does this.

http://matplotlib.sourceforge.net/api/mlab_api.html#matplotlib.mlab.griddata

--
Robert Kern
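[Editor's note: matplotlib.mlab.griddata was removed in later matplotlib
releases; the same scattered-to-grid interpolation can be sketched with
scipy.interpolate.griddata. The sample data below is hypothetical —
scattered samples of the plane c = x + y, which linear interpolation
reproduces exactly inside the convex hull:]

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical scattered samples of c = x + y
x = np.array([0.0, 1.0, 0.0, 1.0, 0.5])
y = np.array([0.0, 0.0, 1.0, 1.0, 0.5])
c = x + y

# Regular grid covering the convex hull of the samples
xi = np.linspace(0.0, 1.0, 5)
yi = np.linspace(0.0, 1.0, 5)
X, Y = np.meshgrid(xi, yi)

# Interpolate the scattered values onto the grid; X, Y, C are then
# suitable for matplotlib.pyplot.contour(X, Y, C)
C = griddata((x, y), c, (X, Y), method="linear")
```

Note this interpolates rather than bin-averages, so it answers the
"transform for contour()" question but not the mean-per-cell variant.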