On Mon, Jun 7, 2010 at 4:52 AM, Pavel Bazant maxpla...@seznam.cz wrote:
Correct me if I am wrong, but the paragraph
Note to those used to IDL or Fortran memory order as it relates to
indexing. Numpy uses C-order indexing. That means that the last index
usually (see xxx for exceptions)
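The point the quoted note makes can be checked directly. A small sketch (arrays chosen only for illustration): for a C-ordered array the last index varies fastest in memory, while a Fortran-ordered copy walks the first index fastest.

```python
import numpy as np

# C order is NumPy's default: the last index varies fastest in memory.
a = np.arange(6).reshape(2, 3)        # C-ordered
f = np.asfortranarray(a)              # same values, Fortran-ordered
flat_c = a.ravel(order='K').tolist()  # elements in a's memory order
flat_f = f.ravel(order='K').tolist()  # elements in f's memory order
print(flat_c)  # [0, 1, 2, 3, 4, 5] -- last index fastest
print(flat_f)  # [0, 3, 1, 4, 2, 5] -- first index fastest
```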
I don't want to complain
But what is wrong with a limit of 40kB ? There are enough places where
one could upload larger files for everyone interested...
My 2 cents,
Sebastian Haase
PS: what is the limit now set to ?
On Mon, Jun 7, 2010 at 11:24 PM, Vincent Davis vinc...@vincentdavis.net
Hi,
http://docs.scipy.org/doc/numpy/reference/arrays.dtypes.html#specifying-and-constructing-data-types
says f2 instead of f1
Numarray introduced a short-hand notation for specifying the format of
a record as a comma-separated string of basic formats.
...
The generated data-type fields are
Hi,
I just wondered why numpy.load(foo.npz) was so much faster than loading
(gzip-compressed) hdf5 file contents, and found that numpy.savez did not
compress my files at all. So there is currently no point in using numpy.savez
instead of numpy.save when you're not using the
another note:
http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#arrays-indexing-rec
should not say record array, because recarrays are special in that
they can even access the named fields via attribute access rather
than using the dictionary-like syntax.
-S.
PS: I guess I should have
On 8 June 2010 09:46, Sebastian Haase seb.ha...@gmail.com wrote:
Hi,
http://docs.scipy.org/doc/numpy/reference/arrays.dtypes.html#specifying-and-constructing-data-types
says f2 instead of f1
Numarray introduced a short-hand notation for specifying the format of
a record as a
Hi all,
I tried to build a rpm of numpy using
python setup.py bdist --format=rpm
removing 'numpy-2.0.0.dev8460' (and everything under it)
copying dist/numpy-2.0.0.dev8460.tar.gz -> build/bdist.linux-x86_64/rpm/SOURCES
building RPMs
rpm -ba --define _topdir
2010/6/8 Hans Meine me...@informatik.uni-hamburg.de:
I just wondered why numpy.load(foo.npz) was so much faster than loading
(gzip-compressed) hdf5 file contents, and found that numpy.savez did not
compress my files at all.
But is that intended? The numpy.savez docstring says Save several
On Tuesday 08 June 2010 11:40:59 Scott Sinclair wrote:
The savez docstring should probably be clarified to provide this
information.
I would prefer to actually offer compression to the user. Unfortunately,
adding another argument to this function will never be 100% secure, since
currently,
On Tue, 2010-06-08 at 12:03 +0200, Hans Meine wrote:
On Tuesday 08 June 2010 11:40:59 Scott Sinclair wrote:
The savez docstring should probably be clarified to provide this
information.
I would prefer to actually offer compression to the user. Unfortunately,
adding another argument to
Hi,
Is there a reason that np.append converts recarray to ndarray while
np.insert keeps recarray:
>>> type(a)
<class 'numpy.core.records.recarray'>
>>> type(N.append(a, a))
<type 'numpy.ndarray'>
>>> type(N.insert(a, -1, a))
<class 'numpy.core.records.recarray'>
Thanks,
Sebastian Haase
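The behaviour asked about comes from np.append going through np.concatenate, which returns a plain ndarray. A hedged workaround sketch (the sample record array is made up): view the result back as a recarray.

```python
import numpy as np

# Made-up record array: np.append goes through np.concatenate, which
# drops the recarray subclass; a .view() restores it afterwards.
a = np.rec.array([(1, 2.0)], dtype=[('x', int), ('y', float)])
joined = np.append(a, a)          # a plain ndarray comes back
rec = joined.view(np.recarray)    # recarray again, data unchanged
print(type(rec).__name__)  # recarray
print(len(rec))            # 2
```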
On Tuesday 08 June 2010 12:11:28 Pauli Virtanen wrote:
On Tue, 2010-06-08 at 12:03 +0200, Hans Meine wrote:
I would prefer to actually offer compression to the user. Unfortunately,
adding another argument to this function will never be 100% secure, since
currently, all kwargs will be
On 8 June 2010 06:11, Pauli Virtanen p...@iki.fi wrote:
On Tue, 2010-06-08 at 12:03 +0200, Hans Meine wrote:
On Tuesday 08 June 2010 11:40:59 Scott Sinclair wrote:
The savez docstring should probably be clarified to provide this
information.
I would prefer to actually offer compression
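For later readers of this thread: NumPy subsequently grew np.savez_compressed alongside the uncompressed np.savez. A sketch comparing the two (assumes a NumPy version that provides savez_compressed; the array of zeros is just an easy-to-compress example):

```python
import os
import tempfile
import numpy as np

# np.savez stores the member .npy files uncompressed inside the zip;
# np.savez_compressed deflates them in the same container format.
a = np.zeros(100000)
with tempfile.TemporaryDirectory() as d:
    plain = os.path.join(d, "plain.npz")
    packed = os.path.join(d, "packed.npz")
    np.savez(plain, a=a)
    np.savez_compressed(packed, a=a)
    size_plain = os.path.getsize(plain)
    size_packed = os.path.getsize(packed)
print(size_packed < size_plain)  # True: zeros compress very well
```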
Hi Anne,
thanks for your input, too.
On Tuesday 08 June 2010 12:53:51 Anne Archibald wrote:
I'm also a little dubious about making compression the default.
np.savez provides a feature - storing multiple arrays - that is not
otherwise available. I suspect many users care more about speed than
On Mon, Jun 7, 2010 at 5:52 AM, Pavel Bazant maxpla...@seznam.cz wrote:
Correct me if I am wrong, but the paragraph
Note to those used to IDL or Fortran memory order as it relates to
indexing. Numpy uses C-order indexing. That means that the last index
usually (see xxx for exceptions)
2010/6/8 Hans Meine me...@informatik.uni-hamburg.de:
On Tuesday 08 June 2010 11:40:59 Scott Sinclair wrote:
The savez docstring should probably be clarified to provide this
information.
I would prefer to actually offer compression to the user.
In the meantime, I've edited the docstring to
Hi,
Is it possible to read an array of 12-bit encoded numbers from a file (or string)
using numpy?
Thanks,
Martin
You can. If each number occupies 2 bytes (16 bits) it is straightforward. If
it is a continuous 12-bit stream you have to unpack it yourself:
data = np.fromstring(str12bits, dtype=np.uint8)
data1 = data.astype(np.uint16)
data1[::3] = data1[::3]*256 + data1[1::3] // 16
data1[1::3] = (data[1::3]
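Since the quoted snippet is cut off, here is a self-contained sketch of the same unpacking idea. The big-endian packing order (three bytes hold two 12-bit values) is an assumption; adjust the shifts for your actual format.

```python
import numpy as np

# Unpack big-endian packed 12-bit values: 3 bytes -> 2 values.
raw = bytes([0xAB, 0xCD, 0xEF])  # packs 0xABC and 0xDEF
data = np.frombuffer(raw, dtype=np.uint8).astype(np.uint16)
out = np.empty(2 * (len(data) // 3), dtype=np.uint16)
out[0::2] = (data[0::3] << 4) | (data[1::3] >> 4)    # high 12 bits
out[1::2] = ((data[1::3] & 0x0F) << 8) | data[2::3]  # low 12 bits
print([hex(v) for v in out])  # ['0xabc', '0xdef']
```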
I tried to install scipy, but I get an error about not being able to find
get_info() from numpy.distutils.misc_util. I read that you need the SVN
version of numpy to fix this. I recompiled numpy and reinstalled from
SVN, which says it is version 1.3.0 (I was using version 1.4.1 before) and that
This is unexpected, from the error log:
/Library/Frameworks/Python.framework/Versions/3.1/include/python3.1/
Python.h:11:20: error: limits.h: No such file or directory
No good... it can't find basic system headers. Perhaps it's due to the
MACOSX_DEPLOYMENT_TARGET environment variable that
Tue, 08 Jun 2010 09:47:41 -0400, Jeff Hsu wrote:
I tried to install scipy, but I get the error with not being able to
find get_info() from numpy.distutils.misc_util. I read that you need
the SVN version of numpy to fix this. I recompiled numpy and
reinstalled from the SVN, which says is
On Tue, Jun 8, 2010 at 7:58 AM, Zachary Pincus zachary.pin...@yale.edu wrote:
This is unexpected, from the error log:
/Library/Frameworks/Python.framework/Versions/3.1/include/python3.1/
Python.h:11:20: error: limits.h: No such file or directory
No good... it can't find basic system headers.
Nadav Horesh wrote:
You can. If each number occupies 2 bytes (16 bits) it is straightforward.
If it is a continuous 12-bit stream you have to unpack it yourself:
data = np.fromstring(str12bits, dtype=np.uint8)
data1 = data.astype(np.uint16)
I am working in C, and I want to check whether a Python object (PyObject*) is a
numpy float object.
i.e. numpy.float32, numpy.float64 etc.
I have looked at the docs and the NumPy Manual Part II C-API, but I am not sure
which of the many type checking functions to use.
Could someone please give
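For reference, the Python-level counterpart of this check is isinstance against np.floating; at the C level the PyArray_IsScalar(obj, Floating) macro (a PyObject_TypeCheck against PyFloatingArrType_Type) should perform the equivalent test. A quick Python sketch:

```python
import numpy as np

# All NumPy float scalar types (float32, float64, ...) derive from
# np.floating, so one isinstance check covers them all.
checks = [
    isinstance(np.float32(1.0), np.floating),  # True
    isinstance(np.float64(1.0), np.floating),  # True
    isinstance(1.0, np.floating),              # False: plain Python float
]
print(checks)
```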
On Tue, Jun 8, 2010 at 12:10 AM, Sebastian Haase seb.ha...@gmail.comwrote:
I don't want to complain
But what is wrong with a limit of 40kB ? There are enough places where
one could upload larger files for everyone interested...
Not everyone knows about 'em, though - can you list some
On Tue, Jun 8, 2010 at 5:23 PM, David Goldsmith d.l.goldsm...@gmail.com wrote:
On Tue, Jun 8, 2010 at 12:10 AM, Sebastian Haase seb.ha...@gmail.com
wrote:
I don't want to complain
But what is wrong with a limit of 40kB ? There are enough places where
one could upload larger files for
On Tue, Jun 8, 2010 at 9:27 AM, Pavel Bazant maxpla...@seznam.cz wrote:
Correct me if I am wrong, but the paragraph
Note to those used to IDL or Fortran memory order as it relates to
indexing. Numpy uses C-order indexing. That means that the last index
usually (see xxx for
On Tue, Jun 8, 2010 at 8:27 AM, Pavel Bazant maxpla...@seznam.cz wrote:
Correct me if I am wrong, but the paragraph
Note to those used to IDL or Fortran memory order as it relates to
indexing. Numpy uses C-order indexing. That means that the last index
usually (see xxx for
On Tue, Jun 8, 2010 at 8:43 AM, John Hunter jdh2...@gmail.com wrote:
On Tue, Jun 8, 2010 at 10:33 AM, Sebastian Haase seb.ha...@gmail.com
wrote:
On Tue, Jun 8, 2010 at 5:23 PM, David Goldsmith d.l.goldsm...@gmail.com
wrote:
On Tue, Jun 8, 2010 at 12:10 AM, Sebastian Haase
On Tue, Jun 8, 2010 at 7:58 AM, Zachary Pincus zachary.pin...@yale.edu
wrote:
This is unexpected, from the error log:
/Library/Frameworks/Python.framework/Versions/3.1/include/python3.1/
Python.h:11:20: error: limits.h: No such file or directory
No good... it can't find basic system
Hi there,
I have a problem, which I'm sure can somehow be solved using np.choose()
- but I cannot figure out how :(
I have an array idx, which holds int values and has a 2d shape. All
values inside idx are 0 <= idx < n. And I have a second array times,
which is 1d, with times.shape = (n,).
Out
Dear Eric,
thank you for the insight and suggestion. Reading between the lines I
developed the suspicion that the problem might be in the extension
function 'unpack_vdr_data'. Previously the last part of that was
Cmplx = PyComplex_FromCComplex(cmplx);
if (PyList_SetItem(Clist,
On Tue, Jun 8, 2010 at 11:24 AM, Andreas Hilboll li...@hilboll.de wrote:
Hi there,
I have a problem, which I'm sure can somehow be solved using np.choose()
- but I cannot figure out how :(
I have an array idx, which holds int values and has a 2d shape. All
values inside idx are 0 <= idx < n.
I accidentally posted this on the scipy list also; I meant it to be here
since we already have a thread going about the mailing list, and I now know
there is a moderator :)
I prefer the mailing list but stackoverflow.com is good. But what I
really like about stackoverflow is not the answers but the ability
Thanks, that works. Unfortunately it uncovered another problem. When I try
and reinstall numpy, it keeps building with intel mkl libraries even when I
get a fresh install of numpy with the site.cfg set to default or no site.cfg
at all.
Giving me:
FOUND:
libraries = ['mkl_intel_lp64',
Not pretty, but it works:
>>> idx
array([[4, 2],
       [3, 1]])
>>> times
array([100, 101, 102, 103, 104])
>>> numpy.reshape(times[idx.flatten()], idx.shape)
array([[104, 102],
       [103, 101]])
On Tue, Jun 8, 2010 at 10:09 AM, Gökhan Sever gokhanse...@gmail.com wrote:
On Tue, Jun 8,
On 06/08/2010 05:50 AM, Charles R Harris wrote:
On Tue, Jun 8, 2010 at 9:39 AM, David Goldsmith d.l.goldsm...@gmail.com wrote:
On Tue, Jun 8, 2010 at 8:27 AM, Pavel Bazant maxpla...@seznam.cz wrote:
Correct me
Correct me
Failed again, I have attached the output including the execution of
the above commands.
Thanks for link to the environment variables, I need to read that.
In the attached file (and the one from the next email too) I didn't
see the
MACOSX_DEPLOYMENT_TARGET=10.4
export
On 06/08/2010 08:16 AM, Eric Firing wrote:
On 06/08/2010 05:50 AM, Charles R Harris wrote:
On Tue, Jun 8, 2010 at 9:39 AM, David Goldsmith d.l.goldsm...@gmail.com wrote:
On Tue, Jun 8, 2010 at 8:27 AM, Pavel Bazant maxpla...@seznam.cz
In fact, 'report_memory' shows that memory use was constant throughout
all 256 iterations. Thank you for the hint, Eric.
Date: Tue, 08 Jun 2010 09:22:41 -0700
From: Tom Kuiper kui...@jpl.nasa.gov
Subject: Re: [Numpy-discussion] Memory Usage Question
To: numpy-discussion@scipy.org
Hi,
>>> newtimes = [times[idx[x][y]] for x in range(2) for y in range(2)]
>>> np.array(newtimes).reshape(2, 2)
array([[104, 102],
       [103, 101]])
Great, thanks a lot!
Cheers,
Andreas.
On 8 June 2010 14:16, Eric Firing efir...@hawaii.edu wrote:
On 06/08/2010 05:50 AM, Charles R Harris wrote:
On Tue, Jun 8, 2010 at 9:39 AM, David Goldsmith d.l.goldsm...@gmail.com wrote:
On Tue, Jun 8, 2010 at 8:27 AM, Pavel Bazant maxpla...@seznam.cz
If we were at SO or ask.scipy I would vote for Mark's solution :)
Usually in cases like yours, I tend to use the shortest version of the
solutions.
On Tue, Jun 8, 2010 at 2:08 PM, Andreas Hilboll li...@hilboll.de wrote:
Hi,
newtimes = [times[idx[x][y]] for x in range(2) for y in range(2)]
Hi!
On 08.06.2010 at 18:24, Andreas Hilboll wrote:
I have an array idx, which holds int values and has a 2d shape. All
values inside idx are 0 <= idx < n. And I have a second array times,
which is 1d, with times.shape = (n,).
Out of these two arrays I now want to create a 2d array having
On Tue, Jun 8, 2010 at 2:32 PM, Hans Meine
me...@informatik.uni-hamburg.dewrote:
Funny, that's exactly what I wanted to do (idx being a label/region image
here),
and what I tried today.
You will be happy to hear that the even simpler solution is to just use
fancy indexing (the name is
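Spelled out, the fancy-indexing version of the earlier flatten/reshape answer (using the same made-up idx and times from this thread):

```python
import numpy as np

# Indexing a 1-d array with a 2-d integer array returns an array with
# the shape of the index, looked up element-wise.
times = np.array([100, 101, 102, 103, 104])
idx = np.array([[4, 2], [3, 1]])
result = times[idx]  # no flatten()/reshape() needed
print(result.tolist())  # [[104, 102], [103, 101]]
```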
Hi,
actually, I meant to reply to Mark's mail, as I used his solution ;)
Thanks!
On 06/08/2010 09:21 PM, Gökhan Sever wrote:
If we were at SO or ask.scipy I would vote for Mark's solution :)
Usually in cases like yours, I tend to use the shortest version of the
solutions.
On Tue, Jun 8,
On Tue, Jun 8, 2010 at 12:05 PM, Anne Archibald
aarch...@physics.mcgill.cawrote:
On 8 June 2010 14:16, Eric Firing efir...@hawaii.edu wrote:
On 06/08/2010 05:50 AM, Charles R Harris wrote:
On Tue, Jun 8, 2010 at 9:39 AM, David Goldsmith
d.l.goldsm...@gmail.com
googling for
greatest common divisor OR denominator numpy OR scipy OR python
I found this:
http://projects.scipy.org/numpy/browser/trunk/numpy/core/_internal.py?rev=8316
def _gcd(a, b):
    """Calculate the greatest common divisor of a and b"""
    while b:
        a, b = b, a % b
    return a
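Folding that pairwise _gcd over an array gives the GCD of all its elements; a sketch (sample values invented):

```python
from functools import reduce

import numpy as np

def gcd(a, b):
    # Euclid's algorithm, as in the quoted _internal._gcd
    while b:
        a, b = b, a % b
    return a

arr = np.array([12, 18, 30])
g = int(reduce(gcd, arr))  # gcd(gcd(12, 18), 30)
print(g)  # 6
```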
On 8 June 2010 17:17, David Goldsmith d.l.goldsm...@gmail.com wrote:
On Tue, Jun 8, 2010 at 1:56 PM, Benjamin Root ben.r...@ou.edu wrote:
On Tue, Jun 8, 2010 at 1:36 PM, Eric Firing efir...@hawaii.edu wrote:
On 06/08/2010 08:16 AM, Eric Firing wrote:
On 06/08/2010 05:50 AM, Charles R Harris
On 8 June 2010 11:13, Vincent Davis vinc...@vincentdavis.net wrote:
2) A web based interface that is similar to stackoverflow in that a
user can search and post within the same page and as they type a
question suggested relevant posts are shown.
Also have a look at ask.scipy.org.
Cheers
2010/6/8 Anne Archibald aarch...@physics.mcgill.ca:
Numpy arrays can have any configuration of memory strides, including
some that are zero; C and Fortran contiguous arrays are simply those
that have special arrangements of the strides. The actual stride
values are normally almost irrelevant to
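A small sketch of the stride configurations being described (int32 arrays assumed, so strides are multiples of 4 bytes):

```python
import numpy as np

# Strides are in bytes. C order: last axis contiguous; Fortran order:
# first axis contiguous; broadcasting produces a zero stride.
a = np.zeros((3, 4), dtype=np.int32)
f = np.asfortranarray(a)
b = np.broadcast_to(np.zeros(4, dtype=np.int32), (3, 4))
print(a.strides)  # (16, 4)
print(f.strides)  # (4, 12)
print(b.strides)  # (0, 4) -- the repeated axis never advances
```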
I do have limits.h in 10.4 sdk
So what next? Any ideas?
I had tried to build py 3.1.2 from source but that did not work either.
Thanks
Vincent
On Tue, Jun 8, 2010 at 3:15 PM, Zachary Pincus zachary.pin...@yale.edu wrote:
Hi Vincent,
I'm not really sure -- now the build is using the 10.4 SDK
On Mon, Jun 7, 2010 at 4:19 AM, Daniele Nicolodi dani...@grinta.net wrote:
Hello. Is there a method in numpy to compute the greatest common divisor
of the elements of an array? Searching through the documentation I
didn't find it.
If you need to do more complicated stuff based around the GCD,
On 06/09/2010 08:04 AM, Vincent Davis wrote:
I do have limits.h in 10.4 sdk
So what next? Any ideas?
I had tried to build py 3.1.2 from source but that did not work either.
I had the same issue when I tried the python 3 branch on mac os x. I
have not found the issue yet, but I am afraid it is