On 29.07.2011 at 20:23, Nathaniel Smith wrote:
> Even so, surely this behavior should be consistent between base class
> ndarrays and subclasses? If returning 0d arrays is a good idea, then
> we should do it everywhere. If it's a bad idea, then we shouldn't do
> it at all...?
Very well put. That
On 29.07.2011 at 17:07, Mark Wiebe wrote:
> I dug a little bit into the relevant 1.5.x vs 1.6.x code, in the places I
> would most suspect a change, but couldn't find anything obvious.
Thanks for having a look. This strengthens my suspicion that the behavior
change was not intentional.
Have a
On Fri, Jul 29, 2011 at 4:12 AM, Hans Meine
wrote:
>
> /home/hmeine/new_numpy/lib64/python2.6/site-packages/vigra/arraytypes.pyc in
> reshape(self, shape, order)
> 587
> 588     def reshape(self, shape, order='C'):
> --> 589 res = numpy.ndarray.resha
Hi Martin,
I think it would be more useful if isfinite returned true if *all* elements
were finite. (Opposite of isnan and isinf.)
HTH,
Hans
PS: did not check the complex dtype, hopefully that one's no different.
(The above has been typed using a small on-screen keyboard, which may account
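A minimal sketch of the distinction discussed above (the sample values are made up for illustration): np.isfinite is elementwise today, like isnan and isinf, so the suggested "all elements finite" check is a reduction:

```python
import numpy as np

a = np.array([1.0, np.inf, np.nan])

# Current semantics: np.isfinite is elementwise, like isnan/isinf.
elementwise = np.isfinite(a)       # one bool per element

# The suggested "all elements finite" behaviour is a reduction:
all_finite = np.isfinite(a).all()
```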
On Friday, 29 July 2011, 11:31:24, Hans Meine wrote:
> On Thursday, 28 July 2011, 17:42:38, Matthew Brett wrote:
> > Was there a particular case you ran into where this was a problem?
> [...]
> Basically, the problem arose because our ndarray subclass does not support
> z
On Thursday, 28 July 2011, 17:42:38, Matthew Brett wrote:
> If I understand you correctly, the problem is that, for 1.5.1:
> >>> class Test(np.ndarray): pass
> >>> type(np.min(Test((1,
>
>
>
> and for 1.6.0 (and current trunk):
> >>> class Test(np.ndarray): pass
> >>> type(np.min(Test((1
Hi again!
On Thursday, 21 July 2011, 16:56:21, Hans Meine wrote:
> import numpy
>
> class Test(numpy.ndarray):
> pass
>
> a1 = numpy.ndarray((1,))
> a2 = Test((1,))
>
> assert type(a1.min()) == type(a2.min()), \
> "%s !=
Hi,
I have the same problem as Martin DRUON, who wrote 10 days ago:
> I have a problem with the ufunc return type of a numpy.ndarray derived
> class. In fact, I subclass a numpy.ndarray using the tutorial :
> http://docs.scipy.org/doc/numpy/user/basics.subclassing.html
>
> But, for example, if I
On Friday 23 July 2010 10:16:41 Ian Mallett wrote:
> self.patches.sort( lambda x,y:cmp(x.residual_radiance,y.residual_radiance),
> reverse=True )
Using sort(key = lambda x: x.residual_radiance) should be faster.
> Because I've never used arrays of Python objects (and Googling didn't turn
> up any
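A self-contained sketch of the key-based sort suggested above (the Patch class is invented here for illustration; only the residual_radiance attribute comes from the quoted snippet). key= computes each key exactly once, instead of calling a Python comparison function on every pair:

```python
# Hypothetical stand-in for the objects being sorted:
class Patch:
    def __init__(self, residual_radiance):
        self.residual_radiance = residual_radiance

patches = [Patch(3.0), Patch(1.0), Patch(2.0)]

# key= is evaluated once per element, unlike the cmp-based sort:
patches.sort(key=lambda p: p.residual_radiance, reverse=True)
values = [p.residual_radiance for p in patches]
```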
Hi Pauli and Anne,
On Tuesday 15 June 2010 13:37:30 Pauli Virtanen wrote:
> On Fri, 2010-06-11 at 10:52 +0200, Hans Meine wrote:
> > At the bottom you can see that he basically wraps all numpy.ufuncs he can
> > find in the numpy top-level namespace automatically.
>
>
On Friday 11 June 2010 10:38:28 Pauli Virtanen wrote:
> Fri, 11 Jun 2010 10:29:28 +0200, Hans Meine wrote:
> > Ideally, algorithms would get wrapped in between two additional
> > pre-/postprocessing steps:
> >
> > 1) Preprocessing: After broadcasting, transpose the inpu
On Thursday 10 June 2010 22:28:28 Pauli Virtanen wrote:
> Some places where Openmp could probably help are in the inner ufunc
> loops. However, improving the memory efficiency of the data access
> pattern is another low-hanging fruit for multidimensional arrays.
I was about to mention this when th
Hi!
On 08.06.2010 at 18:24, Andreas Hilboll wrote:
> I have an array idx, which holds int values and has a 2d shape. All
> values inside idx are 0 <= idx < n. And I have a second array times,
> which is 1d, with times.shape = (n,).
>
> Out of these two arrays I now want to create a 2d array ha
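If I read the (truncated) question correctly, plain fancy indexing should do it; the sample values below are invented for illustration:

```python
import numpy as np

n = 4
times = np.array([10.0, 11.0, 12.0, 13.0])   # shape (n,)
idx = np.array([[0, 2], [3, 1]])             # 2-d, values in [0, n)

# Fancy indexing: the result takes idx's shape, with times
# looked up elementwise.
result = times[idx]
```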
Hi Anne,
thanks for your input, too.
On Tuesday 08 June 2010 12:53:51 Anne Archibald wrote:
> I'm also a little dubious about making compression the default.
> np.savez provides a feature - storing multiple arrays - that is not
> otherwise available. I suspect many users care more about speed th
On Tuesday 08 June 2010 12:11:28 Pauli Virtanen wrote:
> On Tue, 2010-06-08 at 12:03 +0200, Hans Meine wrote:
> > I would prefer to actually offer compression to the user. Unfortunately,
> > adding another argument to this function will never be 100% secure, since
> >
On Tuesday 08 June 2010 11:40:59 Scott Sinclair wrote:
> The savez docstring should probably be clarified to provide this
> information.
I would prefer to actually offer compression to the user. Unfortunately,
adding another argument to this function will never be 100% secure, since
currently,
Hi,
I just wondered why numpy.load("foo.npz") was so much faster than loading
(gzip-compressed) hdf5 file contents, and found that numpy.savez did not
compress my files at all. So there is currently no point in using numpy.savez
instead of numpy.save when you're not using the multiple-arrays-p
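Newer NumPy releases address exactly this with numpy.savez_compressed; a quick sketch comparing file sizes (the paths are temporary and the array is deliberately highly compressible):

```python
import os
import tempfile
import numpy as np

a = np.zeros((1000, 1000))   # very compressible test data

with tempfile.TemporaryDirectory() as d:
    plain = os.path.join(d, "plain.npz")
    packed = os.path.join(d, "packed.npz")
    np.savez(plain, a=a)                # members stored uncompressed
    np.savez_compressed(packed, a=a)    # members deflate-compressed
    sizes = (os.path.getsize(plain), os.path.getsize(packed))
```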
On Thursday, 6 May 2010, 08:10:35, Austin Bingham wrote:
> Suppose I defined neither macro in my 'util.h', and that I included
> 'arrayobject.h'. If a user of my library did this:
>
> #include // <-- my library's header
>
> #define PY_ARRAY_UNIQUE_SYMBOL MY_UNIQUE_SYMBOL
> #define NO_IMP
On Friday 12 February 2010 13:43:56 Hans Meine wrote:
> I was just looking for numpy.ma.compressed, but forgot its name.
Another strange thing is the docstring of numpy.ma.compress, which appears in
ipython like this:
Type: instance
Base Class: numpy.ma.core._frommet
Hi,
I was just looking for numpy.ma.compressed, but forgot its name. I suggest to
add a pointer/"see also" to numpy.ma.filled at least:
http://docs.scipy.org/numpy/docs/numpy.ma.core.filled/
Unfortunately, I forgot the PW of my account (hans_meine), otherwise I'd have
given it a shot.
Ciao,
Hi,
I have just uploaded a first release of qimage2ndarray, a tiny python
extension for quickly converting between QImages and numpy.ndarrays
(in both directions). These are very common tasks when programming e.g.
scientific visualizations in Python using PyQt4 as the GUI library.
Similar code
On Tuesday 22 September 2009 13:14:55 Hrvoje Niksic wrote:
> Hans Meine wrote:
> > On Tuesday 22 September 2009 11:01:37 Hrvoje Niksic wrote:
> >> Is it intended for deserialization to uncouple arrays that share a
> >> common base?
> >
> > I think it's n
On Tuesday 22 September 2009 11:01:37 Hrvoje Niksic wrote:
> Is it intended for deserialization to uncouple arrays that share a
> common base?
I think it's not really intended, but it's a limitation by design.
AFAIK, it's related to Luca Citi's recent "ultimate base" thread - you simply
cannot en
Hi!
On Monday 21 September 2009 12:31:27 Citi, Luca wrote:
> I think you do not need to do the chain up walk on view creation.
> If the assumption is that base is the ultimate base, on view creation
> you can do something like (pseudo-code):
> view.base = parent if parent.owndata else parent.base
On Thursday 10 September 2009 19:03:20 John [H2O] wrote:
> I have a routine that is iterating through a series of directories, loading
> files, plotting, then moving on...
>
> It runs very well for the first few iterations, but then slows tremendously
Maybe you "collect" some data into growing dat
On Thursday 06 August 2009 17:27:51 Robert Kern wrote:
> 2009/8/6 Hans Meine :
> > On Wednesday 05 August 2009 22:06:03 David Goldsmith wrote:
> >> But you can "cheat" and put them on one line (if that's all you're
> >> after):
> >> >>> x = np.ar
On Wednesday 05 August 2009 22:06:03 David Goldsmith wrote:
> But you can "cheat" and put them on one line (if that's all you're after):
> >>> x = np.array([1, 2, 3])
> >>> maxi = x.argmax(); maxv = x[maxi]
Is there any reason not to put this as a convenience function into numpy?
It is needed so f
On Tuesday 04 August 2009 22:06:38 Keith Goodman wrote:
> On Tue, Aug 4, 2009 at 12:59 PM, Gael
>
> Varoquaux wrote:
> > On Tue, Aug 04, 2009 at 01:54:49PM -0500, Gökhan Sever wrote:
> >>I see that you should have a browser embedding plugin for IPython
> >> which you don't want to share with us
On Tuesday 04 August 2009 19:19:22 Andrew Friedley wrote:
> OK, have some interesting results. First is my array creation was not
> doing what I thought it was. This (what I've been doing) creates an
> array of 159161 elements:
>
> numpy.arange(0.0, 1000, (2 * 3.14159) / 1000, dtype=numpy.float32
On Wednesday 22 July 2009 17:16:31 Ralf Gommers wrote:
> Examples where min/max probably does not do what you want, and
> find_common_type does:
>
> In [49]: max(float, float32)
> Out[49]:
> In [50]: find_common_type([], [float, float32])
> Out[50]: dtype('float64')
I don't understand the followi
On Wednesday 22 July 2009 11:48:29 Gael Varoquaux wrote:
> On Fri, Jul 17, 2009 at 04:30:38PM +0200, Hans Meine wrote:
> > I have a simple question: How can I detect whether two arrays share the
> > same data?
>
> np.may_share_memory
Thanks a lot, that (and its imple
On Wednesday 22 July 2009 17:16:31 Ralf Gommers wrote:
> 2009/7/22 Hans Meine
> > type = min(float32, a.dtype.type, b.dtype.type)
>
> Are you looking for the type to cast to? In that case I guess you meant
> max() not min().
No, at least for integers min(..) does what on
On Wednesday 22 July 2009 15:14:32 Citi, Luca wrote:
> In [2]: a + a
> Out[2]: array([144], dtype=uint8)
>
> Please do not "fix" this, that IS the correct output.
No, I did not mean to fix this. (Although it should be noted that in C/C++,
the result of uint8+uint8 is int.)
> If instead, you ref
On Friday 17 July 2009 22:15:31 Pauli Virtanen wrote:
> On 2009-07-17, Hans Meine wrote:
> > If I understood Travis' comments in the above-mentioned thread [1]
> > correctly, this would already fix some of the performance issues along
> > the way (since it would suddenl
Hi,
Ullrich Köthe found an interesting way to compute a promoted dtype, given two
arrays a and b:
type = min(float32, a.dtype.type, b.dtype.type)
How hackish is this? Is this likely to break on other platforms/numpy
versions? Is there a better API for type promotion?
Have a nice day,
Han
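For the record, later NumPy versions (1.6+) grew a dedicated promotion API, which avoids the min()-on-types hack entirely:

```python
import numpy as np

a = np.zeros(3, dtype=np.uint8)
b = np.zeros(3, dtype=np.float32)

# Dedicated promotion APIs from newer NumPy:
t1 = np.promote_types(a.dtype, b.dtype)   # promotes the two dtypes
t2 = np.result_type(a, b)                 # also handles scalar operands
```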
Hi!
(This mail is a reply to a personal conversation with Ullrich Köthe, but is
obviously of a greater concern. This is about VIGRA's new NumPy-based python
bindings.) Ulli considers this behaviour of NumPy to be a bug:
In [1]: a = numpy.array([200], numpy.uint8)
In [2]: a + a
Out[2]: array(
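The truncated output above is array([144], dtype=uint8) (as quoted later in the thread), i.e. 400 modulo 256. A sketch of the wrap-around and the explicit upcast that avoids it:

```python
import numpy as np

a = np.array([200], np.uint8)

wrapped = a + a                     # uint8 arithmetic wraps: 400 % 256
widened = a.astype(np.uint16) + a   # explicit upcast keeps the true sum
```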
Hi,
I have a simple question: How can I detect whether two arrays share the same
data?
>>> a = numpy.arange(10)
>>> b = a.view(numpy.ndarray)
>>>
>>> a is not b # True, as expected
True
>>> a.data is b.data # I expected this to be True
False
>>>
>>> a.data
>>> b.data # even the memory address
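The answer that came later in the thread was np.may_share_memory; a quick sketch (note it is a cheap, conservative bounds check, not an exact overlap test):

```python
import numpy as np

a = np.arange(10)
b = a.view(np.ndarray)   # shares a's buffer
c = a.copy()             # independent buffer

overlap_ab = np.may_share_memory(a, b)
overlap_ac = np.may_share_memory(a, c)
```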
Hi,
as I mentioned in the past [1], we considered refactoring our VIGRA (an image
analysis library [2]) python bindings to be based on NumPy [3].
However, we have the problem that VIGRA uses Fortran-order indexing (i.e.
there's operator()(x, y) in C++), and this should of course be the same in
On Tuesday 10 February 2009 11:11:38 Markus Rosenstihl wrote:
> i usually do something like this:
>
> a = random.rand(3000)
> a.resize((1000,3))
> vec_norms = sqrt(sum(a**2,axis=1))
If you look at the patch I posted (OK, that was some weeks ago, so I'll attach
it again for your convenience), that
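A runnable version of the row-norm idiom quoted above; note that newer NumPy (1.8+) added the axis argument to np.linalg.norm that this thread was asking for:

```python
import numpy as np

a = np.random.rand(1000, 3)

# Per-row Euclidean norm, the manual way:
norms = np.sqrt((a ** 2).sum(axis=1))

# Newer NumPy supports the axis argument directly:
norms2 = np.linalg.norm(a, axis=1)
```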
On Thursday 20 November 2008 11:54:52 Hans Meine wrote:
> On Thursday 20 November 2008 11:11:14 Hans Meine wrote:
> > I have a 2D matrix comprising a sequence of vectors, and I want to
> > compute the norm of each vector. np.linalg.norm seems to be the best
> > bet, but it
On Friday 19 December 2008 03:27:12 Bradford Cross wrote:
> This is a new project I just released.
>
> I know it is C#, but some of the design and idioms would be nice in
> numpy/scipy for working with discrete event simulators, time series, and
> event stream processing.
>
> http://code.google.com
On Thursday 20 November 2008, Alan G Isaac wrote:
> On 11/20/2008 5:11 AM Hans Meine apparently wrote:
> > I have a 2D matrix comprising a sequence of vectors, and I want to
> > compute the norm of each vector. np.linalg.norm seems to be the best
> > bet, but it does not su
On Thursday 20 November 2008 11:11:14 Hans Meine wrote:
> I have a 2D matrix comprising a sequence of vectors, and I want to compute
> the norm of each vector. np.linalg.norm seems to be the best bet, but it
> does not support axis. Wouldn't this be a nice feature?
Here's a b
Hi,
I have a 2D matrix comprising a sequence of vectors, and I want to compute the
norm of each vector. np.linalg.norm seems to be the best bet, but it does not
support axis. Wouldn't this be a nice feature?
Greetings,
Hans
On Thursday 17 July 2008 19:41:51 Anthony Floyd wrote:
> > > What I need to know is how I can trick pickle or Numpy to
> >
> > put the old class into the new class.
> >
> > If you have an example data-file, send it to me off-list and I'll
> > figure out what to do. Maybe it is as simple as
> >
> >
On Wednesday, 17 September 2008, 01:43:45, Brendan Simons wrote:
> On 16-Sep-08, at 4:50 AM, Stéfan van der Walt wrote:
> > I may be completely off base here, but you should be able to do this
> > *very* quickly using your GPU, or even just using OpenGL. Otherwise,
> > coding it up in ctypes is ea
Hi,
I am trying to do something like the following as efficiently as possible:
result[...,0] = (dat * cos(arange(100))).sum(axis = -1)
result[...,1] = (dat * sin(arange(100))).sum(axis = -1)
Thus, I was looking for dot / inner with an 'output' arg.
Q1: What is the difference between dot and inne
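One way to express the two multiply-and-sum passes above as a single product, sketched here with made-up shapes (this uses a matrix product rather than an out= argument, so it is an alternative, not the asked-for feature):

```python
import numpy as np

dat = np.random.rand(5, 7, 100)
n = dat.shape[-1]

# Stack the two basis vectors as columns; one matrix product then
# replaces the two broadcast-multiply-and-sum passes.
basis = np.stack([np.cos(np.arange(n)), np.sin(np.arange(n))], axis=1)
result = dat @ basis                 # shape (..., 2)

# Equivalent to the original two lines:
ref0 = (dat * np.cos(np.arange(n))).sum(axis=-1)
ref1 = (dat * np.sin(np.arange(n))).sum(axis=-1)
```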
On Tuesday 29 July 2008, Stéfan van der Walt wrote:
> > One way to achieve this is partial flattening, which I did like this:
> >
> > dat.reshape((numpy.prod(dat.shape[:3]), dat.shape[3])).sum(0)
> >
> > Is there a more elegant way to do this?
>
> That looks like a good way to do it. You can cle
Hi Felix,
I quickly copy-pasted and ran your code; it looks to me like the results you
calculated analytically oscillate too fast to be represented discretely. Did
you try to transform different, simpler signals? (e.g. a Gaussian?)
Ciao, / /
Hi,
with a multidimensional array (say, 4-dimensional), I often want to project
this onto one single dimension, i.e. let "dat" be a 4D array, I am
interested in
dat.sum(0).sum(0).sum(0) # equals dat.sum(2).sum(1).sum(0)
However, creating intermediate results looks more expensive than necess
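A sketch comparing the variants; newer NumPy (1.7+) also accepts a tuple of axes, which avoids both the chained reductions and the manual reshape:

```python
import numpy as np

dat = np.random.rand(2, 3, 4, 5)

# Partial flattening, as in the quoted snippet:
s1 = dat.reshape((np.prod(dat.shape[:3]), dat.shape[3])).sum(0)

# Chained single-axis reductions (intermediate arrays each step):
s2 = dat.sum(0).sum(0).sum(0)

# Newer NumPy: a tuple of axes in one call.
s3 = dat.sum(axis=(0, 1, 2))
```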
On Thursday, 10 April 2008, 11:05:35, Matthew Brett wrote:
> type thing. Upper case also draws the eye to the capital letter, so
>
> print N.sin(a)
>
> pulls the eye to the N, so you have to disengage and remind yourself
> that it's the sin(a) that is important, whereas:
>
> print np.sin(a)
>
>
On Tuesday, 8 April 2008, 17:22:33, Ken Basye wrote:
> I've had this happen
> often enough that I found the first thing I did when an output
> difference arose was to print the FP in hex to see if the
> difference was "real" or just a formatting artifact.
Nice idea - is that code available some
On Tuesday, 25 March 2008, 15:33:58, Chris Withers wrote:
> > Because in your particular case, you're inspecting elements one by one,
> > and then, your masked data becomes the masked singleton which is a
> > special value.
>
> I'd argue that the masked singleton having a different fill value to t
On Tuesday, 11 March 2008, 20:57:30, Alexander Michael wrote:
> Incidentally, while ones_like appears to play nice with derived
> classes, empty_like and zeros_like do not seem to do the same.
Shouldn't this be fixed? (Obviously, this stems from the fact that ones_like
is implemented in C, whil
On Monday, 7 April 2008, 14:34:08, Hans Meine wrote:
> On Saturday, 5 April 2008, 21:54:27, Anne Archibald wrote:
> > There's also a fourth option - raise an exception if any points are
> > outside the range.
>
> +1
>
> I think this should be the default. Ot
On Saturday, 5 April 2008, 21:54:27, Anne Archibald wrote:
> There's also a fourth option - raise an exception if any points are
> outside the range.
+1
I think this should be the default. Otherwise, I tend towards "exclude", in
order to have comparable bin sizes (when plotting, I always find
On Tuesday, 11 March 2008, 00:24:04, David Bolme wrote:
> The steps you describe here are correct. I am putting together an
> open source computer vision library based on numpy/scipy. It will
> include an automatic PCA algorithm with face detection, eye detection,
> PCA dimensionally reduction,
On Thursday, 31 January 2008, 15:35:25, James Philbin wrote:
> The following gives the wrong answer:
>
> In [2]: A = array(['a','aa','b'])
>
> In [3]: B = array(['d','e'])
>
> In [4]: A.searchsorted(B)
> Out[4]: array([3, 0])
>
> The answer should be [3,3].
Heh, I got both answers in the same ses
On Wednesday, 30 January 2008, 16:21:40, Nadav Horesh wrote:
> But:
> >>> R[ax,:] = 100
This is calling __setitem__, i.e. does not create either a view or a copy.
Non-contiguous views (e.g. using [::2]) are also possible AFAIK, but fancy
indexing is something different.
--
Ciao, / /
On Tuesday 29 January 2008, Brian Blais wrote:
> Is there a way to read frames of a movie in python? Ideally,
> something as simple as:
>
> for frame in movie('mymovie.mov'):
> pass
>
>
> where frame is either a 2-D list, or a numpy array? The movie format
> can be anything, because I can pr
On Tuesday, 22 January 2008, 07:27:19, Robert Kern wrote:
> > The new document format requires a preprocessor that has yet to be
> > written.
>
> epydoc from SVN works just fine once the following line is added at the
> top.
>
> __docformat__ = "restructuredtext en"
No need to clutter the files,
On Monday, 21 January 2008, 15:39:21, theodore test wrote:
> Right now it takes around 9 seconds for a single 1600x1200 RGB image, a
> conversion that I've seen implemented more or less instantly in, say,
> ImageJ. What can I do to make this conversion more efficient?
You should "just remove" the
On Sunday, 20 January 2008, 23:15:24, Travis Vaught wrote:
> On Jan 20, 2008, at 3:30 PM, Charles R Harris wrote:
> > Thanks for the attention to this...I think we need to seek continual
> > improvement to the web site (numpy's default pages, again, could use
> > some TLC). I've tweaked the bug a
On Monday, 14 January 2008, 19:59:15, Neal Becker wrote:
> I don't want to use FROM_O here, because I really can only handle certain
> types. If I used FROM_O, then after calling FROM_O, if the type was not
> one I could handle, I'd have to call FromAny and convert it.
What is the problem with tha
On Friday, 21 December 2007, 13:23:49, David Cournapeau wrote:
> > Instead of saying "memmap is ALL about disc access" I would rather
> > like to say that "memap is all about SMART disk access" -- what I mean
> > is that memmap should run as fast as a normal ndarray if it works on
> > the cached
On Thursday 20 December 2007, Christopher Barker wrote:
> > In [9]: print where( (logical_or(a<1, b<3)), b,c)
> > [4 2 2 1]
> > (Think of the Zen.)
>
> I'm not sure the Zen answers this one for us.
As you have guessed correctly, I was thinking of "explicit is better than
implicit".
> It's real
On Sunday, 16 December 2007, 20:10:41, Ross Harder wrote:
> What's the correct way to do something like this?
>
> a=array( (0,1,1,0) )
> b=array( (4,3,2,1) )
> c=array( (1,2,3,4) )
>
> where( (a<1 or b<3), b,c)
Now + and | have been proposed to you, but it looks to me as if the "correct
way" wo
On Sunday 16 December 2007, Neil Crighton wrote:
> Do we really need these functions in numpy? I mean it's just
> multiplying/dividing the value by pi/180 (who knows why they're in the
> math module..).
I like them in math ("explicit is better than implicit"*), but I don't want to
comment on wh
On Tuesday 11 December 2007, Timothy Hochberg wrote:
> > You mean one of the following?
> > a.clip(min = 10, max = numpy.finfo(a.dtype).max)
> > a.clip(min = 10, max = numpy.iinfo(a.dtype).max)
>
> No. I mean:
>
> numpy.maximum(a, 10)
>
> To correspond to the above example.
Great, thanks for
On Monday, 10 December 2007, 17:23:07, Matthieu Brucher wrote:
> I had the same problem sooner today, someone told me the answer : use
> numpy.info object ;)
I saw this shortly after posting (what a coincidence), and I planned to reply
to myself, but my mail did not make it to the list very quic
On Monday, 10 December 2007, 23:46:17, Timothy Hochberg wrote:
> > TypeError: function takes at least 2 arguments (1 given)
> >
> > (I could simulate that by passing max = maximum_value_of(a.dtype), if
> > that existed, see my other mail.)
>
> Why not just use minimum or maximum as needed instead
Hi!
Is there a way to query the minimum and maximum values of the numpy datatypes?
E.g. numpy.uint8.max == 255, numpy.uint8.min == 0 (these attributes exist, but
they are functions, obviously for technical reasons).
Ciao, / /
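The answer that surfaced later in the thread is numpy.iinfo / numpy.finfo; a quick sketch:

```python
import numpy as np

u8 = np.iinfo(np.uint8)      # integer dtype limits
f64 = np.finfo(np.float64)   # floating-point dtype characteristics

limits = (u8.min, u8.max)    # the (min, max) asked about above
eps = f64.eps                # machine epsilon for float64
```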
Hi again,
I noticed that clip() needs two parameters, but wouldn't it be nice and
straightforward to just pass min= or max= as keyword arg?
In [2]: a = arange(10)
In [3]: a.clip(min = 2, max = 5)
Out[3]: array([2, 2, 2, 3, 4, 5, 5, 5, 5, 5])
In [4]: a.clip(min = 2)
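In the NumPy of that era, In [4] raised the TypeError quoted elsewhere in the thread, because clip required both bounds; np.maximum was the suggested one-sided equivalent, and newer releases also accept None for the missing bound:

```python
import numpy as np

a = np.arange(10)

both = a.clip(2, 5)          # both bounds, as in the In [3] example
low_only = np.maximum(a, 2)  # the one-sided equivalent suggested in-thread

# Newer NumPy accepts None for the missing bound:
low_only2 = np.clip(a, 2, None)
```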
On Friday 30 November 2007, Joe Harrington wrote:
> I was misinformed about the status of numdisplay's pages. The package
> is available as both part of stsci_python and independently, and its
> (up-to-date) home page is here:
>
> http://stsdas.stsci.edu/numdisplay/
I had a look at ds9/numdispla
On Saturday 1 December 2007, Martin Spacek wrote:
> Kurt Smith wrote:
> > You might try numpy.memmap -- others have had success with it for
> > large files (32 bit should be able to handle a 1.3 GB file, AFAIK).
>
> Yeah, I looked into numpy.memmap. Two issues with that. I need to
> eliminate as
On Monday 26 November 2007, Matthew Perry wrote:
> Hi all,
>
> I'm not sure if my terminology is familiar but I'm trying to do a
> "moving window" analysis (ie a spatial filter or kernel) on a 2-D
> array representing elevation. For example, a 3x3 window centered on
> each cell is used to calculat
On Thursday, 15 November 2007, 16:29:12, Warren Focke wrote:
> On Thu, 15 Nov 2007, George Nurser wrote:
> > It looks to me like
> > a,b = (zeros((2,)),)*2
> > is equivalent to
> > x= zeros((2,))
> > a,b=(x,)*2
>
> Correct.
>
> > If this is indeed a feature rather than a bug, is there an alterna
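A sketch of the aliasing pitfall quoted above: (x,)*2 repeats the same object, so both names refer to one array, and the fix is to allocate independently:

```python
import numpy as np

# (x,)*2 repeats the *same* object, so a and b alias one array:
a, b = (np.zeros(2),) * 2
a[0] = 1.0
aliased = b[0]               # b saw a's write

# Independent arrays need independent allocations:
c, d = np.zeros(2), np.zeros(2)
c[0] = 1.0
independent = d[0]           # d is unaffected
```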
On Saturday 10 November 2007, David Cournapeau wrote:
> > After some measurements, I must say that even the slower Fortran variant
> > is competitive (read: faster ;-) ), compared with our very flexible
> > dynamic functors used in the current interactive VIGRA. IOW: Good job.
> > :-)
>
> I am not
On Friday 9 November 2007, Travis E. Oliphant wrote:
> > While this is
> > a good idea (also probably quite some work), the real thing bugging me is
> > that the above DOUBLE_add could (and should!) be called by the ufunc
> > framework in such a way that it is equally efficient for C and Fortran
On Friday, 9 November 2007, 13:04:24, Sebastian Haase wrote:
> Since all my code, if not n Python, is written in C or C++ and not
> Fortran, I decided early on that I had to get used to "inverse
> indexing", as in
> image[y,x] or image[z,y,x]
We cannot do that here, since
a) we use the opposite
On Friday, 9 November 2007, 00:16:12, Travis E. Oliphant wrote:
> C-order is "special" in NumPy due to the history. I agree that it
> doesn't need to be and we have taken significant steps in that
> direction.
Thanks for this clarifying statement.
> Right now, the fundamental problem is proba
On Thursday 8 November 2007, Christopher Barker wrote:
> This discussion makes me wonder if the basic element-wise operations
> could (should?) be special cased for contiguous arrays, reducing them to
> simple pointer incrementing from the start to the finish of the data
> block. The same code
On Thursday, 8 November 2007, 17:31:40, David Cournapeau wrote:
> This is because the current implementation for at least some of the
> operations you are talking about are using PyArray_GenericReduce and
> other similar functions, which are really high level (they use python
> callable, etc..)
On Thursday, 8 November 2007, 16:37:06, David Cournapeau wrote:
> The problem is not F vs C storage: for element-wise operation, it does
> not matter at all; you just apply the same function
> (perform_operation) over and over on every element of the array. The
> order does not matter at all.
On Thursday, 8 November 2007, 16:25:57, Martin Teichmann wrote:
> Some more thoughts:
> * Other implementations: There is other people who have done such a thing.
> the PIL knows how to read and write PNG, but only 8 bit. The same holds
> for matplotlib.
Our VIGRA imaging library can read and
On Thursday, 8 November 2007, 13:44:59, David Cournapeau wrote:
> Hans Meine wrote:
> > I wonder why simple elementwise operations like "a * 2" or "a + 1" are
> > not performed in order of increasing memory addresses in order to exploit
> > CPU caches
Hi!
I wonder why simple elementwise operations like "a * 2" or "a + 1" are not
performed in order of increasing memory addresses in order to exploit CPU
caches etc. - as it is now, their speed drops by a factor of around 3 simply
by transpose()ing. Similarly (but even less logical), copy() and