On Fri, May 4, 2012 at 12:49 AM, Keith Goodman wrote:
> On Thu, May 3, 2012 at 3:12 PM, Moroney, Catherine M (388D)
> wrote:
>
>> Here is the python code:
>>
>> def single(element, targets):
>>
>> if (isinstance(element, tuple)):
>> xelement = element[0]
>> elif (isinstance(element, numpy.ndarray)):
On Thu, May 3, 2012 at 3:12 PM, Moroney, Catherine M (388D)
wrote:
> Here is the python code:
>
> def single(element, targets):
>
> if (isinstance(element, tuple)):
> xelement = element[0]
> elif (isinstance(element, numpy.ndarray)):
> xelement = element
> else:
> re
On May 3, 2012, at 1:00 PM,
wrote:
>> A quick recap of the problem: a 128x512 array of 7-element vectors
>> (element), and a 5000-vector
>> training dataset (targets). For each vector in element, I want to find the
>> best-match in targets,
>> defined as minimizing the Euclidean distance.
>
On Thu, May 3, 2012 at 12:46 PM, Paul Anton Letnes
wrote:
>
> Could you show us the code? It's hard to tell otherwise. As Keith Goodman
> pointed out, if he gets 7.5x with cython, it could be that the Fortran code
> could be improved as well. Fortran has a reputation of being the gold
> standard.
On Thu, May 3, 2012 at 1:51 AM, Henry Gomersall wrote:
> Right, so this is expected behaviour then. Is this documented somewhere?
> It strikes me that this is pretty unexpected behaviour.
Imagine the way you would code this in a for-loop. You want
a = np.arange(10)
a[2:] = a[:-2]
Now you write
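Written out as the explicit for-loop the reply is pointing at (a sketch; the loop and the defensive copy below are illustrative, not the original poster's code):

```python
import numpy as np

# a[2:] = a[:-2] assigns from a view that overlaps the destination.
# A naive forward loop over the same elements propagates values the
# loop itself has just written:
a = np.arange(10)
for i in range(2, len(a)):
    a[i] = a[i - 2]          # reads slots written two steps earlier
# a is now [0, 1, 0, 1, 0, 1, 0, 1, 0, 1] -- not a plain shift.

# A defensive copy of the right-hand side states the intent
# unambiguously, regardless of how any NumPy version handles overlap:
b = np.arange(10)
b[2:] = b[:-2].copy()
# b is now [0, 1, 0, 1, 2, 3, 4, 5, 6, 7]
```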
On 3. mai 2012, at 19:33, Moroney, Catherine M (388D) wrote:
> A quick recap of the problem: a 128x512 array of 7-element vectors
> (element), and a 5000-vector
> training dataset (targets). For each vector in element, I want to find the
> best-match in targets,
> defined as minimizing the Euclidean distance.
On May 3, 2012, at 1:38 PM, Moroney, Catherine M (388D) wrote:
>
> On May 3, 2012, at 10:33 AM, Moroney, Catherine M (388D) wrote:
>
>> A quick recap of the problem: a 128x512 array of 7-element vectors
>> (element), and a 5000-vector
>> training dataset (targets). For each vector in element, I want to find the
>> best-match in targets, defined as minimizing the Euclidean distance.
On Thu, May 3, 2012 at 10:38 AM, Moroney, Catherine M (388D)
wrote:
> Actually Fortran with correct array ordering - 13 seconds! What horrible
> python/numpy
> mistake am I making to cause such a slowdown?
For the type of problem you are working on, I'd flip it around and ask
what you are doing
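One common culprit in such Fortran-vs-NumPy gaps is memory layout: NumPy is row-major (C order) by default while Fortran is column-major, so a loop nest copied verbatim from Fortran walks NumPy's slow axis. An illustrative check (array shape taken from the problem; the variable names are mine):

```python
import numpy as np

# Same data, two memory layouts.  Iterating over the axis that is
# contiguous in memory is fast; iterating across it is not.
c_arr = np.zeros((128, 512, 7))           # C order: last axis contiguous
f_arr = np.asfortranarray(c_arr)          # Fortran order: first axis contiguous

print(c_arr.flags['C_CONTIGUOUS'])        # True
print(f_arr.flags['F_CONTIGUOUS'])        # True
```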
On May 3, 2012, at 10:33 AM, Moroney, Catherine M (388D) wrote:
> A quick recap of the problem: a 128x512 array of 7-element vectors
> (element), and a 5000-vector
> training dataset (targets). For each vector in element, I want to find the
> best-match in targets,
> defined as minimizing the Euclidean distance.
On Thu, May 3, 2012 at 12:51 PM, Tony Yu wrote:
>
>
> On Thu, May 3, 2012 at 9:57 AM, Robert Kern wrote:
>>
>> On Thu, May 3, 2012 at 2:50 PM, Robert Elsner wrote:
>> >
>> > Am 03.05.2012 15:45, schrieb Robert Kern:
>> >> On Thu, May 3, 2012 at 2:24 PM, Robert Elsner
>> >> wrote:
>> >>> Hello Everybody,
A quick recap of the problem: a 128x512 array of 7-element vectors (element),
and a 5000-vector
training dataset (targets). For each vector in element, I want to find the
best-match in targets,
defined as minimizing the Euclidean distance.
I coded it up three ways: (a) looping through each vector
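A hedged sketch of a vectorized brute-force version (shapes follow the problem statement; the variable names, chunk size, and random data are illustrative, not the original code):

```python
import numpy as np

rng = np.random.default_rng(0)
element = rng.random((128, 512, 7))              # grid of query vectors
targets = rng.random((5000, 7))                  # training set

flat = element.reshape(-1, 7)                    # (65536, 7) query vectors
t2 = (targets ** 2).sum(axis=1)                  # ||t||^2 per target row

best = np.empty(flat.shape[0], dtype=np.intp)
for start in range(0, flat.shape[0], 4096):      # chunk to bound memory use
    chunk = flat[start:start + 4096]
    # ||x - t||^2 = ||x||^2 - 2 x.t + ||t||^2; the ||x||^2 term is
    # constant per query, so it can be dropped when only the argmin
    # is needed.
    d2 = t2 - 2.0 * chunk @ targets.T            # (chunk, 5000)
    best[start:start + 4096] = d2.argmin(axis=1)

best = best.reshape(128, 512)                    # best-match index per vector
```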
> --
> Message: 6
> Date: Thu, 3 May 2012 10:00:11 -0700
> From: Keith Goodman
> Subject: Re: [Numpy-discussion] record arrays initialization
> To: Discussion of Numerical Python
> Message-ID:
>
> Content-Type: text/plain; charset=ISO-8859-1
>
> On Wed
On Wed, May 2, 2012 at 4:46 PM, Kevin Jacobs
wrote:
> The cKDTree implementation is more than 4 times faster than the brute-force
> approach:
>
> T = scipy.spatial.cKDTree(targets)
>
> In [11]: %timeit foo1(element, targets) # Brute force
> 1000 loops, best of 3: 385 us per loop
>
> In [12]: %
On Thu, May 3, 2012 at 9:57 AM, Robert Kern wrote:
> On Thu, May 3, 2012 at 2:50 PM, Robert Elsner wrote:
> >
> > Am 03.05.2012 15:45, schrieb Robert Kern:
> >> On Thu, May 3, 2012 at 2:24 PM, Robert Elsner
> wrote:
> >>> Hello Everybody,
> >>>
> >>> is there any news on the status of np.bincount
Hi,
I compiled lapack, atlas, umfpack, fftw in local folder, in similar
way as described here: http://www.scipy.org/Installing_SciPy/Linux
on 32bit Ubuntu Precise
In ~/.local/lib I have:
libamd.2.2.3.a
libamd.a -> libamd.2.2.3.a
libatlas.a
libcblas.a
li
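For reference, the numpy/scipy build picks up such locally installed libraries through a site.cfg file next to setup.py; a sketch, assuming a prefix like the one above (all paths are illustrative and must match your actual install):

```ini
; Hedged sketch of a site.cfg for a local ATLAS/AMD/UMFPACK build.
[DEFAULT]
library_dirs = /home/user/.local/lib
include_dirs = /home/user/.local/include

[atlas]
atlas_libs = lapack, f77blas, cblas, atlas

[amd]
amd_libs = amd

[umfpack]
umfpack_libs = umfpack
```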
On Thu, May 3, 2012 at 2:50 PM, Robert Elsner wrote:
>
> Am 03.05.2012 15:45, schrieb Robert Kern:
>> On Thu, May 3, 2012 at 2:24 PM, Robert Elsner wrote:
>>> Hello Everybody,
>>>
>>> is there any news on the status of np.bincount with respect to "big"
>>> numbers? It seems I have just been bitten by #225.
Am 03.05.2012 15:45, schrieb Robert Kern:
> On Thu, May 3, 2012 at 2:24 PM, Robert Elsner wrote:
>> Hello Everybody,
>>
>> is there any news on the status of np.bincount with respect to "big"
>> numbers? It seems I have just been bitten by #225. Is there an efficient
>> way around? I found the np.histogram function painfully slow.
On Thu, May 3, 2012 at 2:24 PM, Robert Elsner wrote:
> Hello Everybody,
>
> is there any news on the status of np.bincount with respect to "big"
> numbers? It seems I have just been bitten by #225. Is there an efficient
> way around? I found the np.histogram function painfully slow.
>
> Below a simple script that demonstrates bincount failing with a memory error.
On Thu, May 3, 2012 at 3:41 AM, Nathaniel Smith wrote:
> On Thu, May 3, 2012 at 4:44 AM, Charles R Harris
> wrote:
> >
> >
> > On Wed, May 2, 2012 at 3:20 PM, Nathaniel Smith wrote:
> >> This coordinate format is also what's used by the MATLAB Tensor
> >> Toolbox. They have a paper justifying this choice and describing some
> >> tricks for how to work with them:
Hello Everybody,
is there any news on the status of np.bincount with respect to "big"
numbers? It seems I have just been bitten by #225. Is there an efficient
way around? I found the np.histogram function painfully slow.
Below is a simple script that demonstrates bincount failing with a memory
error.
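For context, np.bincount allocates an output array of length max(x) + 1, so a handful of very large label values is enough to exhaust memory (ticket #225) even when only a few distinct values occur. A hedged workaround sketch using np.unique (return_counts appeared in later NumPy releases; the data is illustrative):

```python
import numpy as np

# A few huge label values: bincount would try to allocate ~10**9
# counters here, although only three distinct values are present.
x = np.array([3, 1_000_000_000, 3, 7, 1_000_000_000])

# Count only the values that actually occur.
values, counts = np.unique(x, return_counts=True)
# values -> [3, 7, 1000000000], counts -> [2, 1, 2]
```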
On Thu, May 3, 2012 at 4:44 AM, Charles R Harris
wrote:
>
>
> On Wed, May 2, 2012 at 3:20 PM, Nathaniel Smith wrote:
>> This coordinate format is also what's used by the MATLAB Tensor
>> Toolbox. They have a paper justifying this choice and describing some
>> tricks for how to work with them:
>>
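The coordinate (COO) scheme under discussion generalizes naturally past 2-D: an (nnz, ndim) integer index array plus an (nnz,) value array. A minimal sketch (not an existing scipy API; the names and data are mine):

```python
import numpy as np

# Two nonzeros in a 3-D tensor of shape (4, 2, 3): each row of
# `coords` is the full index of one nonzero entry.
coords = np.array([[0, 1, 2],
                   [3, 0, 1]])
values = np.array([10.0, 20.0])

def to_dense(coords, values, shape):
    dense = np.zeros(shape)
    dense[tuple(coords.T)] = values     # fancy-index each nonzero at once
    return dense

dense = to_dense(coords, values, (4, 2, 3))
# dense[0, 1, 2] == 10.0 and dense[3, 0, 1] == 20.0
```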
On Wed, 2012-05-02 at 12:58 -0700, Stéfan van der Walt wrote:
> On Wed, May 2, 2012 at 9:03 AM, Henry Gomersall
> wrote:
> > Is this some nuance of the way numpy does things? Or am I missing
> some
> > stupid bug in my code?
>
> Try playing with the parameters of the following code:
>
>
> For
On 05/03/2012 03:25 AM, Travis Oliphant wrote:
>
> On May 2, 2012, at 5:28 PM, Stéfan van der Walt wrote:
>
>> On Wed, May 2, 2012 at 3:20 PM, Francesc Alted wrote:
>>> On 5/2/12 4:07 PM, Stéfan van der Walt wrote:
>>> Well, as the OP said, coo_matrix does not support dimensions larger than
>>> 2,
Sounds like it could be a good match for `scipy.spatial.cKDTree`.
It can handle single-element queries...
>>> element = numpy.arange(1, 8)
>>> targets = numpy.random.uniform(0, 8, (1000, 7))
>>> tree = scipy.spatial.cKDTree(targets)
>>> distance, index = tree.query(element)
>>> targets[index]
arr
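The single-element query above extends directly to the full 128x512 grid by reshaping to a 2-D array of query points (sizes from the problem statement; the random data is illustrative):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
element = rng.random((128, 512, 7))       # grid of 7-element query vectors
targets = rng.random((5000, 7))           # training set

tree = cKDTree(targets)
# query() accepts an array of points and returns one (distance, index)
# pair per query vector.
distances, indices = tree.query(element.reshape(-1, 7))
indices = indices.reshape(128, 512)       # best-match row of `targets`
```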
On 05/03/2012 06:27 AM, Travis Oliphant wrote:
>
> On May 2, 2012, at 10:03 PM, Stéfan van der Walt wrote:
>
>> On Wed, May 2, 2012 at 6:25 PM, Travis Oliphant wrote:
>>> The only new principle (which is not strictly new --- but new to NumPy's
>>> world-view) is using one (or more) fields of a st