On Sat, Jul 22, 2017 at 10:50 PM, Ilhan Polat wrote:
> A few months ago, I had the innocent intention of wrapping the LDLt decomposition
> routines of LAPACK into SciPy, but then I was made aware that the minimum
> required version of LAPACK/BLAS was set by the Accelerate framework. Since then
> I've been foll
@Robert, good point, always good to try out code before speculating on a
thread. ;)
Here’s working code to do the averaging, though it’s not block-wise; you’ll
have to add that on top with dask/util.apply_parallel. Note also that because
of the C-order of numpy arrays, it’s much more efficient
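Roughly this kind of thing (a sketch only, not the code from this message; the
cube shape, the 3x3 trace neighbourhood, and the use of
scipy.ndimage.uniform_filter are my assumptions):

import numpy as np
from scipy import ndimage

# Hypothetical cube: (inlines, crosslines, samples), traces along the last axis.
cube = np.random.randint(0, 1000, size=(50, 60, 400))

# Average each trace with its 3x3 neighbourhood of surrounding traces.
# Work in float (as np.mean would), then cast back to the original dtype.
smoothed = ndimage.uniform_filter(cube.astype(float), size=(3, 3, 1))
smoothed = smoothed.astype(cube.dtype)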
On Sat, Sep 16, 2017 at 7:16 AM, Chris Barker - NOAA Federal <
chris.bar...@noaa.gov> wrote:
>
> No thoughts on optimizing memory, but that indexing error probably comes
> from np.mean producing float results. An astype call should fix that.
Why? It's not being used as an index. It's being assigned.
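Either way, the float output from np.mean is easy to see (a tiny example of my
own):

import numpy as np

a = np.array([[1, 2], [3, 4]])
m = a.mean(axis=0)     # array([2., 3.]) - mean returns floats even for int input
idx = m.astype(int)    # array([2, 3])  - cast back before using the values as indices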
+1 on the astype(int) call.
+1 also on using dask. scikit-image has a couple of functions that might be
useful:
- skimage.util.apply_parallel: applies a function to an input array in chunks,
with user-selectable chunk size and margins. This is powered by dask.
- skimage.util.view_as_windows: returns a rolling-window view of the input
array, which makes neighbourhood operations like this averaging easy to express.
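A rough sketch of the apply_parallel route (the chunk size, the margin, and the
mean filter used as the per-chunk function are my own choices, not from the
original post):

import numpy as np
from scipy import ndimage
from skimage.util import apply_parallel

cube = np.random.random((512, 512, 100))  # hypothetical data

# Run a 3x3x1 mean filter chunk by chunk; depth=1 adds a one-pixel margin so
# values near chunk borders still see their neighbours.
smoothed = apply_parallel(
    lambda block: ndimage.uniform_filter(block, size=(3, 3, 1)),
    cube,
    chunks=(128, 128, 100),
    depth=1,
)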
No thoughts on optimizing memory, but that indexing error probably comes
from np.mean producing float results. An astype call should fix that.
-CHB
Sent from my iPhone
On Sep 15, 2017, at 5:51 PM, Robert McLeod wrote:
On Fri, Sep 15, 2017 at 2:37 PM, Elliot Hallmark
wrote:
> Nope. Numpy
On Fri, Sep 15, 2017 at 2:37 PM, Elliot Hallmark
wrote:
> Nope. Numpy only works on in memory arrays. You can determine your own
> chunking strategy using hdf5, or something like dask can figure that
> strategy out for you. With numpy you might worry about not accidentally
> making duplicates or
Nope. Numpy only works on in-memory arrays. You can determine your own
chunking strategy using hdf5, or something like dask can figure that
strategy out for you. With numpy you might worry about not accidentally
making duplicates or intermediate arrays, but that's the extent of memory
optimization.
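As a concrete sketch of that (the file name, dataset name, and chunk shape here
are made up):

import dask.array as da
import h5py

f = h5py.File('cube.h5', 'r')       # hypothetical file
dset = f['seismic']                 # hypothetical dataset
cube = da.from_array(dset, chunks=(64, 64, -1))

# dask only reads the chunks it needs, and only when .compute() is called.
trace_means = cube.mean(axis=-1).compute()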
Your example doesn't run, but here is one that does:
In [8]: x = np.array([50], dtype=float)
In [9]: np.piecewise(x, [0 < x <= 90, 90 < x <= 180], [1.1, 2.1])
array([ 1.1])
The answer to your second question is that it is returning an array
with the same dtype as its first argument.
The answer
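To make the dtype point concrete (my own example, written with elementwise &
so it also works for arrays with more than one element):

import numpy as np

x_int = np.array([50])                 # integer dtype
x_flt = np.array([50], dtype=float)    # float dtype

# The output dtype follows the first argument, so 1.1 is truncated to 1 here:
np.piecewise(x_int, [(0 < x_int) & (x_int <= 90), (90 < x_int) & (x_int <= 180)], [1.1, 2.1])
# -> array([1])

# ...but stays 1.1 for the float input:
np.piecewise(x_flt, [(0 < x_flt) & (x_flt <= 90), (90 < x_flt) & (x_flt <= 180)], [1.1, 2.1])
# -> array([1.1])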
On 2017/09/15 2:02 AM, Joe wrote:
> Hello,
> I have two questions and hope that you can help me.
> 1.)
> Is np.piecewise only defined for two conditions or should something like
> [0 < x <= 90, 90 < x <= 180, 180 < x <= 270]
> also work?
> 2.)
> Why does
> np.piecewise(np.array([50]), [0 < x <= 90, 90 < x <=
Hello,
I have two questions and hope that you can help me.
1.)
Is np.piecewise only defined for two conditions or should something like
[0 < x <= 90, 90 < x <= 180, 180 < x <= 270]
also work?
2.)
Why does
np.piecewise(np.array([50]), [0 < x <= 90, 90 < x <= 180], [1.1, 2.1])
return [2] and
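For what it's worth, more than two conditions do work; the catch is that a
chained comparison like 0 < x <= 90 only happens to work for single-element
arrays, so elementwise & is the safe way to write the conditions (my own
example):

import numpy as np

x = np.array([50.0, 120.0, 200.0])
conds = [(0 < x) & (x <= 90), (90 < x) & (x <= 180), (180 < x) & (x <= 270)]
np.piecewise(x, conds, [1.1, 2.1, 3.1])
# -> array([1.1, 2.1, 3.1])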
I was hoping that, with numpy doing this in a vectorised way, only the
surrounding traces would be loaded into memory for each X and Y as needed,
rather than the whole cube. I'm using hdf5 for the storage. My example was
just a short example that didn't use hdf5.
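For what it's worth, h5py already behaves that way when you slice a dataset:
only the requested slab is read from disk. A sketch (the file and dataset names
are made up):

import h5py

with h5py.File('cube.h5', 'r') as f:      # hypothetical file
    dset = f['seismic']                   # hypothetical dataset, shape (nx, ny, nsamples)
    x, y = 100, 200
    # Reads only the 3x3 block of neighbouring traces, not the whole cube.
    block = dset[x - 1:x + 2, y - 1:y + 2, :]
    trace_avg = block.mean(axis=(0, 1))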
On 15 Sep 2017 1:16 am, "Elliot Hallmark" w