On 2014-11-07 00:51:02, Brad Buran wrote:
> On Windows 7 using Anaconda with numpy 1.9.1 I get False (indicating that
> the FFT is not treating each row separately). When I test on an Ubuntu box
> using numpy 1.9.1 I get True. Is this expected behavior?
Given the following code:
import numpy as np
x = np.random.random(size=2**14)
y = x.copy()
z = np.concatenate([x[np.newaxis], y[np.newaxis]], axis=0)
print(np.all(np.fft.fft(z, axis=-1)[0] == np.fft.fft(z[0])))
On Windows 7 using Anaconda with numpy 1.9.1 I get False (indicating that
the FFT is not treating each row separately). When I test on an Ubuntu box
using numpy 1.9.1 I get True. Is this expected behavior?
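For what it's worth, an exact == comparison of FFT outputs is fragile: a batched transform may take a different (e.g. vectorized) code path than a single-row transform, so the results can differ by rounding error even when both are correct. A tolerance-based check is a safer test; a minimal sketch (not part of the original post):
import numpy as np
x = np.random.random(size=2**14)
z = np.concatenate([x[np.newaxis], x.copy()[np.newaxis]], axis=0)
# Compare the batched row-wise FFT against a single-row FFT using a
# floating-point tolerance rather than exact equality.
print(np.allclose(np.fft.fft(z, axis=-1)[0], np.fft.fft(z[0])))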
Thanks for the great input. Jump-ahead in numpy.random would be a very
nice feature, but I don't currently have the time to work on
implementing it. For now, the simplest approach seems to be to cache
the RandomState and reuse it later.
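A minimal sketch of that caching approach (the file name and sample count here are made up for illustration):
import pickle
import numpy as np
rs = np.random.RandomState(seed=1)
rs.uniform(0, 10, size=1000000)  # draw however many samples you need first
# Cache the generator's full state so a later run can resume the
# stream exactly where this one left off.
with open('rng_state.pkl', 'wb') as fh:
    pickle.dump(rs.get_state(), fh)
# Later / elsewhere: restore the cached state and keep drawing.
rs2 = np.random.RandomState()
with open('rng_state.pkl', 'rb') as fh:
    rs2.set_state(pickle.load(fh))
x = rs2.uniform(0, 10)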
Brad
Given the following:
from numpy import random
rs = random.RandomState(seed=1)
# skip the first X billion samples
x = rs.uniform(0, 10)
How do I accomplish "skip the first X billion samples" (e.g. 7.2
billion)? I see that there's a numpy.random.RandomState.set_state
which accepts (among other parameters)
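For readers on newer numpy (1.17+): the Generator API added exactly this kind of jump-ahead, so the skip no longer needs to be simulated or cached. A minimal sketch, assuming each uniform double consumes one draw from the underlying bit generator:
import numpy as np
# PCG64 supports O(1) jump-ahead via advance(). Each uniform double
# consumes one 64-bit draw from PCG64, so advancing by 7_200_000_000
# skips that many samples without generating them.
bg = np.random.PCG64(seed=1)
bg.advance(7_200_000_000)
rng = np.random.Generator(bg)
x = rng.uniform(0, 10)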
them for testing, but it appears that they do not use the
traitsdoc extension.
Brad
On Sat, Mar 24, 2012 at 5:39 PM, Ralf Gommers wrote:
> On Sat, Mar 24, 2012 at 6:58 PM, Brad Buran wrote:
>> Not sure if this is the appropriate place to report the issue, but
Not sure if this is the appropriate place to report the issue, but
I've been getting the following error when trying to build my docs
using Sphinx 1.1.3:
File "C:\Python27\lib\site-packages\numpydoc\numpydoc.py", line 36,
in mangle_docstrings
doc = get_doc_object(obj, what, u"\n".join(lines)
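For context, numpydoc is pulled into a Sphinx build through the extensions list in conf.py, along the lines of the sketch below (the exact extension list will vary by project):
# conf.py (Sphinx configuration) -- minimal sketch
extensions = [
    'sphinx.ext.autodoc',  # pull docstrings out of the source
    'numpydoc',            # parse NumPy-style docstring sections
]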
> same time would be useful. I should note that there is a numpy.ptp()
> function that returns the difference between the min and the max, but I
> don't see anything that returns the actual values.
>
> Ben Root
>
> On Thu, Jun 17, 2010 at 4:50 PM, Brad Buran wrote:
>> I have a 1D array with >100k samples that I would like to reduce by
>> computing the min/max of each "chunk" of n samples. Right now, my
I have a 1D array with >100k samples that I would like to reduce by
computing the min/max of each "chunk" of n samples. Right now, my
code is as follows:
n = 100
offset = array.size % n  # drop leading samples so the rest reshapes evenly
array_min = array[offset:].reshape((-1, n)).min(-1)
array_max = array[offset:].reshape((-1, n)).max(-1)
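A self-contained version of this min/max decimation, for reference (the data is synthetic and the function name is mine):
import numpy as np
def minmax_decimate(a, n=100):
    # Reduce 1-D array a to per-chunk (min, max) over chunks of n samples.
    offset = a.size % n  # drop leading samples so the rest reshapes evenly
    chunks = a[offset:].reshape(-1, n)
    return chunks.min(axis=-1), chunks.max(axis=-1)
a = np.random.random(200000)  # >100k samples, as in the post
lo, hi = minmax_decimate(a)
print(lo.shape, hi.shape)     # -> (2000,) (2000,)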