ay. In my
opinion, the metadata management should be done by another (sub-?)
class. This way, numpy arrays stay simple enough for new users (as I was
roughly two years ago...).
I would be very interested in a class that *uses* numpy arrays to
provide a data structure for physical data with coo
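A minimal sketch of what such a wrapper could look like (the class and attribute names here are purely illustrative, and I am assuming the truncated "coo" above refers to coordinate information):
### start code
import numpy as N

class PhysicalData(object):
    """Thin wrapper that *uses* a numpy array instead of subclassing it."""
    def __init__(self, values, coordinates, unit=''):
        self.values = N.asarray(values)            # the plain numpy array
        self.coordinates = N.asarray(coordinates)  # e.g. sample positions
        self.unit = unit                           # physical unit, as a string

# usage: a 1-D voltage trace sampled at known times
trace = PhysicalData(N.random.rand(1000),
                     N.linspace(0.0, 1e-3, 1000), unit='V')
### end code
The point is that the array itself stays an ordinary ndarray, so new users never have to deal with the metadata unless they want to.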
Hi,
>> 2) Is there a way to use another algorithm (at the cost of performance)
>> that uses less memory during calculation so that I can generate bigger
>> histograms?
>
> You could work through your array block by block. Simply fix the range and
> generate a histogram for each slice of 10
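A minimal sketch of that blockwise approach (the function and its parameter names are mine, not an existing API); fixing the range makes the bin edges identical for every block, so the per-block counts can simply be summed:
### start code
import numpy as N

def blockwise_histogram(data, bins=1024, limits=(0.0, 1.0), blocksize=10**6):
    # Fixed limits -> identical bin edges for every block,
    # so the counts accumulate correctly.
    counts = N.zeros(bins, dtype=N.int64)
    for start in range(0, data.size, blocksize):
        h, _ = N.histogram(data[start:start + blocksize],
                           bins=bins, range=limits)
        counts += h
    return counts
### end code
Only one block is processed at a time, so the temporary memory needed during the calculation stays bounded by the block size.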
)
that uses less memory during calculation so that I can generate bigger
histograms?
My numpy version is '1.0.4.dev3937'
Thanks,
Lars
--
Dipl.-Ing. Lars Friedrich
Photonic Measurement Technology
Department of Microsystems Engineering -- IMTEK
University of Freiburg
Georges-Köhler-A
Thank you for your comments!
I will try this fftw3-scipy approach and see how much faster I can get.
Maybe this is enough for me...?
Lars
Hello,
thanks for your comments. If I got you right, I should look for an
FFT code that uses SSE (what does this actually stand for?), which means
that it vectorizes 32-bit single-precision operations into larger chunks
that make efficient use of recent CPUs.
You mentioned FFTW and MKL. Is this www.fftw.o
Hello,
David Cournapeau wrote:
> As far as I can read from the fft code in numpy, only double is
> supported at the moment, unfortunately. Note that you can get some speed
> by using scipy.fftpack methods instead, if scipy is an option for you.
What I understood is that numpy uses FFTPACK's alg
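For what it is worth, the dtype behaviour is easy to verify; the snippet below only demonstrates numpy's upcasting and a cast-afterwards workaround, which saves memory downstream but not transform time:
### start code
import numpy as N

a = N.random.rand(256, 256).astype(N.float32)

# numpy.fft computes in double precision, so the output is complex128
# even for single-precision input:
print(N.fft.fft2(a).dtype)                 # complex128

# Casting afterwards keeps later stages in single precision, but the
# transform itself still runs in double:
b = N.fft.fft2(a).astype(N.complex64)
print(b.dtype)                             # complex64
### end code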
Hello,
is there a way to tell numpy.fft.fft2 to use complex64 instead of
complex128 as output dtype, to speed up the transformation?
Thanks
Lars
Hello,
I tried the following:
### start code
import numpy as N

a = N.random.rand(100)
myFile = open('test.bin', 'wb')
for i in range(100):
    a.tofile(myFile)       # write the same array 100 times
myFile.close()
### end code
And this gives roughly 50 MB/s on my office machine but only 6.5 MB/s on
the machine that I was report
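For reference, a self-contained way to measure such a rate (variable names are mine; I use a larger array here than in the snippet above so the timing is not dominated by call overhead):
### start code
import time
import numpy as N

a = N.random.rand(10**6)          # about 8 MB per write
nwrites = 10

t0 = time.time()
f = open('test.bin', 'wb')
for i in range(nwrites):
    a.tofile(f)
f.close()
elapsed = time.time() - t0

print('%.1f MB/s' % (a.nbytes * nwrites / 1e6 / elapsed))
### end code
Note that the operating system's write cache can make short runs look much faster than the disk really is.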
Hello everyone,
thank you for the replies.
Sebastian, the chunk size is roughly 4*10^6 samples; at two bytes per
sample, this is about 8 MB. I can vary this size, but increasing it only
helps when starting from much smaller values. For example, when I use a
size of 100 samples, I am much too slow. It gets be
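To see the effect of the chunk size directly, one can hold the total amount of data fixed and vary only the per-write block (a sketch under the same assumptions as above, with int16 samples, i.e. two bytes per sample):
### start code
import time
import numpy as N

def rate(samples_per_write, total_mb=8):
    chunk = N.zeros(samples_per_write, dtype=N.int16)  # 2 bytes per sample
    nwrites = total_mb * 10**6 // chunk.nbytes
    f = open('test.bin', 'wb')
    t0 = time.time()
    for i in range(nwrites):
        chunk.tofile(f)
    f.close()
    return total_mb / (time.time() - t0)

for n in (100, 10**4, 4 * 10**6):
    print('%8d samples/write: %.1f MB/s' % (n, rate(n)))
### end code
At small block sizes the per-call overhead dominates, which is consistent with 100-sample chunks being far too slow.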
(hard disk). Currently I can stream with roughly 4 MB/s, which is
quite fast, I guess. However, if anyone can point me to a way to write
my data to the hard disk faster, I would be very happy!
Thanks
Lars
ehaviour is different from arange, I think it is not intentional. But
maybe there is a good reason for this behaviour?
I am using numpy version 1.0.1. Maybe the behaviour was already changed
in more recent versions?
Thank you for any comments
Lars Friedrich