I have come across a similar error to this one, and I've traced it down a bit. My
code originally worked fine with HDF 1.8 and Pytables 2.0, but now that I'm
running HDF 1.8.2 and Pytables 2.1, I see the error below with this table
description:
{'Timestamp':tables.Float64Col(),
'A_ID':t
g two copies of the data (which
is not impossible, if that's the only way).
Basically I want to divide each row of data by the mean of that row, and then
perform a calculation on each column.
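In case a concrete sketch helps, that row-wise normalization is a one-liner with NumPy broadcasting; the 40×512 shape and the per-column standard deviation below are just illustrative assumptions:

```python
import numpy as np

# Illustrative data: 40 spectra of 512 points each (shapes assumed).
data = np.random.rand(40, 512).astype(np.float64) + 1.0

# Divide each row by its own mean; keepdims=True keeps the means
# shaped (40, 1) so they broadcast across the columns of each row.
normed = data / data.mean(axis=1, keepdims=True)

# Then a per-column calculation, e.g. the standard deviation down
# each of the 512 columns.
col_stats = normed.std(axis=0)
```

After the division, every row of `normed` has mean 1.0, so any subsequent per-column statistic is computed on the normalized values.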
Glenn
Francesc Alted pytables.com> writes:
>
> On Monday 16 June 2008, Glenn wrote:
> > Hello,
> > I am storing 40 rows to an EArray as follows:
> > if grp.__contains__('normI'):
> >     fh.removeNode(grp, 'normI')
> > fh.createEArray(
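For reference, here is a self-contained sketch of the remove-then-recreate pattern the truncated quote shows. It uses the PyTables 3 snake_case names (`removeNode`/`createEArray` are the 2.x spellings of `remove_node`/`create_earray`) and an in-memory HDF5 file; the group and array names simply mirror the quote:

```python
import numpy as np
import tables

# In-memory file so the sketch leaves nothing on disk.
with tables.open_file("demo.h5", "w", driver="H5FD_CORE",
                      driver_core_backing_store=0) as fh:
    grp = fh.create_group("/", "spectra")
    # Drop any stale array before recreating it, as in the quoted code.
    if "normI" in grp:
        fh.remove_node(grp, "normI")
    normI = fh.create_earray(grp, "normI", tables.Float32Atom(), (0, 512))
    normI.append(np.zeros((40, 512), dtype=np.float32))
    nrows = normI.nrows  # 40 rows appended in one call
```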
np.asarray call takes ~30 seconds, which is only 32 Kbyte/s if only the 40*4 bytes
that should be read are being read, but 16 Mbyte/s if all 512*4*40 bytes are being
read and then sliced. When I check the disk read performance, I see that it is
indeed reading continuously at around 16 Mbyte/s.
Am I
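That behaviour is consistent with HDF5 reading whole chunks: a pure column slice touches every chunk of the dataset, so the full 512*4*40 bytes pass the disk anyway. One workaround sketch (assumed shapes and names, in-memory file for illustration) is a single contiguous read followed by slicing in memory:

```python
import numpy as np
import tables

with tables.open_file("inmem.h5", "w", driver="H5FD_CORE",
                      driver_core_backing_store=0) as fh:
    res = fh.create_earray("/", "res", tables.Float32Atom(), (0, 512))
    res.append(np.random.rand(40, 512).astype(np.float32))

    # One big read instead of 512 tiny per-column ones...
    block = res.read()   # pulls all 40x512 values in a single pass
    col = block[:, 3]    # ...then slice the wanted column in memory
```

If the array is too large to hold in memory, reading it in row blocks (e.g. `res[i:i+n]`) amortizes the chunk reads the same way.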
ld not have
to deal with any special cases. Oh well. Thanks again.
Glenn
way around having to add this extra annoying code everywhere? Why
shouldn't any array with 512 elements in one dimension, and any set of singleton
or nonexistent other dimensions, be appendable?
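Until the library handles that itself, one way to keep the shape bookkeeping in one place is a small helper that coerces every candidate row to the (1, 512) shape the EArray expects. This is a plain-NumPy sketch, assuming an EArray created with shape (0, 512); the helper name is made up:

```python
import numpy as np

def as_row(a):
    """Coerce any 512-element array (1-D, (1, 512), (512, 1), ...)
    to the (1, 512) shape an EArray with shape (0, 512) can append."""
    a = np.asarray(a)
    if a.size != 512:
        raise ValueError("expected 512 elements, got %d" % a.size)
    return a.reshape(1, 512)

row_a = as_row(np.zeros(512))        # 1-D input
row_b = as_row(np.zeros((512, 1)))   # trailing singleton dimension
row_c = as_row(np.zeros((1, 512)))   # already the right shape
```

Each result can then be passed straight to `earray.append(...)` without per-call-site special cases.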
Thanks,
Glenn
Francesc Alted pytables.org> writes:
>
> On Monday 19 May 2008, Glenn wrote:
> > I am working on refining some algorithms to process some spectral
> > data I have stored in an h5 file using PyTables. The data is stored
> > as an EArray. As I work on my a
fh.createEArray(grp, 'res', Float32Atom(), (0,512))
in order to remove the old array so that I can start fresh. This seems to be
slow and cumbersome.
Is there any better way to do this? Perhaps a way to tell the EArray to start at
the beginning again so subsequent append operations over
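One possibility worth testing is `Leaf.truncate()`, which rewinds the enlargeable dimension in place instead of deleting and recreating the node (shrinking a dataset needs HDF5 1.8+, which this thread says is in use; the names and shapes below are illustrative, and the API is the PyTables 3 snake_case one):

```python
import numpy as np
import tables

with tables.open_file("reset.h5", "w", driver="H5FD_CORE",
                      driver_core_backing_store=0) as fh:
    res = fh.create_earray("/", "res", tables.Float32Atom(), (0, 512))
    res.append(np.ones((3, 512), dtype=np.float32))

    res.truncate(0)                 # rewind to zero rows in place
    n_after_truncate = res.nrows

    res.append(np.zeros((1, 512), dtype=np.float32))
    n_after_append = res.nrows      # subsequent appends start fresh
```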
dataB
self.table.row.append()
Periodically flush the data:
if now - self.LastUpdateTime > self.UpdatePeriod:
    self.table.flush()
Writing the data is indeed very fast.
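The time-gated flush above generalizes into a small reusable helper; this is just a sketch of the same pattern, with a generic `flush_fn` standing in for `self.table.flush`:

```python
import time

class PeriodicFlusher:
    """Invoke flush_fn at most once per `period` seconds, mirroring
    the LastUpdateTime/UpdatePeriod check above."""

    def __init__(self, flush_fn, period=1.0):
        self.flush_fn = flush_fn
        self.period = period
        self.last = time.monotonic()

    def maybe_flush(self):
        # Cheap to call on every append; only flushes when due.
        now = time.monotonic()
        if now - self.last > self.period:
            self.flush_fn()
            self.last = now
```

Calling `maybe_flush()` after every `row.append()` keeps the hot path fast while bounding how much unflushed data is at risk.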
I just tried timing the following:
table = fh.root.SpectrometerTimeSeries
de
Francesc Alted pytables.org> writes:
>
> On Friday 02 May 2008, Glenn wrote:
> > Hello,
> > I would like to use pytables to store the output from a spectrometer.
> > The spectra come in at a rapid rate. I am having trouble
> > understanding how to set up a dat
wondering how to
make an array of numpy 1D array rows that I can dynamically add to. With a
Table, I tried setting up an IsDescription subclass but could not figure out how
to add a member that represents a 1D array.
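One way to express that is a multidimensional column, declared with the `shape` argument to the `Col` constructors. This sketch uses the PyTables 3 snake_case API and an in-memory file; the class name, field names, and the 512-point shape are illustrative:

```python
import numpy as np
import tables

# Each table row carries a timestamp plus a whole 512-point spectrum:
# the 1-D array member is a Float32Col with shape=(512,).
class Spectrum(tables.IsDescription):
    timestamp = tables.Float64Col()
    counts = tables.Float32Col(shape=(512,))

with tables.open_file("spec.h5", "w", driver="H5FD_CORE",
                      driver_core_backing_store=0) as fh:
    tbl = fh.create_table("/", "spectra", Spectrum)
    row = tbl.row
    row["timestamp"] = 0.0
    row["counts"] = np.zeros(512, dtype=np.float32)
    row.append()
    tbl.flush()
    nrows = tbl.nrows  # one spectrum stored as a single row
```

New spectra can then be appended dynamically, one `row.append()` at a time.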
I appreciate any help you can offer,
Glenn