On Monday 16 June 2008, Glenn wrote:
> Hello,
> I am storing 400000 rows to an EArray as follows:
> if grp.__contains__('normI'):
>     fh.removeNode(grp, 'normI')
> fh.createEArray(grp, 'normI', Float32Atom(), (0, 512),
>                 expectedrows=800000)
>
> ... populate 400000 rows of normI array ...
>
> When I use it as follows:
> tmp = np.asarray(grp.normI[:,k])  # Grab the k'th column of the EArray
> tmp = SomeCalculation(tmp)        # this is very fast
> grp.SomeCArray[:,k] = tmp         # this is also very fast, but I am only
>                                   # storing ~100 values, so I'm not sure
>                                   # if it actually has good performance or not
>
>
> it is horribly slow: the np.asarray call takes ~30 seconds, which is
> only ~32 KB/s if only the 400000*4 bytes that should be read are being
> read, but ~16 MB/s if all 512*4*400000 bytes are being read and then
> sliced. When I check the disk read performance, I see that it is
> indeed reading continuously at around 16 MB/s. Am I doing something
> wrong?
Mmmm, I think your message above is missing some information. Could you
please double-check exactly which statement is showing the slow
performance?
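For instance, a small self-contained timing script along these lines
would help isolate where the time goes. This is just a sketch: the file
name, group name, and column index are made up, and it assumes the
PyTables 2.x API that you are already using above:

    import time
    import numpy as np
    import tables

    # Hypothetical file/group names, mirroring the layout described above
    fh = tables.openFile('normI_test.h5', mode='w')
    grp = fh.createGroup('/', 'data')
    arr = fh.createEArray(grp, 'normI', tables.Float32Atom(), (0, 512),
                          expectedrows=800000)

    # Populate 400000 rows, appending in blocks of 10000 rows
    block = np.zeros((10000, 512), dtype=np.float32)
    for i in range(40):
        arr.append(block)
    fh.flush()

    k = 100  # arbitrary column index
    t0 = time.time()
    tmp = arr[:, k]  # time just the column read
    print('column read took %.2f s' % (time.time() - t0))
    fh.close()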
Also, I'm not sure why you are using the

    tmp = np.asarray(grp.normI[:,k])

idiom. Isn't

    tmp = grp.normI[:,k]

enough? Not that the asarray() call would be slowing things down; I'm
just curious.
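For reference, slicing an EArray already returns a NumPy array, so the
asarray() call should simply pass the result through unchanged:

    tmp = grp.normI[:, k]
    print(type(tmp))  # already a numpy.ndarray; np.asarray(tmp) returns it as-is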
Cheers,
--
Francesc Alted
Freelance developer
Tel +34-964-282-249