Hello all,
I'm managing a file close to 26 GB in size. Its main structure is a table
with a bit more than 8 million rows. The table has four columns: the
first two store names, the third holds a 53-item array in each cell,
and the last holds a 133x6 matrix in each cell.
I work on a Linux workstation with 24 GB of RAM. My usual way of working
with the file is to retrieve, from each cell in the 4th column of the
table, the same row of the 133x6 matrix. I store the result in a NumPy
array of shape (8e6, 6). In this process I use almost the whole
workstation's memory.
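Roughly, my loop looks like this (a simplified sketch; the file name, the
node path /mytable, the column name "matrices", the row index and the
dtype are all placeholders):

import numpy as np
import tables

ROW_IDX = 42  # which row of each 133x6 matrix I extract (placeholder)

with tables.open_file("data.h5", mode="r") as h5:   # placeholder file name
    table = h5.root.mytable                         # placeholder node path
    # preallocate the 8e6 x 6 result instead of growing a list
    out = np.empty((table.nrows, 6), dtype=np.float64)
    # iterrows() walks the table row by row, so only one table row
    # (plus PyTables' internal I/O buffers) should be live at a time
    for i, row in enumerate(table.iterrows()):
        out[i] = row["matrices"][ROW_IDX]           # placeholder column name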
Is there any way to optimize the memory usage?
If not, I have been thinking about splitting the file, along the lines of
the sketch below.
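This is the kind of splitting I have in mind (again only a sketch with
placeholder names; the chunk size is a guess aimed at keeping each slice,
roughly 7 KB per row with my schema, well under the available memory):

import tables

CHUNK = 100000  # rows per output file (placeholder; ~0.7 GB per slice)

with tables.open_file("data.h5", mode="r") as src:  # placeholder file name
    table = src.root.mytable                        # placeholder node path
    for part, start in enumerate(range(0, table.nrows, CHUNK)):
        stop = min(start + CHUNK, table.nrows)
        with tables.open_file("data_part%03d.h5" % part, mode="w") as dst:
            # reuse the source table's description so the schema matches
            new = dst.create_table("/", "mytable", table.description)
            new.append(table.read(start, stop))     # copy this row slice
            new.flush()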
Thank you,
Juanma