On Wed, 18 Apr 2007 at 11:38 +0100, Michael Hoffman wrote:
> What is the I/O block size that PyTables uses? I ask because on my 
> Lustre system, reading blocks of less than 2 MB results in degraded 
> performance.

Yes, for chunked datasets (all leaf types except Array), you can compute
the chunk size as follows (using PyTables 2.0 here):

# For arrays: multiply the elements per chunk by the atom size
>>> reduce(lambda x, y: x*y, array.chunkshape) * array.atom.size
4096L  # 4 KB
# For tables: rows per chunk times the row (record) size
>>> table.chunkshape[0] * table.description._v_itemsize
4032L  # almost 4 KB

> Is there a way to change the I/O block size?

Yup, in 2.0 you can pass the desired value via the new 'chunkshape'
argument of the dataset constructors. Note that it is expressed in
number of atoms (or rows) per dimension, not bytes, so divide your
target block size by the size of your atom first.
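For example, here is a minimal sketch of that arithmetic, assuming a 1-D
EArray of float64 atoms and a 2 MB target block (the file name, node name,
and use of createEArray in the commented part are illustrative):

```python
# Size a chunk to ~2 MB for a 1-D EArray of float64 atoms, as suggested
# for Lustre.  chunkshape counts atoms, not bytes, so divide first.
itemsize = 8                        # bytes per float64 atom
target = 2 * 1024 * 1024            # desired I/O block: 2 MB
rows_per_chunk = target // itemsize # atoms per chunk along axis 0
print(rows_per_chunk)               # 262144 atoms -> exactly 2 MB/chunk

# Then pass it at creation time (PyTables 2.0 API; names hypothetical):
#   import tables
#   fileh = tables.openFile('data.h5', 'w')
#   earray = fileh.createEArray('/', 'x', tables.Float64Atom(), (0,),
#                               chunkshape=(rows_per_chunk,))
```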

HTH,

-- 
Francesc Altet    |  Be careful about using the following code --
Carabos Coop. V.  |  I've only proven that it works, 
www.carabos.com   |  I haven't tested it. -- Donald Knuth


_______________________________________________
Pytables-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/pytables-users
