On 11/2/12 5:19 PM, Ben Elliston wrote:
> On Fri, Nov 02, 2012 at 04:56:55PM -0400, Francesc Alted wrote:
>
>> Hmm, that's strange. Using lzo or zlib works for you?
> Well, it seems that switching compression algorithms could be a
> nightmare (or can I do this with ptrepack?).
Yes, ptrepack can do that.
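
For reference, recompressing with ptrepack is usually a one-line invocation along these lines (the file names are hypothetical and --complib/--complevel are example values, not settings taken from this thread):

  ptrepack --complib=zlib --complevel=5 data_blosc.h5:/ data_zlib.h5:/

This copies every node into a new file while applying the requested filters, so the original file is left untouched.
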
On Fri, Nov 02, 2012 at 04:56:55PM -0400, Francesc Alted wrote:
> Hmm, that's strange. Using lzo or zlib works for you?
Well, it seems that switching compression algorithms could be a
nightmare (or can I do this with ptrepack?). However, I may have a
workaround: I now open the HDF5 file with tables...
On 11/2/12 4:49 PM, Ben Elliston wrote:
> Hi Francesc
>
>> Hmm, now that I think, Blosc is not thread safe, and that can bring
>> these sorts of problems if you use it from several threads (but it
>> should be safe when using several *processes*).
> I am using multiprocessing.Pool, like so:
>
> if __name__ == '__main__':
>     pool = Pool(processes=2)
Hmm, that's strange. Using lzo or zlib works for you?
Hi Francesc
> Hmm, now that I think, Blosc is not thread safe, and that can bring
> these sorts of problems if you use it from several threads (but it
> should be safe when using several *processes*).
I am using multiprocessing.Pool, like so:
if __name__ == '__main__':
    pool = Pool(processes=2)
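
A minimal, self-contained sketch of that pattern, with each worker opening the file for itself, might look like the following. It is not Ben's actual script: the file name, node name and slice bounds are made up, and it uses the modern tables.open_file spelling (the 2.x series spells it tables.openFile):

import tables
from multiprocessing import Pool

FILENAME = 'data.h5'   # hypothetical file holding a Blosc-compressed CArray at /carray

def read_chunk(bounds):
    # Each worker opens its own handle, so no HDF5 state is shared between processes.
    start, stop = bounds
    with tables.open_file(FILENAME, mode='r') as f:
        return f.root.carray[start:stop].sum()

if __name__ == '__main__':
    pool = Pool(processes=2)
    # Independent, non-overlapping slices, one task per worker.
    chunks = [(0, 500000), (500000, 1000000)]
    print(pool.map(read_chunk, chunks))
    pool.close()
    pool.join()

The idea is that every worker process gets its own HDF5 file handle instead of sharing one inherited from the parent.
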
On 11/2/12 4:22 PM, Ben Elliston wrote:
> My reading of the PyTables FAQ is that concurrent read access should
> be safe with PyTables. However, when using a pool of worker processes
> to read different parts of a large blosc-compressed CArray, I see:
>
> HDF5-DIAG: Error detected in HDF5 (1.8.8) thread 140476163647232:
> #000: ../../../src/H5Dio
Hmm, now that I think, Blosc is not thread safe, and that can bring
these sorts of problems if you use it from several threads (but it
should be safe when using several *processes*).
My reading of the PyTables FAQ is that concurrent read access should
be safe with PyTables. However, when using a pool of worker processes
to read different parts of a large blosc-compressed CArray, I see:
HDF5-DIAG: Error detected in HDF5 (1.8.8) thread 140476163647232:
#000: ../../../src/H5Dio
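
For context, a CArray like the one described is typically created with a Blosc filter roughly as follows. The file name, node name, atom and shape are assumptions rather than details from Ben's setup, and open_file/create_carray are the modern spellings of the 2.x openFile/createCArray:

import numpy as np
import tables

# Hypothetical example: build a large, chunked, Blosc-compressed CArray.
filters = tables.Filters(complevel=5, complib='blosc')
with tables.open_file('data.h5', mode='w') as f:
    carray = f.create_carray(f.root, 'carray',
                             atom=tables.Float64Atom(),
                             shape=(1000000,),
                             filters=filters)
    carray[:] = np.arange(1000000, dtype='float64')
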