Hmm, sorry to hear that, Owen. Let me know how it goes.
On Thu, Oct 11, 2012 at 11:07 AM, Owen Mackwood <
owen.mackw...@bccn-berlin.de> wrote:
> [...]
Hi Anthony,
I tried your suggestion and it has not solved the problem. It could be that
it makes the problem go away in the test code because it changes the timing
of the processes. I'll see if I can modify the test code to reproduce the
hang even with reloading the tables module.
Regards,
Owen
So Owen,
I am still not sure what the underlying problem is, but I altered your
parallel function to forcibly reload pytables each time it is called.
This seemed to work perfectly on my larger system but not at all on my
smaller one. If there is a way that you can isolate pytables and not
impor...
On 10 October 2012 20:08, Anthony Scopatz wrote:
> [...]
Hi Owen,
So just to confirm this behavior, having run your sample on a couple of my
machines, what you see is that the code looks like it gets all the way to
the end, and then it stalls right before it is about to exit, leaving some
small number of processes (here named python tables_test.py) in t...
Hi Anthony,
I've created a reduced example which reproduces the error. I suppose the
more processes you can run in parallel the more likely it is you'll see the
hang. On a machine with 8 cores, I see 5-6 processes hang out of 2000.
All of the hung tasks had a call stack that looked like this:
#0 ...
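As a hedged aside on inspecting a hung worker: gdb (as used above) shows the C-level stack, and on Python 3.3+ the stdlib `faulthandler` module offers a Python-level complement, letting you poke a stuck process from a shell with `kill -USR1 <pid>` and get every thread's traceback. This was not available on the Python 2 of this thread; it is shown only as a modern alternative.

```python
# Hedged sketch: register a signal handler so a stuck worker can be asked
# for a Python-level traceback from outside (kill -USR1 <pid>), and
# trigger the same dump directly into a file to show what it contains.
import faulthandler
import signal
import tempfile

if hasattr(signal, "SIGUSR1"):  # SIGUSR1 is not available on Windows
    faulthandler.register(signal.SIGUSR1)

with tempfile.TemporaryFile(mode="w+") as f:
    faulthandler.dump_traceback(file=f)  # dump all thread stacks
    f.seek(0)
    dump = f.read()
print(dump.splitlines()[0])
```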
On Mon, Oct 8, 2012 at 11:19 AM, Owen Mackwood wrote:
> [...]
Hi Anthony,
On 8 October 2012 15:54, Anthony Scopatz wrote:
> Hmmm, Are you actually copying the data (f.root.data[:]) or are you
> simply passing a reference as arguments (f.root.data)?
>
I call f.root.data.read() on any arrays to load them into the process
target args dictionary. I had assum...
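The distinction being discussed above can be illustrated with a small stand-in: passing a worker an in-memory copy of the data (what `node.read()` or `node[:]` returns in PyTables) versus passing the node object itself, which stays tied to the open file. The `FakeArrayNode` class here is a hypothetical mock, not a real PyTables type.

```python
# Hedged illustration: copy vs. reference when handing data to a worker.
# A tiny stand-in class is used instead of a real PyTables array node.

class FakeArrayNode:
    """Stand-in for a PyTables array node bound to an open HDF5 file."""
    def __init__(self, data):
        self._data = data
        self.file_is_open = True

    def read(self):
        # Like node.read() or node[:] in PyTables: returns an
        # independent in-memory copy of the data.
        return list(self._data)

node = FakeArrayNode([1, 2, 3])
copied = node.read()         # safe to put in a worker's args dictionary
node.file_is_open = False    # "closing the file" does not affect the copy
print(copied)
```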
On Mon, Oct 8, 2012 at 5:13 AM, Owen Mackwood wrote:
> [...]
Hi Anthony,
There is a single multiprocessing.Pool which usually has 6-8 processes,
each of which is used to run a single task, after which a new process is
created for the next task (maxtasksperchild=1 for the Pool constructor).
There is a master process that regularly opens an HDF5 file to read...
Hi Owen,
How many pools do you have? Is this a random runtime failure? What kind
of system is this one? Is there some particular function in Python that
you are running? (It seems to be openFile(), but I can't be sure...) The
error is definitely happening down in the H5open() routine. Now wh...
Hi Anthony,
I'm not trying to write in parallel. Each worker process has its own file
to write to. After all tasks are completed, I collect the results in the
master process. So the problem I'm seeing (a hang in the worker process)
shouldn't have anything to do with parallel writes. Do you have an...
Hello Owen,
While you can use process pools to read from a file in parallel just fine,
writing is another story completely. While HDF5 itself supports parallel
writing through MPI, this comes at the high cost of compression no longer
being available and a much more complicated code base. So for t...
Hello,
I'm using a multiprocessing.Pool to parallelize a set of tasks which record
their results into separate hdf5 files. Occasionally (less than 2% of the
time) the worker process will hang. According to gdb, the problem occurs
while opening the hdf5 file, when it attempts to obtain the associat...