On Mon, Oct 1, 2012 at 10:33 AM, Ralph Castain wrote:
> Yes, that is the expected behavior as you describe it.
>
> If you want to run on hosts that are not already provided (via hostfile in
> the environment or on the command line), then you need to use the "add-host"
> or "add-hostfile" MPI_Inf
Yes, that is the expected behavior as you describe it.
If you want to run on hosts that are not already provided (via hostfile in the
environment or on the command line), then you need to use the "add-host" or
"add-hostfile" MPI_Info key. See "man MPI_Comm_spawn" for details.
On Oct 1, 2012, a
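A minimal sketch of that "add-host" usage (untested; the worker binary "./worker" and the host name "node37" are placeholders, see "man MPI_Comm_spawn" for the recognized keys):

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm intercomm;
    MPI_Info info;

    MPI_Init(&argc, &argv);

    /* Ask Open MPI to extend the job with a host that was not part of
       the original hostfile/allocation. */
    MPI_Info_create(&info);
    MPI_Info_set(info, "add-host", "node37");

    MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 4, info, 0,
                   MPI_COMM_SELF, &intercomm, MPI_ERRCODES_IGNORE);

    MPI_Info_free(&info);
    MPI_Comm_disconnect(&intercomm);
    MPI_Finalize();
    return 0;
}

The "add-hostfile" key is used the same way, with the value naming a hostfile rather than the host itself.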
On Wed, Sep 12, 2012 at 10:23 AM, Ralph Castain wrote:
>
> On Sep 12, 2012, at 9:55 AM, Brian Budge wrote:
>
>> On Wed, Aug 17, 2011 at 12:05 AM, Simone Pellegrini
>> wrote:
>>> On 08/16/2011 11:15 PM, Ralph Castain wrote:
I'm not finding a bug - the code looks clean. If I send you a patch, could
you apply it, rebuild, and send me the resulting debug output?
On Sep 12, 2012, at 9:55 AM, Brian Budge wrote:
> On Wed, Aug 17, 2011 at 12:05 AM, Simone Pellegrini
> wrote:
>> On 08/16/2011 11:15 PM, Ralph Castain wrote:
>>>
>>> I'm not finding a bug - the code looks clean. If I send you a patch, could
>>> you apply it, rebuild, and send me the resulting debug output?
On Wed, Aug 17, 2011 at 12:05 AM, Simone Pellegrini
wrote:
> On 08/16/2011 11:15 PM, Ralph Castain wrote:
>>
>> I'm not finding a bug - the code looks clean. If I send you a patch, could
>> you apply it, rebuild, and send me the resulting debug output?
>
> yes, I could do that. No problem.
>
> thanks again, Simone
On Sep 7, 2011, at 4:03 PM, Simone Pellegrini wrote:
> By the way, I solved the problem by invoking MPI_Comm_disconnect on the
> inter-communicator I receive from the spawning task (MPI_Finalize is not
> enough). This makes the spawned tasks close the parent communicator and
> terminate.
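The pattern Simone describes, sketched from the spawned task's side (an illustration only; the parent should likewise call MPI_Comm_disconnect on the intercommunicator it got back from MPI_Comm_spawn):

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm parent;

    MPI_Init(&argc, &argv);
    MPI_Comm_get_parent(&parent);

    /* ... do the spawned task's work ... */

    /* Explicitly disconnect from the parent; per the report above,
       MPI_Finalize alone did not let the children terminate. */
    if (parent != MPI_COMM_NULL)
        MPI_Comm_disconnect(&parent);

    MPI_Finalize();
    return 0;
}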
On 09/06/2011 06:11 PM, Ralph Castain wrote:
Hmmm...well, nothing definitive there, I'm afraid.
All I can suggest is to remove/reduce the threading. Like I said, we aren't
terribly thread safe at this time. I suspect you're stepping into one of those
non-safe areas here.
Hopefully will do better in later releases.
Hmmm...well, nothing definitive there, I'm afraid.
All I can suggest is to remove/reduce the threading. Like I said, we aren't
terribly thread safe at this time. I suspect you're stepping into one of those
non-safe areas here.
Hopefully will do better in later releases.
On Sep 6, 2011, at 1:20
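On the threading side, a generic sketch of requesting full thread support and checking what the library actually grants (plain MPI, nothing Open MPI specific):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;

    /* Ask for MPI_THREAD_MULTIPLE; the library is allowed to grant less. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    if (provided < MPI_THREAD_MULTIPLE)
        fprintf(stderr, "warning: requested MPI_THREAD_MULTIPLE, got level %d\n",
                provided);

    /* ... */

    MPI_Finalize();
    return 0;
}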
On 09/06/2011 04:58 PM, Ralph Castain wrote:
On Sep 6, 2011, at 12:49 PM, Simone Pellegrini wrote:
On 09/06/2011 02:57 PM, Ralph Castain wrote:
Hi Simone
Just to clarify: is your application threaded? Could you please send the OMPI
configure cmd you used?
yes, it is threaded. There are basi
On Sep 6, 2011, at 12:49 PM, Simone Pellegrini wrote:
> On 09/06/2011 02:57 PM, Ralph Castain wrote:
>> Hi Simone
>>
>> Just to clarify: is your application threaded? Could you please send the
>> OMPI configure cmd you used?
>
> yes, it is threaded. There are basically 3 threads, 1 for the out
On 09/06/2011 02:57 PM, Ralph Castain wrote:
Hi Simone
Just to clarify: is your application threaded? Could you please send the OMPI
configure cmd you used?
yes, it is threaded. There are basically 3 threads, 1 for the outgoing
messages (MPI_Send), 1 for incoming messages (MPI_Iprobe / MPI_R
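The receiving thread described above might look roughly like this (a sketch only; buffer handling, tags and the shutdown flag are placeholders, and a loop like this needs MPI_THREAD_MULTIPLE if other threads call MPI concurrently):

#include <mpi.h>
#include <stdlib.h>

void receive_loop(volatile int *done)
{
    int flag, count;
    MPI_Status status;

    while (!*done) {
        /* Poll for any incoming message, then post the matching receive. */
        MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &flag, &status);
        if (flag) {
            MPI_Get_count(&status, MPI_BYTE, &count);
            char *buf = malloc(count);
            MPI_Recv(buf, count, MPI_BYTE, status.MPI_SOURCE, status.MPI_TAG,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            /* ... hand the message off to the application ... */
            free(buf);
        }
    }
}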
Hi Simone
Just to clarify: is your application threaded? Could you please send the OMPI
configure cmd you used?
Adding the debug flags just changes the race condition. Interestingly, those
values only impact the behavior of mpirun, so it looks like the race condition
is occurring there.
On S
Dear all,
I am developing an MPI application which uses MPI_Spawn heavily. Usually
everything works fine for the first hundred spawns, but after a while the
application exits with a curious message:
[arch-top:27712] [[36904,165],0] ORTE_ERROR_LOG: Data unpack would read
past end of buffer in fi
On 08/16/2011 11:15 PM, Ralph Castain wrote:
I'm not finding a bug - the code looks clean. If I send you a patch, could you
apply it, rebuild, and send me the resulting debug output?
yes, I could do that. No problem.
thanks again, Simone
On Aug 16, 2011, at 10:18 AM, Ralph Castain wrote:
Smells like a bug - I'll take a look.
I'm not finding a bug - the code looks clean. If I send you a patch, could you
apply it, rebuild, and send me the resulting debug output?
On Aug 16, 2011, at 10:18 AM, Ralph Castain wrote:
> Smells like a bug - I'll take a look.
>
>
> On Aug 16, 2011, at 9:10 AM, Simone Pellegrini wrote:
>
>
Smells like a bug - I'll take a look.
On Aug 16, 2011, at 9:10 AM, Simone Pellegrini wrote:
> On 08/16/2011 02:11 PM, Ralph Castain wrote:
>> That should work, then. When you set the "host" property, did you give the
>> same name as was in your machine file?
>>
>> Debug options that might help:
>> -mca plm_base_verbose 5 -mca rmaps_base_verbose 5
On 08/16/2011 02:11 PM, Ralph Castain wrote:
That should work, then. When you set the "host" property, did you give the same
name as was in your machine file?
Debug options that might help:
-mca plm_base_verbose 5 -mca rmaps_base_verbose 5
You'll need to configure --enable-debug to get the output, but that should help
tell us what is happening.
That should work, then. When you set the "host" property, did you give the same
name as was in your machine file?
Debug options that might help:
-mca plm_base_verbose 5 -mca rmaps_base_verbose 5
You'll need to configure --enable-debug to get the output, but that should help
tell us what is happening.
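Putting that together, the suggested run would look something like this after rebuilding Open MPI with --enable-debug (the application and machinefile names are placeholders):

mpirun -np 1 -machinefile ./machines \
       -mca plm_base_verbose 5 -mca rmaps_base_verbose 5 ./my_scheduler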
On 08/16/2011 12:30 PM, Ralph Castain wrote:
What version are you using?
OpenMPI 1.4.3
On Aug 16, 2011, at 3:19 AM, Simone Pellegrini wrote:
Dear all,
I am developing a system to manage MPI tasks on top of MPI. The architecture is
rather simple: I have a set of scheduler processes which
What version are you using?
On Aug 16, 2011, at 3:19 AM, Simone Pellegrini wrote:
> Dear all,
> I am developing a system to manage MPI tasks on top of MPI. The architecture
> is rather simple: I have a set of scheduler processes which take care of
> managing the resources of a node. The idea is
Dear all,
I am developing a system to manage MPI tasks on top of MPI. The
architecture is rather simple: I have a set of scheduler processes which
take care of managing the resources of a node. The idea is to have 1 (or
more) of those schedulers allocated on each node of a cluster and then
creat
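A rough sketch of the spawn loop such a scheduler ends up running (binary name and loop count are made up; note the MPI_Comm_disconnect after each child, which is the fix reported elsewhere in this thread):

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    for (int i = 0; i < 200; i++) {
        MPI_Comm child;

        MPI_Comm_spawn("./task", MPI_ARGV_NULL, 1, MPI_INFO_NULL, 0,
                       MPI_COMM_SELF, &child, MPI_ERRCODES_IGNORE);

        /* ... exchange work and results over the intercommunicator ... */

        /* Let the child terminate and release its resources. */
        MPI_Comm_disconnect(&child);
    }

    MPI_Finalize();
    return 0;
}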
Hi,
Is it possible to get the process IDs of the processes created by
mpi_spawn?
Thanks,
Rob
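One common way to do that (generic MPI, nothing Open MPI specific): have each spawned process look up its own PID with getpid() and send it back over the intercommunicator. Child side, as a sketch:

#include <mpi.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    MPI_Comm parent;
    int pid;

    MPI_Init(&argc, &argv);
    MPI_Comm_get_parent(&parent);

    pid = (int) getpid();
    if (parent != MPI_COMM_NULL)
        /* On an intercommunicator, destination ranks refer to the remote
           group, so 0 here is rank 0 of the parent job. */
        MPI_Send(&pid, 1, MPI_INT, 0, 0, parent);

    /* ... rest of the spawned task ... */
    MPI_Finalize();
    return 0;
}

On the parent side, rank 0 would post one MPI_Recv per spawned process on the intercommunicator returned by MPI_Comm_spawn.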