Sorry for the delay - I have tracked this down and fixed it. The fix will be 
included in 1.10.2.

Thanks for bringing it to our attention!
Ralph


> On Jan 7, 2016, at 9:38 PM, Ralph Castain <r...@open-mpi.org> wrote:
> 
> A singleton will indeed have an extra thread, but it should be quiescent. 
> I’ll check the 1.10.2 release candidate and see if it still exhibits that 
> behavior.
> 
> 
>> On Jan 7, 2016, at 9:32 PM, Au Eelis <auee...@gmail.com> wrote:
>> 
>> Hi!
>> 
>> It is related insofar as one of these threads is actually doing 
>> something.
>> 
>> By the way, I noticed this on two separate machines: a computing cluster with 
>> an admin-built Open MPI, and an Arch Linux machine with Open MPI from the repositories.
>> 
>> However, running the code with Open MPI 1.6.2 and ifort 13.0.0 does not show 
>> this behaviour.
>> 
>> Best regards,
>> Stefan
>> 
>> 
>> 
>> On 01/07/2016 03:27 PM, Sasso, John (GE Power, Non-GE) wrote:
>>> Stefan,  I don't know if this is related to your issue, but FYI...
>>> 
>>> 
>>>> Those are async progress threads - they block unless there is work for 
>>>> them to do.
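>>>> 
>>>> One way to see where the extra LWPs come from is to compare the thread 
>>>> count before and after MPI_Init; whether they are busy can then be checked 
>>>> via the per-thread CPU time in 'ps -eFL' or 'top -H'. A minimal sketch, 
>>>> assuming Linux (it parses the "Threads:" line of /proc/self/status; 
>>>> report_threads is just an illustrative helper):
>>>> 
>>>> program count_threads
>>>>     use mpi_f08
>>>>     implicit none
>>>> 
>>>>     integer :: ierror
>>>> 
>>>>     call report_threads('before MPI_Init:')
>>>>     call MPI_Init(ierror)
>>>>     call report_threads('after MPI_Init: ')
>>>>     call MPI_Finalize(ierror)
>>>> 
>>>> contains
>>>> 
>>>>     ! Linux-specific: print the "Threads:" line of /proc/self/status
>>>>     subroutine report_threads(label)
>>>>         character(len=*), intent(in) :: label
>>>>         character(len=128) :: line
>>>>         integer :: unit, stat
>>>> 
>>>>         open(newunit=unit, file='/proc/self/status', action='read')
>>>>         do
>>>>             read(unit, '(A)', iostat=stat) line
>>>>             if (stat /= 0) exit
>>>>             if (line(1:8) == 'Threads:') then
>>>>                 write(*,*) label, ' ', trim(line)
>>>>                 exit
>>>>             end if
>>>>         end do
>>>>         close(unit)
>>>>     end subroutine report_threads
>>>> 
>>>> end program count_threads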
>>>> 
>>>> 
>>>>> On Apr 15, 2015, at 8:36 AM, Sasso, John (GE Power & Water, Non-GE) 
>>>>> <john1.sa...@ge.com> wrote:
>>>>> 
>>>>> I stumbled upon something while using 'ps -eFL' to view the threads of 
>>>>> processes, and Google searches have failed to answer my question. The 
>>>>> question holds for Open MPI 1.6.x and even Open MPI 1.4.x.
>>>>> 
>>>>> For a program that is pure MPI (built and run using Open MPI) and does 
>>>>> not use Pthreads or OpenMP, why does each MPI task appear to have 
>>>>> 3 threads:
>>>>> 
>>>>> UID      PID  PPID   LWP  C NLWP    SZ    RSS PSR STIME TTY      TIME CMD
>>>>> sasso  20512 20493 20512 99    3 187849 582420  14 11:01 ?    00:26:37 /home/sasso/mpi_example.exe
>>>>> sasso  20512 20493 20588  0    3 187849 582420  11 11:01 ?    00:00:00 /home/sasso/mpi_example.exe
>>>>> sasso  20512 20493 20599  0    3 187849 582420  12 11:01 ?    00:00:00 /home/sasso/mpi_example.exe
>>>>> 
>>>>> whereas if I compile and run a non-MPI program, 'ps -eFL' shows it 
>>>>> running as a single thread?
>>>>> 
>>>>> Granted, the CPU utilization (C) for 2 of the 3 threads is zero, but the 
>>>>> threads are bound to different processors (11, 12, 14). I am curious as 
>>>>> to why this is, not complaining that there is a problem. Thanks!
>>>>> 
>>>>> --john
>>> 
>>> 
>>> -----Original Message-----
>>> From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Au Eelis
>>> Sent: Thursday, January 07, 2016 7:10 AM
>>> To: us...@open-mpi.org
>>> Subject: [OMPI users] Singleton process spawns additional thread
>>> 
>>> Hi!
>>> 
>>> I have a weird problem when executing a singleton Open MPI program: an 
>>> additional thread runs at full load, while the master thread performs the 
>>> actual calculation.
>>> 
>>> In contrast, executing "mpirun -np 1 [executable]" performs the same 
>>> calculation at the same speed, but the additional thread is idle.
>>> 
>>> In my understanding, both calculations should behave in the same way (i.e., 
>>> one working thread) for a program which is simply moving some data around 
>>> (mainly some MPI_BCAST and MPI_GATHER commands).
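>>> 
>>> For illustration, the data movement is essentially of the following shape 
>>> (a minimal sketch with made-up buffer sizes and names, not the actual code):
>>> 
>>> program bcast_gather
>>>     use mpi_f08
>>>     implicit none
>>> 
>>>     integer :: ierror, rank, nprocs, i
>>>     integer :: chunk(4)
>>>     integer, allocatable :: all_chunks(:)
>>> 
>>>     call MPI_Init(ierror)
>>>     call MPI_Comm_Rank(MPI_Comm_World, rank, ierror)
>>>     call MPI_Comm_Size(MPI_Comm_World, nprocs, ierror)
>>> 
>>>     ! root fills a buffer and broadcasts it to all ranks
>>>     if (rank == 0) chunk = [(i, i = 1, 4)]
>>>     call MPI_Bcast(chunk, 4, MPI_INTEGER, 0, MPI_Comm_World, ierror)
>>> 
>>>     ! each rank sends its (modified) chunk back to the root
>>>     chunk = chunk + rank
>>>     allocate(all_chunks(4 * nprocs))
>>>     call MPI_Gather(chunk, 4, MPI_INTEGER, all_chunks, 4, MPI_INTEGER, &
>>>                     0, MPI_Comm_World, ierror)
>>>     if (rank == 0) write(*,*) all_chunks
>>> 
>>>     call MPI_Finalize(ierror)
>>> end program bcast_gather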
>>> 
>>> I observed this behaviour with Open MPI 1.10.1 under both ifort 16.0.1 and 
>>> gfortran 5.3.0. A minimal working example is appended to this mail.
>>> 
>>> Am I missing something?
>>> 
>>> Best regards,
>>> Stefan
>>> 
>>> -----
>>> 
>>> MWE: Compile this with "mpifort main.f90". When executing with "./a.out", 
>>> there is a thread wasting cycles while the master thread waits for input. 
>>> When executing with "mpirun -np 1 ./a.out", this thread is idle.
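>>> (Per-thread CPU usage can be checked with, e.g., "top -H -p <pid>".)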
>>> 
>>> program main
>>>     use mpi_f08
>>>     implicit none
>>> 
>>>     integer :: ierror,rank
>>> 
>>>     call MPI_Init(ierror)
>>>     call MPI_Comm_Rank(MPI_Comm_World,rank,ierror)
>>> 
>>>     ! let master thread wait on [RETURN]-key
>>>     if (rank == 0) then
>>>         read(*,*)
>>>     end if
>>> 
>>>     write(*,*) rank
>>> 
>>>     call MPI_Barrier(MPI_Comm_World, ierror)
>>> end program
>> 