Good point! I'm traveling this week with limited resources, but I will try to 
address this when I'm able.

Sent from my iPad

On Jan 24, 2012, at 7:07 AM, Reuti <re...@staff.uni-marburg.de> wrote:

> Am 24.01.2012 um 15:49 schrieb Ralph Castain:
> 
>> I'm a little confused. Building procs static makes sense as libraries may 
>> not be available on compute nodes. However, mpirun is only executed in one 
>> place, usually the head node where it was built. So there is less reason to 
>> build it purely static.
>> 
>> Are you trying to move mpirun somewhere? Or is it the daemons that mpirun 
>> launches that are the real problem?
> 
> This depends: with a queuing system, the master node of a parallel job may 
> itself be one of the slave nodes, i.e. the node where the jobscript runs. My 
> own nodes are uniform, but I have seen sites where that was not the case.
> 
> An option would be a special queue which always executes the jobscript on the 
> headnode (i.e. without generating any real load there) and uses only the 
> non-locally granted slots for mpirun. For this it might be necessary to 
> configure a high number of slots on the headnode for this queue, and to always 
> request one slot on that machine in addition to the ones needed on the compute 
> nodes.
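> 
> A rough sketch of such a jobscript under Grid Engine (the PE name "orte" and 
> the slot counts are only examples, and the jobscript would still have to be 
> pinned to the headnode's queue, e.g. via -masterq):
> 
>    #!/bin/sh
>    #$ -pe orte 9    # e.g. 1 slot on the headnode + 8 on the compute nodes
>    # drop the headnode's own entry from the granted host list
>    grep -v "^$(hostname)" $PE_HOSTFILE | awk '{print $1" slots="$2}' > hosts.$JOB_ID
>    # hand only the non-local slots to mpirun
>    mpirun -np $(($NSLOTS - 1)) --hostfile hosts.$JOB_ID dirac.x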
> 
> -- Reuti
> 
> 
>> Sent from my iPad
>> 
>> On Jan 24, 2012, at 5:54 AM, Ilias Miroslav <miroslav.il...@umb.sk> wrote:
>> 
>>> Dear experts,
>>> 
>>> following http://www.open-mpi.org/faq/?category=building#static-build I 
>>> successfully built a static Open MPI library.
>>> Using this library I then built a parallel static executable, dirac.x 
>>> ("ldd dirac.x" reports "not a dynamic executable").
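>>> 
>>> Roughly, the steps from that FAQ entry were as follows (the install prefix 
>>> and the -static link flag are shown only as an illustration, not my exact 
>>> commands):
>>> 
>>>     ./configure --prefix=<OpenMPI_static> --enable-static --disable-shared
>>>     make all install
>>>     # link the application fully statically against the static library
>>>     <OpenMPI_static>/bin/mpif90 -static -o dirac.x <object files>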
>>> 
>>> The problem remains, however, with the mpirun (orterun) launcher. 
>>> On the local machine, where I compiled both the static Open MPI and the 
>>> static dirac.x, I am able to launch a parallel job with
>>> <OpenMPI_static>/mpirun -np 2 dirac.x ,
>>> but I cannot launch it elsewhere, because "mpirun" is dynamically linked and 
>>> thus machine dependent:
>>> 
>>> ldd mpirun:
>>>      linux-vdso.so.1 =>  (0x00007fff13792000)
>>>      libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f40f8cab000)
>>>      libnsl.so.1 => /lib/x86_64-linux-gnu/libnsl.so.1 (0x00007f40f8a93000)
>>>      libutil.so.1 => /lib/x86_64-linux-gnu/libutil.so.1 (0x00007f40f888f000)
>>>      libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f40f860d000)
>>>      libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f40f83f1000)
>>>      libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f40f806c000)
>>>      /lib64/ld-linux-x86-64.so.2 (0x00007f40f8ecb000)
>>> 
>>> Please, how do I build a "pure" static mpirun launcher that is usable (in my 
>>> case together with the static dirac.x) on other computers as well?
>>> 
>>> Thanks, Miro
>>> 
>>> -- 
>>> RNDr. Miroslav Iliaš, PhD.
>>> 
>>> Katedra chémie
>>> Fakulta prírodných vied
>>> Univerzita Mateja Bela
>>> Tajovského 40
>>> 97400 Banská Bystrica
>>> tel: +421 48 446 7351
>>> email : miroslav.il...@umb.sk
>>> 
>>> Department of Chemistry
>>> Faculty of Natural Sciences
>>> Matej Bel University
>>> Tajovského 40
>>> 97400 Banska Bystrica
>>> Slovakia
>>> tel: +421 48 446 7351
>>> email :  miroslav.il...@umb.sk
>>> 
