Re: [OMPI devel] Fwd: [OMPI commits] Git: open-mpi/ompi branch master updated. dev-327-gccaecf0

2014-11-20 Thread Jeff Squyres (jsquyres)
Nathan and I talked last night.

He reverted the BTL updates until after SC and the US Thanksgiving holiday next 
week.

I will update the usNIC BTL during the week of Dec 1.

Who will be updating the other BTLs that need updating?



On Nov 19, 2014, at 4:45 PM, Hjelm, Nathan Thomas  wrote:

> Yes. Usnic, yoda, and smcuda need to be updated for the new interface. The 
> warnings in openib I will fix.
> 
> 
> 
> From: devel on behalf of Ralph Castain
> Sent: Wednesday, November 19, 2014 3:15:07 PM
> To: Open MPI Developers
> Subject: [OMPI devel] Fwd: [OMPI commits] Git: open-mpi/ompi branch master
>   updated. dev-327-gccaecf0
> 
> Was this commit intended to happen? It broke the trunk:
> 
> btl_openib.c:119:9: warning: initialization from incompatible pointer type 
> [enabled by default]
> .btl_atomic_fop = mca_btl_openib_atomic_fop,
> ^
> btl_openib.c:119:9: warning: (near initialization for 
> 'mca_btl_openib_module.super.btl_atomic_fop') [enabled by default]
> btl_openib.c:120:9: warning: initialization from incompatible pointer type 
> [enabled by default]
> .btl_atomic_cswap = mca_btl_openib_atomic_cswap,
> ^
> btl_openib.c:120:9: warning: (near initialization for 
> 'mca_btl_openib_module.super.btl_atomic_cswap') [enabled by default]
> btl_openib.c: In function 'mca_btl_openib_prepare_src':
> btl_openib.c:1456:9: warning: variable 'rc' set but not used 
> [-Wunused-but-set-variable]
> int rc;
> ^
> btl_openib.c:1450:30: warning: variable 'openib_btl' set but not used 
> [-Wunused-but-set-variable]
> mca_btl_openib_module_t *openib_btl;
>  ^
> btl_openib_component.c: In function 'init_one_device':
> btl_openib_component.c:2047:54: warning: comparison between 'enum 
> <anonymous>' and 'mca_base_var_source_t' [-Wenum-compare]
> else if (BTL_OPENIB_RQ_SOURCE_DEVICE_INI ==
>  ^
> btl_usnic_frag.c: In function 'recv_seg_constructor':
> btl_usnic_frag.c:144:17: error: 'mca_btl_base_descriptor_t' has no member 
> named 'des_remote'
> seg->rs_desc.des_remote = NULL;
> ^
> btl_usnic_frag.c:145:17: error: 'mca_btl_base_descriptor_t' has no member 
> named 'des_remote_count'
> seg->rs_desc.des_remote_count = 0;
> ^
> btl_usnic_frag.c: In function 'send_frag_constructor':
> btl_usnic_frag.c:168:9: error: 'mca_btl_base_descriptor_t' has no member 
> named 'des_remote'
> desc->des_remote = frag->sf_base.uf_remote_seg;
> ^
> btl_usnic_frag.c:169:9: error: 'mca_btl_base_descriptor_t' has no member 
> named 'des_remote_count'
> desc->des_remote_count = 0;
> ^
> make[2]: *** [btl_usnic_frag.lo] Error 1
> make[2]: *** Waiting for unfinished jobs
> btl_usnic_module.c: In function 'usnic_put':
> btl_usnic_module.c:1107:56: error: 'struct mca_btl_base_descriptor_t' has no 
> member named 'des_remote'
> frag->sf_base.uf_remote_seg[0].seg_addr.pval = 
> desc->des_remote->seg_addr.pval;
>^
> btl_usnic_module.c: At top level:
> btl_usnic_module.c:2325:9: error: unknown field 'btl_seg_size' specified in 
> initializer
> .btl_seg_size = sizeof(mca_btl_base_segment_t), /* seg size */
> ^
> btl_usnic_module.c:2332:9: warning: initialization from incompatible pointer 
> type [enabled by default]
> .btl_prepare_src = usnic_prepare_src,
> ^
> btl_usnic_module.c:2332:9: warning: (near initialization for 
> 'opal_btl_usnic_module_template.super.btl_prepare_src') [enabled by default]
> btl_usnic_module.c:2333:9: error: unknown field 'btl_prepare_dst' specified 
> in initializer
> .btl_prepare_dst = usnic_prepare_dst,
> ^
> btl_usnic_module.c:2333:9: warning: initialization from incompatible pointer 
> type [enabled by default]
> btl_usnic_module.c:2333:9: warning: (near initialization for 
> 'opal_btl_usnic_module_template.super.btl_send') [enabled by default]
> btl_usnic_module.c:2335:9: warning: initialization from incompatible pointer 
> type [enabled by default]
> .btl_put = usnic_put,
> ^
> btl_usnic_module.c:2335:9: warning: (near initialization for 
> 'opal_btl_usnic_module_template.super.btl_put') [enabled by default]
> make[2]: *** [btl_usnic_module.lo] Error 1
> make[1]: *** [all-recursive] Error 1
> make: *** [all-recursive] Error 1
> 
> 
> 
> Begin forwarded message:
> 
> To: ompi-comm...@open-mpi.org
> Date: November 19, 2014 at 2:01:45 PM PST
> From: git...@crest.iu.edu
> Subject: [OMPI commits] Git: open-mpi/ompi branch master updated. 
> dev-327-gccaecf0
> Reply-To: de...@open-mpi.org
> 
> This is an automated email from the git hooks/post-receive script. It was
> generated because a ref change was pushed to the repository containing
> the project "open-mpi/ompi".
> 
> The
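
Taken together, the usnic errors above outline the shape of the new BTL interface: descriptors lose their des_remote/des_remote_count fields, btl_prepare_dst and btl_seg_size disappear from the module struct, and RDMA operations name the remote side explicitly. The following is a minimal C sketch of what the new-style put signature could look like; every "sketch_" name is invented for illustration, and the real definitions live in the btl.h on master.

/* Illustrative sketch only, NOT the real Open MPI headers.  The shape
 * below is inferred from the compile errors quoted above: instead of a
 * descriptor carrying des_remote/des_remote_count, a put names the
 * remote target directly and passes opaque registration handles. */
#include <stddef.h>
#include <stdint.h>

struct sketch_btl_module_t;              /* stands in for mca_btl_base_module_t */
struct sketch_btl_endpoint_t;            /* stands in for mca_btl_base_endpoint_t */
struct sketch_btl_registration_handle_t; /* opaque, BTL-specific */

typedef void (*sketch_rdma_completion_fn_t)(int status, void *cbdata);

/* Hypothetical new-style put: no descriptor and no des_remote; the
 * caller supplies local/remote addresses and handles explicitly. */
typedef int (*sketch_btl_put_fn_t)(
    struct sketch_btl_module_t *btl,
    struct sketch_btl_endpoint_t *endpoint,
    void *local_address,
    uint64_t remote_address,
    struct sketch_btl_registration_handle_t *local_handle,
    struct sketch_btl_registration_handle_t *remote_handle,
    size_t size,
    int flags,
    sketch_rdma_completion_fn_t cbfunc,
    void *cbdata);

Under a shape like this, the failing usnic constructor lines (seg->rs_desc.des_remote = NULL; and friends) would simply be deleted rather than ported, since the descriptor no longer tracks remote segments.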

Re: [OMPI devel] [EXTERNAL] Re: Fwd: [OMPI commits] Git: open-mpi/ompi branch master updated. dev-327-gccaecf0

2014-11-20 Thread Grant, Ryan Eric (-EXP)
We will be updating the Portals4 BTL.

--Ryan

From: devel  on behalf of Jeff Squyres (jsquyres) 

Sent: Thursday, November 20, 2014 6:26 AM
To: Open MPI Developers List
Subject: [EXTERNAL] Re: [OMPI devel] Fwd: [OMPI commits] Git: open-mpi/ompi 
branch master   updated. dev-327-gccaecf0

Nathan and I talked last night.

He reverted the BTL updates until after SC and the US Thanksgiving holiday next 
week.

I will update the usNIC BTL during the week of Dec 1.

Who will be updating the other BTLs that need updating?




[OMPI devel] Open MPI SC'14 BOF slides

2014-11-20 Thread Jeff Squyres (jsquyres)
For those of you who weren't able to be at the SC'14 BOF yesterday -- and even 
for those of you who were there and wanted to be able to read the slides in a 
little more detail (and get the links from the slides) -- I have posted them 
here:

http://www.open-mpi.org/papers/sc-2014/

Enjoy!

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/



Re: [OMPI devel] Question about tight integration with not-yet-supported queuing systems

2014-11-20 Thread Ralph Castain
Hold on - was discussing this with a (possibly former) OpenLava developer who 
made some suggestions that would make this work. It all hinges on one thing.

Can you please check and see if you have “lsrun” on your system? If you do, 
then I can offer a tight integration in that we would use OpenLava to actually 
launch the OMPI daemons. Still not sure I could support you directly launching 
MPI apps without using mpirun, if that’s what you are after.


> On Nov 18, 2014, at 7:58 AM, Marc Höppner  wrote:
> 
> Hi Ralph,
> 
> I really appreciate you guys looking into this! At least now I know that 
> there isn't a better way to run mpi jobs. Probably worth looking into LSF 
> again..
> 
> Cheers,
> 
> Marc
>> I took a brief gander at the OpenLava source code, and a couple of things 
>> jump out. First, OpenLava is a batch scheduler and only supports batch 
>> execution - there is no interactive command for "run this job". So you would 
>> have to "bsub" mpirun regardless.
>> 
>> Once you submit the job, mpirun can certainly read the local allocation via 
>> the environment. However, we cannot use the OpenLava internal functions to 
>> launch the daemons or processes as the code is GPL2, and thus has a viral 
>> incompatible license. Ordinarily, we get around that by just executing the 
>> interactive job execution command, but OpenLava doesn't have one.
>> 
>> So we'd have no other choice but to use ssh to launch the daemons on the 
>> remote nodes. This is exactly what the provided openmpi wrapper script that 
>> comes with OpenLava already does.
>> 
>> Bottom line: I don't see a way to do any deeper integration minus the 
>> interactive execution command. If OpenLava had a way of getting an 
>> allocation and then interactively running jobs, we could support what you 
>> requested. This doesn't seem to be what they are intending, unless I'm 
>> missing something (the documentation is rather incomplete).
>> 
>> Ralph
>> 
>> 
>> On Tue, Nov 18, 2014 at 6:20 AM, Marc Höppner wrote:
>> Hi,
>> 
>> sure, no problem. And about the C API, I really don’t know more than what I 
>> was told in the google group post I referred to (i.e. the API is essentially 
>> identical to LSF 4-6, which should be on the web).
>> 
>> The output of env can be found here: 
>> https://dl.dropboxusercontent.com/u/1918141/env.txt 
>> 
>> 
>> /M
>> 
>> Marc P. Hoeppner, PhD
>> Team Leader
>> BILS Genome Annotation Platform
>> Department for Medical Biochemistry and Microbiology
>> Uppsala University, Sweden
>> marc.hoepp...@bils.se 
>>> On 18 Nov 2014, at 15:14, Ralph Castain wrote:
>>> 
>>> If you could just run a single copy of "env" and send the output along, 
>>> that would help a lot. I'm not interested in the usual path etc, but would 
>>> like to see the envars that OpenLava is setting.
>>> 
>>> Thanks
>>> Ralph
>>> 
>>> 
>>> On Tue, Nov 18, 2014 at 2:19 AM, Gilles Gouaillardet 
>>> <gilles.gouaillar...@iferc.org> wrote:
>>> Marc,
>>> 
>>> the reply you pointed is a bit confusing to me :
>>> 
>>> "There is a native C API which can submit/start/stop/kill/re queue jobs"
>>> this is not what i am looking for :-(
>>> 
>>> "you need to make an appropriate call to openlava to start a remote process"
>>> this is what i am interested in :-)
>>> could you be more specific (e.g. point me to the functions, since the 
>>> OpenLava doc is pretty minimal ...)
>>> 
>>> the goal here is to spawn the orted daemons as part of the parallel job,
>>> so these daemons are accounted within the parallel job.
>>> /* if we use an API that simply spawns orted, but the orted is not related 
>>> whatsoever to the parallel job,
>>> then we can simply use ssh */
>>> 
>>> Cheers,
>>> 
>>> Gilles
>>> 
>>> 
>>> On 2014/11/18 18:24, Marc Höppner wrote:
 Hi Gilles, 
 
 thanks for the prompt reply. Yes, as far as I know there is a C API to 
 interact with jobs etc. Some mentioning here: 
 https://groups.google.com/forum/#!topic/openlava-users/w74cRUe9Y9E
 
 
 
 /Marc
 
 Marc P. Hoeppner, PhD
 Team Leader
 BILS Genome Annotation Platform
 Department for Medical Biochemistry and Microbiology
 Uppsala University, Sweden
 marc.hoepp...@bils.se 
 
> On 18 Nov 2014, at 08:40, Gilles Gouaillardet wrote:
> 
> Hi Marc,
> 
> OpenLava is based on a pretty old version of LSF (4.x if i remember
> correctly)
> and i do not think LSF had support for parallel jobs tight integration
> at that time.
> 
> my understanding i
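
For concreteness, "reading the local allocation via the environment" could look roughly like the sketch below. LSB_HOSTS is the variable classic LSF uses to publish one hostname per allocated slot; whether OpenLava sets the same variable is an assumption here (the env dump linked earlier in the thread would confirm it).

/* Minimal sketch, assuming OpenLava exports an LSF-style LSB_HOSTS
 * ("hostA hostA hostB ...", one entry per allocated slot).  The
 * variable name is an assumption, not something confirmed in this
 * thread. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    const char *hosts = getenv("LSB_HOSTS");
    if (NULL == hosts) {
        fprintf(stderr, "no OpenLava allocation in the environment\n");
        return 1;
    }
    char *copy = strdup(hosts);          /* strtok_r modifies its input */
    char *save = NULL;
    for (char *h = strtok_r(copy, " ", &save); h != NULL;
         h = strtok_r(NULL, " ", &save)) {
        printf("allocated slot on %s\n", h);   /* one line per slot */
    }
    free(copy);
    return 0;
}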

Re: [OMPI devel] Question about tight integration with not-yet-supported queuing systems

2014-11-20 Thread Marc Höppner

Hi,

yes, lsrun exists under openlava.

Using mpirun is fine, but openlava currently requires that to be 
launched through a bash script (openmpi-mpirun). Would be neater if one 
could do away with that.


Again, thanks for looking into this!

/Marc


Re: [OMPI devel] Question about tight integration with not-yet-supported queuing systems

2014-11-20 Thread Ralph Castain
Here’s what I can provide:

* lsrun -n N bash: this causes OpenLava to create an allocation and start you 
off in a bash shell (or pick your shell)

* mpirun …: will read the allocation and use OpenLava to start the daemons, 
and then the application, on the allocated nodes

You can execute as many mpirun’s as you like, then release the allocation (I 
believe by simply exiting the shell) when done.

Does that match your expectations?
Ralph
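
Spelled out as a session (the slot count and "./my_mpi_app" are placeholders, and whether exiting the shell is what releases the allocation is Ralph's belief above rather than something confirmed here):

  lsrun -n 4 bash        # OpenLava creates a 4-slot allocation and starts a shell
  mpirun ./my_mpi_app    # mpirun reads the allocation; daemons launched via OpenLava
  mpirun ./my_mpi_app    # ...as many runs as you like against the same allocation
  exit                   # leave the shell, releasing the allocation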


> On Nov 20, 2014, at 2:03 PM, Marc Höppner  wrote:
> 
> Hi,
> 
> yes, lsrun exists under openlava. 
> 
> Using mpirun is fine, but openlava currently requires that to be launched 
> through a bash script (openmpi-mpirun). Would be neater if one could do away 
> with that. 
> 
> Again, thanks for looking into this!
> 
> /Marc