The traditional way to do this in MPI is:

    if (rank == 0) do_master_stuff();
    else           do_slave_stuff();

Sounds like that pattern should apply to your app.
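
A minimal sketch of that pattern (do_master_work() and do_worker_work() are hypothetical placeholders for your own serial and parallel code):

#include <mpi.h>
#include <stdio.h>

/* Hypothetical stand-ins for the real master/worker code. */
static void do_master_work(void) { printf("rank 0: serial pre/post-processing\n"); }
static void do_worker_work(void) { printf("worker rank: parallel work\n"); }

int main(int argc, char **argv)
{
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        do_master_work();   /* runs on exactly one process */
    else
        do_worker_work();   /* runs on all the other processes */

    MPI_Finalize();
    return 0;
}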

PVM has been dead for years.

Sent from my phone. No type good.

On Jun 6, 2013, at 9:43 AM, "Ralph Castain" <r...@open-mpi.org> wrote:

I honestly don't know - you'd have to look at the PVM docs. You also might look 
at OpenMP and try doing it with multiple threads instead of processes, though 
that limits you to running on a single node.
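
If the OpenMP route is of interest, a rough sketch could look like the following (the printf bodies are just illustrative placeholders; compile with something like gcc -fopenmp):

#include <omp.h>
#include <stdio.h>

int main(void)
{
    /* serial part, executed by the single process */
    printf("serial work\n");

#pragma omp parallel
    {
        /* threaded part; limited to the cores of a single node */
        printf("hello from thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }

    /* more serial work after the parallel region */
    return 0;
}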

On Jun 6, 2013, at 9:37 AM, José Luis García Pallero <jgpall...@gmail.com> wrote:

2013/6/6 Ralph Castain <r...@open-mpi.org>

On Jun 6, 2013, at 8:58 AM, José Luis García Pallero <jgpall...@gmail.com> wrote:

2013/6/6 Ralph Castain <r...@open-mpi.org>
should work!

Thank you for your answer.

So I understand that MPI_Comm_spawn() is my function. But I see in the documentation that the first argument is char* command, and command is the name of the program to spawn; I do not want to execute an external program, only a piece of code in the same program. How can I deal with that?

You'll have to move that code into a separate program, then pass any data it requires using MPI_Send/MPI_Recv or a collective operation.
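
A hedged sketch of what that could look like on the parent side, assuming a hypothetical separate worker executable named scalapack_worker and some hypothetical input data:

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm children;
    int nworkers = 4;                     /* assumed worker count */
    double payload[3] = {1.0, 2.0, 3.0};  /* hypothetical input data */

    MPI_Init(&argc, &argv);

    /* Launch nworkers copies of the (hypothetical) worker executable. */
    MPI_Comm_spawn("scalapack_worker", MPI_ARGV_NULL, nworkers,
                   MPI_INFO_NULL, 0, MPI_COMM_SELF, &children,
                   MPI_ERRCODES_IGNORE);

    /* Ship the input to every worker; on the sending side of an
       intercommunicator broadcast the root argument is MPI_ROOT. */
    MPI_Bcast(payload, 3, MPI_DOUBLE, MPI_ROOT, children);

    MPI_Finalize();
    return 0;
}

The spawned workers would obtain the intercommunicator with MPI_Comm_get_parent() and post a matching MPI_Bcast with root 0 to receive the data.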

Mmm... bad news. Then it is impossible in MPI; I need all the code in the same executable.

I know this is off-topic on this list, but can I perform the calculations the way I want using PVM? (ScaLAPACK et al. can also run on PVM.)

Cheers


The second argument is char* argv[]. Does MPI_Comm_spawn() accept NULL for argv[], as MPI_Init() does?

I also know that I could write my program by putting the code before and after the call to funcCompScalapack() inside an if() that checks whether the process is the root, so that those pieces of code are executed only by the root. But I want to keep the whole program free of MPI code except for the funcCompScalapack() function.

Cheers

On Jun 6, 2013, at 8:24 AM, José Luis García Pallero <jgpall...@gmail.com> wrote:

2013/6/6 Ralph Castain <r...@open-mpi.org>
Afraid not. You could start a single process, and then have that process call 
MPI_Comm_spawn to launch the rest of them

Mmmm... sounds good

I'm writing an example program using ScaLAPACK. I have written the ScaLAPACK code in an independent function that must be called after some work on an individual node (the root one). So I need the first part of the program to be executed by only one process. My example looks like:

int main()
{
    //some work that must be done by only one node
    .....
    //function that runs internally some scalapack computations
    funcCompScalapack();
    //other work that must be done by the original node
    ....
    return 0;
}

void funcCompScalapack()
{
    //Initialize MPI
    MPI_Init(NULL,NULL);
    //here I think I should write some code to indicate that the work
    //must be done by a number X of processors
    //maybe using MPI_Comm_spawn?
    ....
    //some BLACS and ScaLAPACK computations
    ....
    //finalize MPI
    MPI_Finalize();
    return;
}

When I execute this program as mpirun -np X myprogram, the pieces of code before and after the call to funcCompScalapack() are executed by X processes, but those statements must be executed by only one. So my idea is to run the binary as ./myprogram (the same thing, I think, as mpirun -np 1 myprogram) and set the number of processes internally in funcCompScalapack(), after the MPI_Init() call.

Is my idea possible?

Thanks


On Jun 6, 2013, at 7:54 AM, José Luis García Pallero <jgpall...@gmail.com> wrote:

Hello:

I'm a newbie in the use of MPI, so I will probably ask some stupid question (or one previously asked, although in that case I have searched the archive and haven't found anything):

Is there any way other than -np X to pass the number of processes to start for an MPI program? I mean a function in the style of MPI_Set_Number_Processes() or similar.

Thanks

--
*****************************************
José Luis García Pallero
jgpall...@gmail.com
(o<
/ / \
V_/_
Use Debian GNU/Linux and enjoy!
*****************************************