It depends on the application you are using. Some are "balanced" - i.e., they
run faster if the number of processes is a power of two. You'll see that n8 is
faster than n7, so this is likely the situation.
On Jun 6, 2013, at 4:10 PM, "Blosch, Edwin L" wrote:
> I am
I am running single-node Sandy Bridge cases with OpenMPI and looking at scaling.
I'm using -bind-to-core without any other options (default is -bycore I
believe).
In these labels, the first number is the core count and the second digit is the
run number (except for n=1, all runs were repeated 3 times).
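For reference, the sort of invocation being described would look something like
this (the binary name is hypothetical):

mpirun -np 8 -bind-to-core ./my_app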
Pvm is not dead. It's "stable".
2013/6/6 Jeff Squyres (jsquyres)
> The traditional way to do this stuff in MPI is
>
> If rank==0 do_master_stuff
> Else do_slave_stuff
>
> Sounds like that pattern should apply to your app.
>
> Pvm has been dead for years.
>
> Sent from
Hi,
I have a quick question.
Is there an equivalent of the openib component (from the btl framework) in the coll framework?
I have an MPI application with gatherv and scatterv. I am wondering if I
can leverage RDMA capabilities of the underlying Infiniband fabric.
Thanks,
--
Joba
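For reference, a minimal MPI_Gatherv sketch (not from the original post; the
one-int-per-rank counts and displacements are just illustrative), gathering one
value from every rank to rank 0:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int sendval = rank * 10;   /* each rank contributes one int */

    int *recvbuf = NULL, *counts = NULL, *displs = NULL;
    if (rank == 0) {
        recvbuf = malloc(size * sizeof(int));
        counts  = malloc(size * sizeof(int));
        displs  = malloc(size * sizeof(int));
        for (int i = 0; i < size; i++) {
            counts[i] = 1;     /* one element from each rank */
            displs[i] = i;     /* packed contiguously at the root */
        }
    }

    MPI_Gatherv(&sendval, 1, MPI_INT,
                recvbuf, counts, displs, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        for (int i = 0; i < size; i++)
            printf("from rank %d: %d\n", i, recvbuf[i]);
        free(recvbuf); free(counts); free(displs);
    }

    MPI_Finalize();
    return 0;
}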
The traditional way to do this stuff in MPI is
If rank==0 do_master_stuff
Else do_slave_stuff
Sounds like that pattern should apply to your app.
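A minimal sketch of that pattern in C (the work in each branch is a placeholder):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* master: distribute work, collect results */
        printf("rank 0: master work\n");
    } else {
        /* workers: receive work, compute, report back */
        printf("rank %d: worker work\n", rank);
    }

    MPI_Finalize();
    return 0;
}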
Pvm has been dead for years.
Sent from my phone. No type good.
On Jun 6, 2013, at 9:43 AM, "Ralph Castain"
What's wrong with only allowing rank 0 to execute the code before and after the
funcCompScalapack function, as indicated in the example below:
#include <mpi.h>

int main()
{
    // Initialize MPI
    MPI_Init(NULL, NULL);

    // Get this process's rank in MPI_COMM_WORLD
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (0 == rank) {
        // some work that must be
On Jun 6, 2013, at 9:29 AM, "Nima Aghajari" wrote:
> Hello,
> first of all, thanks for your reply. I tried specifying the --slot-list option
> as you proposed. Unfortunately this leads to the same result: mpirun with 5
> cores. Adding another slot-list option for the second
I honestly don't know - you'd have to look at the PVM docs. You also might look
at OpenMP and try doing it with multiple threads instead of processes, though
that limits you to running on a single node.
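A minimal sketch of the threads-instead-of-processes idea with OpenMP (compile
with, e.g., gcc -fopenmp; the per-thread work is a placeholder):

#include <omp.h>
#include <stdio.h>

int main(void)
{
    /* each thread takes the role a process would have had under PVM/MPI */
    #pragma omp parallel
    {
        int tid = omp_get_thread_num();
        if (tid == 0)
            printf("thread 0: master work\n");
        else
            printf("thread %d: worker work\n", tid);
    }
    return 0;
}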
On Jun 6, 2013, at 9:37 AM, José Luis García Pallero
wrote:
>
2013/6/6 Ralph Castain
>
> On Jun 6, 2013, at 8:58 AM, José Luis García Pallero
> wrote:
>
> 2013/6/6 Ralph Castain
>
>> should work!
>>
>
> Thank you for your answer.
>
> So I understand that MPI_Comm_spawn() is my function. But I see
On Jun 6, 2013, at 8:58 AM, José Luis García Pallero
wrote:
> 2013/6/6 Ralph Castain
> should work!
>
> Thank you for your answer.
>
> So I understand that MPI_Comm_spawn() is my function. But I see in the
> documentation that the first argument is
2013/6/6 Ralph Castain
> should work!
>
Thank you for your answer.
So I understand that MPI_Comm_spawn() is my function. But I see in the
documentation that the first argument is char* command, and command is the
name of the program to spawn, but I do not want to execute an
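For reference, a minimal MPI_Comm_spawn sketch. One common way around having a
separate executable (an assumption here, since the message is cut off) is to
re-launch the same binary and use MPI_Comm_get_parent to tell the original
process from the spawned ones:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Comm parent, children;
    MPI_Init(&argc, &argv);
    MPI_Comm_get_parent(&parent);

    if (parent == MPI_COMM_NULL) {
        /* started directly: spawn 3 more copies of this same binary */
        MPI_Comm_spawn(argv[0], MPI_ARGV_NULL, 3, MPI_INFO_NULL,
                       0, MPI_COMM_SELF, &children, MPI_ERRCODES_IGNORE);
        printf("parent: spawned 3 children\n");
    } else {
        printf("child: launched via MPI_Comm_spawn\n");
    }

    MPI_Finalize();
    return 0;
}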
should work!
On Jun 6, 2013, at 8:24 AM, José Luis García Pallero
wrote:
> 2013/6/6 Ralph Castain
> Afraid not. You could start a single process, and then have that process call
> MPI_Comm_spawn to launch the rest of them
>
> Mmmm... sounds good
>
>
2013/6/6 Ralph Castain
> Afraid not. You could start a single process, and then have that process
> call MPI_Comm_spawn to launch the rest of them
>
Mmmm... sounds good
I'm writing an example program using ScaLAPACK. I have written the
ScaLAPACK code in an independent
Afraid not. You could start a single process, and then have that process call
MPI_Comm_spawn to launch the rest of them
On Jun 6, 2013, at 7:54 AM, José Luis García Pallero
wrote:
> Hello:
>
> I'm a newbie in the use of MPI, so I'll probably ask a stupid question (or
>
Hello:
I'm a newbie in the use of MPI, so I'll probably ask a stupid question (or a
previously asked one, but in that case I have searched the archive and
haven't found anything):
Is there any way other than -np X to pass the number of processes to start
for an MPI program? I mean a function
You could do it by specifying which cores to use - something like
mpirun -np 4 --slot-list 0-3 prog_1 : -np 1 prog_2
On Jun 6, 2013, at 1:52 AM, Nima Aghajari wrote:
> Dear all,
> I am currently using openmpi 1.6.4 and trying to do a parallel performance
> analysis for a
Wow - that is ancient! Can you update to something more recent - perhaps
something like 1.6.4?
I have no idea what the problem might be in something that old, but it
certainly was working with LSF back then. Can you run a simple app, perhaps
something like "mpirun -n 1 hostname"?
On Jun 5,
Dear all,
I am currently using openmpi 1.6.4 and trying to do a parallel performance analysis for a parallel two-program mpirun. So what I have are two programs that are executed like this:
mpirun -np 4 my_prog1 : -np 1 my_prog2
my_prog1 and my_prog2 run sequentially, so when one
Hi,
We are using openmpi version 1.4.5.
Thanks and Regards,
Mahalakshmi Murthy [Maha]
TCS | GE Global Research
Mail ID: mahalakshmi.mur...@ge.com
Mobile No: 8147923917
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
Behalf Of Ralph Castain
Sent: