You mean via an API of some kind? Not through an MPI call, but you can do it
(though your code will wind up OMPI-specific). Look at the OMPI source code
in opal/mca/paffinity/paffinity.h and you'll see the necessary calls as well
as some macros to help parse the results.
Depending upon what versio
In a word, no. If a node crashes, OMPI will abort the currently-running job
if it had processes on that node. There is no current ability to "ride-thru"
such an event.
That said, there is work being done to support "ride-thru". Most of that is
in the current developer's code trunk, and more is com
ompi-ps talks to mpirun to get the info, and then pretty-prints it to
stderr. Best guess is that it is having problems contacting mpirun. Are you
running it on the same node as mpirun (a requirement, unless you pass it the
full contact info)?
Check the ompi-ps man page and also "ompi-ps -h" to ens
Eloi, I am curious about your problem. Can you tell me what size of job
it is? Does it always fail on the same bcast, or same process?
Eloi Gaudry wrote:
Hi Nysal,
Thanks for your suggestions.
I'm now able to get the checksum computed and redirected to stdout, thanks (I forgot the
"-mca p
Hi Ambrose,
I'm interested in your work; I have an app of my own to convert, and I don't
know the MPI structure and syntax well enough to do it...
So if you want to share your app, I'd be interested in taking a look at it!!
Thanks and have a nice day!!
Mikael Lavoie
2010/9/23 Lewis, Ambrose J.
> Hi Al
That's a great suggestion...Thanks!
amb
-Original Message-
From: users-boun...@open-mpi.org on behalf of Bowen Zhou
Sent: Thu 9/23/2010 1:18 PM
To: Open MPI Users
Subject: Re: [OMPI users] "self scheduled" work & mpi receive???
> Hi All:
>
> I've written an openmpi program that "se
Jeff and Ralph,
Thank you for your reply.
1) I'm not running on machines with OpenFabrics.
2) In my example, ompi-ps prints a maximum of 82 bytes per line. Even so, I
increased the buffer to 300 bytes per line to be sure that is not the problem.
char mystring[300];
...
fgets(mystring, 300, pFile);
2) W
CC stands for any Collective Communication operation. Every CC occurs on
some communicator.
Every CC is issued (basically the thread the call is on enters the call)
at some point in time. If two threads are issuing CC calls on the same
communicator, the issue order can become ambiguous so mak
Hi all, I'm new to the list. I don't know if this topic has been discussed
before.
My question is:
Is there a way in the OMPI library to report which process is running
on which core in an SMP system? I need to know processor affinity for
optimization purposes.
Regards
Fernando Saez
Dear Open MPI,
How essential is Open MPI's opal_sys_timer_get_cycles() function?
It apparently needs to access a timestamp register directly. That is
a trivial operation in PPC (mftb) or x86 (tsc), but the ARM processor
apparently doesn't have a similar function in its instruction set.
Is it cr
Hi All:
I’ve written an openmpi program that “self schedules” the work.
The master task is in a loop chunking up an input stream and handing off
jobs to worker tasks. At first the master gives the next job to the
next highest rank. After all ranks have their first job, the master
wai
Sorry Richard,
what is CC issue order on the communicator? In particular, what does "CC"
mean?
2010/9/23 Richard Treumann
>
> request_1 and request_2 are just local variable names.
>
> The only thing that determines matching order is CC issue order on the
> communicator. At each process,
Hi Lewis,
On Thu, Sep 23, 2010 at 9:38 AM, Lewis, Ambrose J.
wrote:
> Hi All:
>
> I’ve written an openmpi program that “self schedules” the work.
>
> The master task is in a loop chunking up an input stream and handing off
> jobs to worker tasks. At first the master gives the next job to the nex
On Sep 23, 2010, at 10:54 AM, Richard Treumann wrote:
> I do not agree with Jeff on this below. The Proc 1 case where the MPI_Waits
> are reversed simply requires the MPI implementation to make progress on both
> MPI_Ibcast operations in the first MPI_Wait. The second MPI_Wait call will
> sim
request_1 and request_2 are just local variable names.
The only thing that determines matching order is CC issue order on the
communicator. At each process, some CC is issued first and some CC is
issued second. The first issued CC at each process will try to match the
first issued CC at the
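In code, the issue-order rule means every process must issue its collectives in the same order on a given communicator; a hedged sketch using the proposed MPI-3 nonblocking broadcast (variable names are illustrative, and MPI_Ibcast was still a draft interface at the time of this thread):

```c
#include <mpi.h>

/* Every process issues bcast A before bcast B on MPI_COMM_WORLD, so the
 * collectives match by issue order.  Waiting on the requests in reverse
 * order is still legal: waits do not affect matching. */
int main(int argc, char **argv)
{
    int a = 0, b = 0;
    MPI_Request req_a, req_b;
    MPI_Init(&argc, &argv);
    MPI_Ibcast(&a, 1, MPI_INT, 0, MPI_COMM_WORLD, &req_a);  /* issued first  */
    MPI_Ibcast(&b, 1, MPI_INT, 0, MPI_COMM_WORLD, &req_b);  /* issued second */
    MPI_Wait(&req_b, MPI_STATUS_IGNORE);  /* the order of waits is free */
    MPI_Wait(&req_a, MPI_STATUS_IGNORE);
    MPI_Finalize();
    return 0;
}
```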
On Sep 23, 2010, at 10:00 AM, Gabriele Fatigati wrote:
> to be sure, if I have one process that does:
>
> MPI_IBcast(MPI_COMM_WORLD, request_1) // first Bcast
> MPI_IBcast(MPI_COMM_WORLD, request_2) // second Bcast
>
> it means that I can't have another process that does the following:
>
> MPI_IBc
Mm,
to be sure, if I have one process that does:
MPI_IBcast(MPI_COMM_WORLD, request_1) // first Bcast
MPI_IBcast(MPI_COMM_WORLD, request_2) // second Bcast
it means that I can't have another process that does the following:
MPI_IBcast(MPI_COMM_WORLD, request_2) // first Bcast for another process
MP
On Sep 23, 2010, at 6:28 AM, Gabriele Fatigati wrote:
> I'm studying the interfaces of the new collective routines in the upcoming MPI-3, and I've
> read that the new collectives don't have a tag.
Correct.
> So all collective operations must follow the ordering rules for collective
> calls.
Also correct.
> F
Hi All:
I've written an openmpi program that "self schedules" the work.
The master task is in a loop chunking up an input stream and handing off
jobs to worker tasks. At first the master gives the next job to the
next highest rank. After all ranks have their first job, the master
waits via an
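The pattern described above is the classic master/worker self-scheduler; a hedged MPI sketch (chunk contents, job count, and tags are placeholders, and it assumes at least one worker and more jobs than workers):

```c
#include <mpi.h>

#define TAG_WORK 1
#define TAG_STOP 2

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {                      /* master */
        int njobs = 100, next = 0, done = 0, result;
        MPI_Status st;
        /* seed every worker with its first job */
        for (int w = 1; w < size && next < njobs; ++w, ++next)
            MPI_Send(&next, 1, MPI_INT, w, TAG_WORK, MPI_COMM_WORLD);
        /* then wait for *any* worker and hand it the next chunk, so
         * faster workers naturally receive more chunks */
        while (done < njobs) {
            MPI_Recv(&result, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                     MPI_COMM_WORLD, &st);
            ++done;
            if (next < njobs) {
                MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, TAG_WORK,
                         MPI_COMM_WORLD);
                ++next;
            } else {
                MPI_Send(&next, 0, MPI_INT, st.MPI_SOURCE, TAG_STOP,
                         MPI_COMM_WORLD);
            }
        }
    } else {                              /* worker */
        int job;
        MPI_Status st;
        for (;;) {
            MPI_Recv(&job, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
            if (st.MPI_TAG == TAG_STOP)
                break;
            int result = job * 2;         /* stand-in for real work */
            MPI_Send(&result, 1, MPI_INT, 0, TAG_WORK, MPI_COMM_WORLD);
        }
    }
    MPI_Finalize();
    return 0;
}
```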
Dear users,
Our cluster has a number of nodes with a high probability of crashing, so it
happens quite often that calculations stop because one node goes down.
Maybe you know whether it is possible to exclude the crashed nodes at run-time
when running with Open MPI? I am asking about principal pos
Dear all,
I'm studying the interfaces of the new collective routines in the upcoming
MPI-3, and I've read that the new collectives don't have a tag.
So all collective operations must follow the ordering rules for collective
calls.
From what I understand, this means that I can't use:
MPI_IBcast(MPI_COMM_WORLD, r
You should probably take this up with Pathscale's support team.
On Sep 23, 2010, at 3:56 AM, Rafael Arco Arredondo wrote:
> I am using GCC 4.x:
>
> $ pathCC -v
> PathScale(TM) Compiler Suite: Version 3.2
> Built on: 2008-06-16 16:41:38 -0700
> Thread model: posix
> GNU gcc version 4.2.0 (PathSc
I am using GCC 4.x:
$ pathCC -v
PathScale(TM) Compiler Suite: Version 3.2
Built on: 2008-06-16 16:41:38 -0700
Thread model: posix
GNU gcc version 4.2.0 (PathScale 3.2 driver)
$ pathCC -show-defaults
Optimization level and compilation target:
-O2 -mcpu=opteron -m64 -msse -msse2 -mno-sse3 -mno-3