Hi Todd,
I personally don't know the answer, but I see that Andreas from the open
source Grid Engine alias (u...@gridengine.sunsource.net) is addressing
your issues. He should be able to help, since they relate more to the
internals of qmaster.
http://gridengine.sunsource.net/
I'm quite sure that we have since fixed the command line parsing
problem, and I *think* we fixed the mmap problem.
Is there any way that you can upgrade to v1.1.3?
On Jan 29, 2007, at 3:24 PM, Avishay Traeger wrote:
Hello,
I have just installed Open MPI 1.1 on a 64-bit FC6 machine using yum.
Without analyzing your source, it's hard to say. I will say that
OMPI may send fragments out of order, but we do, of course, provide
the same message ordering guarantees that MPI mandates. So let me
ask a few leading questions:
- Are you using any wildcards in your receives, such as
MPI
On 1/29/07 6:38 PM, "Jeff Squyres" wrote:
> On Jan 19, 2007, at 5:21 PM, Evan Smyth wrote:
>
>> I had been using MPICH and its serv_p4 daemon to speed startup times.
>> I've decided to try OpenMPI (primarily for the fault-tolerance
>> features)
>> and would like to know what the equivalent of
Sorry for the delay in replying to this -- if normal users can't
create mmapped shared memory files, then our shared memory ("sm")
device will fail. I'm afraid I don't know much about OpenSSI --
perhaps they have some specific restrictions about mmapped files...?
On Jan 21, 2007, at 9:17 PM
On Jan 19, 2007, at 5:21 PM, Evan Smyth wrote:
I had been using MPICH and its serv_p4 daemon to speed startup times.
I've decided to try OpenMPI (primarily for the fault-tolerance
features)
and would like to know what the equivalent of the serv_p4 daemon is.
We don't yet have one. "Persist
I have sent the following experiences to the SGE mailing list, but I
thought I would also try here...
I have been trying out version 1.2b2 for its integration with SGE. The
simple "hello world" test program works fin by itself, but there are
issues when submitting it to SGE.
For small numbe
Hello,
I have just installed Open MPI 1.1 on a 64-bit FC6 machine using yum.
The packages that were installed are:
openmpi-devel-1.1-7.fc6
openmpi-libs-1.1-7.fc6
openmpi-1.1-7.fc6
I tried running ompi_info, but it results in a segmentation fault.
Running strace shows this at the end:
mmap(NULL,
On Jan 29, 2007, at 4:36 AM, Chevchenkovic Chevchenkovic wrote:
Can we have a FORTRAN stop statement before MPI_Finalise?
It is not advisable.
What is the expected behaviour?
Correct MPI applications will call MPI_FINALIZE before stop.
--
Jeff Squyres
Server Virtualization Business Unit
Hi,
Can we have a FORTRAN stop statement before MPI_Finalise?
What is the expected behaviour?
-Chev