Perhaps this post in the Open-MPI archives can help:
http://www.open-mpi.org/community/lists/users/2008/01/4898.php
Jody
On Sun, Oct 26, 2008 at 4:30 AM, Davi Vercillo C. Garcia (ダヴィ)
wrote:
> Anybody !?
>
> On Thu, Oct 23, 2008 at 12:41 AM, Davi Vercillo C. Garcia (ダヴィ)
> wrote:
>> Hi,
>>
>> I
Apologies if this has been covered in a previous thread - I
went back through a lot of posts without seeing anything
similar.
In an attempt to protect some users from themselves, I was hoping
that OpenMPI could be configured so that an MPI task calling
exit before calling MPI_Finalize() would ca
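For anyone following along, the pattern being described is roughly the one below (a minimal sketch of my own, not code from the original post; the program name is made up and exit() is a common compiler extension):

program early_exit
  use mpi
  implicit none
  integer :: ierr, rank
  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  ! One task bails out without ever reaching MPI_Finalize.
  if (rank == 1) call exit(1)
  call MPI_Finalize(ierr)
end program early_exit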
I take it this is using OMPI 1.2.x? If so, there really isn't a way to
do this in that series.
If they are using 1.3 (in some pre-release form), then there are two
options:
1. they could use the sequential mapper by specifying "-mca rmaps
seq". This mapper takes a hostfile and maps one process onto each line of the hostfile, in order (see the sketch below).
This was added to the 1.3 version - it was not back-ported to the
1.2.x series.
Ralph
On Oct 27, 2008, at 5:46 AM, David Singleton wrote:
Apologies if this has been covered in a previous thread - I
went back through a lot of posts without seeing anything
similar.
In an attempt to protect s
I dabble in Fortran but am not an expert -- is REAL(kind=16) the same
as REAL*16? MPI_REAL16 should be a 16 byte REAL; I'm not 100% sure
that REAL(kind=16) is the same thing...?
On Oct 23, 2008, at 7:37 AM, Julien Devriendt wrote:
Hi,
I'm trying to do an MPI_ALLREDUCE with quadruple precision.
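For reference, a minimal sketch of the kind of call in question (my own example, not Julien's code; it assumes kind=16 really is the compiler's quad-precision kind and that the MPI library defines MPI_REAL16, which is an optional datatype):

program quad_allreduce
  use mpi
  implicit none
  integer :: ierr
  real(kind=16) :: local_val, global_sum
  call MPI_Init(ierr)
  local_val = 1.0_16
  ! MPI_REAL16 should match a 16-byte REAL, but not every build provides it.
  call MPI_Allreduce(local_val, global_sum, 1, MPI_REAL16, MPI_SUM, &
                     MPI_COMM_WORLD, ierr)
  call MPI_Finalize(ierr)
end program quad_allreduce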
I think the KINDs are compiler dependent. For Sun Studio Fortran,
REAL*16 and REAL(16) are the same thing. For Intel, maybe it's
different. I don't know. Try running this program:
program kinds
  implicit none
  double precision :: xDP
  real(16) :: x16
  real*16 :: xSTAR16
  write(6,*) kind(xDP), kind(x16), kind(xSTAR16), kind(1.0_16)
end program kinds
I can't seem to run your code, either. Can you provide a more precise
description of what exactly is happening? It's quite possible /
probable that Rob's old post is the answer, but I can't tell from your
original post -- there just aren't enough details.
Thanks.
On Oct 27, 2008, at 3:
Sorry for the lack of reply; several of us were at the MPI Forum
meeting last week, and although I can't speak for everyone else, I
know that I always fall [way] behind on e-mail when I travel. :-\
The Windows port is very much a work in progress. I'm not surprised
that it doesn't work.
Can you update me with the mapping, or the way to get it from the OS on the
Cell?
Thanks
On Thu, Oct 23, 2008 at 8:08 PM, Mi Yan wrote:
> Lenny,
>
> Thanks.
> I asked the Cell/BE Linux kernel developer to get the CPU mapping :) The
> mapping is fixed in the current kernel.
>
> Mi
Hi,
On Mon, Oct 27, 2008 at 6:48 PM, Jeff Squyres wrote:
> I can't seem to run your code, either. Can you provide a more precise
> description of what exactly is happening? It's quite possible / probable
> that Rob's old post is the answer, but I can't tell from your original post
> -- there just aren't enough details.
After a little digging, I am able to run your code (it looks like it
expects both an input file and an output file on the command line, or
it segv's). But I don't get those errors, either with OMPI v1.2.8 or
the upcoming v1.3 series; I ran with as many as 16 processes across 4
nodes.
Can
On Oct 24, 2008, at 12:10 PM, V. Ram wrote:
Resuscitating this thread...
Well, we spent some time testing the various options, and Leonardo's
suggestion seems to work!
We disabled TCP Segmentation Offload on the e1000 NICs using "ethtool -K
eth tso off" and this type of crash no longer happens.
Is the code using shared file pointer operations (e.g.
MPI_File_write_shared/ordered)?
There was a fix around v1.2.6 which removed a warning/error about not being
able to delete the file when using shared file pointers (I don't remember
precisely when it hit the trunk), and I was wondering whether
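For anyone unsure what that refers to, a shared file pointer write looks roughly like this (a minimal sketch with a made-up filename; MPI_File_write_ordered takes the same arguments but is collective):

program shared_write
  use mpi
  implicit none
  integer :: ierr, fh, rank
  character(len=32) :: msg
  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_File_open(MPI_COMM_WORLD, 'out.dat', &
       ior(MPI_MODE_CREATE, MPI_MODE_WRONLY), MPI_INFO_NULL, fh, ierr)
  write(msg, '(a,i0)') 'hello from rank ', rank
  ! Each call advances the single shared file pointer for the whole communicator.
  call MPI_File_write_shared(fh, msg, len_trim(msg), MPI_CHARACTER, &
       MPI_STATUS_IGNORE, ierr)
  call MPI_File_close(fh, ierr)
  call MPI_Finalize(ierr)
end program shared_write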