Hello,
Thank you for the reply! All of the autotools I am using are at the same
version or higher than those specified at
http://www.open-mpi.org/software/ompi/v1.6/. I referenced the specific
versions at the end of my initial email.
After some digging on the svn branch and some help from Nat
I don't think this is actually old autotools, since those are the most
recent. My guess is that there's an m4 file not being included in the
tarball. I'll try to take a look, but we probably need to fix a Makefile
in ROMIO.
Brian
On 10/31/12 2:46 PM, "Ralph Castain" wrote:
We've seen this before - it's caused by using autotools that are too old.
Please look at the HACKING file to see the required version levels.
BTW: you should not be running autogen.sh on a tarball version. You should only
run configure.
On Oct 31, 2012, at 1:31 PM, David Shrader wrote:
Hello,
When using Open MPI from the 1.6.3 tarball, I have found that running
the top-level autogen.sh breaks the romio component. Here are the steps
to reproduce:
1) download openmpi-1.6.3.tar.bz2 from
http://www.open-mpi.org/software/ompi/v1.6/
2) untar openmpi-1.6.3.tar.bz2
3) cd openmpi-
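Assembled as a shell session, the reproduction might look like this (the exact
download path and every step after the truncation point are assumptions, not
taken from the report):

```shell
# Steps 1-3 from the report; everything after the 'cd' is assumed from context.
wget http://www.open-mpi.org/software/ompi/v1.6/downloads/openmpi-1.6.3.tar.bz2
tar xjf openmpi-1.6.3.tar.bz2
cd openmpi-1.6.3
./autogen.sh     # regenerating the build system here is what breaks romio
./configure      # a plain tarball build would normally start here instead
```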
Hello Yevgeny, hello all,
Yevgeny, first of all thanks for explaining what the MTT parameters do and why
there are two of them! I mean this post:
http://www.open-mpi.org/community/lists/devel/2012/08/11417.php
Well, the official recommendation is "twice the RAM amount".
And here we are: we ha
On 10/31/12 1:57 PM, "Dmitri Gribenko" wrote:
>On Wed, Oct 31, 2012 at 9:51 PM, Barrett, Brian W
>wrote:
>> On 10/31/12 1:39 PM, "Paul Hargrove" wrote:
>>
>>>No, I don't have specific usage cases that concern me.
>>>
>>>
>>>As I said a minute or two ago in a reply to Ralph, my concern is that
>
On Wed, Oct 31, 2012 at 12:47 PM, Ralph Castain wrote:
>
> On Oct 31, 2012, at 12:36 PM, Paul Hargrove wrote:
>
> Ralph,
>
> I don't think I missed the point about the origin of the concern - we just
> have different view points.
>
> Jeff indicated that he previously thought the "odd" practice s
On Wed, Oct 31, 2012 at 9:51 PM, Barrett, Brian W wrote:
> On 10/31/12 1:39 PM, "Paul Hargrove" wrote:
>
>>No, I don't have specific usage cases that concern me.
>>
>>
>>As I said a minute or two ago in a reply to Ralph, my concern is that the
>>Sandia codes provide an "existence proof" that "rea
On 10/31/12 1:39 PM, "Paul Hargrove" wrote:
>No, I don't have specific usage cases that concern me.
>
>
>As I said a minute or two ago in a reply to Ralph, my concern is that the
>Sandia codes provide an "existence proof" that "really smart people" can
>write questionable code at times. So, I fe
On Oct 31, 2012, at 12:36 PM, Paul Hargrove wrote:
> Ralph,
>
> I don't think I missed the point about the origin of the concern - we just
> have different view points.
>
> Jeff indicated that he previously thought the "odd" practice shown in the
> example was uncommon until he learned it wa
On Wed, Oct 31, 2012 at 12:16 PM, Dmitri Gribenko wrote:
> On Wed, Oct 31, 2012 at 9:11 PM, Paul Hargrove wrote:
> > Ralph,
> >
> > I work at a National Lab, and like many of my peers I develop/prototype
> > codes on my desktop and/or laptop. So, I think the default behavior of
> > mpicc on a Cl
Hello all,
Open MPI is clever and uses multiple IB adapters by default, if available.
http://www.open-mpi.org/faq/?category=openfabrics#ofa-port-wireup
Open MPI is lazy and establishes connections only if needed.
Both are good.
We have kinda special nodes: up to 16 sockets, 128 cores, 4 boards, 4
Ralph,
I don't think I missed the point about the origin of the concern - we just
have different view points.
Jeff indicated that he previously thought the "odd" practice shown in the
example was uncommon until he learned it was common in codes at Sandia.
Perhaps you and I have interpreted the "
On Wed, Oct 31, 2012 at 9:11 PM, Paul Hargrove wrote:
> Ralph,
>
> I work at a National Lab, and like many of my peers I develop/prototype
> codes on my desktop and/or laptop. So, I think the default behavior of
> mpicc on a Clang-based Mac is entirely relevant.
>
> FWIW:
> I agree w/ Jeff that t
Understood - I also do my development on a Mac. You missed my point entirely -
the concern was raised by a member from Sandia based on not generating warnings
for their users. Those users are NOT on a Mac - they are building on the head
node of the HPC cluster.
If we want to raise the issue of
Ralph,
I work at a National Lab, and like many of my peers I develop/prototype
codes on my desktop and/or laptop. So, I think the default behavior of
mpicc on a Clang-based Mac is entirely relevant.
FWIW:
I agree w/ Jeff that these datatype checking warnings "feel" like
a candidate for "-Wall" (
On Wed, Oct 31, 2012 at 9:04 PM, Ralph Castain wrote:
> Understood, but also remember that the national labs don't have Mac clusters
> - and so they couldn't care less about Clang.
Clang is also the new system compiler for FreeBSD. But there are not
many FreeBSD clusters either.
Dmitri
Understood, but also remember that the national labs don't have Mac clusters -
and so they couldn't care less about Clang. The concerns over these changes
were from the national labs, so my point was that this discussion may all be
irrelevant.
On Oct 31, 2012, at 11:47 AM, Paul Hargrove wrote
Note that with Apple's latest versions of Xcode (4.2 and higher, IIRC)
Clang is now the default C compiler. I am told that Clang is the ONLY
bundled compiler for OSX 10.8 (Mountain Lion) unless you take extra steps
to install gcc (which is actually llvm-gcc and cross-compiles for OSX 10.7).
So, C
If it's only on for Clang, I very much doubt anyone will care - I'm unaware of
any of our users that currently utilize that compiler, and certainly not on the
clusters in the national labs (gcc, Intel, etc. - but I've never seen them use
Clang).
Not saying anything negative about Clang - just n
On Wed, Oct 31, 2012 at 5:04 PM, Jeff Squyres wrote:
> On Oct 31, 2012, at 9:38 AM, Dmitri Gribenko wrote:
>
>>> The rationale here is that correct MPI applications should not need to add
>>> any extra compiler flags to compile without warnings.
>>
>> I would disagree with this. Compiler warning
On Oct 31, 2012, at 9:38 AM, Dmitri Gribenko wrote:
>> The rationale here is that correct MPI applications should not need to add
>> any extra compiler flags to compile without warnings.
>
> I would disagree with this. Compiler warnings are most useful when
> they are on by default. Only a few
On Wed, Oct 31, 2012 at 2:36 PM, Jeff Squyres wrote:
> On Oct 31, 2012, at 3:45 AM, Dmitri Gribenko wrote:
>
>>> With this patch, they'd get warnings about these uses, even though they are
>>> completely valid according to MPI.
>>>
>>> A suggestion was that this functionality could be disabled by
*** IF YOU RUN MTT, YOU NEED TO READ THIS.
Due to some server re-organization at Indiana University (read: our gracious
hosting provider), we are moving the Open MPI community MTT database to a new
server. Instead of being found under www.open-mpi.org/mtt/, the OMPI MTT
results will soon be lo
On Oct 31, 2012, at 3:45 AM, Dmitri Gribenko wrote:
>> With this patch, they'd get warnings about these uses, even though they are
>> completely valid according to MPI.
>>
>> A suggestion was that this functionality could be disabled by default, and
>> enabled with a magic macro. Perhaps somet
Dear all,
Some collective communications can terminate abnormally when MPI_IN_PLACE
is specified.
(MPI_Allgather/MPI_Allgatherv/MPI_Gather/MPI_Scatter)
They dereference sdtype or rdtype (rdtype in the case of MPI_Scatter)
unconditionally, an oversight with respect to the MPI standar
On Wed, Oct 31, 2012 at 4:25 AM, Jeff Squyres wrote:
> On Oct 28, 2012, at 10:28 AM, Dmitri Gribenko wrote:
>
>> Thank you for the feedback! Hopefully the attached patch fixes both of
>> these.
>>
>> 1. There are two helper structs with complex numbers. I predicated
>> the struct declarations a
pathf95 from PathScale's 3.2.99 compiler suite fails in the same manner:
LOGICAL(KIND=4) not allowed with BIND(C)
-Paul
On Tue, Oct 30, 2012 at 9:03 PM, Paul Hargrove wrote:
> I have a Linux/x86-64 system with PathScale's "ekopath-4.0.12.1" compilers.
>
> Building Fortran 2008 support
The problems I previously reported building the trunk with IBM's xlc/xlf:
http://www.open-mpi.org/community/lists/devel/2012/09/11518.php
are still present in OMPI-1.7.0rc5
-Paul
On Tue, Oct 30, 2012 at 7:01 PM, Ralph Castain wrote:
> Hi folks
>
> We have posted the next release candidate (rc
Linux/x86-64 host with Open64 compilers version 4.5.1 from AMD.
Fortran 2008 support is failing to build as shown below.
My records show the ompi-1.5 branch was fine on this configuration.
-Paul
PPFC mpi-f08-types.lo
^
openf95-855 openf90: ERROR MPI_F08_TYPES, File =
/global/homes
I have a Linux/x86-64 system with PathScale's "ekopath-4.0.12.1" compilers.
Building Fortran 2008 support fails as shown below.
My records show the ompi-1.5 branch and a Feb 2012 trunk were OK on this
configuration.
-Paul
PPFC mpi-f08-interfaces-callbacks.lo
module mpi_f08_interfaces_ca