In the usual place:
http://www.open-mpi.org/software/ompi/v1.4/
This fixes all known issues. Changes since the last rc:
- Add note about Intel compilers in README
- Per Paul Hargrove's notes, replace $(RM) with rm in Makefile.am
- Add a note that ROMIO is not supported on OpenBSD.
- Fix
This doesn't sound like a very good idea, despite significant support from a
lot of institutions. There are no standardization efforts in the targeted
community, and championing broader support in the Java world was not one of
our main targets.
OMPI does not include the Boost bindings, despit
Currently, Hadoop tasks (in a job) are independent of each other. If Hadoop
is going to use MPI for inter-task communication, then make sure they
understand that the MPI standard currently does not address fault
tolerance.
Note that it is not uncommon to run MapReduce jobs on Amazon EC2's
spot instances.
The community is aware of the issue. However, the corporations
interested/involved in this area are not running on EC2 nor concerned about
having allocations taken away. The question of failed nodes is something we
plan to address over time, but is not considered an immediate show-stopper.
On F
Nobody is asking us to make any decision or take a position re standardization.
The Hadoop community fully intends to bring the question of Java binding
standards to the Forum over the next year, but we all know that is a long,
arduous journey. In the interim, they not only asked that we provide
Ralph,
I am not totally against the idea. As long as Hadoop does not take
away the current task communication mechanism until MPI finally (there
are just too many papers on FT MPI; I remember reading about checkpointing
MPI jobs more than 10 years ago!) has a standard way to handle node
failure, then I
:-)
I agree, and I don't sense anyone pushing in the direction of distorting the
current MPI behaviors. There are some good business reasons to want to use MPI
in analytics, and there are thoughts on how to work around the failure
issues, but Hadoop clusters have some mechanisms available to t
As an HPC software developer and user of OMPI, I'd like to add my $0.02
here even though I am not an OMPI developer.
Nothing in George's response seems to me to preclude the interested
institutions (listed as FROM in the RFC) from forking a branch to pursue
this work until there can be standar
We already have a stable, standard interface for non-C language bindings, Paul
- the C++ bindings, for example, are built on top of them.
The binding codes are all orthogonal to the base code. All they do is massage
data access and then loop back to the C bindings. This is the normal way we
han
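The thin-binding pattern Ralph describes can be sketched roughly as follows. This is an illustrative Java sketch, not OMPI code: the names `coreSend` and `send` are invented stand-ins (a real binding would declare the low-level entry point as a `native` JNI method resolving to the MPI C library, e.g. `MPI_Send`), but the shape is the same: the binding layer only massages the caller's data into the flat form the C layer expects, then delegates.

```java
import java.nio.ByteBuffer;

public class BindingSketch {
    // Stand-in for the C-level entry point (e.g. MPI_Send). In a real
    // binding this would be a native (JNI) method; here it just
    // pretends success the way MPI_SUCCESS (0) would.
    static int coreSend(ByteBuffer buf, int count, int dest, int tag) {
        return 0;
    }

    // The "binding" layer: orthogonal to the base code. It converts the
    // caller-friendly type into the flat buffer the C layer expects,
    // then loops straight back to the C-level call.
    public static int send(int[] data, int dest, int tag) {
        ByteBuffer buf = ByteBuffer.allocateDirect(data.length * Integer.BYTES);
        buf.asIntBuffer().put(data);
        return coreSend(buf, data.length, dest, tag);
    }

    public static void main(String[] args) {
        int rc = send(new int[]{1, 2, 3}, /*dest=*/1, /*tag=*/0);
        System.out.println("rc=" + rc);
    }
}
```

The point is that nothing in the wrapper carries semantics of its own; adding or removing such a layer does not touch the base code, which is why the bindings are described as orthogonal.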
Forgive me if I misunderstand, but I am under the impression that the
MPI Forum has not begun any standardization of MPI bindings for JAVA.
Have I missed something?
-Paul
On 2/7/2012 12:39 PM, Ralph Castain wrote:
We already have a stable, standard interface for non-C language bindings, Paul
On Feb 7, 2012, at 2:33 PM, George Bosilca wrote:
> This doesn't sound like a very good idea, despite significant support from
> a lot of institutions. There are no standardization efforts in the targeted
> community, and championing broader support in the Java world was not one of
> our mai
On Feb 7, 2012, at 1:43 PM, Paul H. Hargrove wrote:
> Forgive me if I misunderstand, but I am under the impression that the MPI
> Forum has not begun any standardization of MPI bindings for JAVA. Have I
> missed something?
No, they haven't - but that doesn't mean that the bindings cannot confo
On Feb 7, 2012, at 3:33 PM, Paul H. Hargrove wrote:
> So I'd propose that the work be done on a branch and the RFC can be reissued
> when there is both
> a) a standard to which the bindings can claim to conform
I don't really agree with this statement; see my prior email.
> b) an implementation
Ralph,
I think you and I may be confusing each other with the meaning of
"standard":
You asked me
So I'm not sure what you are asking that hasn't already been done…
My reply to that question is that when I wrote
a) a standard to which the bindings can claim to conform
I meant "a) JAVA bi
No problems, Paul. I appreciate your input.
If everything in the trunk was required to be in the standard, then much of the
trunk would have to be removed (e.g., all the fault tolerance code). As Jeff
indicated, the trunk is an area in which we bring new functionality for broader
exposure. I very
On 2/7/2012 8:59 AM, Jeff Squyres wrote:
This fixes all known issues.
Well, not quite...
I've SUCCESSFULLY retested 44 out of the 55 cpu/os/compiler/abi
combinations currently on my list.
I expect 9 more by the end of the day (the older/slower hosts), but two
of my test hosts are down.
S
On 2/7/2012 1:25 PM, Paul H. Hargrove wrote:
So far I see only two problems that remain:
+ I can't build w/ the PGI compilers on MacOS Lion.
This was previously reported in
http://www.open-mpi.org/community/lists/devel/2012/01/10258.php
+ Building w/ Solaris Studio 12.2 or 12.3 on Linux x86
On 2/7/2012 2:37 PM, Paul H. Hargrove wrote:
+ "make check" fails atomics tests using GCCFSS-4.0.4 compilers on
Solaris10/SPARC
Originally reported in:
http://www.open-mpi.org/community/lists/devel/2012/01/10234.php
This is a matter of the Sun/Oracle fork of GCC (known as GCC For SPARC
Syste
On 2/7/2012 1:25 PM, Paul H. Hargrove wrote:
I've SUCCESSFULLY retested 44 out of the 55 cpu/os/compiler/abi
combinations currently on my list.
I expect 9 more by the end of the day (the older/slower hosts), but
two of my test hosts are down.
My testing is complete for this rc:
+ 54 of my 5