Not quite yet, though we are working on it (some descriptive stuff is around,
but needs to be consolidated). Several of us started working together a couple
of months ago to support the MapReduce programming model on HPC clusters using
Open MPI as the platform. In working with our customers and
Ralph,
Do you have any YARN or Mesos performance comparisons against HOD? I suppose
since it was a customer requirement you might not have explored it. MPI support
seems to be an active issue for Mesos now.
Charles
On May 21, 2012, at 10:36 AM, Ralph Castain r...@open-mpi.org wrote:
OMPI is a performance-focused community, so we always compare things :-)
Some initial data against YARN, but not Mesos. Someone has been looking at
porting OMPI to Mesos, but it turns out that Mesos isn't a particularly
friendly MPI platform (a couple of us have been trying to provide advice on
hi all,
i'm part of an HPC group at a university, and we have some users that
are interested in Hadoop to see if it can be useful in their research.
we also have researchers that are using hadoop already on their own
infrastructure, but that is not enough reason for us to start with
We run similar infrastructure in a university project. We plan to install
hadoop and are looking for hadoop-based alternatives in case pure
hadoop does not work as expected.
Keep us updated on the code release.
Best,
PA
2012/5/20 Stijn De Weirdt stijn.dewei...@ugent.be
FWIW: Open MPI now has an initial cut at MR+ that runs map-reduce under any
HPC environment. We don't have the Java integration yet to support the Hadoop
MR class, but you can write a mapper/reducer and execute that programming
paradigm. We plan to integrate the Hadoop MR class soon.
If you
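As an aside for readers unfamiliar with the paradigm Ralph mentions: MR+'s actual API isn't shown in this thread, so the following is only a generic, self-contained Python sketch of what "write a mapper/reducer and execute that programming paradigm" means (all names here are illustrative, not MR+ functions):

```python
# Generic map/shuffle/reduce sketch (word count); not the MR+ or Hadoop API.
from itertools import groupby
from operator import itemgetter

def mapper(line):
    # Map phase: emit (word, 1) for every word in an input line.
    for word in line.split():
        yield (word, 1)

def reducer(word, counts):
    # Reduce phase: combine all counts emitted for one key.
    return (word, sum(counts))

def run_mapreduce(lines):
    # Map every input record to key/value pairs.
    pairs = [kv for line in lines for kv in mapper(line)]
    # Shuffle/sort: bring identical keys together.
    pairs.sort(key=itemgetter(0))
    # Reduce each key group to a single result.
    return dict(
        reducer(word, (count for _, count in group))
        for word, group in groupby(pairs, key=itemgetter(0))
    )

print(run_mapreduce(["the quick fox", "the lazy dog"]))
# → {'dog': 1, 'fox': 1, 'lazy': 1, 'quick': 1, 'the': 2}
```

A framework like Hadoop or MR+ supplies the distribution, scheduling, and fault tolerance around these two user-written functions; the local version above only shows the data flow.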
Hi Ralph,
I admit - I've only been half-following the OpenMPI progress. Do you have a
technical write-up of what has been done?
Thanks,
Brian
On May 20, 2012, at 9:31 AM, Ralph Castain wrote:
Hi All,
Guess HOD could be useful for an existing HPC cluster with the Torque scheduler
that needs to run map-reduce jobs.
Also read about *myHadoop - Hadoop on Demand on traditional HPC resources*,
which will support many HPC schedulers like SGE, PBS, etc. to overcome the
integration issues of a shared HPC architecture.
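For anyone following along, the HOD workflow on a Torque cluster looks roughly like the sketch below. This is only an illustrative CLI sequence based on the old HOD user guide; the directory and job names are placeholders, and the exact flags should be checked against the docs for your Hadoop version:

```
# Ask HOD to allocate a 4-node Hadoop cluster via Torque
hod allocate -d ~/cluster_dir -n 4

# Point the hadoop client at the provisioned cluster and run a job
hadoop --config ~/cluster_dir jar hadoop-examples.jar wordcount input output

# Return the nodes to Torque when finished
hod deallocate -d ~/cluster_dir
```

In this model Torque only hands HOD the nodes; HOD then brings up a private Hadoop instance (JobTracker/TaskTrackers) on them, and Hadoop's own scheduler runs the map-reduce jobs inside that allocation.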
If I understand it right, HOD is mentioned mainly for merging existing HPC
clusters with hadoop and for testing purposes.
I cannot find what the role of Torque is here (just initial node
allocation?) and which is the default scheduler of HOD? Probably the
scheduler from the hadoop