Interesting... page 11:
http://www.fujitsu.com/downloads/TC/sc10/programming-on-k-computer.pdf
Open MPI based:
* Open Standard, Open Source, Multi-Platform including PC Cluster.
* Adding extension to Open MPI for "Tofu" interconnect
Rayson
==
Grid Engine / Open Grid Scheduler
Short version: yes, Open MPI is used on K and was used to power the 8PF runs.
>
> w00t!
>
>
>
> On Jun 24, 2011, at 7:16 PM, Jeff Squyres wrote:
>
>> w00t!
>>
>> OMPI powers 8 petaflops!
>> (at least I'm guessing that -- does anyone know if that's t
I guess Platform MPI (which is a "merge" of Scali MPI & HP-MPI) will
get technologies from IBM MPI as well... or "merge" IBM's MPI into
Platform MPI ("merge" is in quotes because it is in general hard to
merge technologies - like I told a co-worker 10 years ago that one
can't just merge SGE with
On Tue, Dec 20, 2011 at 8:28 PM, Larry Baker wrote:
>> I am pretty sure a literal "rm -rf" should be fine.
>
> Not necessarily. I'm not at work. But I think either -f or -r might not be
> legal on all Unix's (Tru64 Unix? AIX?).
I used to code on AIX daily, and I am pretty sure that "rm -rf" works there.
Currently, Hadoop tasks (in a job) are independent of each other. If
Hadoop is going to use MPI for inter-task communication, then make sure
they understand that the MPI standard currently does not address fault
tolerance.
Note that it is not uncommon to run MapReduce jobs on Amazon EC2's
spot instances
, 2012, at 1:05 PM, Rayson Ho wrote:
>
>> Currently, Hadoop tasks (in a job) are independent of each other. If
>> Hadoop is going to use MPI for inter-task communication, then make sure
>> they understand that the MPI standard currently does not address fault
>> tolerance.
>>
See P. 38 - 40: MVAPICH2 outperforms Open MPI in each test. Is it
something they are doing to optimize for CUDA & GPUs that is not in
OMPI, or did they specifically tune MVAPICH2 to make it shine??
http://hpcadvisorycouncil.com/events/2012/Israel-Workshop/Presentations
Performance depends on the network topology & node hardware, and the
benchmark - so we don't have enough information to determine the root
of the issue...
However, you can do some debugging on your end (once you master the
techniques you will be able to debug all sorts of performance problems
- no
On Mon, Apr 23, 2012 at 4:21 PM, Jeffrey Squyres wrote:
> No one replied to this RFC. Does anyone have an opinion about it?
>
> I have attached a patch (including some debugging output) showing my initial
> implementation. If no one objects by the end of this week, I'll commit to
> the trunk.
On Mon, Apr 23, 2012 at 5:56 PM, Jeffrey Squyres wrote:
> On Apr 23, 2012, at 5:53 PM, George Bosilca wrote:
>
>> However, I did a quick grep and most of our headers are larger than a single
>> line of cache (even Itanium L2) so I suppose that making
>> opal_cache_line_size equal to the L2 cache
On Fri, Jul 27, 2012 at 8:53 AM, Daniel Gruber wrote:
> A while after u5 the open source repository was closed and most of the
> German engineers from Sun/Oracle moved to Univa, working on Univa
> Grid Engine. Currently you have the choice between Univa Grid Engine,
> Son of Grid Engine (free acad
,000 processes level! We are pleased to
>>> use such a great piece of open-source software.
>>>
>>> We cannot tell the details of our technology yet because of our contract
>>> with RIKEN AICS; however, we plan to feed back our improvements
>>> a
ints and trying to fix the bug myself,
> but I've been short on time so haven't gotten around to it yet.
>
> Richard
>
> On Saturday, 20 October, 2012 at 10:12 AM, Rayson Ho wrote:
>
> Hi Eric,
>
> Sounds like it's also related to this problem reported by
.
>
> Alternatively, you might check with Edgar Gabriel about the ompio component
> and see if it either supports > 2GB sizes or can also be extended to do so.
> Might be that a simple change to select that module instead of ROMIO would
> meet the need.
>
> Appreciate your int
by someone else, so long as we don't get into copyright
> issues.
>
> On Sep 19, 2012, at 11:57 PM, Rayson Ho wrote:
>
>> I found this paper recently, "MPI Library and Low-Level Communication
>> on the K computer", available at:
>>
>> http://www.fuj
I googled but found no precise answer...
Is it true that some features are disabled if the code is not
supported on the target platform??
http://www.open-mpi.org/community/lists/devel/2007/07/1896.php
http://www.open-mpi.org/community/lists/devel/2007/07/1886.php
I got everything compiled on MIPS.
What would the MIPS/Linux port miss if ompi_info prints:
Thread support: posix (mpi: yes, progress: yes)
TIA,
Rayson
On Tue, Apr 14, 2009 at 10:00 AM, Rayson Ho wrote:
> I googled but found no precise answer...
>
> Is it true that some features are disabled if the code is not
>
On Wed, Apr 29, 2009 at 12:38 PM, Jerry Ye wrote:
> I’m currently working in an environment where I cannot use SSH to launch
> child processes. Instead, the process with rank 0 skips the ssh_child
> function in plm_rsh_module.c and the child processes are all started at the
> same time on differe
- give us the list of cores available
> to us so we can map and do affinity, and pass in your own mapping. Maybe
> with some logic so we can decide which to use based on whether OMPI or GE
> did the mapping??
>
> Not sure here - just thinking out loud.
> Ralph
>
> On Sep 30
Hi Jeff,
There's a typo in trunk/README:
-> 1175 ...unrelated to wach other
I guess you mean "unrelated to each other".
Rayson
On Wed, Apr 21, 2010 at 12:35 PM, Jeff Squyres wrote:
> Per the telecon Tuesday, I committed a new OMPI MPI extension to the trunk:
>
> https://svn.open-mpi.org/
BTW, another 2 typos in README:
1193 subdirectory off <- directory "of"
1199 thse extensions <- "these" extensions
Rayson
On Thu, Apr 22, 2010 at 10:35 AM, Jeff Squyres wrote:
> Fixed -- thanks!
>
> On Apr 22, 2010, at 12:35 AM, Rayson Ho wrote:
>
>
In the MPITypes paper ("Processing MPI Datatypes Outside MPI"), page 7:

Test: Vector (element type: float)
  MPICH2:      1788.85 MB/sec
  OpenMPI:     1088.01 MB/sec  <- *
  MPITypes:    1789.37 MB/sec
  Manual Copy: 1791.59 MB/sec

Test: YZ Face (element type: float)
  MPICH2:   145.32 MB/sec
  OpenMPI:   93.08 MB/sec  <- *
Hello,
I'm from the Sun Grid Engine (SGE) project (
http://gridengine.sunsource.net ). I am working on processor affinity
support for SGE.
In 2005, we had some discussions on the SGE mailing list with Jeff on
this topic. As quad-core processors are available from AMD and Intel,
and higher core co
, "NUMA and
interconnect transfers":
http://opensolaris.org/jive/thread.jspa?messageID=185268
Rayson
On Jan 11, 2008 6:22 AM, Pak Lui wrote:
> https://svn.open-mpi.org/trac/ompi/wiki/OnHostTopologyDescription
>
>
> Rayson Ho wrote:
> > Hello,
> >
> > I'm
What is the license of the logo?? If it is under a free license, then
maybe I can upload it to Wikipedia and update the page:
http://en.wikipedia.org/wiki/Open_MPI
Rayson
On 3/13/08, Jeff Squyres wrote:
> On Mar 13, 2008, at 8:35 AM, Adrian Knoth wrote:
>
> >> We usually snip off the words a
e-
> > socket
> > relationship in order to address the multicore issue. I think there
> > are
> > other folks in the team who are actively working on it so they
> > probably
> > can address it better than I can. Here some descriptions on the wiki
> > for it: