On Tue, 2006-11-28 at 10:00 -0700, Li-Ta Lo wrote:
> On Mon, 2006-11-27 at 17:21 -0800, Matt Leininger wrote:
> > On Mon, 2006-11-27 at 16:45 -0800, Matt Leininger wrote:
> > > Has anyone tested OMPI's alltoall at > 2000 MPI tasks? I'm seeing each
> > > M
nt each time we send a message, forcing the OS to map the
> entire file at one point.
>
>george.
>
> On Nov 27, 2006, at 8:21 PM, Matt Leininger wrote:
>
> > On Mon, 2006-11-27 at 16:45 -0800, Matt Leininger wrote:
> >> Has anyone tested OMPI's alltoall at > 2
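The truncated message above refers to the shared-memory backing file being mapped in its entirety at one point, rather than remapped each time a message is sent. A minimal sketch of that idea, using plain Python `mmap` with made-up sizes for illustration (this is not Open MPI's actual code or its defaults):

```python
import mmap
import os
import tempfile

# Illustrative sizes only -- not Open MPI's actual defaults.
PER_PEER = 64 * 1024      # bytes reserved per on-node peer
LOCAL_PEERS = 8           # ranks sharing this node

size = PER_PEER * LOCAL_PEERS  # 524288 bytes

# Create the backing file at full size and map it all at once,
# instead of growing and remapping it as messages are sent.
fd, path = tempfile.mkstemp()
try:
    os.ftruncate(fd, size)
    with mmap.mmap(fd, size) as shm:
        shm[0:5] = b"hello"            # one process writes ...
        assert shm[0:5] == b"hello"    # ... a peer on the node would read
finally:
    os.close(fd)
    os.unlink(path)

print(size)  # 524288
```

The point of mapping up front is that the reservation is paid once per node, at a cost that grows with the number of on-node peers.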
t one point.
I'll try playing with the mpool_sm_per_peer_size parameter.
Thanks,
- Matt
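For reference, MCA parameters such as `mpool_sm_per_peer_size` can be lowered at launch time; a hedged example (the value, task count, and program name here are placeholders, not recommendations):

```shell
# Cap the per-peer shared-memory reservation (value is illustrative; tune as needed)
mpirun --mca mpool_sm_per_peer_size 1048576 -np 2048 ./alltoall_bench
```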
On Mon, 2006-11-27 at 16:45 -0800, Matt Leininger wrote:
> Has anyone tested OMPI's alltoall at > 2000 MPI tasks? I'm seeing each
> MPI task eat up > 1GB of memory (just for OMPI - not the app).
I gathered some more data using the alltoall benchmark in mpiBench.
mp
Has anyone tested OMPI's alltoall at > 2000 MPI tasks? I'm seeing each
MPI task eat up > 1GB of memory (just for OMPI - not the app).
Thanks,
- Matt
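A back-of-the-envelope model of why per-rank memory can reach the size reported above: if each rank keeps per-peer buffers or connection state for every other rank, memory grows linearly with job size. The per-peer cost below is an assumed figure for illustration, not a measured Open MPI value:

```python
# Toy model: per-rank memory when each rank holds state for every peer.
n_ranks = 2048
per_peer_cost = 512 * 1024          # hypothetical 512 KB of state per peer

per_rank_bytes = n_ranks * per_peer_cost
print(per_rank_bytes / 2**30)       # 1.0 (GiB per rank)
```

Under these assumed numbers, 2048 ranks at 512 KB of per-peer state lands at exactly 1 GiB per rank, which is in the range of the > 1 GB observation; halving either factor halves the footprint.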
On Mon, 2006-11-27 at 15:57 -0800, Mark A. Grondona wrote:
> > On Mon, 2006-11-27 at 16:29 -0700, Brian W Barrett wrote:
> > > On Nov 27, 2006, at 4:19 PM, Matt Leininger wrote:
> > >
> > > > I've been running more tests of OpenMPI v1.2b. I've ru
On Mon, 2006-11-27 at 16:29 -0700, Brian W Barrett wrote:
> On Nov 27, 2006, at 4:19 PM, Matt Leininger wrote:
>
> > I've been running more tests of OpenMPI v1.2b. I've run into several
> > cases where the app+MPI use too much memory and the OOM handler kills
> &
Copying the Open MPI folks on this thread.
- Matt
On Wed, 2006-04-19 at 12:05 -0700, Sean Hefty wrote:
> I'd like to get some feedback regarding the following approach to supporting
> multicast groups in userspace, and in particular for MPI. Based on side
> conversations, I need to know if th
As discussed today here are the configure flags for MPQC that we
typically use to enable MPI+threads. Curt and I can point folks to
input decks that either require mpi thread level to be funneled or
multiple.
You can download mpqc at www.mpqc.org.
'--prefix=/install/dir' '--with-build-i
On Mon, 2005-07-18 at 11:44 -0400, Jeff Squyres wrote:
> Excellent. Seems like several people have thought of this at the same
> time (I was pinged about this by the IB vendors).
>
> I know that others on the team have more experience in this area than I
> do, so I personally welcome all inform
On Mon, 2005-07-18 at 08:28 -0400, Jeff Squyres wrote:
> On Jul 18, 2005, at 2:50 AM, Matt Leininger wrote:
>
> >> Generally speaking, if you launch <=N processes in a job on a node
> >> (where N == number of CPUs on that node), then we set processor
> >> affin
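The quoted rule (bind when you launch at most as many processes as CPUs) can be sketched in userspace on Linux with `os.sched_setaffinity`; the `rank` value here is a stand-in for whatever the launcher would actually supply, and this is only an illustration of the policy, not Open MPI's implementation:

```python
import os

# Sketch: pin this process to one core when ranks <= cores (Linux-only).
ncpus = os.cpu_count()
rank = 0                     # a real launcher would supply the local rank

if rank < ncpus:             # only bind when there are enough cores
    os.sched_setaffinity(0, {rank % ncpus})

print(sorted(os.sched_getaffinity(0)))  # [0]
```

Oversubscribed jobs (more ranks than cores) would skip the binding step so the scheduler can balance them freely.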