Re: [OMPI devel] OMPI alltoall memory footprint

2006-11-28 Thread Li-Ta Lo
On Mon, 2006-11-27 at 17:21 -0800, Matt Leininger wrote: > On Mon, 2006-11-27 at 16:45 -0800, Matt Leininger wrote: > > Has anyone tested OMPI's alltoall at > 2000 MPI tasks? I'm seeing each > > MPI task eat up > 1GB of memory (just for OMPI - not the app). > > I gathered some more data usin…
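
The report above — more than 1 GB per task once the job passes roughly 2000 ranks — is consistent with per-peer buffering that grows linearly in job size. The sketch below is a back-of-envelope estimate only; the buffer count and size are illustrative assumptions, not Open MPI's actual defaults.

```python
# Rough estimate of per-rank memory when every rank keeps dedicated
# send/receive buffers for every peer, as a basic alltoall path does.
# bufs_per_peer and buf_size are hypothetical illustration values.

def per_rank_buffer_bytes(num_ranks, bufs_per_peer=8, buf_size=64 * 1024):
    """Memory one rank devotes to peer buffers: linear in job size."""
    return num_ranks * bufs_per_peer * buf_size

# At 2048 ranks, 8 x 64 KiB buffers per peer already cost 1 GiB per rank.
gib = per_rank_buffer_bytes(2048) / 2**30
print(f"{gib:.2f} GiB per rank")  # prints "1.00 GiB per rank"
```

The point is the shape of the curve, not the constants: doubling the job doubles each rank's footprint, so the aggregate cost grows quadratically.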

Re: [OMPI devel] OMPI alltoall memory footprint

2006-11-28 Thread Li-Ta Lo
On Tue, 2006-11-28 at 09:28 -0800, Matt Leininger wrote: > On Tue, 2006-11-28 at 10:00 -0700, Li-Ta Lo wrote: > > On Mon, 2006-11-27 at 17:21 -0800, Matt Leininger wrote: > > > On Mon, 2006-11-27 at 16:45 -0800, Matt Leininger wrote: > > > > Has anyone tested OMPI's…

Re: [OMPI devel] 1.2b3 fails on bluesteel

2007-01-19 Thread Li-Ta Lo
On Fri, 2007-01-19 at 13:25 -0700, Greg Watson wrote: > Bluesteel is a 64bit bproc machine. I configured with: > > ./configure --with-devel-headers --disable-shared --enable-static > > When I attempt to run an MPI program: > > [bluesteel.lanl.gov:28663] [0,0,0] ORTE_ERROR_LOG: Not available in…

Re: [OMPI devel] 1.2b3 fails on bluesteel

2007-01-19 Thread Li-Ta Lo
On Fri, 2007-01-19 at 14:42 -0700, Greg Watson wrote: > > The libraries required by the program are: > > $ ldd x > librt.so.1 => /lib64/tls/librt.so.1 (0x2abc1000) > libbproc.so.4 => /usr/lib64/libbproc.so.4 (0x2acdb000) > libdl.so.2 => /lib64/libdl.so.2…

Re: [OMPI devel] Is it possible to get BTL transport work directly with MPI level

2007-04-03 Thread Li-Ta Lo
On Sun, 2007-04-01 at 13:12 -0600, Ralph Castain wrote: > > 2. I'm not sure what you mean by mapping MPI processes to "physical" > processes, but I assume you mean how do we assign MPI ranks to processes on > specific nodes. You will find that done in the orte/mca/rmaps framework. We > currently…
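
Assigning MPI ranks to nodes, as the orte/mca/rmaps framework does, comes down to a mapping policy. The sketch below shows the two classic policies in miniature; the node names, slot counts, and function names are made up for illustration and do not mirror the rmaps API.

```python
# Two common rank-to-node mapping policies, sketched as plain functions.
# Nodes are (name, slots) pairs; both functions return (rank, node) pairs.

def map_by_slot(nodes, nprocs):
    """Fill each node's slots before moving to the next node."""
    placement, rank = [], 0
    while rank < nprocs:                 # wraps around if oversubscribed
        for name, slots in nodes:
            for _ in range(slots):
                if rank >= nprocs:
                    break
                placement.append((rank, name))
                rank += 1
    return placement

def map_by_node(nodes, nprocs):
    """Round-robin ranks across nodes, one at a time."""
    names = [name for name, _ in nodes]
    return [(rank, names[rank % len(names)]) for rank in range(nprocs)]

nodes = [("n0", 2), ("n1", 2)]
print(map_by_slot(nodes, 4))  # ranks 0,1 on n0; ranks 2,3 on n1
print(map_by_node(nodes, 4))  # ranks alternate n0, n1, n0, n1
```

The choice matters for communication locality: by-slot packing keeps consecutive ranks on the same node (good for nearest-neighbor patterns over shared memory), while by-node spreading balances per-node load.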

Re: [OMPI devel] Is it possible to get BTL transport work directly with MPI level

2007-04-03 Thread Li-Ta Lo
On Tue, 2007-04-03 at 12:33 -0600, Ralph H Castain wrote: > > > On 4/3/07 9:32 AM, "Li-Ta Lo" wrote: > > > On Sun, 2007-04-01 at 13:12 -0600, Ralph Castain wrote: > > > 2. I'm not sure what you mean by mapping MPI processes to…

Re: [OMPI devel] Collectives interface change

2007-08-13 Thread Li-Ta Lo
On Thu, 2007-08-09 at 14:49 -0600, Brian Barrett wrote: > Hi all - > > There was significant discussion this week at the collectives meeting > about improving the selection logic for collective components. While > we'd like the automated collectives selection logic laid out in the > Collv2…
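
Selection logic of the kind discussed above typically keys off communicator size and message size. The function below only mirrors the *shape* of such a decision table; the algorithm names and cutoff values are hypothetical, not the actual rules in any Open MPI component.

```python
# Illustrative collective-algorithm selection: pick an alltoall variant
# from communicator size and per-message size. All names and thresholds
# here are invented for the sketch.

def choose_alltoall(comm_size, msg_bytes):
    if comm_size <= 8:
        return "basic_linear"     # tiny communicator: simplicity wins
    if msg_bytes <= 256:
        return "bruck"            # latency-bound: O(log p) steps
    if msg_bytes <= 32 * 1024:
        return "pairwise"         # bandwidth-bound, structured exchange
    return "linear_sync"          # huge messages: cap outstanding sends

print(choose_alltoall(4, 1024))     # basic_linear
print(choose_alltoall(512, 64))     # bruck
print(choose_alltoall(512, 65536))  # linear_sync
```

A real framework would make this table tunable per platform (and that tuning is exactly what makes a clean component-selection interface valuable).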

Re: [OMPI devel] Maximum Shared Memory Segment - OK to increase?

2007-08-28 Thread Li-Ta Lo
On Mon, 2007-08-27 at 15:10 -0400, Rolf vandeVaart wrote: > We are running into a problem when running on one of our larger SMPs > using the latest Open MPI v1.2 branch. We are trying to run a job > with np=128 within a single node. We are seeing the following error: > > "SM failed to send message…
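
A failure that appears only at large np on a single node fits a shared memory segment sized with a quadratic term: with one FIFO per sender/receiver pair, the segment's demand grows with np². The parameters below are illustrative guesses, not Open MPI's real sizing formula.

```python
# Rough model of a shared memory segment holding per-pair FIFOs plus
# per-pair eager fragments. Entry counts and sizes are hypothetical.

def sm_segment_bytes(nprocs, fifo_entries=128, entry_bytes=64,
                     frag_bytes=32 * 1024, frags_per_pair=2):
    """The n*n terms are what exhaust the segment at large np per node."""
    fifos = nprocs * nprocs * fifo_entries * entry_bytes
    frags = nprocs * nprocs * frags_per_pair * frag_bytes
    return fifos + frags

for np_ in (16, 64, 128):
    print(np_, round(sm_segment_bytes(np_) / 2**20, 1), "MiB")
```

Because every term scales with np², doubling the process count quadruples the demand — which is why a fixed segment ceiling that is fine at np=64 can fail at np=128.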

Re: [OMPI devel] Maximum Shared Memory Segment - OK to increase?

2007-08-28 Thread Li-Ta Lo
On Tue, 2007-08-28 at 10:12 -0600, Brian Barrett wrote: > On Aug 28, 2007, at 9:05 AM, Li-Ta Lo wrote: > > > On Mon, 2007-08-27 at 15:10 -0400, Rolf vandeVaart wrote: > >> We are running into a problem when running on one of our larger SMPs > >> using the latest…

Re: [OMPI devel] SM BTL hang issue

2007-08-29 Thread Li-Ta Lo
On Wed, 2007-08-29 at 11:36 -0400, Terry D. Dontje wrote: > To run the code I usually do "mpirun -np 6 a.out 10" on a 2 core > system. It'll print out the following and then hang: > Target duration (seconds): 10.00 > # of messages sent in that time: 589207 > Microseconds per message…
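
The hanging test program above is a message-rate loop: send for a fixed wall-clock window, count messages, report microseconds per message. The sketch below reproduces that structure without MPI — the `send` callable is a stand-in for a real send/receive pair, so the harness stays runnable anywhere.

```python
import time

def message_rate(send, target_seconds=0.05):
    """Count how many calls fit in a fixed wall-clock window and report
    microseconds per call, like the benchmark quoted in the thread.
    `send` stands in for an MPI send/recv; here it can be a no-op."""
    deadline = time.perf_counter() + target_seconds
    count = 0
    while time.perf_counter() < deadline:
        send()
        count += 1
    return count, target_seconds * 1e6 / count

count, usec = message_rate(lambda: None)
print(f"# of messages sent in that time: {count}")
print(f"Microseconds per message: {usec:.3f}")
```

Running this with np=6 on 2 cores, as in the report, oversubscribes the node — a useful stress case for a shared memory transport, since descheduled receivers let queues fill and expose flow-control bugs.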

Re: [OMPI devel] Maximum Shared Memory Segment - OK to increase?

2007-08-30 Thread Li-Ta Lo
On Thu, 2007-08-30 at 10:26 -0400, rolf.vandeva...@sun.com wrote: > Li-Ta Lo wrote: > > On Tue, 2007-08-28 at 10:12 -0600, Brian Barrett wrote: > > > On Aug 28, 2007, at 9:05 AM, Li-Ta Lo wrote: …

Re: [OMPI devel] SM BTL hang issue

2007-08-30 Thread Li-Ta Lo
On Wed, 2007-08-29 at 14:06 -0400, Terry D. Dontje wrote: > hmmm, interesting since my version doesn't abort at all. > Some problem with Fortran compiler/language binding? My C translation doesn't have any problem. [ollie@exponential ~]$ mpirun -np 4 a.out 10 Target duration (seconds): 10.00 …

Re: [OMPI devel] SM BTL hang issue

2007-08-30 Thread Li-Ta Lo
On Thu, 2007-08-30 at 12:25 -0400, terry.don...@sun.com wrote: > Li-Ta Lo wrote: > > On Wed, 2007-08-29 at 14:06 -0400, Terry D. Dontje wrote: > > > hmmm, interesting since my version doesn't abort at all. …

Re: [OMPI devel] SM BTL hang issue

2007-08-30 Thread Li-Ta Lo
On Thu, 2007-08-30 at 12:45 -0400, terry.don...@sun.com wrote: > Li-Ta Lo wrote: > > On Thu, 2007-08-30 at 12:25 -0400, terry.don...@sun.com wrote: > > > Li-Ta Lo wrote: …

Re: [OMPI devel] Any info regarding sm availible?

2007-10-03 Thread Li-Ta Lo
On Wed, 2007-10-03 at 12:43 +0200, Torje Henriksen wrote: > Hi everyone, > > > I'm a student at the University of Tromso, and I'm trying to > modify the shared memory component in the byte transfer layer > (ompi/mca/btl/sm), and also the queues that this > component uses. > > I was wondering if…

Re: [OMPI devel] Moving fragments in btl sm

2007-11-08 Thread Li-Ta Lo
On Thu, 2007-11-08 at 13:38 +0100, Torje Henriksen wrote: > Hi, > > I have a question that I shouldn't need to ask, but I'm > kind of lost in the code. > > The btl sm component is using the circular buffers to write and read > fragments (sending and receiving). > > In the write_to_head and read…
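
The write_to_head / read_from_tail pattern asked about here is a classic single-producer, single-consumer circular buffer. The toy below borrows only the naming from the question; the real btl sm FIFO adds memory barriers, cache-line padding, and wrap bookkeeping that this sketch deliberately omits.

```python
# Toy circular buffer: the writer advances head, the reader advances
# tail, and one slot is kept empty to distinguish full from empty.

class CircularBuffer:
    def __init__(self, size):
        self.slots = [None] * size
        self.head = 0   # next slot the writer fills
        self.tail = 0   # next slot the reader drains
        self.size = size

    def write_to_head(self, frag):
        if (self.head + 1) % self.size == self.tail:
            return False                   # full: sender must retry later
        self.slots[self.head] = frag
        self.head = (self.head + 1) % self.size
        return True

    def read_from_tail(self):
        if self.tail == self.head:
            return None                    # empty: nothing to receive
        frag, self.slots[self.tail] = self.slots[self.tail], None
        self.tail = (self.tail + 1) % self.size
        return frag

cb = CircularBuffer(4)
for i in range(3):
    cb.write_to_head(f"frag{i}")
print(cb.write_to_head("frag3"))  # False: one slot is kept free
print(cb.read_from_tail())        # frag0
```

The "full" return is exactly the case that interacts with flow control in a shared memory transport: if the sender spins instead of backing off, two ranks whose queues are both full can deadlock.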