Re: [OMPI devel] Intel C (icc) 11.0.083 compile problem

2009-06-17 Thread Paul H. Hargrove
Jeff Squyres wrote: [snip] Erm -- that's weird. So when you extract the tarballs, atomic-amd64-linux.s is non-empty (as it should be), but after a failed build, its file length is 0? Notice that during the build process, we symlink atomic-amd64-linux.s to atomic-asm.S (I see that happening in…

Re: [OMPI devel] Intel C (icc) 11.0.083 compile problem

2009-06-17 Thread Jeff Squyres
I am trying to build Open MPI 1.3.2 with ifort 11.0.074 and icc/icpc 11.0.083 (the Intel compilers) on a quad-core AMD Opteron workstation running CentOS 4.4. I have no problems on this same machine if I use ifort with gcc/g++ instead of icc/icpc. Configure seems to work ok even though icc and…

[OMPI devel] Intel C (icc) 11.0.083 compile problem

2009-06-17 Thread David Robertson
Hello, I am trying to build Open MPI 1.3.2 with ifort 11.0.074 and icc/icpc 11.0.083 (the Intel compilers) on a quad-core AMD Opteron workstation running CentOS 4.4. I have no problems on this same machine if I use ifort with gcc/g++ instead of icc/icpc. Configure seems to work ok even though…

Re: [OMPI devel] connect management for multirail (Open-)MX

2009-06-17 Thread George Bosilca
Yes, in Open MPI the connections are usually created on demand. As far as I know there are a few devices that do not abide by this "law", but MX is not one of them. To be more precise on how the connections are established: if we say that each node has two rails and we're doing a ping-pong, t…
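For readers new to the thread, here is a minimal sketch of the ping-pong pattern George refers to. It is illustrative application code using only standard MPI calls, not the Open MPI internals under discussion; the point is that the first send in each direction is what triggers the lazy, on-demand connection setup he describes.

```c
/* Minimal MPI ping-pong between ranks 0 and 1 -- a sketch of the kind of
 * traffic that exercises Open MPI's on-demand connection setup: no BTL
 * connection exists until the first send actually needs one. */
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, peer;
    char buf[64] = "ping";

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    peer = (rank == 0) ? 1 : 0;

    if (rank == 0) {
        /* This first MPI_Send is what forces the connection to be made. */
        MPI_Send(buf, sizeof(buf), MPI_CHAR, peer, 0, MPI_COMM_WORLD);
        MPI_Recv(buf, sizeof(buf), MPI_CHAR, peer, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    } else if (rank == 1) {
        MPI_Recv(buf, sizeof(buf), MPI_CHAR, peer, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        MPI_Send(buf, sizeof(buf), MPI_CHAR, peer, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```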

Re: [OMPI devel] connect management for multirail (Open-)MX

2009-06-17 Thread Brice Goglin
Thanks for the answer. So if I understand correctly, the connection order is decided dynamically depending on when each peer has some messages to send and how the upper level load-balances them. There shouldn't be anything preventing (1) and (2) from happening at the same time then. And I wonder wh…

Re: [OMPI devel] connect management for multirail (Open-)MX

2009-06-17 Thread George Bosilca
Brice, The connection mechanism in the MX BTL suffers from a big problem on multi-rail (if all NICs are identical). If the rails are connected using the same mapper, they will have identical IDs. Unfortunately, these IDs are supposed to be unique in order to guarantee the connection ordering…
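To make the ordering problem concrete: connection establishment typically breaks ties with a rule like "the side with the smaller ID initiates", which is only well-defined if IDs never collide. The sketch below is hypothetical (the struct and function names are invented, and this is not the MX BTL's actual code); it shows one way a composite key could restore uniqueness when the mapper-assigned IDs alone do not.

```c
/* Hypothetical sketch: if two rails mapped by the same mapper report
 * identical IDs, any "smaller ID connects first" tie-break becomes
 * ambiguous. Folding a locally unique rail index into the comparison
 * key (an assumption, not what the MX BTL actually does) removes the
 * ambiguity. */
#include <stdint.h>

struct rail_id {
    uint64_t nic_id;      /* mapper-assigned NIC ID (may collide) */
    uint32_t rail_index;  /* local rail number: 0, 1, ... (unique) */
};

/* Total order over (nic_id, rail_index) pairs; never reports "equal"
 * for two distinct rails even when nic_id collides. */
static int rail_id_cmp(const struct rail_id *a, const struct rail_id *b)
{
    if (a->nic_id != b->nic_id)
        return a->nic_id < b->nic_id ? -1 : 1;
    if (a->rail_index != b->rail_index)
        return a->rail_index < b->rail_index ? -1 : 1;
    return 0;
}

/* The side with the smaller composite ID initiates the connection. */
static int i_should_connect_first(const struct rail_id *mine,
                                  const struct rail_id *peer)
{
    return rail_id_cmp(mine, peer) < 0;
}
```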

Re: [OMPI devel] 1.3.3 Release Schedule

2009-06-17 Thread Brad Benton
On Wed, Jun 17, 2009 at 6:45 AM, Jeff Squyres wrote: > Looks good to me. Brad -- can you add this to the wiki in the 1.3 series page? done: https://svn.open-mpi.org/trac/ompi/milestone/Open%20MPI%201.3.3 --brad

Re: [OMPI devel] Just a suggestion about a formation of new openMPI student mailing list

2009-06-17 Thread Leo P.
Hi Eugene, I was just thinking about Ubuntu's MOTU initiative [https://wiki.ubuntu.com/MOTU/Mentoring] when I talked about a mentoring program for openMPI. Also I thought the user mailing list was for talking about user-level programs, not the things related to core openMPI functions, and so on…

Re: [OMPI devel] Just a suggestion about a formation of new openMPI student mailing list

2009-06-17 Thread Eugene Loh
Leo P. wrote: I found the openMPI community filled with co-operative and helpful people, and would like to thank them through this email [Nik, Eugene, Ralph, Mitchel and others]. You are very gracious. Also I would like to suggest one or maybe two things. 1. First of all I would l…

[OMPI devel] Just a suggestion about a formation of new openMPI student mailing list

2009-06-17 Thread Leo P.
Hi everyone, I found the openMPI community filled with co-operative and helpful people, and would like to thank them through this email [Nik, Eugene, Ralph, Mitchel and others]. Also I would like to suggest one or maybe two things. 1. First of all I would like to suggest a different mailing list…

[OMPI devel] Fault Tolerant OpenMPI

2009-06-17 Thread 刚 王
Hi All, I'm studying fault-tolerant MPI. Does OpenMPI support failure auto-detection, notification, and MPI library rebuilding like Harness+FT-MPI? Many thanks. Gang Wang
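For context, the portable part of what Gang asks about can be approximated with standard MPI error handlers. The sketch below uses only standard MPI-2 calls and deliberately stops where FT-MPI begins: surfacing a failure as an error code is standard MPI, while rebuilding the communicator afterwards is exactly what Harness+FT-MPI added and plain MPI does not specify.

```c
/* Sketch: switch MPI_COMM_WORLD from the default MPI_ERRORS_ARE_FATAL
 * to MPI_ERRORS_RETURN so calls return an error code the application
 * can at least inspect, instead of aborting the job. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, rc;
    char buf[16] = "hello";

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

    rc = MPI_Send(buf, sizeof(buf), MPI_CHAR,
                  (rank + 1) % 2, 0, MPI_COMM_WORLD);
    if (rc != MPI_SUCCESS) {
        char msg[MPI_MAX_ERROR_STRING];
        int len;
        MPI_Error_string(rc, msg, &len);
        fprintf(stderr, "rank %d: send failed: %s\n", rank, msg);
        /* Recovery (shrinking or rebuilding the communicator) is the
         * FT-MPI part that standard MPI-2 does not provide. */
    }

    MPI_Finalize();
    return 0;
}
```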

Re: [OMPI devel] [RFC] Low pressure OPAL progress

2009-06-17 Thread Ashley Pittman
On Tue, 2009-06-09 at 07:28 -0400, Terry Dontje wrote: > The biggest issue is coming up with a way to have blocks on the SM BTL converted to the system poll call without requiring a socket write for every packet. For what it's worth, you don't need a socket write for every (local) packet; all you…
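A hypothetical sketch of the optimization Ashley is pointing at (invented names, not the actual SM BTL code): the receiver advertises through a shared flag that it is about to block in poll(), and senders pay for a wakeup write only in that rare case, so the common fast path of an actively polling receiver costs no syscall per packet.

```c
/* Sketch of a "write only when the peer is asleep" wakeup scheme. How
 * the pipe/fifo and the shared struct are set up between the two
 * processes is omitted here. */
#include <poll.h>
#include <unistd.h>
#include <stdatomic.h>

struct sm_peer {
    atomic_int sleeping;   /* 1 while the receiver is blocked in poll() */
    int        wake_fd[2]; /* [0] read end (receiver), [1] write end */
};

/* Sender side: deliver the fragment, then wake the peer only if needed. */
void sm_send_notify(struct sm_peer *p)
{
    /* ... enqueue the fragment in the shared-memory FIFO here ... */
    if (atomic_load(&p->sleeping)) {
        char c = 0;
        (void)write(p->wake_fd[1], &c, 1);  /* rare, not per-packet */
    }
}

/* Receiver side: fall back to poll() only after polling finds nothing. */
void sm_recv_block(struct sm_peer *p)
{
    struct pollfd pfd = { .fd = p->wake_fd[0], .events = POLLIN };
    char c;

    atomic_store(&p->sleeping, 1);
    /* A real implementation must re-check the shared FIFO here to close
     * the race with a sender that read the flag as 0 just before. */
    if (poll(&pfd, 1, -1) > 0 && (pfd.revents & POLLIN))
        (void)read(p->wake_fd[0], &c, 1);   /* drain the wakeup byte */
    atomic_store(&p->sleeping, 0);
}
```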

[OMPI devel] connect management for multirail (Open-)MX

2009-06-17 Thread Brice Goglin
Hello, I am debugging some sort of deadlock when doing multirail over Open-MX. What I am seeing with 2 processes and 2 boards per node with *MX* is:
1) process 0 rail 0 connects to process 1 rail 0
2) p1r0 connects back to p0r0
3) p0 rail 1 connects to p1 rail 1
4) p1r1 connects back to p0r1
For s…

Re: [OMPI devel] 1.3.3 Release Schedule

2009-06-17 Thread Jeff Squyres
Looks good to me. Brad -- can you add this to the wiki in the 1.3 series page? On Jun 16, 2009, at 10:37 PM, Brad Benton wrote: All: We are close to releasing 1.3.3. This is the current plan: - Evening of 6/16: collect MTT runs on the current branch w/the current 1.3.3 features & fixes…