On Dec 17, 2009, at 5:55 PM, <kevin.buck...@ecs.vuw.ac.nz> wrote:

> I am happy to be able to inform you that the problems we were
> seeing would seem to have been arising down at the OpenMPI
> level.
Happy for *them*, at least. ;-)

> If I remove any acknowledgement of IPv6 within the OpenMPI
> code, then both the PETSc examples and the PISM application
> have been seen to be running upon my initial 8-processor
> parallel environment when submitted as a Sun Grid Engine
> job.

Ok, that's good.

> I guess this means that the PISM and PETSc guys can "stand easy"
> whilst the OpenMPI community needs to follow up on why there's
> an "addr.sa_len=0" creeping through the interface inspection
> code (upon NetBSD at least) when it passes thru the various
> IPv6 stanzas.

Ok. We're still somewhat at a loss here, because we don't have any NetBSD to test on. :-( We're happy to provide any help that we can, and just like you, we'd love to see this problem resolved -- but NetBSD still isn't on any of our core competency lists. :-(

FWIW, we might want to move this discussion to the de...@open-mpi.org mailing list...

-- 
Jeff Squyres
jsquy...@cisco.com