Re: [OMPI users] INSTALL bug in 64-bit build of OpenMPI Release build on Windows - has workaround

2010-02-05 Thread Marcus G. Daniels
Shiqing, Damien, > If you already have an x86 solution, and you want to have > another for x64, you have to start over from the CMake GUI, select the > 64-bit generator, i.e. "Visual Studio 9 2008 Win64", so that it generates > a new solution in a different directory. That was the source of my
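The CMake-GUI steps described above can also be done from the command line. This is a minimal sketch, not from the thread: the generator name is CMake's real 64-bit generator for Visual Studio 2008, but the directory names are hypothetical.

```shell
rem Generate a fresh x64 solution in its own build directory,
rem kept separate from any existing 32-bit build tree.
mkdir build-x64
cd build-x64
cmake -G "Visual Studio 9 2008 Win64" ..\openmpi-src
```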

Re: [OMPI users] INSTALL bug in 64-bit build of OpenMPI Release build on Windows - has workaround

2010-02-04 Thread Marcus G. Daniels
Hi, I have another problem with building x86_64 on 64-bit Windows 7. I set CMAKE_CL_64 to true in cmake-gui, and set the VS2008 config to x64 (`New' platform adapted from WIN32). However, I get this during the build: 1>-- Build started: Project: libopen-pal, Configuration: Debug x64 --
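For reference, a generated VS2008 solution can also be built for the x64 platform from the command line, which makes the active configuration explicit. A hedged sketch: the solution name here is hypothetical, and "Debug|x64" matches the configuration shown in the build output above.

```shell
rem Build the CMake-generated solution for the x64 platform.
rem devenv is the Visual Studio 2008 IDE's command-line driver.
devenv OpenMPI.sln /Build "Debug|x64"
```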

Re: [OMPI users] INSTALL bug in 64-bit build of OpenMPI Release build on Windows - has workaround

2010-02-04 Thread Marcus G. Daniels
> Hmmm. I did try setting Release and I think I still got PDBs. I'll try > again from a totally clean source tree and post back. Another data point: I tried CMake's Generate after setting CMAKE_BUILD_TYPE and building. I have the same sort of build problems with setting x64 in the VS 2008 config
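One likely explanation for still getting PDBs after setting CMAKE_BUILD_TYPE: Visual Studio is a multi-configuration generator, so CMake largely ignores CMAKE_BUILD_TYPE and the configuration is chosen at build time instead. A sketch under that assumption (solution name hypothetical):

```shell
rem With multi-config generators (Visual Studio), pick the configuration
rem when invoking the build rather than via CMAKE_BUILD_TYPE.
devenv OpenMPI.sln /Build "Release|x64"
```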

Re: [OMPI users] Axon BTL for OpenMPI?

2008-12-12 Thread Marcus G. Daniels
Hi Jeff, I'd like to use MPI for features like derived types, and moving around non-flat data. You should be able to do that today. The Axon PCIe-to-PCIe RDMA interface has a raw device and an ethernet device in the Linux kernel, and I can indeed use the latter as a workaround. It's mor

[OMPI users] Axon BTL for OpenMPI?

2008-12-11 Thread Marcus G. Daniels
Hello, I've heard some experimental work has been done to run OpenMPI over the Axon driver as found in IBM triblades. Seems like that should work fine, as it's just another RDMA interface, no? I'd like to use MPI for features like derived types, and moving around non-flat data. Regards,

Re: [OMPI users] Cell EIB support for OpenMPI

2007-03-23 Thread Marcus G. Daniels
George Bosilca wrote: All in all we end up with a multi-hundred-KB library, of which most applications will use only about 10%. Seems like it ought to be possible to do some coverage analysis for a particular application and figure out what parts of the library (and user code) to ma

Re: [OMPI users] Cell EIB support for OpenMPI

2007-03-23 Thread Marcus G. Daniels
Marcus G. Daniels wrote: Mike Houston wrote: The main issue with this, and addressed at the end of the report, is that the code size is going to be a problem as data and code must live in the same 256KB in each SPE. They mention dynamic overlay loading, which is also how we deal with

Re: [OMPI users] Cell EIB support for OpenMPI

2007-03-22 Thread Marcus G. Daniels
Mike Houston wrote: The main issue with this, and addressed at the end of the report, is that the code size is going to be a problem as data and code must live in the same 256KB in each SPE. Just for reference, here are the stripped shared library sizes for OpenMPI 1.2 as built on a Mercury Ce

[OMPI users] Cell EIB support for OpenMPI

2007-03-22 Thread Marcus G. Daniels
Hi, Has anyone investigated adding intra-chip Cell EIB messaging to OpenMPI? It seems like it ought to work. This paper seems pretty convincing: http://www.cs.fsu.edu/research/reports/TR-061215.pdf

Re: [OMPI users] For Open MPI + BPROC users

2006-11-30 Thread Marcus G. Daniels
Galen Shipman wrote: We have found a potential issue with BPROC that may affect Open MPI. Open MPI by default uses PTYs for I/O forwarding; if PTYs aren't set up on the compute nodes, Open MPI will revert to using pipes. Recently (today) we found a potential issue with PTYs and BPROC. A simp
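A quick way to check whether PTYs are actually usable on a compute node is a one-liner like the following. This is a minimal sketch, not the test program from the thread; it assumes a Unix pty implementation backed by /dev/ptmx and a Python interpreter on the node.

```shell
# Succeeds only if the pty multiplexer device exists and a
# master/slave pty pair can actually be allocated.
if [ -c /dev/ptmx ] && python3 -c "import pty; pty.openpty()" 2>/dev/null; then
    echo "ptys available"
else
    echo "no ptys; Open MPI will fall back to pipes"
fi
```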

Re: [OMPI users] crash inside mca_btl_tcp_proc_remove

2006-04-28 Thread Marcus G. Daniels
oint->endpoint_addr)" before the decrement, apparently things work... Marcus G. Daniels wrote: Hi all, I built 1.0.2 on Fedora 5 for x86_64 on a cluster setup as described below and I witness the same behavior when I try to run a job. Any ideas on the cause? Jeff Squyres wrote:

[OMPI users] crash inside mca_btl_tcp_proc_remove

2006-04-27 Thread Marcus G. Daniels
Hi all, I built 1.0.2 on Fedora 5 for x86_64 on a cluster setup as described below and I witness the same behavior when I try to run a job. Any ideas on the cause? Jeff Squyres wrote: > One additional question: are you using TCP as your communications > network, and if so, do either of the n
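When isolating TCP-specific failures like this, the TCP BTL can be selected explicitly so other transports cannot mask the problem. Standard mpirun MCA syntax; the executable name and interface name below are placeholders.

```shell
# Restrict Open MPI to the TCP and self BTLs for debugging.
mpirun --mca btl tcp,self -np 2 ./my_mpi_app

# On multi-NIC nodes, pinning the TCP BTL to one interface can also
# help rule out interface-selection problems (interface name assumed).
mpirun --mca btl tcp,self --mca btl_tcp_if_include eth0 -np 2 ./my_mpi_app
```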