Shiqing, Damien,
> If you already have an x86 solution and you want another for x64, you
> have to start over from the CMake GUI, select the 64-bit generator,
> i.e. "Visual Studio 9 2008 Win64", to generate a new solution in a
> different directory.
That was the source of my
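For the archives, the same out-of-source generation can be done from the
command line (generator name shown for VS 2008; adjust for your version,
and substitute your own source path):

```shell
mkdir build64
cd build64
cmake -G "Visual Studio 9 2008 Win64" ..\my-source-dir
```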
Hi,
I have another problem building x86_64 on 64-bit Windows 7. I set
CMAKE_CL_64 to true in cmake-gui and set the VS2008 configuration to x64
(a `New' platform adapted from WIN32). However, I get this during the
build:
1>------ Build started: Project: libopen-pal, Configuration: Debug x64 ------
> Hmmm. I did try setting release and I think I still got pdbs. I'll try
> again from a totally clean source tree and post back.
Another datapoint:
I tried CMake's Generate after setting CMAKE_BUILD_TYPE and building.
I have the same sort of build problems when setting x64 in the VS 2008
config.
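One possible explanation for the stray PDBs: Visual Studio generators are
multi-configuration, so CMAKE_BUILD_TYPE set at configure time is largely
ignored; the configuration is chosen at build time instead, e.g. (with a
CMake new enough to have --build):

```shell
cmake --build . --config Release
```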
Hi Jeff,
I'd like to use MPI for features like derived types, and moving
around non-flat data.
You should be able to do that today.
The Axon PCIe-to-PCIe RDMA interface has a raw device and an ethernet
device in the Linux kernel, and I can indeed use the latter as a
workaround. It's mor
Hello,
I've heard some experimental work has been done to run OpenMPI over the
Axon driver as found in IBM triblades. Seems like that should work
fine, as it's just another RDMA interface, no? I'd like to use MPI for
features like derived types, and moving around non-flat data.
Regards,
George Bosilca wrote:
All in all we end up with a multi-hundred-KB library of which most
applications will use only about 10%.
Seems like it ought to be possible to do some coverage analysis for a
particular application and figure out what parts of the library (and
user code) to ma
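With GCC the coverage side of that is straightforward; a rough sketch of
one possible workflow (the flags are the stock gcc/gcov ones):

```shell
gcc --coverage -O0 -o app app.c   # instrument
./app                             # run a representative workload
gcov app.c                        # per-line execution counts in app.c.gcov
```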
Marcus G. Daniels wrote:
Mike Houston wrote:
The main issue with this, as addressed at the end
of the report, is that the code size is going to be a problem, as data
and code must live in the same 256KB in each SPE. They mention dynamic
overlay loading, which is also how we deal with
Mike Houston wrote:
The main issue with this, as addressed at the end
of the report, is that the code size is going to be a problem, as data
and code must live in the same 256KB in each SPE.
Just for reference, here are the stripped shared library sizes for
OpenMPI 1.2 as built on a Mercury Ce
Hi,
Has anyone investigated adding intra-chip Cell EIB messaging to OpenMPI?
It seems like it ought to work. This paper seems pretty convincing:
http://www.cs.fsu.edu/research/reports/TR-061215.pdf
Galen Shipman wrote:
We have found a potential issue with BPROC that may affect Open MPI.
Open MPI by default uses PTYs for I/O forwarding; if PTYs aren't
set up on the compute nodes, Open MPI will revert to using pipes.
Recently (today) we found a potential issue with PTYs and BPROC. A
simp
oint->endpoint_addr)" before the decrement, apparently
things work...
Marcus G. Daniels wrote:
Hi all,
I built 1.0.2 on Fedora 5 for x86_64 on a cluster setup as described
below and I witness the same behavior when I try to run a job. Any
ideas on the cause?
Jeff Squyres wrote:
> One additional question: are you using TCP as your communications
> network, and if so, do either of the n