On Wed, 22 Jul 2009, Lee Amy wrote:
> Thanks. I have used your Makefile to recompile. However, I still
> encounter an odd problem.
>
> I have attached the make output and Makefile.
I see nothing wrong with the make output?
Daniël Mantione
on fault).
>
> Could you tell me how to fix that?
That error message gives very little information to diagnose the problem.
Maybe you can recompile with debug information; then it will print a more
meaningful backtrace.
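Adding -g to the compiler and linker flags in the arch Make file is usually enough; a sketch, with the variable names as in the stock HPL Make files:

    # Make.Linux_ATHLON_CBLAS: add -g (and drop -fomit-frame-pointer)
    # so that a crash produces a usable backtrace
    CCFLAGS      = $(HPL_DEFS) -O3 -funroll-loops -g
    LINKFLAGS    = $(CCFLAGS)

    # then rebuild from scratch
    make arch=Linux_ATHLON_CBLAS clean_arch_all
    make arch=Linux_ATHLON_CBLAS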
Also, please compare your Makefile with the attached one.
Daniël Mantione
On Wed, 22 Jul 2009, Lee Amy wrote:
> Hi,
>
> I'm going to compile HPL with OpenMPI-1.2.4. Here's my
> Make.Linux_ATHLON_CBLAS file.
GotoBLAS needs to be called as Fortran BLAS, so you need to switch from
CBLAS to FBLAS.
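The relevant lines in the Make file then look roughly like this (a sketch modelled on the stock Make.Linux_ATHLON_FBLAS; the GotoBLAS path and library name are placeholders):

    # link against GotoBLAS through the Fortran 77 BLAS interface
    LAdir        = /path/to/GotoBLAS
    LAinc        =
    LAlib        = $(LAdir)/libgoto.a
    # leave HPL_OPTS empty: -DHPL_CALL_CBLAS is only for a C-interface BLAS
    HPL_OPTS     =
    # F2CDEFS may need adjusting, depending on how your BLAS mangles Fortran names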
Daniël Mantione
On Fri, 15 Aug 2008, Kozin, I \(Igor\) wrote:
> Hello, I would really appreciate any advice on troubleshooting/tuning
> Open MPI over ConnectX. More details about our setup can be found here
> http://www.cse.scitech.ac.uk/disco/database/search-machine.php?MID=52
> Single process per node (ppn
On Tue, 12 Aug 2008, Gus Correa wrote:
> Hello Daniel and list
>
> Could it be a problem with memory bandwidth / contention in multi-core?
Yes, I believe we are somehow limited by memory performance. Here are
some numbers from a dual Opteron 2352 system, which has much more memory
bandwidth:
On Wed, 13 Aug 2008, George Bosilca wrote:
> Daniel,
>
> Open IB is one of the few devices that allow local communications (instead of
> using shared memory). As the latency looks OK, I suppose that small messages
> always use shared memory, while large ones get striped over sm and openib.
>
be faster than the sm btl. Does anyone know whether
something can be tuned here?
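For reference, forcing all traffic over a single transport makes the comparison easy, and ompi_info lists what each BTL exposes for tuning (standard Open MPI usage; ./benchmark is a placeholder):

    # run over shared memory only, then over openib only, and compare timings
    mpirun --mca btl self,sm     -np 8 ./benchmark
    mpirun --mca btl self,openib -np 8 ./benchmark

    # show the tunable parameters of the two BTLs
    ompi_info --param btl sm
    ompi_info --param btl openib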
Best regards,
Daniël Mantione
in your
environment.
Alternatively, you can ask ClusterVision support to install an
Infinipath-psm capable OpenMPI on your cluster, so you can continue using
OpenMPI.
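If you build it yourself, a PSM-enabled Open MPI would be configured and used roughly like this (a sketch; the install prefix and PSM location are placeholders):

    # build Open MPI against the InfiniPath PSM library
    ./configure --prefix=/opt/openmpi-psm --with-psm=/usr
    make all install

    # run over the PSM MTL instead of the openib BTL
    mpirun --mca pml cm --mca mtl psm -np 8 ./app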
Best regards,
Daniël Mantione
On Wed, 6 Feb 2008, Christian Bell wrote:
> Hi Daniel --
>
> PSM should determine your node setup and enable shared contexts
> accordingly, but it looks like something isn't working right. You
> can apply the patch I've attached to this e-mail and things should
> work again.
Alas, it
when this limit is exceeded is:
"No free InfiniPath contexts available on /dev/ipath"
It looks like OpenMPI is running into the context limit, apparently 4
in this case. Can I do the context sharing you mention with OpenMPI?
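For reference, if it is just an environment knob, this is the kind of thing I would try (PSM_SHAREDCONTEXTS is taken from the InfiniPath PSM documentation, so treat the name as an assumption; ./app is a placeholder):

    # export the PSM context-sharing switch to all ranks
    mpirun -np 8 -x PSM_SHAREDCONTEXTS=1 ./app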
Best regards,
Daniël Mantione
On Mon, 22 Oct 2007, Jeff Squyres wrote:
> On Oct 22, 2007, at 6:44 PM, Lourival Mendes wrote:
>
> > Hi everybody, I'm interested in using MPI in the Pascal
> > environment. I tried the MPICH2 list but had no success. On the Free
> > Pascal Compiler list, Daniël invited me to subscribe to this list
On Thu, 26 Jul 2007, Jeff Squyres wrote:
> On Jul 26, 2007, at 3:18 PM, Daniël Mantione wrote:
>
> > Problematic is the very poor job the openmpi team does at binary
> > backwards
> > compatibility, applications broke between 1.0 and 1.1, and again
> > betw
On Thu, 26 Jul 2007, Lourival Mendes wrote:
> C:\Program Files\MPICH2\include
??? What cluster are you using? Windows 2003?
Daniël
Applications broke between 1.0 and 1.1, and again between
1.1 and 1.2. With such breakage, it is next to impossible to maintain an
mpi.pas.
> Such as Delphi and Lazarus
Lazarus isn't a Pascal compiler, but an IDE.
Daniël Mantione
ary expects them to be $1c8 bytes.
Conclusion: Opaque pointers should not be declared with .comm, they should
just be referenced.
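A quick way to check which of the two you end up with is to look at the symbol type with nm: 'C' means a common (.comm) definition carrying its own size, 'U' a plain reference that is resolved entirely from libmpi (mpi_program.o below is just a placeholder):

    # a common symbol ('C') with the wrong size triggers the mismatch warning
    nm mpi_program.o | grep ompi_mpi_comm_world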
I haven't mentioned my system details yet: I'm using OpenSuSE 10 on the x86_64
architecture. The compiler does not seem to have any influence: the
result is the same with Gcc, Intel C and Pathscale.
Daniël Mantione
On Tue, 27 Jun 2006, Peter Kjellström wrote:
> On Monday 26 June 2006 16:55, Daniël Mantione wrote:
> > Hi!
> >
> > Just tried out OpenMPI 1.1. First impression is that it doesn't seem to
> > be able to run OpenMPI 1.0.2 executables. The result of such
have been increased?
Daniël Mantione
ham1:/usr/local/Cluster-Apps/openmpi/intel # cat /tmp/x
stfile hostfile -np 4 ./yafs.bin 1.2M
./yafs.bin: Symbol `ompi_mpi_comm_world' has different size in shared
object, consider re-linking
./yafs.bin: Symbol `ompi_mpi_comm_world' has different si
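For completeness, the sizes on both sides can be read out directly with nm (the libmpi.so path below is a guess based on the prefix above, and the binary must not be stripped):

    # size of the symbol as linked into the executable
    nm --print-size ./yafs.bin | grep ompi_mpi_comm_world
    # size of the symbol in the shared library actually loaded at run time
    nm -D --print-size /usr/local/Cluster-Apps/openmpi/intel/lib/libmpi.so | grep ompi_mpi_comm_world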