Hi all,
My thanks to all those involved for putting together this Windows binary
release of OpenMPI! I am hoping to use it in a small Windows-based OpenMPI
cluster at home.
Unfortunately, my experience so far has not exactly been trouble-free. It seems
that, because this relea
Okay, I finally had time to parse this and fix it. Thanks!
On May 16, 2011, at 1:02 PM, Peter Thompson wrote:
> Hmmm? We're not removing the putenv() calls. Just adding a strdup()
> beforehand, and then calling putenv() with the string duplicated from env[j].
> Of course, if the strdup fails
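For reference, the pattern being described looks roughly like this (a minimal
sketch, not the actual patch; put_env_copy is a hypothetical helper name, and
the "NAME=value" entry would be the env[j] string from the surrounding code):

    #include <stdlib.h>
    #include <string.h>

    /* putenv() keeps the pointer it is given rather than copying the
       string, so duplicate the entry first and let the copy live on. */
    static int put_env_copy(const char *entry)
    {
        char *dup = strdup(entry);
        if (NULL == dup) {
            return -1;        /* strdup() failed: out of memory */
        }
        return putenv(dup);   /* dup now outlives this call, as required */
    }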
I'm no Windozer, and our developer in that area is away for a while. However,
looking over the code, I can see where this might be failing.
The Win allocator appears to be trying to connect to some cluster server;
failing that, it aborts.
If you just want to launch locally, I would suggest adding
Well, a new wrench has been thrown into this situation.
A power failure at our datacenter took down our entire system: nodes, switch,
SM.
Now I am unable to reproduce the error with oob, default ibflags, etc.
Does this shed any light on the issue? It also makes it hard now to debug the
issue without b
Okay cool, mine already breaks with P=2, so I'll try this soon. Thanks
for the impatient-idiot's-guide :)
On 18 May 2011 14:15, Jeff Squyres wrote:
> If you're only running with a few MPI processes, you might be able to get
> away with:
>
> mpirun -np 4 valgrind ./my_mpi_application
>
> If you r
If you're only running with a few MPI processes, you might be able to get away
with:
mpirun -np 4 valgrind ./my_mpi_application
If you run any more than that, the output gets too jumbled and you should
output each process' valgrind stdout to a different file with the --log-file
option (IIRC).
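For example (a sketch; valgrind's --log-file option accepts the %p specifier,
which expands to each process's PID, so every rank writes to its own file):

mpirun -np 4 valgrind --log-file=vg.out.%p ./my_mpi_application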
Hi Jeff,
Thanks for the response.
On 18 May 2011 13:30, Jeff Squyres wrote:
> *Usually* when we see segv's in calls to alloc, it means that there was
> previously some kind of memory bug, such as an array overflow or something
> like that (i.e., something that stomped on the memory allocation
*Usually* when we see segv's in calls to alloc, it means that there was
previously some kind of memory bug, such as an array overflow or something like
that (i.e., something that stomped on the memory allocation tables, causing the
next alloc to fail).
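As a contrived illustration of that failure mode (hypothetical code, not from
the poster's application):

    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *buf = malloc(16);
        /* Overflow: writes 64 bytes into a 16-byte block, stomping on
           the allocator's bookkeeping for the neighboring chunk. */
        memset(buf, 'x', 64);
        /* The corruption typically goes unnoticed until a *later* heap
           call walks the damaged metadata -- the crash shows up here,
           far from the actual bug. */
        char *other = malloc(16);
        free(other);
        free(buf);
        return 0;
    }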
Have you tried running your code through a
Any comments or suggestions on how to resolve this?
Thank you.
-Hiral
On 5/12/11, hi wrote:
> Hi,
>
> Clarifications:
> - I have downloaded the pre-built OpenMPI_v1.5.3-x64 from open-mpi.org
> - installed it on Windows 7
> - and then copied the OpenMPI_v1.5.3-x64 directory from Windows 7 to
> Windows Serv