Hi Ralph,
On Jul 25, 2011, at 11:05 AM, Ralph Castain wrote:
On Jul 25, 2011, at 10:16 AM, Samuel K. Gutierrez wrote:
Hi Ralph,
It seems this issue is related to a missing shm_unlink
wrapper in Valgrind. I'm going to disable the posix shmem
component by default and commit later today.
Is that the right solution?
No, not really.
If the problem is something in valgrind, then let's not disable
something just for their problem. Is there a way we can wrap it
ourselves so the error doesn't cause the message?
I think so. They outline the procedure in
README_MISSING_SYSCALL_OR_IOCTL, so I'll take a look.
Stay tuned,
Sam
Like I said, everything worked just fine - the message just implied
the proc would die, and it doesn't.
Thanks,
--
Samuel K. Gutierrez
Los Alamos National Laboratory
On Jul 23, 2011, at 8:54 PM, Samuel K. Gutierrez wrote:
Hi Ralph,
That's mine - I'll take a look.
Thanks,
Sam
Whenever I run valgrind on orterun (or any OMPI tool), I get the
following error msg:
--------------------------------------------------------------------------
A system call failed during shared memory initialization that
should
not have. It is likely that your MPI job will now either abort or
experience performance degradation.
Local host: Ralph
System call: shm_unlink(2)
Error: Function not implemented (errno 78)
--------------------------------------------------------------------------
It's coming out of open-rte/help-opal-shmem-posix.txt.
Everything continues, so I'm not sure what this is all about.
Anyone recognize this?
It's on the trunk, running on a Mac, vanilla configure.
Ralph
_______________________________________________
devel mailing list
de...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/devel