Jeff and Samuel,

Thanks for your responses.

-Hamid

Jeff Squyres wrote:
If you need per-job settings, then a wrapper is probably your best bet.


On Sep 10, 2008, at 5:08 AM, Samuel Sarholz wrote:

Hi Jeff,

I think setting global limits will not help in this case, as limits like the stack size need to be program-specific.


So far I am using wrappers, but the solution is a bit ugly.
If there is another way, that would be great.

However, I doubt that there is a way, as the FAQ states:

More specifically -- it may not be sufficient to simply execute the following, because the ulimit may not be in effect on all nodes where Open MPI processes will be run:
shell$ ulimit -l unlimited
shell$ mpirun -np 2 my_mpi_application

But this case is exactly what is needed, since any global or per-user setting (.bashrc, .zshrc, ...) only works if you run one kind of job at a time.

And wrapping:

wrap.sh:
ulimit -s 300000
./a.out

mpirun -np 2 zsh wrap.sh

works but is not nice.
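A slightly less nasty variant of the wrapper approach is a generic script that sets the limits and then execs whatever command it is given, so one wrapper serves every job. This is only a sketch; the script name (limits_wrap.sh) and the stack-size value are placeholders you would adjust per job:

```shell
#!/bin/sh
# limits_wrap.sh (hypothetical name): set per-job limits, then run the
# real program. "exec" replaces this shell with the target process, so
# the limits apply directly to the MPI rank.
ulimit -s 300000   # stack size in KB; adjust per job, may fail if above the hard limit
exec "$@"          # run the command passed as arguments, e.g. ./a.out
```

Invoked as, for example, `mpirun -np 2 ./limits_wrap.sh ./a.out`, which avoids writing a new wrapper for each executable, though it is still a wrapper.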

best regards,
Samuel

Jeff Squyres wrote:
There are several factors that can come into play here. See this FAQ entry about registered memory limits (the same concepts apply to the other limits): http://www.open-mpi.org/faq/?category=openfabrics#ib-locked-pages-more
On Sep 9, 2008, at 7:04 PM, Amidu Oloso wrote:
mpirun under Open MPI is not picking up the limit settings from the user environment. Is there a way to do this, short of wrapping my executable in a script where my limits are set and then invoking mpirun on that script?

Thanks.

-Hamid
_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users


