Hmmm…I’m not entirely sure what specifically is causing the differences
you cite. We didn’t make any changes to the LSF components, so that wouldn’t be
it. The main things I can recall involved how we handle hostfile and -host
specifications, and when we directly sense the available cpus on the node.
Just an update for the list. Really only impacts folks running Open MPI
under LSF.
The LSB_PJL_TASK_GEOMETRY environment variable changes what lsb_getalloc()
returns for the allocation: it adjusts the result to match the mapping/ordering
specified in that variable. However, since it is not set by LSF when the job
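For anyone unfamiliar with the variable: LSB_PJL_TASK_GEOMETRY groups task IDs
into parenthesized sets, one set per host, e.g. "{(0,2)(1,3)}" places tasks 0
and 2 on the first host and tasks 1 and 3 on the second. Here is a minimal
sketch of parsing that format; the helper name parse_task_geometry is just
illustrative, not part of LSF or Open MPI:

```python
import re

def parse_task_geometry(value):
    """Parse an LSB_PJL_TASK_GEOMETRY string such as "{(0,2)(1,3)}"
    into a list of task-ID groups, one group per host."""
    # Each "(...)" holds a comma-separated list of task IDs.
    groups = re.findall(r"\(([^)]*)\)", value)
    return [tuple(int(t) for t in g.split(",")) for g in groups]

# Tasks 0 and 2 on the first host, tasks 1 and 3 on the second.
print(parse_task_geometry("{(0,2)(1,3)}"))  # -> [(0, 2), (1, 3)]
```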
Farid,
I have access to the same cluster inside IBM. I can try to help you track
this down and maybe work up a patch with the LSF folks. I'll contact you
off-list with my IBM address and we can work on this a bit.
I'll post back to the list with what we found.
-- Josh
On Tue, Apr 19, 2016 at 5
On Apr 18, 2016, at 7:08 PM, Farid Parpia wrote:
>
> I will try to put you in touch with someone in LSF development immediately.
FWIW: It would be great if IBM could contribute the fixes to this. None of us
have access to LSF resources, and IBM is a core contributor to Open MPI.
--
Jeff Squyres