Thank you, Ralph!
I didn't know that function of -cpus-per-proc.
As far as I know, it didn't work like that in openmpi-1.6.x.
It just did plain 4-core binding...
I don't have much time today, so I'll check it tomorrow.
And thank you again for checking the oversubscription problem.
tmishima
Also, you need to tell mpirun that the nodes aren't the same - add
--hetero-nodes to your cmd line
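Something along these lines (illustrative; substitute your own hostfile,
binary, and proc count):

  mpirun -n 6 --bind-to core --cpus-per-proc 4 --hetero-nodes \
      --report-bindings -hostfile hosts ./myapp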
On Nov 13, 2013, at 10:14 PM, tmish...@jcity.maeda.co.jp wrote:
FWIW: I verified that this works fine under a slurm allocation of 2 nodes, each
with 12 slots. I filled the node without getting an "oversubscribed" error
message
[rhc@bend001 svn-trunk]$ mpirun -n 3 --bind-to core --cpus-per-proc 4
--report-bindings -hostfile hosts hostname
[bend001:24318] MCW
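For reference, a "hosts" file for an allocation like that would look
something like the following (the second node name is just a guess to
match the two 12-slot nodes):

  bend001 slots=12
  bend002 slots=12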
Hi Ralph,
I checked -cpus-per-proc in openmpi-1.7.4a1r29646.
It works just as I want: it adjusts the nprocs on each node
by dividing the slot count by the number of threads.
I think my problem is solved for now using -cpus-per-proc,
thank you very much.
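For example, with illustrative numbers: on a node offering slots=12,

  mpirun -np 3 -cpus-per-proc 4 --bind-to core ./myapp

runs 12 / 4 = 3 processes on that node, each bound to 4 cores, which
fills the node exactly.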
Regarding the oversubscribed problem, I checked NPROCS was
Hi Ralph,
It's no problem to let it lie until it becomes serious again,
so this is just for your information.
I agree with your opinion that the problem lies in the modified
hostfile.
But strictly speaking, it's related to just adding the -hostfile option
to mpirun
in Torque s
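To illustrate, a minimal sketch of the kind of Torque job I mean (the
script contents are just an example):

  #!/bin/sh
  #PBS -l nodes=2:ppn=12
  cd $PBS_O_WORKDIR
  # without -hostfile, mpirun reads the allocation from Torque itself;
  # adding -hostfile to the same command is what changes the behavior
  mpirun -np 6 -hostfile myhostfile ./myapp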