Hello Nah--

>    Though I was aware of the non-linearity in scaling (I wasn't expecting
>    8X speed in processing), why would 8 different Xplor processes not use
>    the 8 different processors fully (though limited by the memory bus)
>    while other programs (MPI as well as non-MPI) do utilize ~100% of each
>    of the cores?

Different applications will display different behavior depending on how
much memory bandwidth they require. Xplor-NIH has large bandwidth
requirements, so there just isn't enough data reaching the cores to
keep them busy; they spend most of their time waiting.

>    Anyway, do you suggest doing calculations in batches - like 12 or 24
>    (1 or 2 processors per node) at a time - to maximize resource
>    utilization?

From what you describe, it sounds like two jobs per processor, or four per
node, should be right. So, if one process per node computes a structure in x
secs, two processes per node should compute two structures in x secs,
and four processes per node should compute four structures in x secs. You
might want to check this. I expect that six processes per node will reduce
overall throughput.
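One way to run that check is a small shell loop that times n concurrent
single-structure runs for several values of n on one node. This is just my
suggested sketch, not an Xplor-NIH tool: XPLOR_CMD below is a placeholder
for your actual single-structure command line, so substitute the real
invocation before using it.

```shell
#!/bin/sh
# Sketch: time n concurrent copies of a single-structure calculation
# to find the per-node sweet spot. XPLOR_CMD is a placeholder, NOT a
# real Xplor-NIH command; replace it with your actual invocation.
: "${XPLOR_CMD:=echo placeholder-xplor-run}"

bench() {
    for n in "$@"; do
        start=$(date +%s)
        i=0
        while [ "$i" -lt "$n" ]; do
            $XPLOR_CMD >/dev/null 2>&1 &   # launch one run in the background
            i=$((i + 1))
        done
        wait                                # block until all n runs finish
        elapsed=$(( $(date +%s) - start ))
        echo "$n concurrent runs finished in ${elapsed}s"
    done
}

bench 1 2 4 6
```

If the wall-clock time stays roughly flat from one to four concurrent runs
and then grows, four processes per node is your sweet spot.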

>    I'm curious about the 64 vs. 96 processor results: In one case you ran
>    on 8 processors and the other on 12? Or was there some other
>    difference?
>    Yeah, you are right. In one case it was on 8 nodes (8*8 = 64
>    processors) and in the other on 12 of them. But still, the difference
>    was so large that it cannot be explained simply by non-linear scaling!

I suspect that something else was going on in this case, such as other
jobs running on some of the nodes. Anyway, please run the experiment to
find the sweet spot for your processors and use that in future
calculations.

best regards--
Charles
