Hi Roman

Just a guess.
Is this a domain decomposition code?
(I have never heard of the "cpmd 32 waters" test before, sorry.)
Is it based on finite differences, finite volumes, or finite elements?

If it is, once the size of the subdomains becomes too small compared to
the size of the halo around them, the overhead of computing the solution
on the halo points swamps the useful work,
and scaling degrades.
This is not an MPI scaling problem; it is intrinsic to the domain
decomposition technique.

Typically this happens when the number of processors reaches some high number (which depends on the size of the problem).
So, what you are seeing may not be a problem with OpenMPI scaling,
but simply that your problem is not large enough to require the use of, say, 48 or 64 processors.

For instance, imagine a 1D problem on a grid of 1024 points,
which requires a 2-grid-point overlap (halo) on the left and right
of each subdomain when the domain is decomposed into parts and calculated in parallel.
If you divide the domain across two processors only, each processor
has to work not on 1024/2=512 points, but on 512+2+2=516 points.
The calculation on the two processors carries an overhead of 2*(2+2)=8 grid
points w.r.t. the same calculation done on a single processor.
This is an overhead of only 8/1024=0.8%, so using 2 processors
will speed up the calculation by a factor close to 2 (but slightly lower).

However, if you divide the same problem across 64 subdomains (i.e. 64 processors), the size of each subdomain is 1024/64=16 points,
plus 2 halo grid points on each side, i.e. 20 grid points.
So the overhead is much higher now: 4/16=25%.
Dividing the problem across 64 processors will not speed it up by
a factor of 64, but by much less.
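
If you want to play with this arithmetic, here is a small C sketch of my own
(not taken from CPMD or any other code mentioned here) that tabulates the halo
overhead for the 1024-point example at several processor counts; the grid size
and halo width at the top are just the illustrative numbers from the example,
so change them to see how a larger problem pushes the crossover out.

/* Minimal sketch of the 1D halo-overhead arithmetic discussed above.
 * Assumes a fixed halo of 2 points on each side of every subdomain,
 * as in the example; n_grid and halo are illustrative values only.  */
#include <stdio.h>

int main(void)
{
    const int n_grid = 1024;  /* total number of grid points            */
    const int halo   = 2;     /* halo width on each side of a subdomain */
    const int np_list[] = {1, 2, 4, 8, 16, 32, 64};
    const int n_np = sizeof(np_list) / sizeof(np_list[0]);

    printf("  np  interior  with halo  overhead\n");
    for (int i = 0; i < n_np; i++) {
        int np       = np_list[i];
        int interior = n_grid / np;              /* points each rank owns      */
        int extra    = (np > 1) ? 2 * halo : 0;  /* halo points added per rank */
        printf("%4d  %8d  %9d  %7.1f%%\n",
               np, interior, interior + extra, 100.0 * extra / interior);
    }
    return 0;
}

For np=2 it prints the ~0.8% overhead quoted above, and for np=64 the 25%.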

Every domain decomposition program that we have here shows this
effect.  If we give them more processors they scale well, up
to a point (say 16 or 32 processors, for a reasonably sized problem),
but beyond that point the scaling slowly flattens out.
When you compare the grid size with the large number of processors,
you realize that most of the effort is being spent calculating halos,
i.e. on overhead.

On top of that, there is the overhead due to MPI communication, of
course, but it is likely that the halo overhead is the dominant factor.

I would guess that other classes of problems and other parallel solution
methods suffer from the same effect that domain decomposition shows.

Is this perhaps what is going on with your test code?
Take a look at the code to see what it is doing,
and in particular, what the problem size is.
See if it really makes sense to distribute it over 64 processors,
or if a smaller number would be the right choice.

Also, if the program allows you to change the problem size,
try the test again with a larger problem
(say, two or four times bigger),
and then scale up to the large processor counts again.
With a larger problem size the scaling may be better
(but the runtimes will grow as well).


Finally, since you are using Infiniband, I wonder if all the
nodes connect to each other with the same latency, or if some
pairs of nodes have a higher latency when they communicate.
On a single switch the latency is hopefully the same for all pairs of nodes.
However, if you connect two switches, for instance, nodes that
are on switch A will probably have a larger latency when talking
to nodes on switch B, I suppose.
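
One way to check this (just a rough sketch I am suggesting, not an official
benchmark) is a tiny MPI ping-pong run between the pairs of nodes you want to
compare; the hostnames nodeA and nodeB in the mpirun line below are
placeholders for your own nodes.

/* Rough ping-pong latency sketch (placeholder hostnames, not a tuned
 * benchmark).  Run with exactly 2 ranks on the pair of nodes you want
 * to compare, e.g.:
 *     mpirun -np 2 --host nodeA,nodeB ./pingpong                      */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int reps = 1000;   /* round trips to average over        */
    char byte = 0;           /* 1-byte message to measure latency  */

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < reps; i++) {
        if (rank == 0) {
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("average one-way latency: %.2f microseconds\n",
               (t1 - t0) / (2.0 * reps) * 1.0e6);

    MPI_Finalize();
    return 0;
}

If the numbers for intra-switch pairs and inter-switch pairs differ a lot,
that would support the latency explanation.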

I hope it helps.
Gus Correa
---------------------------------------------------------------------
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
---------------------------------------------------------------------


Roman Martonak wrote:
Hello,

I observe very poor scaling with openmpi on an HP blade system consisting
of 8 blades (each having 2 quad-core AMD Barcelona 2.2 GHz CPUs),
interconnected with an Infiniband fabric. When running the standard cpmd
32 waters test, I observe the following scaling (the numbers are
elapsed time):

openmpi-1.2.6:

using full blades (8 cores per blade)
np8     7 MINUTES 26.40 SECONDS
np16    4 MINUTES 19.91 SECONDS
np32    2 MINUTES 55.51 SECONDS
np48    2 MINUTES 38.18 SECONDS
np64    3 MINUTES 19.78 SECONDS

I tried also openmpi-1.2.8 and openmpi-1.3 and it is about the same,
openmpi-1.3 is somewhat better for 32 cores but in all cases there is
practically no scaling beyond 4 blades (32 cores) and running on 64
cores is a disaster. With Intel MPI, however, I get the following
numbers

Intel MPI-3.2.1.009

using full blades (8 cores per blade)
np8     7 MINUTES 23.19 SECONDS
np16    4 MINUTES 22.17 SECONDS
np32    2 MINUTES 50.07 SECONDS
np48    1 MINUTES 42.87 SECONDS
np64    1 MINUTES 23.76 SECONDS

so there is reasonably good scaling up to 64 cores. I am running with
the option --mca mpi_paffinity_alone 1, and I have also tried -mca btl_openib_use_srq
1, but it had only a marginal effect. With mvapich I get similar scaling
to Intel MPI. The system is running the Rocksclusters
distribution 5.1 with the mellanox ofed-1.4 roll. I would be grateful
if somebody could suggest what the origin of the problem could be
and how to tune openmpi to get better scaling.

Many thanks in advance.

Best regards

Roman