On Mon, Dec 07, 2009 at 08:21:46AM -0500, Richard Treumann wrote:

> The need for a "better" timeout depends on what else there is for the CPU
> to do.
> 
> If you get creative and shift from {99% MPI_WAIT , 1% OS_idle_process} to
> {1% MPI_Wait, 99% OS_idle_process} at a cost of only a few extra
> microseconds added lag on MPI_Wait, you may be pleased by the CPU load
> statistic but still have only hurt yourself. Perhaps you have not hurt
> yourself much, but for what? The CPU does not get tired of spinning in
> MPI_Wait rather than in the OS_idle_process.
> 
> Most MPI applications run with an essentially dedicated CPU per process.

Not true in our case.  The computer in question (Intel Core i7, one
CPU, four cores) has several other uses.

It is a general-purpose desktop/server for myself and, potentially,
other users.  I edit and compile the MPI application on it.  I read
and write email from it.  My Subversion repositories and server will
soon be on it, as will my Trac server (and Apache2).

Now that MPI does not busy-wait, the machine can do all of that and
still run 4 copies of our MPI application.

> In most MPI applications if even one task is sharing its CPU with
> other processes, like users doing compiles, the whole job slows down
> too much.

I have not found that to be the case.

Regards,
Douglas.
-- 
  Douglas Guptill                       voice: 902-461-9749
  Research Assistant, LSC 4640          email: douglas.gupt...@dal.ca
  Oceanography Department               fax:   902-494-3877
  Dalhousie University
  Halifax, NS, B3H 4J1, Canada
