Thank you for your reply. In our previous publication, we showed that running 
more than one process per core while balancing the computational load 
considerably reduces the total execution time. Along the lines of the 
MPI_Graph_create function, we wrote a function, MPI_Load_create, that maps 
processes onto cores so that the computational load is balanced across the 
cores. We then ran into an increase in communication cost caused by rank 
rearrangement (from MPI_Comm_split with color=0), so in this research work we 
want to balance both the computational load on each core and the communication 
load on each node: processes that communicate heavily should reside on the 
same node, while the computational load stays balanced over the cores.
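
To illustrate the rank issue, here is a minimal sketch: with color = 0 every 
process lands in a single communicator, and the key argument alone determines 
the new rank order (the printf is just for illustration):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int world_rank, new_rank;
    MPI_Comm newcomm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* color = 0 for everyone: all processes land in one communicator.
     * Ranks in newcomm are ordered by the key argument; using the old
     * rank as key preserves the original ordering, while a
     * mapping-derived key permutes the ranks (and any cached neighbor
     * ranks must then be recomputed). */
    MPI_Comm_split(MPI_COMM_WORLD, 0, world_rank, &newcomm);
    MPI_Comm_rank(newcomm, &new_rank);

    printf("world rank %d -> new rank %d\n", world_rank, new_rank);

    MPI_Comm_free(&newcomm);
    MPI_Finalize();
    return 0;
}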
I have solved this mapping problem with ILP, but the ILP takes too long to be 
usable at run time, so I am considering a heuristic instead. That is why I 
want to find out whether it is possible to migrate a process from one core to 
another; once I know that, I can evaluate how well my heuristic performs.
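
As a first experiment, I am considering something like the following 
Linux-only sketch (sched_setaffinity is outside MPI, and it may conflict with 
mpirun's own process binding, so this is only a feasibility test):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

/* Pin the calling process to a single core; returns 0 on success.
 * pid 0 means "the calling process". Linux-specific. */
static int migrate_self_to_core(int core_id)
{
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(core_id, &mask);
    return sched_setaffinity(0, sizeof(mask), &mask);
}

int main(void)
{
    if (migrate_self_to_core(1) != 0)
        perror("sched_setaffinity");
    else
        printf("now running on core %d\n", sched_getcpu());
    return 0;
}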

thanks
Mudassar



________________________________
From: Jeff Squyres <jsquy...@cisco.com>
To: Mudassar Majeed <mudassar...@yahoo.com>; Open MPI Users <us...@open-mpi.org>
Cc: Ralph Castain <r...@open-mpi.org>
Sent: Thursday, November 10, 2011 2:19 PM
Subject: Re: [OMPI users] Process Migration

On Nov 10, 2011, at 8:11 AM, Mudassar Majeed wrote:

> Thank you for your reply. I am implementing a load-balancing function for 
> MPI that balances the computational load and the communication load at the 
> same time. My algorithm therefore assumes that, in the end, the cores may 
> each be given a different number of processes to run.

Are you talking about over-subscribing cores?  I.e., putting more than 1 MPI 
process on each core?

In general, that's not a good idea.

> In the beginning (before that function is called), each core will have an 
> equal number of processes. So I am thinking of starting more processes on 
> each core than are needed, running my load-balancing function, and then 
> blocking the surplus processes on each core. That way I can end up with a 
> different number of processes per core.
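
In other words, something like this sketch (the control tag and the number of 
active ranks are made up for illustration):

#include <mpi.h>

#define WAKE_TAG 999 /* illustrative control tag */

int main(int argc, char **argv)
{
    int rank, size;
    const int active = 4; /* say only 4 ranks do real work */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank >= active) {
        int dummy;
        /* Surplus rank: block until an active rank releases it. */
        MPI_Recv(&dummy, 1, MPI_INT, MPI_ANY_SOURCE, WAKE_TAG,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    } else {
        /* ... real work on the active ranks ... */
        if (rank == 0) { /* release the parked ranks at the end */
            int go = 0;
            for (int r = active; r < size; r++)
                MPI_Send(&go, 1, MPI_INT, r, WAKE_TAG, MPI_COMM_WORLD);
        }
    }

    MPI_Finalize();
    return 0;
}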

Open MPI spins aggressively looking for network progress. For example, if you 
block in an MPI_RECV waiting for a message, Open MPI is actively banging on 
the CPU looking for network progress. Because of this (and other reasons), you 
probably do not want to over-subscribe your processors (meaning: you probably 
don't want to put more than 1 process per core).
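
If you really must over-subscribe, one workaround (a sketch, not an 
endorsement of this exact code) is to poll with MPI_Iprobe and sleep between 
polls, so a waiting process gives up the CPU; Open MPI's mpi_yield_when_idle 
MCA parameter is another knob for over-subscribed runs:

#include <mpi.h>
#include <time.h>

/* Wait for a message without monopolizing the core: poll with
 * MPI_Iprobe and nanosleep between polls. Trades latency for CPU
 * time; the 1 ms interval is arbitrary. */
static void polite_recv(int *buf, int count, int src, int tag, MPI_Comm comm)
{
    int flag = 0;
    struct timespec ts = { 0, 1000000 }; /* 1 ms */

    MPI_Iprobe(src, tag, comm, &flag, MPI_STATUS_IGNORE);
    while (!flag) {
        nanosleep(&ts, NULL);
        MPI_Iprobe(src, tag, comm, &flag, MPI_STATUS_IGNORE);
    }
    MPI_Recv(buf, count, MPI_INT, src, tag, comm, MPI_STATUS_IGNORE);
}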

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/
