Chewping,

Let me start with a nomenclature normalization.  What you call master/slave 
nodes we call client/server nodes.  We make this distinction because the client 
need not be anywhere near the server nodes (I have attached a client to a 
server running on the other side of the country).
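For reference, a remote client/server session looks roughly like the following. This is only a sketch: the host name is a placeholder, 11111 is pvserver's default port, and the exact flags and MPI launcher vary by ParaView version and site setup.

```shell
# On the cluster: start the parallel ParaView server across 8 processes
# (11111 is the default listening port).
mpirun -np 8 pvserver --server-port=11111

# On your workstation: point the ParaView client at that server
# (cluster.example.com is a placeholder host name).
paraview --server-url=cs://cluster.example.com:11111
```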

Your question is difficult to answer succinctly because it depends on the type 
of data.  In general, load balancing happens not through MPI but through 
file I/O.  The parallel readers in ParaView each read in a partition of the 
data, and that partition is fixed to the process.  For structured types of data 
(data with a regular grid topology), this partitioning can be very even.  For 
unstructured data, the partitioning is usually limited by how the data is laid 
out in the data files, so even partitioning is not guaranteed.  There is a 
filter, D3, that will use MPI to redistribute an unstructured data set.  No 
data is passed through the client in this process.
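To make the even partitioning of structured data concrete, here is a small sketch (plain Python, not ParaView code) of the usual scheme: each server process computes, from its own rank, which contiguous slab of the global extent it owns, so no master has to hand out work at all.

```python
# Sketch of rank-based even partitioning: given the global number of
# slices and the process's rank, compute the [start, stop) slab it owns.
def partition_extent(num_slices, rank, num_ranks):
    """Return the half-open index range owned by `rank`."""
    base, extra = divmod(num_slices, num_ranks)
    # The first `extra` ranks each take one extra slice, so piece
    # sizes never differ by more than one.
    start = rank * base + min(rank, extra)
    stop = start + base + (1 if rank < extra else 0)
    return start, stop

# Example: 100 slices spread over 9 server processes.
pieces = [partition_extent(100, r, 9) for r in range(9)]
```

For unstructured data that arrives badly partitioned, the redistribution itself is done by applying the D3 filter on the servers; the point of the sketch above is only why the structured case needs no communication to balance.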

For more information, you can consult The ParaView Guide, which has a couple of 
chapters on parallel processing and rendering: 
http://www.kitware.com/products/books.html

You may also take a look at the Supercomputing tutorial, which covers parallel 
visualization from a user's perspective: 
http://www.paraview.org/Wiki/SC08_ParaView_Tutorial

-Ken


On 2/5/09 9:51 PM, "chew ping" <[email protected]> wrote:

Hi,



My name is chewping and I'm doing my master's research at the University of 
Malaya, Malaysia. I'm using ParaView and MPI to visualize a relatively large 
medical data set in a homogeneous cluster environment. It consists of one 
master node and 9 slave nodes on a Local Area Network with class C IP 
addresses. Most nodes are similar to each other (32-bit processors, running 
Linux). I would like to tune MPI to improve performance (faster distribution 
of data and collection of results). As I am new to MPI and parallel processing 
systems, there are a few questions that I would like to ask:



1.      How does MPI distribute the work load?

2.      Does the master node divide the work load evenly first, then distribute 
it to each slave node?

3.      Or does the master node pass the whole chunk of work load to every 
slave node, and each slave node then 'takes' its own piece of work to process?

4.      Can the work load be distributed according to priority (load 
balancing), e.g. giving a bigger portion of the work load to nodes with higher 
processing speed?



Can MPI do the above, and how? Where can I learn more about using MPI to tune 
for the best distributed processing and rendering performance?



Thanks in advance for your feedback!



   ****      Kenneth Moreland
    ***      Sandia National Laboratories
***********
*** *** ***  email: [email protected]
**  ***  **  phone: (505) 844-8919
    ***      web:   http://www.cs.unm.edu/~kmorel

_______________________________________________
ParaView mailing list
[email protected]
http://www.paraview.org/mailman/listinfo/paraview
