On 11-nov-10, at 15:16, Fawzi Mohamed wrote:

On 11-nov-10, at 09:58, Russel Winder wrote:

MPI and all the SPMD approaches have a severely limited future, but I
bet the HPC codes are still using Fortran and MPI in 50 years time.

Well, whole-array operations are a generalization of the SPMD approach, so in this sense you said yourself that that kind of approach will have a future (though with more difficult optimization, as the hardware grows more complex).
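As a minimal sketch (plain Python, hypothetical helper name) of what a whole-array operation expresses: the same rule applied uniformly to every element, which is exactly the data-parallel pattern a compiler or runtime is free to split across processors.

```python
# Whole-array style: one logical operation over the whole array.
# Every element is updated independently by the same rule, so a
# runtime could partition the element-wise work across processors.

def axpy(a, x, y):
    """Whole-array a*x + y, written element-wise (illustrative helper)."""
    return [a * xi + yi for xi, yi in zip(x, y)]

x = [1.0, 2.0, 3.0, 4.0]
y = [10.0, 20.0, 30.0, 40.0]
print(axpy(2.0, x, y))  # → [12.0, 24.0, 36.0, 48.0]
```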

Sorry, I read that as SIMD, not SPMD, but the answer below still holds in my opinion: if one has a complex parallel problem, MPI is a worthy contender; the thing is that on many occasions one doesn't need all its power. If a client/server, a distributed, or a map/reduce approach works, then simpler and more flexible solutions are superior. That (and its reliability problem, which PGAS also shares) is, in my opinion, the reason MPI is not much used outside the computational community. Being able to also tackle MPMD in a good way can be useful, and that is what the RPC level does between computers, and what event-based scheduling does within a single computer (ensuring that one processor can do meaningful work while another waits).
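As a sketch of the "simpler solution" case: a map/reduce formulation in plain Python (the word-count job and function names are illustrative, not from any particular framework). Each map task is independent, so a scheduler could farm the tasks out to separate workers and merge the partial results at the end, with none of MPI's coupling between processes.

```python
from functools import reduce

# Hypothetical word-count job: map each document to per-word counts,
# then reduce by merging the partial dictionaries. The map calls are
# independent of each other, so a framework could run them on
# different machines and retry any that fail.

def map_counts(doc):
    counts = {}
    for word in doc.split():
        counts[word] = counts.get(word, 0) + 1
    return counts

def merge(a, b):
    out = dict(a)
    for word, n in b.items():
        out[word] = out.get(word, 0) + n
    return out

docs = ["mpi is powerful", "mpi is not always needed"]
total = reduce(merge, map(map_counts, docs), {})
print(total["mpi"])  # → 2
```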

About MPI, I think that many don't see what it really does: MPI offers a simplified parallel model. The main weakness of this model is that it assumes some kind of reliability, but in exchange it offers a clear computational model, with processors ordered in a linear or higher-dimensional structure, and efficient collective communication primitives. Yes, MPI is not the right choice for all problems, but when usable it is very powerful, often superior to the alternatives, and programming with it is *simpler* than thinking about a generic distributed system. So I think that for problems that are not trivially parallel or easily parallelizable, MPI will remain the best choice.
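To illustrate the "clear computational model" point, here is a serial simulation, in plain Python, of the semantics of an MPI-style allreduce over a set of ranks. No real MPI is involved; with mpi4py the equivalent would be a single call like `comm.allreduce(local, op=MPI.SUM)`, and a real implementation would use an efficient collective (e.g. a reduction tree) rather than this naive loop.

```python
# Simulated MPI allreduce semantics: every rank contributes a local
# value, and after the collective every rank holds the combined
# (summed) global result. Real MPI does this with optimized
# collective communication; this loop only models the semantics.

def allreduce_sum(local_values):
    """Return, for each simulated rank, the sum over all ranks' values."""
    total = sum(local_values)
    return [total for _ in local_values]

# Four simulated ranks, each owning a partial sum (say, of a dot product).
local = [1.0, 2.0, 3.0, 4.0]
print(allreduce_sum(local))  # → [10.0, 10.0, 10.0, 10.0]
```

The appeal of the model is visible even in this toy: the programmer thinks in terms of one collective step over an ordered set of ranks, not in terms of arbitrary point-to-point messages between unreliable peers.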
