Hello All,

A software project I'm working on currently has a C++ library that runs on a Beowulf cluster for parallel computation; the library is implemented with MPI. We are now developing Python wrappers for the library. Our current design uses a single Python process as a front end, with back-end processes created by the library code (it is these back-end processes that run on all the nodes of the cluster).
What I'm considering is having Python processes on all of the nodes of the cluster, each wrapping our C++ library. What I'm wondering is:

1) Would setting up an environment like this require modifying the Python interpreter or the C++ module being wrapped? My hope is that the C++ module can go on happily doing MPI operations despite the fact that it is actually being called from the Python interpreter.

2) Would it be possible to spawn these Python processes using mpiexec (or something similar), or would I need some of the MPI-2 features to dynamically set up the MPI environment?

3) Has anyone accomplished something like this already? I know there are extensions and modules that add MPI functionality to Python, but I'm hoping to avoid them, since the Python code itself should never really have to be aware of MPI -- only the C++ module that has already been written.

Apologies up front if this has already been discussed. I've done a few searches in this group, and most of the discussion on this front took place many years ago, so its relevance is quite low. Thank you in advance for your attention.

~doug
--
http://mail.python.org/mailman/listinfo/python-list
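To make the proposed design concrete, here is a minimal sketch of the per-node driver script I have in mind. All names are hypothetical ("mylib" stands for the compiled C++ extension; init/compute/finalize are illustrative entry points). The idea is that the C++ side performs MPI_Init, MPI_Finalize, and all communication, so the Python layer never touches MPI; every node would run this same script, e.g. via `mpiexec -n 16 python driver.py`. A stand-in stub is included so the sketch runs even without the real extension or an MPI installation.

```python
try:
    import mylib  # hypothetical C++ extension; MPI_Init would happen inside C++
except ImportError:
    # Stand-in stub so this sketch is runnable without the real extension/MPI.
    class mylib:
        @staticmethod
        def init():
            """Pretend MPI_Init; the real version would return this rank."""
            return 0

        @staticmethod
        def compute(data):
            """Pretend parallel kernel; the real version would use MPI
            collectives across the cluster nodes in C++."""
            return sum(data)

        @staticmethod
        def finalize():
            """Pretend MPI_Finalize."""


def main():
    rank = mylib.init()              # C++ side: MPI_Init / MPI_Comm_rank
    result = mylib.compute(range(10))  # C++ side: MPI communication happens here
    mylib.finalize()                 # C++ side: MPI_Finalize
    return rank, result


if __name__ == "__main__":
    print(main())
```

Whether this works unchanged depends on question 1 above: as long as the extension initialises MPI itself (or exposes an explicit init() the script calls first), the interpreter should be just another MPI process from mpiexec's point of view.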