It has been a lot of years since I worked with MPI, but IIRC one "host" has
to act as the master (the one you launch with mpirun or mpiexec) that
distributes the tasks to the "dependent hosts" and then collects the
processed results.  If this is true, then using one machine as a dedicated
front end makes sense to me.  Are the dependent hosts connected to the
master via a LAN (perhaps a Beowulf cluster), or are all of them
distributed and receiving/returning data over the web as well?  If
co-located, then a gigabit LAN can handle the communication between hosts
at fairly high speed.  If that is not fast enough, then you need to go to
something faster (and more expensive) like the PCI Express system from
Dolphin.  Going over the web for "huge amounts" of data is going to be
limited to the bandwidth of the internet connection, i.e. much slower than
a LAN.  It may be possible to have the tasks sent directly to the
individual processing hosts, but again, it seems to me that this is the
function of the master host.
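
For what it's worth, since you launch with "mpirun -n 2 java ...", the
master/worker split might be gated on the process rank, something like the
sketch below.  This is only a sketch, and it assumes Open MPI, which
exports each process's rank in the OMPI_COMM_WORLD_RANK environment
variable (other launchers use different names, e.g. PMI_RANK for
MPICH-style launchers); real code would ask your MPI binding for the rank
instead:

```java
// Sketch only: decide master vs. worker from the rank that mpirun supplies.
// Assumption: Open MPI exports OMPI_COMM_WORLD_RANK to each launched
// process; if the variable is absent we treat the process as rank 0.
public class RankGate {

    public static int rank() {
        String r = System.getenv("OMPI_COMM_WORLD_RANK");
        return (r == null) ? 0 : Integer.parseInt(r);
    }

    public static void main(String[] args) {
        if (rank() == 0) {
            // Master: this is where you would open the front-end port,
            // distribute tasks, and collect the processed results.
            System.out.println("rank 0: master/front end");
        } else {
            // Worker: no listener, just do the computation.
            System.out.println("rank " + rank() + ": worker, no listener");
        }
    }
}
```

The same test (rank == 0) is one way to let only the first MPI process
bring up the communication ports while the others skip them.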

Are the applications you run all using data from a single big collection
of data, like a database?  Perhaps, if the hosts are widely distributed,
you could supply the data set to all of them using something like a
backup/restore model, with the copies of the data set carried to the
individual hosts on something like a flash drive or SD card.  A briefcase
full of these memory devices on an airplane has a hell of a lot of
bandwidth.  Even though we've become accustomed to just dumping everything
on the net, when it comes to terabytes, petabytes, exabytes, zettabytes,
or yottabytes of data, the web isn't the answer.
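
Just to put a number on the briefcase, here is a back-of-the-envelope
calculation.  The figures are made up for illustration (50 cards of 1 TB
each, delivered over a 6-hour flight):

```java
// Back-of-the-envelope: effective bandwidth of a briefcase of memory cards.
// All the numbers here are assumptions, not measurements.
public class SneakerNet {

    // cards * tbPerCard delivered over the given number of hours,
    // expressed in gigabits per second.
    public static double gbitPerSec(int cards, double tbPerCard, double hours) {
        double bits = cards * tbPerCard * 1e12 * 8;  // total payload in bits
        return bits / (hours * 3600.0) / 1e9;        // gigabits per second
    }

    public static void main(String[] args) {
        System.out.printf("%.1f Gbit/s%n", gbitPerSec(50, 1.0, 6.0));
    }
}
```

With those assumed numbers it works out to roughly 18 Gbit/s of effective
bandwidth, which is well beyond what most internet connections can
deliver, though of course the latency is measured in hours.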

If the whole thing has to come over the internet, would something like
Linda or Rinda software help you?

As I say, it has been many years since I worked with MPI and with the rate
of change in this business, I may have it all wrong.  I hope I'm being
helpful rather than just cluttering up your mailbox.

What MPI software are you using?  Are the applications written primarily in
FORTRAN with a mixture of other languages?

Good Luck!

John Matlock

On Sun, Nov 29, 2015 at 10:24 AM, Martijn Slouter <martijnslou...@gmail.com>
wrote:

> Thanks for your reply, comments below:
>
> On Fri, Nov 27, 2015 at 10:15 AM, Konstantin Kolinko
> <knst.koli...@gmail.com> wrote:
> > What is your goal, your expectation of Tomcat? What these n instances
> > should do that 1 instance cannot?
>
> They are running cpu-intensive calculations on distributed hosts
> ("high performance computing"), so that all hosts share the CPU and RAM
> requirements. Tomcat will allow interaction with the MPI application
> through the internet.
>
> > It is possible to start several Tomcats with the same CATALINA_BASE in
> > parallel, but you have to
> > ...
> > A connector can be configured, reconfigured, started/stopped
> > programmatically via JMX.
>
> Any suggestion how I can accomplish the configuration, if I start
> tomcat with the MPI web application using "mpirun -n 2 java ..." so
> that only the first MPI process opens the tomcat communication ports,
> while all other MPI processes disable their communicators?
>
> As an alternative I can run the MPI application as a separate server
> (tested across 16 hosts already), and use tomcat as a (serial) client
> to this parallel server. The disadvantage is that huge amounts of data
> need to be processed another time instead of being served directly
> from the MPI application.
>
> Which solution do you suggest?
>
> Thank you
> Martijn
>
