Diego,

what you want to do is a parametric study. There is specific software available to do this efficiently (i.e. reducing the number of runs). Such software can then rely on a job scheduler (PBS, SLURM, ...) which can launch many parallel MPI applications at the same time, depending on the results of previous runs.
Look at:
- Dakota https://dakota.sandia.gov/ (open source)
- Modefrontier https://www.esteco.com/modefrontier (commercial)
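
For a flavour of how a scheduler drives such a sweep, here is a minimal SLURM array-job sketch. The solver name, input file naming, and counts are hypothetical placeholders, not something from the tools above:

```shell
#!/bin/bash
#SBATCH --job-name=param-study
#SBATCH --array=1-20            # 20 parameter sets, one array task per set
#SBATCH --ntasks=16             # 16 MPI ranks per run
#SBATCH --time=01:00:00

# Each array task picks its own input file (input_1.dat ... input_20.dat)
srun ./my_solver input_${SLURM_ARRAY_TASK_ID}.dat
```

Tools like Dakota go further: they generate the next batch of inputs from the previous results instead of sweeping a fixed grid.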

Patrick

Diego Avesani wrote:
Dear all,

thank you for your answers. I will try to explain my situation better.
I have written a code and parallelized it with Open MPI. In particular, I have a two-level parallelization. The first level takes care of the parallel code itself; the second level runs that parallel code with different inputs in order to find the best solution. At the second level, the different runs have to exchange their outputs in order to determine the best solution and to modify the input data accordingly. These communications have to take place several times during the whole simulation.
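
This two-level scheme can be expressed in MPI itself by splitting MPI_COMM_WORLD into one sub-communicator per candidate run, and using the world communicator only to compare results. A minimal sketch, where the group count and the cost value are placeholders for your solver:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Level 2: split the world into NGROUPS independent runs. */
    const int NGROUPS = 2;                 /* placeholder */
    int color = rank / (size / NGROUPS);   /* which run this rank belongs to */
    MPI_Comm run_comm;
    MPI_Comm_split(MPI_COMM_WORLD, color, rank, &run_comm);

    /* Level 1: the parallel solver works inside run_comm as usual. */
    double local_cost = 1.0 + color;       /* placeholder for the run's result */

    /* Compare runs: the global minimum cost identifies the best input set. */
    double best_cost;
    MPI_Allreduce(&local_cost, &best_cost, 1, MPI_DOUBLE, MPI_MIN,
                  MPI_COMM_WORLD);
    if (rank == 0)
        printf("best cost so far: %f\n", best_cost);

    MPI_Comm_free(&run_comm);
    MPI_Finalize();
    return 0;
}
```

Launched as `mpirun -np 8 ./two_level`, this would run two 4-rank solvers side by side; the Allreduce is the point where the runs communicate to define the best solution, and it can be repeated as often as the inputs need updating.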

I have read some papers where people do this with PBS or the Microsoft job scheduler.
I opted for Open MPI.

What do you think? Can you give me reasons supporting my decision?

Thanks

Diego



On Sun, 26 Aug 2018 at 00:53, John Hearns via users <users@lists.open-mpi.org> wrote:

    Diego,
    I am sorry, but you are mixing two different things here. PBS is a
    resource allocation system. It will reserve the use of a compute server,
    or several compute servers, for you to run your parallel job on. PBS can
    launch the MPI job - there are several mechanisms for launching parallel
    jobs.
    MPI is an API for parallel programming. I would rather say a library
    but, if I'm not wrong, MPI is a standard for parallel programming and
    what you program against is technically an API.

    One piece of advice I would have is that you can run MPI programs from
    the command line. So Google for 'Hello World MPI', write your first MPI
    program, and then use mpirun from the command line.
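
    For reference, the classic first program looks something like this (the
    file and binary names below are just examples):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);                /* start the MPI runtime */

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                        /* shut the runtime down */
    return 0;
}
```

    Typically built and launched with `mpicc hello.c -o hello` and then
    `mpirun -np 4 ./hello`.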

    If you have a cluster which has the PBS batch system, you can then use
    PBS to run your MPI program.
    If that is not clear, please let us know what help you need.
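
    A minimal PBS submission script for such an MPI job might look like the
    following (job name, queue resources and core counts are site-specific
    placeholders):

```shell
#!/bin/bash
#PBS -N hello_mpi
#PBS -l nodes=2:ppn=8        # request 2 nodes with 8 cores each (site-specific)
#PBS -l walltime=00:10:00

cd $PBS_O_WORKDIR            # PBS starts jobs in $HOME by default
mpirun -np 16 ./hello        # launch 16 ranks across the allocated nodes
```

    submitted with `qsub hello.pbs`.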

    On Sat, 25 Aug 2018 at 06:54, Diego Avesani <diego.aves...@gmail.com> wrote:

        Dear all,

        I have a philosophical question.

        I am reading a lot of papers where people use the Portable Batch
        System (PBS) or another job scheduler in order to parallelize their
        code.

        What are the advantages of using MPI instead?

        I am writing a report on my code, where of course I use Open MPI. So
        please tell me how I can cite you. You deserve all the credit.

        Thanks a lot,
        Thanks again,


        Diego

        _______________________________________________
        users mailing list
        users@lists.open-mpi.org
        https://lists.open-mpi.org/mailman/listinfo/users



--
===================================================================
|  Equipe M.O.S.T.         |                                      |
|  Patrick BEGOU           | mailto:patrick.be...@grenoble-inp.fr |
|  LEGI                    |                                      |
|  BP 53 X                 | Tel 04 76 82 51 35                   |
|  38041 GRENOBLE CEDEX    | Fax 04 76 82 52 71                   |
===================================================================
