Thank you so much for the details.

1. While running the "Tile I/O" benchmark (mpi-tile-io), I see the following message:

$ mpirun -np 28 ./mpi-tile-io --nr_tiles_x 7 --nr_tiles_y 4 --sz_tile_x 100 \
    --sz_tile_y 100 --sz_element 32 --filename file1g
...
# collective I/O off

How do I enable collective I/O?
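In case it helps to be concrete, here is what I was planning to try, on the assumption that mpi-tile-io accepts a --collective flag (please correct me if the option is named differently):

$ mpirun -np 28 ./mpi-tile-io --nr_tiles_x 7 --nr_tiles_y 4 --sz_tile_x 100 \
    --sz_tile_y 100 --sz_element 32 --collective --filename file1g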

2. I switched to Open MPI v2.0.0rc2.  How do I know which I/O component is
being used?  How do I switch between OMPIO and ROMIO?
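To be specific, this is what I was going to try, assuming the I/O layer is selected
through the "io" MCA framework and that the ROMIO component in the 2.x series is
named romio314 (I may have the component name or parameter wrong):

$ ompi_info | grep "MCA io"
    # list the io components that were built into this installation
$ mpirun --mca io ompio -np 28 ./mpi-tile-io ...
$ mpirun --mca io romio314 -np 28 ./mpi-tile-io ...
    # force a particular io component for the run
$ mpirun --mca io_base_verbose 100 -np 28 ./mpi-tile-io ...
    # assuming the io framework exposes a base verbosity parameter that
    # reports which component was selected at runtime

Is that the intended way to do it?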


Please let me know.

Thanks,
- Sreenidhi.


On Tue, May 10, 2016 at 7:14 PM, Edgar Gabriel <egabr...@central.uh.edu>
wrote:

> In the 1.7, 1.8 and 1.10 series ROMIO remains the default. In the upcoming
> 2.x series, OMPIO will be the default, except on Lustre file systems,
> where we will stick with ROMIO as the primary choice.
>
> Regarding performance comparisons, we ran numerous tests late last year and
> early this year. The outcome really depends on the application scenario and
> the platform you are using. If you want to know which one to use, I would
> definitely suggest sticking with ROMIO in the 1.10 series, since many of the
> OMPIO bug fixes that we made over the last two years could not be back-ported
> to 1.10 for technical reasons. If you plan to switch to the 2.x series, it
> might be easiest to run a couple of tests, compare the performance for your
> application on your platform, and base your decision on that.
>
> Edgar
>
> On 5/10/2016 6:32 AM, Sreenidhi Bharathkar Ramesh wrote:
>
> Hi,
>
> 1. During a default build of Open MPI, it looks like both ompio.la and
> romio.la are built.  Which I/O MCA component is used, and on what basis is
> that decision made?
>
> 2. Are there any performance figures available comparing the two, OMPIO vs.
> ROMIO?
>
> I am using Open MPI v1.10.1.
>
> Thanks,
> - Sreenidhi.
>
