On 1/9/06, Julian Martin Kunkel <[EMAIL PROTECTED]> wrote:
> Hi,
> > > > # mpirun -np 100 mpi-tile-io --nr_tiles_x 25 --nr_tiles_y 4
> > > > --sz_tile_x    100 --sz_tile_y 100 --sz_element 32 --filename
> > > > /mnt/pvfs2/foo
> > > > problem with execution of mpi-tile-io  on  cse-wang-server.unl.edu:
> > > > [Errno 2] No such file or directory
> This looks like mpi-tile-io is not in the PATH on the nodes; I have seen this
> often in my own tests :)  Maybe you can try using the absolute path to
> be sure, i.e.
> mpiexec -np 100 /home/XYZ/mpi-tile-io --nr_tiles_x  ....
I tried the absolute path and it worked, cool!
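For anyone hitting the same error: a bare program name is resolved via PATH by the launcher on each node, while an absolute path is used as-is. A minimal sketch (the `/home/XYZ/mpi-tile-io` path is the placeholder from Julian's reply; adjust to your install):

```shell
# Quick check: is mpi-tile-io on this node's PATH?
# Prints its location if found, otherwise suggests the workaround.
command -v mpi-tile-io || echo "mpi-tile-io not on PATH; pass its absolute path"

# So instead of:
#   mpirun -np 100 mpi-tile-io ...
# launch with the full path (placeholder path from the thread):
#   mpirun -np 100 /home/XYZ/mpi-tile-io --nr_tiles_x 25 --nr_tiles_y 4 \
#       --sz_tile_x 100 --sz_tile_y 100 --sz_element 32 \
#       --filename /mnt/pvfs2/foo
```

Note that remote nodes may not inherit your interactive shell's PATH at all, which is why the bare name can work on the head node but fail under mpirun.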

> I did some I/O performance tests with a continuous I/O pattern (for small
> accesses using different block sizes and for large continuous I/O). However,
> I used only simple programs for benchmarking like mpi-io-test....
>
> You may have a look at the recent website:
> http://www.clustermonkey.net/content/view/62/32/
> The author discusses a few parallel I/O benchmarks and the problems involved...
Thank you Julian for your valuable information!

Best regards
Peng

_______________________________________________
Pvfs2-users mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users