On Mon, Jan 09, 2006 at 08:02:35AM -0600, Peng Gu wrote:
> Here is where I need your help:
> 1) When I was trying to build the flash_benchmark_io code, the
> compiler complained that hdf5.h file is missing. I want to know for
> sure if this file is necessary for building the code.
> # gmake -f Makefile.linux flash_benchmark_io
> mpicc -c -I /usr/include/ -DTFLOPS -DN_DIM=3 h5_file_interface.c
> h5_file_interface.c:6:18: hdf5.h: No such file or directory

Building FLASH-IO requires a bit of work; it's unlikely to work out
of the box.  You'll have to modify an existing makefile, but before
you do that, make sure you've got a properly working software stack
(described in more detail below).
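The compile line in your transcript only passes -I /usr/include/, so
the compiler never sees hdf5.h.  A sketch of what the fixed makefile
rules should boil down to (HDF5_DIR is a placeholder; point it at
wherever your parallel HDF5 is actually installed):

```shell
# Assumed install prefix -- adjust to your site.
HDF5_DIR=/usr/local/hdf5-parallel

# Compile: add the HDF5 include directory.
mpicc -c -I${HDF5_DIR}/include -DN_DIM=3 h5_file_interface.c

# Link: add the HDF5 library directory and libraries.
mpicc -o flash_benchmark_io *.o -L${HDF5_DIR}/lib -lhdf5 -lz
```
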

> Can anybody explain to me what the relationship between HDF and PVFS is?
> Are they working on different levels or what?

HDF5 and PVFS2 are really quite different things.  PVFS2 is a file
system, and HDF5 is a library providing several useful abstractions
for scientific computing.  

HDF5 makes MPI-IO calls.  Those MPI-IO calls (at least in ROMIO) are
the ones that talk to PVFS2.  The things you need to do are:
- set up PVFS2 
- build, say, MPICH2 with PVFS2 support
- build HDF5 with support for parallel IO (the HDF5 documentation has
  the details)
- build FLASH-IO against your HDF5 library (modify Makefile.chiba so
  that everything points to the correct location)
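The steps above might look roughly like the following.  This is a
sketch only: the install prefixes and the makefile variable name are
assumptions, and you should check each package's own documentation for
the exact configure options your versions support.

```shell
# 1) PVFS2 is assumed to be set up and mounted already (e.g. /mnt/pvfs2).

# 2) MPICH2 with PVFS2 support in ROMIO.
cd mpich2-src
./configure --prefix=/usr/local/mpich2 \
            --with-file-system=pvfs2+ufs+nfs \
            --with-pvfs2=/usr/local/pvfs2
make && make install

# 3) Parallel HDF5, built with the MPI compiler from step 2.
cd hdf5-src
CC=/usr/local/mpich2/bin/mpicc ./configure \
    --prefix=/usr/local/hdf5-parallel --enable-parallel
make && make install

# 4) FLASH-IO: edit Makefile.chiba so the HDF5 path variable points at
#    /usr/local/hdf5-parallel, then build.
cd flash-io
gmake -f Makefile.chiba flash_benchmark_io
```
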

> 2) I can not run mpi_tile_io correctly, the input and output are like:
> # mpirun -np 100 mpi-tile-io --nr_tiles_x 25 --nr_tiles_y 4
> --sz_tile_x    100 --sz_tile_y 100 --sz_element 32 --filename
> /mnt/pvfs2/foo
> problem with execution of mpi-tile-io  on  cse-wang-server.unl.edu: 
> [Errno 2] No such file or directory
> problem with execution of mpi-tile-io  on  cse-wang05b.unl.edu: 
> [Errno 2] No such file or directory
> problem with execution of mpi-tile-io  on  cse-wang-server.unl.edu: 
> [Errno 2] No such file or directory

Looks like there isn't a common file system between
cse-wang-server.unl.edu and cse-wang05b.unl.edu?  Or maybe PVFS2
isn't mounted on those machines?
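A quick way to check is to look for the PVFS2 mount on each node
mpirun is starting processes on.  The hostnames below are the ones
from your error messages; the loop itself is just a sketch:

```shell
# Verify the pvfs2 mount exists on every node in the job.
for h in cse-wang-server.unl.edu cse-wang05b.unl.edu; do
    ssh "$h" 'mount | grep pvfs2 || echo "pvfs2 NOT mounted on $(hostname)"'
done
```

Also note that mpirun's "No such file or directory" can refer to the
mpi-tile-io binary itself: if the executable isn't at the same path on
every node, giving mpirun the full path to a copy visible on all nodes
is worth trying.
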

> 3) I could not build the DB for mpiBLAST, here is my input and output 
> messages:

I've never used mpiBLAST, so I'm not sure what to tell you here.

==rob

-- 
Rob Latham
Mathematics and Computer Science Division    A215 0178 EA2D B059 8CDF
Argonne National Labs, IL USA                B29D F333 664A 4280 315B
_______________________________________________
PVFS2-users mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users