Can you try using a filename of pvfs2:/mnt/pvfs2/tmp/test-mpi-io?
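For reference, here is roughly what the open call in your test program would look like with that change (just a sketch; only the filename string changes, and the "pvfs2:" prefix tells ROMIO directly which file system driver to use instead of having it try to detect the file system itself):

    errcode = MPI_File_open(MPI_COMM_WORLD,
                            "pvfs2:/mnt/pvfs2/tmp/test-mpi-io",
                            MPI_MODE_RDONLY, MPI_INFO_NULL, &mf);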
-sam
On Apr 4, 2007, at 8:13 PM, Yong Chen wrote:
Hi,
I installed pvfs-2.6.2 on a 12-node cluster and it's working
great. Then I recompiled mpich2-1.0.5p3 with pvfs2 support,
following the ROMIO support section of the pvfs2 quick start
guide. MPICH2 is also working fine, but when I test pvfs2 access
through MPI-IO, it keeps reporting the error below. It seems the
integration of ROMIO and pvfs2 was not successful, but I don't
know what I did wrong. Can anyone help me figure out this
problem? Thanks a lot.
x86-0$ mpiexec -n 1 my-mpi-io-test
0: MPI file open error.: File does not exist, error stack:
0: ADIO_RESOLVEFILETYPE_FNCALL(291): Invalid file name /mnt/pvfs2/tmp/test-mpi-io
0: [cli_0]: aborting job:
0: application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
rank 0 in job 1 x86-0_33550 caused collective abort of all ranks
exit status of rank 0: killed by signal 9
MPICH2 configuration options:
[EMAIL PROTECTED] mpich2-1.0.5p3]# export CFLAGS="-I/usr/local/pvfs-2.6.2/include/"
[EMAIL PROTECTED] mpich2-1.0.5p3]# export LDFLAGS="-L/usr/local/pvfs-2.6.2/lib/"
[EMAIL PROTECTED] mpich2-1.0.5p3]# export LIBS="-lpvfs2 -lpthread"
[EMAIL PROTECTED] mpich2-1.0.5p3]# ./configure --prefix=/usr/local/mpich2-1.0.5p3/ --enable-romio --with-pvfs2=/usr/local/pvfs-2.6.2/ --with-file-system="pvfs2+ufs+nfs" 2>&1 | tee configure.log
pvfs2 configuration options:
[EMAIL PROTECTED] mpich2-1.0.5p3]# ./configure -prefix=/usr/local/pvfs-2.6.2/ -with-mpi=/usr/local/mpich2-1.0.5p3/
My simple MPI-IO test code:
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int nprocs;
    int myrank;
    MPI_File mf;
    int errcode;
    char msg[MPI_MAX_ERROR_STRING];
    int resultlen;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

    /* Open an existing file on the PVFS2 volume read-only. */
    errcode = MPI_File_open(MPI_COMM_WORLD, "/mnt/pvfs2/tmp/test-mpi-io",
                            MPI_MODE_RDONLY, MPI_INFO_NULL, &mf);
    if (errcode != MPI_SUCCESS) {
        /* Translate the error code into a readable message. */
        MPI_Error_string(errcode, msg, &resultlen);
        fprintf(stderr, "%s: %s\n", "MPI file open error.", msg);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    MPI_File_close(&mf);
    MPI_Finalize();
    return 0;
}
thanks,
Yong
_______________________________________________
Pvfs2-users mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users