Hi,

I have configured a PVFS 2.7.1 server on my IBM ThinkPad laptop running Ubuntu 8.10, and the client runs on the same machine. I am using pio-bench to collect a simple strided PVFS I/O trace, and it works fine as long as I set the number of processes to 1 when running mpiexec. If I set the number of processes to more than 1, it fails with the errors below (my client mount setup is sketched just after the transcript):

gxwan...@wangdi:~/Desktop/pio-bench$ sudo mpiexec -n 1 ./pio-bench
[sudo] password for gxwangdi: 
File under test: /mnt/pvfs2/ftpaccess
Number of Processes: 1
Sync: off
Averaging: Off
the nested strided pattern needs to be run with an even amount of processes
gxwan...@wangdi:~/Desktop/pio-bench$ sudo mpiexec -n 4 ./pio-bench
Fatal error in MPI_Barrier: Other MPI error, error stack:
MPI_Barrier(406)..........................: MPI_Barrier(MPI_COMM_WORLD) failed
MPIR_Barrier(77)..........................: 
MPIC_Sendrecv(126)........................: 
MPIC_Wait(270)............................: 
MPIDI_CH3i_Progress_wait(215).............: an error occurred while handling an 
event returned by MPIDU_Sock_Wait()
MPIDI_CH3I_Progress_handle_sock_event(420): 
MPIDU_Socki_handle_read(637)..............: connection failure 
(set=0,sock=1,errno=104:Connection reset by peer)[cli_0]: aborting job:
Fatal error in MPI_Barrier: Other MPI error, error stack:
MPI_Barrier(406)..........................: MPI_Barrier(MPI_COMM_WORLD) failed
MPIR_Barrier(77)..........................: 
MPIC_Sendrecv(126)........................: 
MPIC_Wait(270)............................: 
MPIDI_CH3i_Progress_wait(215).............: an error occurred while handling an 
event returned by MPIDU_Sock_Wait()
MPIDI_CH3I_Progress_handle_sock_event(420): 
MPIDU_Socki_handle_readFatal error in MPI_Bcast: Other MPI error, error stack:
MPI_Bcast(786)............................: MPI_Bcast(buf=0x1fd6ca78, count=20, 
MPI_BYTE, root=0, MPI_COMM_WORLD) failed
MPIR_Bcast(198)...........................: 
MPIC_Recv(81).............................: 
MPIC_Wait(270)............................: 
MPIDI_CH3i_Progress_wait(215).............: an error occurred while handling an 
event returned by MPIDU_Sock_Wait()
MPIDI_CH3I_Progress_handle_sock_event(456): 
adjust_iov(973)...........................: ch3|sock|immedread 0x1e5a0d60 
0x1f329978 0x1f3258d0
MPIDU_Sock_readv(455).....................: the supplied buffer contains 
invalid memory (set=0,sock=1,errno=14:Bad address)[cli_1]: aborting job:
Fatal error in MPI_Bcast: Other MPI error, error stack:
MPI_Bcast(786)............................: MPI_Bcast(buf=0x1fd6ca78, count=20, 
MPI_BYTE, root=0, MPI_COMM_WORLD) failed
MPIR_Bcast(198)...........................: 
MPIC_Recv(81).............................: 
MPIC_Wait(270)............................(637)..............: connection 
failure (set=0,sock=1,errno=104:Connection reset by peer)
: 
MPIDI_CH3i_Progress_wait(215).............: an error occurred while handling an 
event returned by MPIDU_Sock_Wait()
MPIDI_CH3I_Progress_handle_sock_event(456): 
adjust_iov(973)...........................: ch3|sock|immedread 0x1e5a0d60 
0x1f329978 0x1f3258d0
MPIDU_Sock_readv(455).....................: the supplied buffer contains 
invalid memory (set=0,sock=1,errno=14:Bad address)
Fatal error in MPI_Bcast: Other MPI error, error stack:
MPI_Bcast(786)............................: MPI_Bcast(buf=0x1fd6ca78, count=20, 
MPI_BYTE, root=0, MPI_COMM_WORLD) failed
MPIR_Bcast(198)...........................: 
MPIC_Recv(81).............................: 
MPIC_Wait(270)............................: 
MPIDI_CH3i_Progress_wait(215).............: an error occurred while handling an 
event returned by MPIDU_Sock_Wait()
MPIDI_CH3I_Progress_handle_sock_event(456): 
adjust_iov(973)...........................: ch3|sock|immedread 0x1e5a0d60 
0x1eb9f978 0x1eb9b8d0
MPIDU_Sock_readv(455).....................: the supplied buffer contains 
invalid memory (set=0,sock=1,errno=14:Bad address)[cli_2]: aborting job:
Fatal error in MPI_Bcast: Other MPI error, error stack:
MPI_Bcast(786)............................: MPI_Bcast(buf=0x1fd6ca78, count=20, 
MPI_BYTE, root=0, MPI_COMM_WORLD) failed
MPIR_Bcast(198)...........................: 
MPIC_Recv(81).............................: 
MPIC_Wait(270)............................: 
MPIDI_CH3i_Progress_wait(215).............: an error occurred while handling an 
event returned by MPIDU_Sock_Wait()
MPIDI_CH3I_Progress_handle_sock_event(456): 
adjust_iov(973)...........................: ch3|sock|immedread 0x1e5a0d60 
0x1eb9f978 0x1eb9b8d0
MPIDU_Sock_readv(455).....................: the supplied buffer contains 
invalid memory (set=0,sock=1,errno=14:Bad address)
rank 1 in job 9  WANGDI_59039   caused collective abort of all ranks
  exit status of rank 1: return code 1 
rank 0 in job 9  WANGDI_59039   caused collective abort of all ranks
  exit status of rank 0: return code 1 
gxwan...@wangdi:~/Desktop/pio-bench$ sudo mpiexec -n 4 hostname
WANGDI
WANGDI
WANGDI
WANGDI
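
For reference, my PVFS2 client is mounted at /mnt/pvfs2 in the usual single-node way. The lines below only sketch the general shape of such a setup; the port (3334) and file system name (pvfs2-fs) are the pvfs2-genconfig defaults, so treat them as placeholders rather than my exact values:

# /etc/pvfs2tab entry for the kernel-module mount (defaults assumed)
tcp://localhost:3334/pvfs2-fs  /mnt/pvfs2  pvfs2  defaults,noauto  0  0

# mount the volume and check that the server answers
sudo mount -t pvfs2 tcp://localhost:3334/pvfs2-fs /mnt/pvfs2
pvfs2-ping -m /mnt/pvfs2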

MPI itself seems to work, as I verified by running the hostname command through mpiexec. My pio-bench.conf is as follows:

 Testfile "/mnt/pvfs2/ftpaccess"
#TestFile "/home/gxwangdi/Desktop/ftpaccess"

OutputToFile "/home/gxwangdi/Desktop/pio-bench/results/result"

<ap_module>
    ModuleName "Nested Strided (read)"
    ModuleReps 3
    ModuleSettleTime 5
</ap_module>

<ap_module>
    ModuleName "Nested Strided (write)"
    ModuleReps 3
    ModuleSettleTime 5
</ap_module>

<ap_module>
    ModuleName "Nested Strided (read-modify-write)"
    ModuleReps 3
    ModuleSettleTime 5
</ap_module>

<ap_module>
    ModuleName "Nested Strided (re-read)"
    ModuleReps 3
    ModuleSettleTime 5
</ap_module>

<ap_module>
    ModuleName "Nested Strided (re-write)"
    ModuleReps 3
    ModuleSettleTime 5
</ap_module>
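
In case the test file itself matters, it can be checked both through the kernel mount and with the PVFS2 userspace tools, roughly like this (pvfs2-ls resolves the path through /etc/pvfs2tab):

# look at the file through the VFS mount
ls -l /mnt/pvfs2/ftpaccess

# look at it through the PVFS2 system interface, bypassing the kernel module
pvfs2-ls -l /mnt/pvfs2/ftpaccess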

Since I cannot find any log file for pio-bench anywhere in its directory, I do not understand what the problem is.
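
The only output location I know of is the OutputToFile path in the config above. If the PVFS2 server log would help, its location is whatever the LogFile option in the server config says; with a stock pvfs2-genconfig setup that is typically something like the following (both paths here are guesses at a default install, not necessarily mine):

# find out where the server was told to log (the config path is a guess)
grep -i LogFile /etc/pvfs2/fs.conf

# default log location written by pvfs2-genconfig
tail /tmp/pvfs2-server.log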

Appreciate your responses.


