Hello Jeff: I just finished rerunning MTT without the --trial option so that you can check the results. I suppose you can get the reports in the database; if that is not the case, please correct me. My platform name is ibm_beijing_rhel5.2.
My server cannot be accessed from outside IBM. If you need any information to check those errors, for example the txt files containing the errors, please let me know and I will send them to your mailbox. Thanks a lot!

Wen Hao Wang

On 2008-05-15 19:05, Jeff Squyres <jsquyres@cisco.com> wrote:
Subject: Re: [MTT users] MTT fail to require MTT::Test::Specify::Simple
To: General user list for the MPI Testing Tool <mtt-us...@open-mpi.org>
Cc: Brad Benton <brad.ben...@us.ibm.com>, mtt-users-boun...@open-mpi.org

On May 15, 2008, at 1:56 AM, Wen Hao Wang wrote:

> Sorry for my delay. I deleted the line "specify_module = Simple" in the
> ini file, and the require failure disappeared. Thanks for your support.

Great! I just filed https://svn.open-mpi.org/trac/mtt/ticket/356 because it seems like this is a simple enough error that we should really be able to detect it pretty easily and print out a reasonable error message.

> Here are my MTT arguments this time:
> client/mtt -d --force -f samples/ompi-core-template.ini --trial --no-
> section intel --no-section 1.1
> The MTT result contains 22 failures in total. I am not sure which of
> them are already known issues. I suppose the failed cases are an Open
> MPI issue rather than an MTT issue; if that is not the case, please
> correct me. I do not know which bug in
> https://svn.open-mpi.org/trac/ompi/report/6 matches my MTT failure in
> running "mpirun -np 2 --mca btl tcp,self --prefix /LTC/MTT/..." listed
> below.
>
> Is there any method for me to probe these failures? Or is there anyone
> who can check all my failed MTT cases? I need a method to find out
> which failed cases are already known and tracked.

Unfortunately we do not have such an automated mechanism -- most of the existing known issues/failures are just known by the developers. That being said, if anyone has any clue how to implement such an automated system, I'm all ears.
We just could never figure out how to do it reliably because the stdout/stderr of known issue/failure X may be slightly different on any given machine. :-\

If you do a run in trial mode and send your results to the main OMPI MTT DB, and then send us a permalink for your results, I'd be happy to look at them and help you classify the errors.

--
Jeff Squyres
Cisco Systems
_______________________________________________
mtt-users mailing list
mtt-us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/mtt-users
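[Editor's note: the automated classifier discussed above does not exist in MTT; the following is only a sketch of one way it could work. It fuzzy-matches a test's stderr against a table of representative error signatures, tolerating the machine-to-machine variation Jeff describes. The signature table, ticket labels other than ticket/356, and the 0.8 threshold are all hypothetical.]

```python
# Hypothetical sketch of a known-failure classifier for MTT-style output.
# Exact stderr differs per machine, so we score fuzzy similarity against
# representative snippets instead of doing exact matching.
from difflib import SequenceMatcher

# Hypothetical signature table: ticket label -> representative stderr snippet.
# ticket/356 is real (filed in the thread above); the other entry is made up.
KNOWN_ISSUES = {
    "ticket/356": "Could not find module MTT::Test::Specify::Simple",
    "hypothetical-ticket": "mca_btl_tcp_endpoint_complete_connect failed",
}

def classify(stderr, threshold=0.8):
    """Return (ticket, score) for the best-matching known issue, or None.

    Only a prefix of stderr (4x the signature length) is compared, so a
    long log tail does not drown out a match near the start.
    """
    best = None
    for ticket, signature in KNOWN_ISSUES.items():
        score = SequenceMatcher(
            None, signature, stderr[: len(signature) * 4]
        ).ratio()
        if best is None or score > best[1]:
            best = (ticket, score)
    return best if best and best[1] >= threshold else None
```

A real system would need a much larger signature corpus and per-issue thresholds, which is presumably why nobody has built it; this only illustrates the matching idea.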