Hello!
(CC'ed to the list)
Your data are paired, so you should use this instead:
mpiexec -n 32 Ray \
-p SRR001665_1.fastq SRR001665_2.fastq \
-p SRR001666_1.fastq SRR001666_2.fastq \
-o test_coli
You should weed out all those long paths that make your script difficult
to read: mpiexec, the other MPICH2 tools, and Ray should be in your PATH
instead.
For the sequence files, you can use symbolic links.
ln -s \
/homenc/hpcmedicine/NGS_TOOLS/DATA/ecoli_MG1655/SRX000429/SRR001665_1.fastq \
SRR001665_1.fastq
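If you have several files, a short loop saves typing (a sketch; the source directory below is taken from your script):

```shell
# Link the four FASTQ files into the working directory so the Ray
# command line stays short. SRC is the data directory from your script.
SRC=/homenc/hpcmedicine/NGS_TOOLS/DATA/ecoli_MG1655/SRX000429
for f in SRR001665_1 SRR001665_2 SRR001666_1 SRR001666_2; do
    ln -s "$SRC/$f.fastq" "$f.fastq"
done
```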
Your submission script is really complex.
You should install a recent version of MPICH2 that does not require
calls to mpdboot, mpdtrace, and mpdallexit. I think the new launcher is
called Hydra, but I am not sure. Anyway, it is the default launcher in
recent MPICH2 releases.
Or you can just use Open-MPI.
In my opinion, the preparation of the host file should be done by your
cluster scheduler, not by your script.
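Putting these suggestions together, the whole submission script could shrink to something like this (a sketch, assuming a Hydra-based MPICH2 mpiexec is already in your PATH, the FASTQ files are symlinked into the submission directory, and a PBS-style scheduler that exports PBS_NODEFILE; adapt the host-file variable for LSF):

```shell
#!/bin/sh
# Simplified submission script (sketch). Assumptions:
#   - mpiexec (Hydra) and Ray are in PATH, so no absolute paths needed;
#   - the FASTQ files are symlinked into the submission directory;
#   - the scheduler provides the host file (PBS_NODEFILE here; with
#     Open-MPI you would pass -hostfile instead of -f).
mpiexec -n 32 -f $PBS_NODEFILE Ray \
    -p SRR001665_1.fastq SRR001665_2.fastq \
    -p SRR001666_1.fastq SRR001666_2.fastq \
    -o test_coli
```

No mpdboot, mpdtrace, or mpdallexit calls are needed with Hydra, and a scheduler-aware mpiexec may not even need the explicit host file.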
What was the content of the standard output and standard error?
On Thu, 2012-03-01 at 03:50 -0500, Milner Kumar wrote:
> Dear Sébastien,
> Thanks for your e-mail. I tried to assemble Mycoplasma reads with Ray
> in 4 nodes (8 cores each) and it successfully assembled the contigs.
> When I tried for E.coli reads the job was not completed. It created
> the following files
> CoverageDistributionAnalysis.txt
> CoverageDistribution.txt
> degreeDistribution.txt
> LibraryStatistics.txt
> NetworkTest.txt
> NumberOfSequences.txt
> RayCommand.txt
> RayVersion.txt
> SeedLengthDistribution.txt
> SequencePartition.txt
>
> I used the following script to execute Ray (submit.sh)
>
> #!/bin/sh
>
> export LD_LIBRARY_PATH=/app/lib/mpi/mpich2/1.0.7/intel/lib:$LD_LIBRARY_PATH
> export PATH=/app/lib/mpi/mpich2/1.0.7/intel/bin:$PATH
>
> if [ ! -f ${HOME}/.mpd.conf ] ; then
> echo "MPD_SECRETWORD=${USER}-123" > ${HOME}/.mpd.conf
> chmod 600 ${HOME}/.mpd.conf
> fi
>
> MYHOSTS=/tmp/myhosts.${LSB_JOBID}
> for i in $LSB_HOSTS ;do
> echo $i >> $MYHOSTS
> done
>
> NP=`uniq $MYHOSTS | wc -l`
>
> /app/lib/mpi/mpich2/1.0.7/intel/bin/mpdboot -n $NP --rsh=ssh -v -f $MYHOSTS
> /app/lib/mpi/mpich2/1.0.7/intel/bin/mpdtrace
> /app/lib/mpi/mpich2/1.0.7/intel/bin/mpirun -np 32 \
> /app/prod/bioinformatics/Ray/mpich/Ray-1.7/Ray \
> -s /homenc/hpcmedicine/NGS_TOOLS/DATA/ecoli_MG1655/SRX000429/SRR001665_1.fastq \
> -s /homenc/hpcmedicine/NGS_TOOLS/DATA/ecoli_MG1655/SRX000429/SRR001665_2.fastq \
> -s /homenc/hpcmedicine/NGS_TOOLS/DATA/ecoli_MG1655/SRX000429/SRR001666_1.fastq \
> -s /homenc/hpcmedicine/NGS_TOOLS/DATA/ecoli_MG1655/SRX000429/SRR001666_2.fastq \
> -o /homenc/hpcmedicine/milner/Mpich2_32proc_ecoli
> /app/lib/mpi/mpich2/1.0.7/intel/bin/mpdallexit
>
> rm -rf $MYHOSTS
>
> This script was run by the following command
> qsub -l nodes=4:ppn=8 -e err.log -o out.log ./submit.sh
>
> Can you please tell me what mistake I have made?
>
> With regards,
> M. Milner Kumar
>
>
> 2012/1/31 Sébastien Boisvert <[email protected]>
> Hi !
>
> Can you provide your command ?
>
> -seb
>
>
> On Wed, 2012-01-25 at 02:10 -0500, Milner Kumar wrote:
> > Dear denovoassembler-users,
> > When I submit jobs on multiple nodes using Ray it is running for a
> > long time without producing any output. However when I submit in one
> > node it works properly. Please solve my problem,
> > With regards,
> > M. Milner Kumar
>
>
>
>
_______________________________________________
Denovoassembler-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/denovoassembler-users