You should try a lower k-mer length; try -k 31.
Error-free k-mers of length 101 will not be numerous: a single sequencing error
invalidates every k-mer that spans it, so very few long k-mers survive intact.
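A rough model makes this concrete (a sketch with assumed numbers: the read length, base coverage, and error rate below are illustrative, not measured from this dataset). A k-mer survives only if all k of its bases are correct, so the usable k-mer coverage shrinks roughly as (1 - e)^k:

```python
# Back-of-envelope model of effective k-mer coverage. A sketch: the read
# length (150 bp), base coverage (20x) and error rate (1%) are assumed
# for illustration, not taken from this dataset.
def effective_kmer_coverage(base_coverage, read_length, k, error_rate):
    # A k-mer survives only if all k bases are error-free, and a read of
    # length L yields only L - k + 1 k-mers.
    positions = read_length - k + 1
    error_free_fraction = (1.0 - error_rate) ** k
    return base_coverage * (positions / read_length) * error_free_fraction

for k in (31, 101):
    cov = effective_kmer_coverage(20.0, 150, k, 0.01)
    print(f"k={k}: ~{cov:.1f}x effective k-mer coverage")
```

Under these assumptions, k=31 keeps about 12x of a 20x dataset while k=101 keeps only about 2.4x, which is in line with the "peak coverage is 2" line in the log: the error-free 101-mers are too sparse for a peak to emerge from the distribution.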
> ________________________________________
> De : [email protected] [[email protected]] de la part de Nicolas
> Balcazar [[email protected]]
> Date d'envoi : 31 août 2011 22:10
> À : Sébastien Boisvert
> Objet : Re: RE : RE : RE : RE : RE : RAY just on LAM/MPI?
>
> Hi Sebastien,
>
> any news on why Ray is crashing with my long Solexa reads?
>
> Best, Nicolas
>
>
>
>
> 2011/8/31 Nicolas Balcazar
> <[email protected]>
>
> -k 101
> And I tried with 91 now too, same error. When compiling I used 128 as
> described in the manual.
>
> On 31 Aug 2011 02:09, "Sébastien Boisvert"
> <[email protected]> wrote:
>> From the log:
>>
>> Rank 0: Assembler panic: no peak observed in the k-mer coverage distribution.
>> Rank 0: to deal with the sequencing error rate, try to lower the k-mer
>> length (-k)
>>
>>
>> What value did you use for the k-mer length (-k)?
>>
>>
>>> ________________________________________
>>> From: [email protected]
>>> [[email protected]] on behalf of
>>> Nicolas Balcazar [[email protected]]
>>> Sent: 30 August 2011 18:50
>>> To: Sébastien Boisvert
>>> Cc: [email protected]
>>> Subject: Re: RE : RE : RE : RE : RAY just on LAM/MPI?
>>>
>>> OK, I tried that version, and now it crashes right after "Step:
>>> K-mer counting":
>>>
>>> ***
>>> Step: K-mer counting
>>> Date: Wed Aug 31 00:23:57 2011
>>> Elapsed time: 20 minutes, 14 seconds
>>> Since beginning: 21 minutes, 1 seconds
>>> ***
>>>
>>> Rank 0 has 3196286 k-mers (completed)
>>> Rank 7 has 3200778 k-mers (completed)
>>> Rank 4 has 3198066 k-mers (completed)
>>> Rank 12 has 3202550 k-mers (completed)
>>> Rank 17 has 3198102 k-mers (completed)
>>> Rank 20 has 3195578 k-mers (completed)
>>> Rank 5 has 3199300 k-mers (completed)
>>> Rank 19 has 3202550 k-mers (completed)
>>> Rank 14 has 3199560 k-mers (completed)
>>> Rank 1 has 3202036 k-mers (completed)
>>> Rank 18 has 3201140 k-mers (completed)
>>> Rank 11 has 3196140 k-mers (completed)
>>> Rank 21 has 3205276 k-mers (completed)
>>> Rank 15 has 3197538 k-mers (completed)
>>> Rank 6 has 3203614 k-mers (completed)
>>> Rank 9 has 3199128 k-mers (completed)
>>> Rank 13 has 3196418 k-mers (completed)
>>> Rank 3 has 3202886 k-mers (completed)
>>> Rank 10 has 3197264 k-mers (completed)
>>> Rank 16 has 3198038 k-mers (completed)
>>> Rank 8 has 3195012 k-mers (completed)
>>>
>>>
>>> Rank 0: the minimum coverage is 2
>>> Rank 0: the peak coverage is 2
>>> Rank 0: Assembler panic: no peak observed in the k-mer coverage
>>> distribution.
>>> Rank 0: to deal with the sequencing error rate, try to lower the k-mer
>>> length (-k)
>>> Rank 21: sent 297225 messages, received 297226 messages.
>>> Rank 20: sent 299887 messages, received 299888 messages.
>>> Rank 19: sent 300109 messages, received 300110 messages.
>>> Rank 18: sent 299799 messages, received 299800 messages.
>>> Rank 17: sent 299766 messages, received 299767 messages.
>>> Rank 16: sent 299239 messages, received 299240 messages.
>>> Rank 15: sent 298580 messages, received 298581 messages.
>>> Rank 14: sent 300802 messages, received 300803 messages.
>>> Rank 13: sent 297063 messages, received 297064 messages.
>>> Rank 12: sent 298839 messages, received 298840 messages.
>>> Rank 11: sent 296100 messages, received 296101 messages.
>>> Rank 10: sent 364948 messages, received 364949 messages.
>>> Rank 9: sent 438765 messages, received 438766 messages.
>>> Rank 8: sent 488448 messages, received 488449 messages.
>>> Rank 7: sent 518148 messages, received 518149 messages.
>>> Rank 6: sent 442586 messages, received 442587 messages.
>>> Rank 5: sent 485924 messages, received 485925 messages.
>>> Rank 4: sent 526806 messages, received 526807 messages.
>>> Rank 3: sent 553545 messages, received 553546 messages.
>>> Rank 2: sent 567784 messages, received 567785 messages.
>>> Rank 1: sent 491188 messages, received 491189 messages.
>>> Rank 0: sent 547613 messages, received 547591 messages.
>>>
>>>
>>> I tried feeding it more reads, but that doesn't seem to have any influence
>>> on the error message "Assembler panic: no peak observed in the k-mer
>>> coverage distribution".
>>> What does this mean?
>>>
>>> THANKS!
>>> Nicolas
>>>
>>>
>>>
>>>
>>> 2011/8/30 Sébastien Boisvert
>>> <[email protected]>
>>> Can you try the latest development version:
>>>
>>> https://github.com/sebhtml/ray/zipball/master
>>>
>>>
>>>
>>> (don't forget to CC the list.)
>>>
>>>> ________________________________________
>>>> From:
>>>> [email protected]
>>>> [[email protected]]
>>>> on behalf of Nicolas Balcazar
>>>> [[email protected]]
>>>> Sent: 30 August 2011 12:34
>>>> To: Sébastien Boisvert
>>>> Subject: Re: RE : RE : RE : RAY just on LAM/MPI?
>>>>
>>>> Hi Sebastien,
>>>>
>>>> as always you were right: I was still using the wrong MPI version, because
>>>> it was not at the beginning of my $PATH. Now I have corrected that (and the
>>>> rank numbering suddenly started working), but I get the following (very
>>>> long) error message (I am working on a single server with 32 cores):
>>>>
>>>> Rank 18 is creating seeds [1/416914]
>>>> Rank 16 is creating seeds [1/418234]
>>>> Rank 5 is creating seeds [1/418420]
>>>> Rank 8 is creating seeds [1/417864]
>>>> Rank 20 is creating seeds [1/417350]
>>>>
>>>> ***
>>>> Step: Selection of optimal read markers
>>>> Date: Tue Aug 30 17:53:31 2011
>>>> Elapsed time: 1 minutes, 59 seconds
>>>> Since beginning: 36 minutes, 47 seconds
>>>> ***
>>>>
>>>> Rank 0 is creating seeds [1/418150]
>>>> Rank 21 is creating seeds [1/420012]
>>>> Rank 10 is creating seeds [1/417898]
>>>> Rank 13 is creating seeds [1/417902]
>>>> Rank 14 is creating seeds [1/415496]
>>>> Rank 17 is creating seeds [1/416416]
>>>> Rank 2 is creating seeds [1/418206]
>>>> Rank 3 is creating seeds [1/418230]
>>>> Rank 9 is creating seeds [1/418582]
>>>> Rank 19 is creating seeds [1/417974]
>>>> [floccinaucinihilipilification:24139] *** Process received signal ***
>>>> [floccinaucinihilipilification:24152] *** Process received signal ***
>>>> [floccinaucinihilipilification:24152] Signal: Segmentation fault (11)
>>>> [floccinaucinihilipilification:24152] Signal code: Address not mapped (1)
>>>> [floccinaucinihilipilification:24152] Failing at address: (nil)
>>>> [floccinaucinihilipilification:24154] *** Process received signal ***
>>>> [floccinaucinihilipilification:24154] Signal: Segmentation fault (11)
>>>> [floccinaucinihilipilification:24154] Signal code: Address not mapped (1)
>>>> [floccinaucinihilipilification:24154] Failing at address: (nil)
>>>> [floccinaucinihilipilification:24156] *** Process received signal ***
>>>> [floccinaucinihilipilification:24156] Signal: Segmentation fault (11)
>>>> [floccinaucinihilipilification:24156] Signal code: Address not mapped (1)
>>>> [floccinaucinihilipilification:24156] Failing at address: (nil)
>>>> [floccinaucinihilipilification:24139] Signal: Segmentation fault (11)
>>>> [floccinaucinihilipilification:24139] Signal code: Address not mapped (1)
>>>> [floccinaucinihilipilification:24139] Failing at address: (nil)
>>>> [floccinaucinihilipilification:24152] [ 0] /lib/libpthread.so.0
>>>> [0x7fbbd4a51020]
>>>> [floccinaucinihilipilification:24152] [ 1]
>>>> Ray(_ZN11SeedingData12computeSeedsEv+0x27e) [0x4781ce]
>>>> [floccinaucinihilipilification:24152] [ 2]
>>>> Ray(_ZN7Machine10runVanillaEv+0x76) [0x4430e6]
>>>> [floccinaucinihilipilification:24152] [ 3] Ray(_ZN7Machine5startEv+0xd27)
>>>> [0x448f27]
>>>> [floccinaucinihilipilification:24152] [ 4] Ray(main+0x3a) [0x496a8a]
>>>> [floccinaucinihilipilification:24152] [ 5]
>>>> /lib/libc.so.6(__libc_start_main+0xf4) [0x7fbbd481f4f4]
>>>> [floccinaucinihilipilification:24152] [ 6] Ray(__gxx_personality_v0+0x1f9)
>>>> [0x422ee9]
>>>> [floccinaucinihilipilification:24152] *** End of error message ***
>>>> ( [...some stuff several times...] )
>>>> [floccinaucinihilipilification:24151] *** Process received signal ***
>>>> [floccinaucinihilipilification:24151] Signal: Segmentation fault (11)
>>>> [floccinaucinihilipilification:24151] Signal code: Address not mapped (1)
>>>> [floccinaucinihilipilification:24151] Failing at address: (nil)
>>>> [floccinaucinihilipilification:24159] *** Process received signal ***
>>>> [floccinaucinihilipilification:24159] Signal: Segmentation fault (11)
>>>> [floccinaucinihilipilification:24159] Signal code: Address not mapped (1)
>>>> [floccinaucinihilipilification:24159] Failing at address: (nil)
>>>> [floccinaucinihilipilification:24151] [ 0] /lib/libpthread.so.0
>>>> [0x7fefb916a020]
>>>> [floccinaucinihilipilification:24151] [ 1]
>>>> Ray(_ZN11SeedingData12computeSeedsEv+0x27e) [0x4781ce]
>>>> [floccinaucinihilipilification:24151] [ 2]
>>>> Ray(_ZN7Machine10runVanillaEv+0x76) [0x4430e6]
>>>> [floccinaucinihilipilification:24151] [ 3] Ray(_ZN7Machine5startEv+0xd27)
>>>> [0x448f27]
>>>> [floccinaucinihilipilification:24151] [ 4] Ray(main+0x3a) [0x496a8a]
>>>> [floccinaucinihilipilification:24151] [ 5]
>>>> /lib/libc.so.6(__libc_start_main+0xf4) [0x7fefb8f384f4]
>>>> [floccinaucinihilipilification:24151] [ 6] Ray(__gxx_personality_v0+0x1f9)
>>>> [0x422ee9]
>>>> [floccinaucinihilipilification:24151] *** End of error message ***
>>>> [floccinaucinihilipilification:24159] [ 0] /lib/libpthread.so.0
>>>> [0x7f301b2df020]
>>>> [floccinaucinihilipilification:24159] [ 1]
>>>> Ray(_ZN11SeedingData12computeSeedsEv+0x27e) [0x4781ce]
>>>> [floccinaucinihilipilification:24159] [ 2]
>>>> Ray(_ZN7Machine10runVanillaEv+0x76) [0x4430e6]
>>>> [floccinaucinihilipilification:24159] [ 3] Ray(_ZN7Machine5startEv+0xd27)
>>>> [0x448f27]
>>>> [floccinaucinihilipilification:24159] [ 4] Ray(main+0x3a) [0x496a8a]
>>>> [floccinaucinihilipilification:24159] [ 5]
>>>> /lib/libc.so.6(__libc_start_main+0xf4) [0x7f301b0ad4f4]
>>>> [floccinaucinihilipilification:24159] [ 6] Ray(__gxx_personality_v0+0x1f9)
>>>> [0x422ee9]
>>>> [floccinaucinihilipilification:24159] *** End of error message ***
>>>> --------------------------------------------------------------------------
>>>> mpirun noticed that process rank 8 with PID 24147 on node
>>>> floccinaucinihilipilification.molgen.mpg.de
>>>> exited on signal 11 (Segmentation fault).
>>>> --------------------------------------------------------------------------
>>>> [floccinaucinihilipilification:24141] *** Process received signal ***
>>>> [floccinaucinihilipilification:24141] Signal: Segmentation fault (11)
>>>> [floccinaucinihilipilification:24141] Signal code: Address not mapped (1)
>>>> [floccinaucinihilipilification:24141] Failing at address: (nil)
>>>> [floccinaucinihilipilification:24141] [ 0] /lib/libpthread.so.0
>>>> [0x7fdd46816020]
>>>> [floccinaucinihilipilification:24141] [ 1]
>>>> Ray(_ZN11SeedingData12computeSeedsEv+0x27e) [0x4781ce]
>>>> [floccinaucinihilipilification:24141] [ 2]
>>>> Ray(_ZN7Machine10runVanillaEv+0x76) [0x4430e6]
>>>> [floccinaucinihilipilification:24141] [ 3] Ray(_ZN7Machine5startEv+0xd27)
>>>> [0x448f27]
>>>> [floccinaucinihilipilification:24141] [ 4] Ray(main+0x3a) [0x496a8a]
>>>> [floccinaucinihilipilification:24141] [ 5]
>>>> /lib/libc.so.6(__libc_start_main+0xf4) [0x7fdd465e44f4]
>>>> [floccinaucinihilipilification:24141] [ 6] Ray(__gxx_personality_v0+0x1f9)
>>>> [0x422ee9]
>>>> [floccinaucinihilipilification:24141] *** End of error message ***
>>>> ( [crash/finish] )
>>>>
>>>>
>>>> Is this a problem on the server's side?
>>>> I also noticed that the error always shows up during the "Step: Selection
>>>> of optimal read markers" step.
>>>> I think we are getting close to making it work :) please bear with me.
>>>> Thank you very much,
>>>> Nicolas
>>>>
>>>>
>>>>
>>>>
>>>> 2011/8/30 Sébastien Boisvert
>>>> <[email protected]>
>>>> ______________________________________
>>>>> From:
>>>>> [email protected]
>>>>> [[email protected]]
>>>>> on behalf of Nicolas Balcazar
>>>>> [[email protected]]
>>>>> Sent: 30 August 2011 00:03
>>>>> To: Sébastien Boisvert
>>>>> Subject: Re: RE : RE : RAY just on LAM/MPI?
>>>>>
>>>>> Hi Sébastien
>>>>>
>>>>> I installed Open-MPI, and with that Ray started to work perfectly! :)
>>>>> Thanks for the advice!
>>>>> But now maybe you can help me out again: after 13 hours of computation
>>>>> Ray suddenly stopped. Maybe I ran out of memory (130 GB), but if so,
>>>>
>>>>
>>>> There is an option called -show-memory-usage.
>>>>
>>>> If you run out of memory, your whole system becomes unstable and the
>>>> Linux kernel gives its out-of-memory killer (the OOM killer) full
>>>> authority.
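For a sense of scale, here is a back-of-envelope estimate of the k-mer table footprint (a sketch: 2-bit base packing is the standard encoding, but the 16-byte per-entry overhead is a guessed figure, not Ray's actual hash-table cost, which is larger and version-dependent):

```python
import math

# Rough size of a k-mer table. A sketch: 2 bits per base is standard
# packing, but the per-entry overhead below is an assumed number, not
# Ray's real data structure.
def kmer_table_gb(n_kmers, k, overhead_per_entry=16):
    words = math.ceil(2 * k / 64)   # 64-bit words needed for 2 bits per base
    return n_kmers * (8 * words + overhead_per_entry) / 1e9

# ~3.2 million k-mers on each of 22 ranks, as reported earlier in this thread
print(f"{kmer_table_gb(22 * 3_200_000, 101):.1f} GB")
```

Under these assumptions the table lands at a few gigabytes, far below 130 GB, so it is worth running with -show-memory-usage to confirm rather than assuming the OOM killer struck.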
>>>>
>>>>
>>>>
>>>>> I would have expected another kind of error message. Here is how I
>>>>> started Ray, and the lines with the error message (in red):
>>>>>
>>>>> balcazar@floccinaucinihilipilification:/project/nicolas/run_RAY> mpirun
>>>>> -np 25 Ray -k 101 -p ../data_solexa/C_GCCAAT_L001_R1_001.fastq
>>>>> ../data_solexa/C_GCCAAT_L001_R2_001.fastq -s
>>>>> ../data_454/sff/GFHVXCR01.RL2.sff -s ../data_454/sff/GFHVXCR02.RL2.sff -o
>>>>> bal-Ray-SolAnd454-test2 | tee log_assembly_RAY_test2.txt
>>>>>
>>>>
>>>> 1. You are not using Open-MPI at all.
>>>>
>>>> The line "You can use the "lamexec" program" in your log gives it away:
>>>> lamexec is a program from LAM/MPI, not Open-MPI.
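One quick way to see which implementation a bare command resolves to (a sketch; it relies on ompi_info shipping with Open-MPI and lamexec with LAM/MPI):

```python
import shutil

# Inspect which MPI binaries a bare command resolves to on $PATH.
# ompi_info ships with Open-MPI; lamexec ships with LAM/MPI.
for tool in ("mpirun", "ompi_info", "lamexec"):
    print(f"{tool} -> {shutil.which(tool)}")
```

If lamexec resolves to a path while ompi_info does not, LAM/MPI is still first on $PATH.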
>>>>
>>>>
>>>> 2. Why do you only have log lines for rank 0?
>>>>
>>>>
>>>>
>>>>
>>>>> Rank 0 is selecting optimal read markers [850001/8725982]
>>>>> Rank 0 is selecting optimal read markers [860001/8725982]
>>>>> Rank 0 is purging edges [41350001/51238092]
>>>>> Rank 0 is selecting optimal read markers [8725982/8725982] (completed)
>>>>> Rank 0: peak number of workers: 186, maximum: 30000
>>>>> Rank 0: VirtualCommunicator: 302522250 pushed messages generated 2452837
>>>>> virtual messages (0.810796%)
>>>>>
>>>>> ***
>>>>> Step: Selection of optimal read markers
>>>>> Date: Tue Aug 30 04:17:16 2011
>>>>> Elapsed time: 23 minutes, 53 seconds
>>>>> Since beginning: 13 hours, 22 minutes, 13 seconds
>>>>> ***
>>>>>
>>>>> Rank 0 is creating seeds [1/51238092]
>>>>> Rank 0 is purging edges [39300001/51238092]
>>>>> Rank 0 is selecting optimal read markers [870001/8725982]
>>>>> Rank 0 is selecting optimal read markers [880001/8725982]
>>>>> Rank 0 is selecting optimal read markers [890001/8725982]
>>>>> [floccinaucinihilipilification:11454] *** Process received signal ***
>>>>> Rank 0 is selecting optimal read markers [900001/8725982]
>>>>> Rank 0 is purging edges [41400001/51238092]
>>>>> Rank 0 is purging edges [39350001/51238092]
>>>>> Rank 0 is selecting optimal read markers [910001/8725982]
>>>>> [floccinaucinihilipilification:11454] Signal: Segmentation fault (11)
>>>>> [floccinaucinihilipilification:11454] Signal code: Address not mapped (1)
>>>>> [floccinaucinihilipilification:11454] Failing at address: (nil)
>>>>> Rank 0 is selecting optimal read markers [920001/8725982]
>>>>> Rank 0 is selecting optimal read markers [930001/8725982]
>>>>> Rank 0 is selecting optimal read markers [940001/8725982]
>>>>> Rank 0 is purging edges [41450001/51238092]
>>>>> Rank 0 is selecting optimal read markers [950001/8725982]
>>>>> Rank 0 is purging edges [39400001/51238092]
>>>>> Rank 0 is selecting optimal read markers [960001/8725982]
>>>>> Rank 0 is selecting optimal read markers [970001/8725982]
>>>>> Rank 0 is selecting optimal read markers [980001/8725982]
>>>>> Rank 0 is selecting optimal read markers [990001/8725982]
>>>>> Rank 0 is selecting optimal read markers [1000001/8725982]
>>>>> Rank 0 is purging edges [41500001/51238092]
>>>>> Rank 0 is purging edges [39450001/51238092]
>>>>> Rank 0 is selecting optimal read markers [1010001/8725982]
>>>>> Rank 0 is selecting optimal read markers [1020001/8725982]
>>>>> Rank 0 is selecting optimal read markers [1030001/8725982]
>>>>> Rank 0 is selecting optimal read markers [1040001/8725982]
>>>>> Rank 0 has 31000000 vertices
>>>>> Rank 0 is selecting optimal read markers [1050001/8725982]
>>>>> Rank 0 is purging edges [41550001/51238092]
>>>>> Rank 0 is selecting optimal read markers [1060001/8725982]
>>>>> Rank 0 is purging edges [39500001/51238092]
>>>>> Rank 0 is selecting optimal read markers [1070001/8725982]
>>>>> Rank 0 is selecting optimal read markers [1080001/8725982]
>>>>> Rank 0 is selecting optimal read markers [1090001/8725982]
>>>>> Rank 0 is selecting optimal read markers [1100001/8725982]
>>>>> Rank 0 is selecting optimal read markers [1110001/8725982]
>>>>> Rank 0 is purging edges [41600001/51238092]
>>>>> Rank 0 is purging edges [39550001/51238092]
>>>>> Rank 0 is selecting optimal read markers [1120001/8725982]
>>>>> Rank 0 is selecting optimal read markers [1130001/8725982]
>>>>> Rank 0 is selecting optimal read markers [1140001/8725982]
>>>>> Rank 0 is selecting optimal read markers [1150001/8725982]
>>>>> Rank 0 is selecting optimal read markers [1160001/8725982]
>>>>> Rank 0 is purging edges [41650001/51238092]
>>>>> Rank 0 is selecting optimal read markers [1170001/8725982]
>>>>> Rank 0 is purging edges [39600001/51238092]
>>>>> [floccinaucinihilipilification:11454] [ 0] /lib/libpthread.so.0
>>>>> [0x7fd87790c020]
>>>>> [floccinaucinihilipilification:11454] [ 1]
>>>>> Ray(_ZN11SeedingData12computeSeedsEv+0x27e) [0x4781ce]
>>>>> [floccinaucinihilipilification:11454] [ 2]
>>>>> Ray(_ZN7Machine10runVanillaEv+0x76) [0x4430e6]
>>>>> [floccinaucinihilipilification:11454] [ 3] Ray(_ZN7Machine5startEv+0xd27)
>>>>> [0x448f27]
>>>>> [floccinaucinihilipilification:11454] [ 4] Ray(main+0x3a) [0x496a8a]
>>>>> [floccinaucinihilipilification:11454] [ 5]
>>>>> /lib/libc.so.6(__libc_start_main+0xf4) [0x7fd8776da4f4]
>>>>> [floccinaucinihilipilification:11454] [ 6]
>>>>> Ray(__gxx_personality_v0+0x1f9) [0x422ee9]
>>>>> [floccinaucinihilipilification:11454] *** End of error message ***
>>>>> Rank 0 is selecting optimal read markers [1180001/8725982]
>>>>> Rank 0 is selecting optimal read markers [1190001/8725982]
>>>>> Rank 0 is selecting optimal read markers [1200001/8725982]
>>>>> Rank 0 is selecting optimal read markers [1210001/8725982]
>>>>> Rank 0 is selecting optimal read markers [1220001/8725982]
>>>>> Rank 0 is purging edges [41700001/51238092]
>>>>> Rank 0 is purging edges [39650001/51238092]
>>>>> Rank 0 is selecting optimal read markers [1230001/8725982]
>>>>> Rank 0 is selecting optimal read markers [1240001/8725982]
>>>>> Rank 0 is selecting optimal read markers [1250001/8725982]
>>>>> Rank 0 is selecting optimal read markers [1260001/8725982]
>>>>> Rank 0 is selecting optimal read markers [1270001/8725982]
>>>>> Rank 0 is purging edges [41750001/51238092]
>>>>> Rank 0 is selecting optimal read markers [1280001/8725982]
>>>>> Rank 0 is purging edges [39700001/51238092]
>>>>> Rank 0 is selecting optimal read markers [1290001/8725982]
>>>>> Rank 0 is selecting optimal read markers [1300001/8725982]
>>>>> Rank 0 is selecting optimal read markers [1310001/8725982]
>>>>> Rank 0 is selecting optimal read markers [1320001/8725982]
>>>>> Rank 0 is selecting optimal read markers [1330001/8725982]
>>>>> Rank 0 is purging edges [41800001/51238092]
>>>>> Rank 0 is purging edges [39750001/51238092]
>>>>> Rank 0 is selecting optimal read markers [1340001/8725982]
>>>>> Rank 0 is selecting optimal read markers [1350001/8725982]
>>>>> Rank 0 is selecting optimal read markers [1360001/8725982]
>>>>> Rank 0 is selecting optimal read markers [1370001/8725982]
>>>>> Rank 0 is selecting optimal read markers [1380001/8725982]
>>>>> Rank 0 is purging edges [41850001/51238092]
>>>>> Rank 0 is selecting optimal read markers [1390001/8725982]
>>>>> Rank 0 is purging edges [39800001/51238092]
>>>>> Rank 0 is selecting optimal read markers [1400001/8725982]
>>>>> Rank 0 is selecting optimal read markers [1410001/8725982]
>>>>> Rank 0 is selecting optimal read markers [1420001/8725982]
>>>>> Rank 0 is selecting optimal read markers [1430001/8725982]
>>>>> Rank 0 is selecting optimal read markers [1440001/8725982]
>>>>> Rank 0 is purging edges [41900001/51238092]
>>>>> Rank 0 is purging edges [39850001/51238092]
>>>>> Rank 0 is selecting optimal read markers [1450001/8725982]
>>>>> Rank 0 is selecting optimal read markers [1460001/8725982]
>>>>> -----------------------------------------------------------------------------
>>>>> It seems that [at least] one of the processes that was started with
>>>>> mpirun did not invoke MPI_INIT before quitting (it is possible that
>>>>> more than one process did not invoke MPI_INIT -- mpirun was only
>>>>> notified of the first one, which was on node n0).
>>>>>
>>>>> mpirun can *only* be used with MPI programs (i.e., programs that
>>>>> invoke MPI_INIT and MPI_FINALIZE). You can use the "lamexec" program
>>>>> to run non-MPI programs over the lambooted nodes.
>>>>> -----------------------------------------------------------------------------
>>>>> balcazar@floccinaucinihilipilification:/project/nicolas/run_RAY>
>>>>>
>>>>>
>>>>> THANKS for your help!
>>>>> Nicolas
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> 2011/8/17 Sébastien Boisvert
>>>>> <[email protected]>
>>>>> Your problem is with LAM/MPI, not with Ray.
>>>>>
>>>>> I never used LAM/MPI.
>>>>>
>>>>>
>>>>> Can you verify that lamboot, mpic++ and mpirun are all from LAM/MPI?
>>>>>
>>>>>
>>>>> Try searching the LAM/MPI mailing list for similar issues.
>>>>>
>>>>>
>>>>>
>>>>> Sébastien
>>>>> ________________________________________
>>>>> From:
>>>>> [email protected]
>>>>> [[email protected]]
>>>>> on behalf of Nicolas Balcazar
>>>>> [[email protected]]
>>>>> Sent: 16 August 2011 20:20
>>>>> To: Sébastien Boisvert
>>>>> Subject: Re: RE : RAY just on LAM/MPI?
>>>>>
>>>>> Thank you for your quick reply.
>>>>> Indeed, I had to start lamboot first. Luckily it was already installed
>>>>> on the server; Open-MPI is not installed :( and I cannot install anything
>>>>> there. mpic++ is in my path, so I didn't change the MPICXX option.
>>>>>
>>>>> But now still i get this error message:
>>>>>
>>>>> cannot execute binary file
>>>>> -----------------------------------------------------------------------------
>>>>> It seems that [at least] one of the processes that was started with
>>>>> mpirun did not invoke MPI_INIT before quitting (it is possible that
>>>>> more than one process did not invoke MPI_INIT -- mpirun was only
>>>>> notified of the first one, which was on node n0).
>>>>>
>>>>> mpirun can *only* be used with MPI programs (i.e., programs that
>>>>> invoke MPI_INIT and MPI_FINALIZE). You can use the "lamexec" program
>>>>> to run non-MPI programs over the lambooted nodes.
>>>>> -----------------------------------------------------------------------------
>>>>>
>>>>> Is there something I can do without having to install anything
>>>>> complicated on the server?
>>>>> I'm really looking forward to using Ray!
>>>>>
>>>>> Thanks again,
>>>>> Nicolas
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> 2011/8/16 Sébastien Boisvert
>>>>> <[email protected]>
>>>>> MPI is a requirement for Ray.
>>>>>
>>>>> The messages will just transit in shared memory if all processes are on
>>>>> the same host.
>>>>>
>>>>> You can run Ray on a single core too; the single MPI rank will then send
>>>>> messages to itself.
>>>>>
>>>>>
>>>>>
>>>>> Sébastien
>>>>> ________________________________________
>>>>> From:
>>>>> [email protected]
>>>>> [[email protected]]
>>>>> on behalf of Nicolas Balcazar
>>>>> [[email protected]]
>>>>> Sent: 15 August 2011 23:57
>>>>> To: [email protected]
>>>>> Subject: Re: RAY just on LAM/MPI?
>>>>>
>>>>> Hi Sebastien,
>>>>>
>>>>> sorry, it's me again;
>>>>> I just wanted to clarify my question. I work on a shared machine with 32
>>>>> cores and 132 GB of RAM.
>>>>> I don't think I really need lamboot/LAM/MPI.
>>>>> So: can I make Ray work without that?
>>>>>
>>>>> THANKS!! :)
>>>>>
>>>>> Nicolas
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Tue, Aug 16, 2011 at 5:47 AM, Nicolas Balcazar
>>>>> <[email protected]>
>>>>> wrote:
>>>>> Hi Sébastien,
>>>>>
>>>>> I tried to use Ray, and after lining all my data up nicely, I get this
>>>>> error message right after starting the program from the command line:
>>>>>
>>>>> mpirun -np 25 ../../RAY/Ray-1.6.1/Ray-Large-kmers-gz/Ray -k 101 -p
>>>>> ../data_solexa/C_GCCAAT_L001_R1_001.fastq
>>>>> ../data_solexa/C_GCCAAT_L001_R2_001.fastq
>>>>> -----------------------------------------------------------------------------
>>>>> It seems that there is no lamd running on the host
>>>>> tiaotiao.molgen.mpg.de.
>>>>>
>>>>> This indicates that the LAM/MPI runtime environment is not operating.
>>>>> The LAM/MPI runtime environment is necessary for the "mpirun" command.
>>>>>
>>>>> Please run the "lamboot" command to start the LAM/MPI runtime
>>>>> environment. See the LAM/MPI documentation for how to invoke
>>>>> "lamboot" across multiple machines.
>>>>> -----------------------------------------------------------------------------
>>>>>
>>>>> If you could tell me how I can make it work somehow,
>>>>> I would be very thankful!!
>>>>>
>>>>> Thanks,
>>>>> Nicolas
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>
>>>
>>>
>>>
>>>
>>>
>
>
_______________________________________________
Denovoassembler-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/denovoassembler-users