Hi Steve,

Sorry, no -- you'll have to comment out those calls, or the equivalent.

        -- Joe

On 9/25/2015 11:26 AM, Steven Franke wrote:
> Joe -
>
> Is there an external way to turn off the attempts to decode the average in 
> WSJT10? Not a big deal - I’m sure that I can figure out how to change the 
> code to turn it off. As it stands, the program is calling the decoder to 
> attempt to decode the averaged signal immediately after it tries to decode 
> the current file - and that over-writes the kvasd.dat file.
>
> Steve
>
>
>> On Sep 25, 2015, at 9:37 AM, Joe Taylor<j...@princeton.edu>  wrote:
>>
>> Hi Steve,
>>
>> Just a quick reply, after reading your thoughtful comments.  Your
>> thinking about what to try next is very similar to mine -- in
>> particular, making use of the second-best symbol value as a substitute
>> for simple erasures.
>>
>> I should have mentioned that in my tests with ntrials=10^5 and 10^6 I
>> (somewhat arbitrarily) changed the test on ncandidates from an upper
>> limit of 5000 to an upper limit of ntrials/2.
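>>
>> For concreteness, a minimal sketch of that change (ncandidates and ntrials
>> are the quantities named above; everything else, including the function
>> wrapper, is illustrative and the exact test in sfrsd.c may look different):
>>
>> /* Decide whether enough candidate codewords have been collected. */
>> static int enough_candidates(int ncandidates, int ntrials)
>> {
>>     /* previous fixed cap:  return ncandidates >= 5000;            */
>>     return ncandidates >= ntrials/2;  /* cap now scales with trials */
>> }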
>>
>> I will continue to play, as time permits, and will report any
>> interesting results.
>>
>>      -- Joe
>>
>> On 9/25/2015 9:34 AM, Steven Franke wrote:
>>> Hi Joe -
>>> Thanks for the new results! Getting ready for class here - so just a quick 
>>> note. Your results, though more extensive than mine, point in the same 
>>> direction. As it stands, sfrsd seems to be around 0.5 dB shy of kvasd. I 
>>> have a long list of ideas to try to do better - but it will take time to 
>>> sort through them.
>>>
>>> To answer your question on optimizing metrics - the short answer is that 
>>> they haven’t been optimized at all. Feel free to play with them. 
>>> Intuitively, I like the one that I was using - but last night I played with 
>>> yours and see that it’s not bad either. One task that I have on my list is 
>>> to collect statistics on the probability of the second-best-symbol estimate 
>>> being the correct symbol, as a function of the two probability metrics. I 
>>> expect there to be a sweet spot - where p2/p1 is near 0.5 and p1 is neither 
>>> too small nor too large - in which mr2sym is going to be useful. Collecting 
>>> these statistics would help us decide which symbol metrics (yours or mine) 
>>> are “better” for this purpose. This would also point the way toward a true 
>>> Chase-type algorithm where we actually use the second-most-reliable 
>>> symbols. At the moment, the only metric that is used is the probability 
>>> associated with mrsym - and that is used in a crude way, as you pointed 
>>> out. The idea would be to have some finite probability of replacing a 
>>> symbol with mr2sym instead of marking it as an erasure - but only for 
>>> symbols where mr2sym is reasonably likely to be the correct symbol. If/when 
>>> we replace a symbol with mr2sym instead of marking it as an erasure, we’ll 
>>> need to tell BM to re-calculate the syndromes - which is done using the 
>>> last argument that I added to KA9Q’s routine.
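>>>
>>> Roughly, I have in mind something like the following (purely a sketch: the 
>>> thresholds are untested placeholders and, apart from mrsym, mr2sym, p1 and 
>>> p2, the names are made up):
>>>
>>> #include <stdlib.h>
>>>
>>> /* For each of the 63 symbols, keep the most reliable value, erase it, or
>>>    substitute the second most reliable value when that second choice is
>>>    itself plausibly correct.  perase[] is whatever per-symbol erasure
>>>    probability is already in use; whenever mr2sym is substituted, the BM
>>>    routine must be told to recompute its syndromes. */
>>> void build_trial(const int mrsym[63], const int mr2sym[63],
>>>                  const float p1[63], const float p2[63],
>>>                  const float perase[63], int trial[63],
>>>                  int erapos[63], int *nera)
>>> {
>>>   *nera = 0;
>>>   for (int i = 0; i < 63; i++) {
>>>     float r = (float)rand() / (float)RAND_MAX;
>>>     float ratio = (p1[i] > 0.0f) ? p2[i]/p1[i] : 0.0f;
>>>     trial[i] = mrsym[i];
>>>     if (ratio > 0.4f && ratio < 0.7f && p1[i] > 0.1f && p1[i] < 0.9f
>>>         && r < 0.5f) {
>>>       trial[i] = mr2sym[i];        /* try the second-best symbol   */
>>>     } else if (r < perase[i]) {
>>>       erapos[(*nera)++] = i;       /* mark this position as erased */
>>>     }
>>>   }
>>> }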
>>>
>>> Last night, I found a potentially powerful metric to use as a stopping 
>>> criterion. Note that, at present, the algorithm runs all ntrials before 
>>> stopping. Hence, it is dog-slow, as you found out. I found that a good 
>>> stopping criterion can be created by calculating nhard-n2ndbest where nhard 
>>> is the number of symbols where the received symbol vector differs from the 
>>> decoded codeword (the number of “hard” errors) and n2ndbest is the number 
>>> of places where the second most reliable symbol is the *same* as the symbol 
>>> in the decoded codeword. I found that a bad codeword rarely has n2ndbest 
>>> larger than 2. I also found that a threshold like nhard-n2ndbest<45 (I’m 
>>> not sure if 45 is the best - but you get the idea) works well.
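>>>
>>> In code, that test could look something like this (just a sketch; rxsym and 
>>> codeword are illustrative names for the received most-reliable symbols and 
>>> the candidate returned by BM, and 45 is the tentative threshold):
>>>
>>> /* Return 1 if a candidate codeword looks good enough to stop the search. */
>>> static int good_enough(const int rxsym[63], const int mr2sym[63],
>>>                        const int codeword[63])
>>> {
>>>   int nhard = 0, n2ndbest = 0;
>>>   for (int i = 0; i < 63; i++) {
>>>     if (codeword[i] != rxsym[i])  nhard++;     /* hard disagreement   */
>>>     if (codeword[i] == mr2sym[i]) n2ndbest++;  /* 2nd choice is right */
>>>   }
>>>   return (nhard - n2ndbest) < 45;
>>> }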
>>>
>>> With this in mind - the average running time can be dramatically reduced if 
>>> we simply stop (break out of the ntrials loop) when we find a codeword that 
>>> meets this type of criterion.
>>>
>>> BTW, as currently configured, the ntrials loop breaks out after it finds 
>>> 5000 codewords. This limits the effectiveness of increasing ntrials to 
>>> large numbers (like 100000). To get the full benefit of large ntrials, you 
>>> would probably need to increase that threshold to 20k or more - or just 
>>> eliminate the threshold and implement something like the stopping criterion 
>>> that I mentioned above.
>>>
>>> It is instructive to watch the results in sfrsd.log scroll by - you can see 
>>> how sensitive the number of “found” codewords is to the selected erasure 
>>> probabilities and the metrics…
>>>
>>> If you have time to play, feel free to mess around with sfrsd - or create 
>>> a jtrsd and we can merge them later.
>>> Steve k9an
>>>
>>>> On Sep 25, 2015, at 8:11 AM, Joe Taylor<j...@princeton.edu>   wrote:
>>>>
>>>> Hi Steve and all,
>>>>
>>>> I've added more lines to the table summarizing my tests of decoding
>>>> weak, isolated  JT65A signals.  As before, the final number on each line
>>>> is the number of valid decodes from a thousand files at
>>>> S/N=-24 dB.
>>>>
>>>> 1. WSJT-X (BM only)                       2
>>>> 2. WSJT (BM only)                         5
>>>> 3. WSJT-X + kvasd                       189
>>>> 4. WSJT-X + kvasd (thresh0=1, ntest>0)  207
>>>> 5. WSJT-X + sfrsd (Linux)               302
>>>> 6. WSJT-X + sfrsd (Win32)               309
>>>> 7. WSJT-X + sfrsd (Linux, thresh0=1)    348
>>>> 8. WSJT-X + sfrsd (Win32, thresh0=1)    350
>>>> 9. WSJT + kvasd (Linux)                 809
>>>> 10. WSJT + kvasd (Win32)                809
>>>>
>>>> 11. WSJT + sfrsd (10000)                464
>>>> 12. WSJT + sfrsd (SFM no ntest 10000)   519
>>>> 13. WSJT + sfrsd (SFM no ntest 20000)   543
>>>> 14. WSJT + sfrsd (SFM no ntest 1000)    342
>>>> 15. WSJT + sfrsd (SFM no ntest 100000)  706
>>>> 16. WSJT + sfrsd (SFM no ntest 1000000) 786  (took 11 hours!)
>>>>
>>>> 17. WSJT + kvasd (SFM no ntest)         897
>>>>
>>>> Test 11 simply replaced kvasd with sfrsd, with no other changes.  Tests
>>>> 12-16 used Steve's metrics for the symbol probabilities in demod64a.f90
>>>> and commented out the following lines in extract.F90:
>>>>
>>>> !  if(ntest.lt.50 .or. nlow.gt.20) then
>>>> !     ncount=-999                         !Flag bad data
>>>> !     go to 900
>>>> !  endif
>>>>
>>>> The number of random erasure vectors (ntrials) is shown in parentheses
>>>> for each of these runs.  Test 17 is a final run using kvasd.
>>>>
>>>> With "reasonable" numbers of random erasure trials, sfrsd seems to be
>>>> something like 0.5 dB shy of the sensitivity of kvasd.  Steve, I know
>>>> you have already done some parameter-tuning for sfrsd, but possibly
>>>> there's still room for improvement?  How did you choose the values 0.5
>>>> 0.6 0.6 0.6 0.8 and the step locations 32 128 196 256, lines 197-207 in
>>>> sfrsd.c?  Have you thought about other possible ways to speed up the
>>>> search by eliminating some candidate erasure vectors?
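>>>>
>>>> If I read lines 197-207 correctly, they amount to a stepped mapping of
>>>> roughly this shape (a paraphrase, not the actual code; which probability
>>>> goes with which range of the symbol metric is exactly what I am asking):
>>>>
>>>> static float erasure_prob(int metric)
>>>> {
>>>>   if (metric < 32)  return 0.5f;
>>>>   if (metric < 128) return 0.6f;
>>>>   if (metric < 196) return 0.6f;
>>>>   if (metric < 256) return 0.6f;
>>>>   return 0.8f;
>>>> }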
>>>>
>>>>    -- Joe, K1JT
>>>>
>>>
>>>
>>
>
>

