Joe - 

I hope that you have received the zip file containing the versions of the key 
routines that produced the results that I reported yesterday. I sent it to your 
Princeton email because the attached zip file caused the list to bounce the 
message.

For the record: I just found that lowering the nsoft+nhard threshold to 79 
(i.e. accepting only (nsoft+nhard) < 79) gives me 853 good decodes and 0 bad 
decodes on s3_1000.bin with ntrials=10000 and my version of rsdtest.
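
For clarity, here is a minimal sketch of the acceptance test I am describing, 
in C; the function and variable names are illustrative and may not match what 
is actually in sfrsd2.c:

/* Accept a candidate codeword only if the combined count of corrected
 * hard errors (nhard) and charged erasures (nsoft) stays below the
 * threshold: 81 in my earlier opened-up test, 79 in the run above. */
int accept_codeword(int nhard, int nsoft, int nthresh)
{
    return (nhard + nsoft) < nthresh;
}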

Steve k9an

> On Oct 16, 2015, at 3:33 PM, Joe Taylor <j...@princeton.edu> wrote:
> 
> Hi Steve,
> 
> Sorry to say, my progress has been slow today.  I wanted to start by 
> reproducing your good-looking results using rsdtest.  So far I have not 
> really managed to do so; I can get as many good decodes as you reported, 
> but not (yet?) with the sfrsd2.c attached to your email.
> 
> Could you please give me details on exactly how you built rsdtest?  In 
> what directory, in our SVN tree?  With what Makefile?
> 
>       -- Joe
> 
> 
> On 10/15/2015 10:03 PM, Steven Franke wrote:
>> Joe,
>> Reporting results from this evening’s tests on -24 dB Gaussian-noise, no-fading 
>> (gnnf) data. As always in these tests, the number of test files is 1000.
>> 
>> I started with sfrsd2 from the current r5970 and opened up the acceptance 
>> criterion to nhard+nsoft < 81. The purpose is to find out how many potentially 
>> good decodes are present in the set of candidates presented to the decoder.
>> 
>> I ran this sfrsd2 in rsdtest using matched sf metrics and sf gnnf erasure 
>> probabilities. I used your s3_1000.bin file.
>> 
>> ntrials   ngood
>>       0       5
>>       1      26
>>      10     206
>>     100     511
>>    1000     736
>>   10000     854 (+ 3 bad)
>> 
>> I’d call this very good performance.
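>> 
>> To make the role of ntrials concrete, here is a rough sketch of the stochastic 
>> erasure-trial loop as I think of it. The names, the probability lookup, and the 
>> decoder interface below are illustrative assumptions, not the literal sfrsd2.c 
>> code:
>> 
>> #include <stdlib.h>
>> 
>> #define NN 63   /* JT65 codeword length in symbols */
>> 
>> /* Assumed interface to an errors-and-erasures RS decoder (for example the
>>  * KA9Q-style routine used elsewhere in WSJT-X).  Returns the number of
>>  * corrected symbols, or -1 if this trial failed to decode. */
>> extern int rs_decode(int codeword[NN], int erasure_pos[NN], int num_erasures);
>> 
>> /* Try ntrials random erasure patterns for one candidate and keep the best
>>  * successful decode, judged by nhard + nsoft.  Returns 1 if a codeword
>>  * satisfying (nhard + nsoft) < nthresh was found. */
>> int try_stochastic_decode(const int rxsym[NN], const float perase[NN],
>>                           int ntrials, int nthresh, int best_cw[NN])
>> {
>>     int best_metric = nthresh;
>>     int found = 0;
>> 
>>     for (int itrial = 0; itrial < ntrials; itrial++) {
>>         int cw[NN], eras[NN], erased[NN] = {0};
>>         int neras = 0;
>> 
>>         /* Draw an erasure pattern: erase symbol i with probability
>>          * perase[i], taken from the (matched) erasure-probability table. */
>>         for (int i = 0; i < NN; i++) {
>>             cw[i] = rxsym[i];
>>             if ((float)rand() / (float)RAND_MAX < perase[i]) {
>>                 eras[neras++] = i;
>>                 erased[i] = 1;
>>             }
>>         }
>> 
>>         if (rs_decode(cw, eras, neras) < 0)
>>             continue;                      /* this trial did not decode */
>> 
>>         /* Illustrative split (definitions in sfrsd2.c may differ):
>>          * nhard = corrected symbols outside the erasure set,
>>          * nsoft = erased symbols whose value the decoder changed. */
>>         int nhard = 0, nsoft = 0;
>>         for (int i = 0; i < NN; i++) {
>>             if (cw[i] != rxsym[i]) {
>>                 if (erased[i]) nsoft++;
>>                 else           nhard++;
>>             }
>>         }
>> 
>>         if (nhard + nsoft < best_metric) { /* acceptance criterion */
>>             best_metric = nhard + nsoft;
>>             for (int i = 0; i < NN; i++) best_cw[i] = cw[i];
>>             found = 1;
>>         }
>>     }
>>     return found;
>> }
>> 
>> More trials mean more chances to draw an erasure pattern that covers the bad 
>> symbols, which is why ngood keeps climbing with ntrials in the table above.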
>> 
>> Next, I dropped the sfrsd2.c that was used with rsdtest back into the current 
>> wsjt-x, set it up to use 10000 trials, and zeroed the ntest threshold. Using 
>> the sf metrics and my batch of -24 dB files, I get only 735 decodes - about 
>> the same as I was getting with ntrials=1000 in rsdtest.
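>> 
>> (For concreteness, the screening step I zeroed looks schematically like the 
>> check below. The names are illustrative, and I am assuming ntest is a quality 
>> count where larger is better, so a zero threshold passes every candidate:
>> 
>> /* Candidate screening ahead of the RS decoder (illustrative).  With
>>  * ntest_threshold set to 0, every candidate sync is handed to sfrsd2. */
>> int pass_candidate(int ntest, int ntest_threshold)
>> {
>>     return ntest >= ntest_threshold;
>> }
>> )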
>> 
>> So this seems to support my notion that something may not be completely right 
>> with the syncing or the final peak-up of dt and f0, or something else upstream 
>> of demod64a in this latest version. Maybe the next step should be for me to 
>> drop the same sfrsd2.c into whatever version you used to generate the 
>> s3_1000.bin file. Do you remember which version that was?
>> 
>> Steve k9an
>> 
>>> On Oct 15, 2015, at 6:32 PM, Steven Franke <s.j.fra...@icloud.com> wrote:
>>> 
>>> Joe,
>>> 
>>>> I conclude that for these files the candidate selection is OK
>>>> (preferably with a somewhat higher threshold for ntest), but sfrsd is
>>>> not decoding as many as it "should".  I suspect that for marginal
>>>> signals either different metrics or different values in the probability
>>>> matrix will yield better results.
>>> 
>>> Hmm.
>>> 
>>> I was totally focused on HF performance and the difference in the number of 
>>> BM-only decodes between the old and new sync schemes. I see now that I have 
>>> broken something for the -24 dB Gaussian-noise, no-fading case… 
>>> I’ll investigate.
>>> 
>>> Steve k9an
>>> 
>> 
>> 
> 

