On 24/05/2015 00:03, Steven Franke wrote:
> Hi Joe et al,
Hi Steve,

...
> Here’s a summary of some comparisons between old and new:
>
> 100 test files with -30 dB SNR:
> old “new” wsprd: 66 successful decodes in 30 seconds
> new “new” wsprd: 75 decodes, 2 bad decodes (73 successful decodes) in 26 
> seconds.
> 10% more decodes, 13% reduction in computation time.
>
> On my test suite of 214 “off-the-air” 20m files the comparison between the 
> old and new decoders is as follows:
> old “new” wsprd:  1306 successful decodes (no bad decodes) in 164s
> new “new” wsprd: 1330 successful decodes (1331 decodes, 1 bad decode) in 138s
> 2.2% more decodes, 16% reduction in computation time.
IMHO any setting that increases the number of bad decodes is not worth 
the time saved, unless the decode time is in danger of exceeding the 
time available before the next decode cycle. I say this because 
unfiltered bad decodes produce erroneous outliers, e.g. on path plots, 
that can easily mislead naive users trying to interpret the results.

I wonder if it might even be better to choose this trade-off setting 
dynamically, so that the decoder uses the maximum time available to 
ensure the best accuracy.
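
Something along these lines, as a rough sketch only (the effort levels, 
names and time estimates below are my own invention, not actual wsprd 
options):

  #include <chrono>

  // Hypothetical effort levels; wsprd's real control is a numeric
  // depth/quickness option, this is just to illustrate the idea.
  enum class Effort { Fast, Normal, Deep };

  // Pick the deepest effort that should still finish before the next
  // two-minute WSPR cycle begins, given rough per-level time estimates.
  Effort choose_effort(std::chrono::seconds time_remaining,
                       std::chrono::seconds deep_estimate,
                       std::chrono::seconds normal_estimate)
  {
      if (time_remaining > deep_estimate)   return Effort::Deep;
      if (time_remaining > normal_estimate) return Effort::Normal;
      return Effort::Fast;   // about to run into the next cycle
  }

The per-level estimates could simply be running averages of previous 
decode passes on the same machine.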

Another possibility might be to attempt to detect bad decodes, either 
at source or on the server, with some callsign and grid validation 
heuristic. I do not know the internals of either, so this may already 
be being done.
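
If it is not, a crude plausibility check might look something like the 
following (again only a sketch; the patterns are deliberately loose, 
reject only obvious garbage, and would need extra handling for compound 
callsigns and 6-character grids):

  #include <regex>
  #include <string>

  // Heuristic plausibility check on a decoded callsign and Maidenhead
  // grid square; it does not attempt to validate real allocations.
  bool plausible_spot(const std::string& call, const std::string& grid)
  {
      // 1-2 character prefix, one digit, 1-3 letter suffix (standard
      // calls only).
      static const std::regex call_re{R"([A-Z0-9]{1,2}[0-9][A-Z]{1,3})"};
      // Four-character grid: field letters A-R, then square digits.
      static const std::regex grid_re{R"([A-R]{2}[0-9]{2})"};
      return std::regex_match(call, call_re)
          && std::regex_match(grid, grid_re);
  }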

...

73
Bill
G4WJS.

