On 10/4/16 6:26 AM, Graham / KE9H wrote:
Larry:

You have multiple problems with the way you are trying to define
"time error."

I think you are defining it as the time error of the signal coming out of
your receivers/decoders.

You are blending all the error/delay sources together, and you need to
break them apart, since each one will have a different solution, or method
of management.

First, the reflector has already jumped in and helped you with the
definition of absolute time.  You can get single-digit millisecond accuracy
(with some caveats and bewares) from NTP, for stations at different
locations.  You should be able to get single-digit microsecond accuracy (or
better) with an appropriate GPS-based timing system. You cannot get these
levels of accuracy out of the native time system on Windows. That is more
like single-digit seconds.

Second, you have some serious signal processing latency delays in your
receivers/demodulators/decoders.  Depending on how the designer has dealt
with streaming and buffering, particularly with the (buffered) connections
between the stages, these processing latency delays may be constant or
variable, or perhaps adjustable.  The Windows Sound system is horrible from
a latency/stability standpoint. You are probably feeding your back-end
demodulators/decoders through it. You will need to break apart your system
(transmit and receive) into modules or stages, and characterize each one
for latency. Beware of (uncontrolled) buffers at the interfaces. You
generally need to pick a reference point, such as the antenna port, and
correct everything to that reference point.
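The bookkeeping for that kind of latency budget can be sketched roughly like this. The stage names and delay numbers are purely hypothetical placeholders for illustration, not measurements of any real receiver chain:

```python
# Hypothetical per-stage latency budget (in seconds), referenced back to
# the antenna port. Constant delays can be measured once; variable ones
# (buffers, the Windows sound system) need bounds or per-run calibration.
STAGE_LATENCY = {
    "adc_pipeline":  0.000020,   # fixed hardware pipeline delay
    "usb_buffer":    0.004000,   # nominal; beware, may vary per transfer
    "sound_system":  0.030000,   # OS audio path, highly variable in practice
    "demod_filters": 0.012000,   # FIR group delay, constant by design
}

def antenna_time(timestamp_at_decoder):
    """Correct a decoder-side timestamp back to the antenna-port
    reference point by subtracting the summed per-stage latencies."""
    return timestamp_at_decoder - sum(STAGE_LATENCY.values())
```

The point of the table is that each entry has its own character (constant, variable, or adjustable), so each gets characterized separately before being summed.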

Third, you seem to be running a portion of the (SDR) receivers and the
demodulators/decoders on computers with general-purpose operating systems.
(Like Windows, which is NOT a real-time operating system.)  That means that
the response time of the system to a request for computing resources can be
quite variable (microseconds to tens of milliseconds typically, with rare
excursions into the single-digit seconds). The solution to this problem is
either to run on a very lightly loaded computer, or to switch to a real-time
OS, such as Linux with a real-time kernel. This does not cure the problem,
but it does put bounds on it.

--- Graham / KE9H




One approach that seems to work fairly well, as long as you can post-process (or your "turnaround latency" can be in the "seconds" bucket), is to record a time reference signal that is added to the original RF signal. A pilot tone, or a modulated tone, somewhat away from your desired signal, generated by a high-quality time reference (maybe you have an XO, and you phase-modulate it with the 1pps from your GPS receiver).

Then, in your downconverted, digitized, and filtered data, you look for that time reference and use it directly, rather than trying to back out all the (non-deterministic) delays through the audio processing chain.
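A rough sketch of pulling the reference out of the recorded data, assuming the pilot's waveform is known so a simple sliding correlation can locate it. The sample rate, tone frequency, and burst placement here are made up for illustration:

```python
import math

def correlate_peak(samples, template):
    """Return the sample offset where a known reference template best
    matches the received samples (brute-force sliding dot product)."""
    best_off, best_val = 0, float("-inf")
    for off in range(len(samples) - len(template) + 1):
        val = sum(samples[off + i] * template[i] for i in range(len(template)))
        if val > best_val:
            best_off, best_val = off, val
    return best_off

# Hypothetical example: a 1 kHz tone burst buried at a known offset
# in an otherwise quiet record, sampled at 8 kHz.
fs = 8000.0
tone = [math.sin(2 * math.pi * 1000 * n / fs) for n in range(64)]
rx = [0.0] * 500 + tone + [0.0] * 200
print(correlate_peak(rx, tone))   # -> 500
```

In a real system the correlation would run against the downconverted pilot channel, and the recovered offset ties a specific sample index to the GPS-derived epoch, bypassing the audio-chain delays entirely.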

By the way, Windows *can* be very good at synchronizing audio: otherwise audio and video wouldn't play together in media, high-performance gaming wouldn't work, etc. The problem is that it is a royal, giant pain to get it to work. You need a lot of knowledge and experience with exactly how Windows handles media streams, all those countless APIs, etc. And it is different for each version of Windows. (Realistically, Linux and Mac OS X are no better: the nature of the difficulties is different, but they're still there.)



The typical amateur radio spectrogram program or demodulator probably doesn't do that: they use a simple FIFO pipeline and make no claim that what's on the screen matches what's at the antenna at a particular instant, as long as the order and duration are eventually correct. If you're decoding CW with CW Skimmer or decoding PSK31, you probably aren't doing full break-in between the dots and dashes or characters: a few tenths of a second of random lag doesn't make any difference (especially since the signal you're receiving arrives over a non-deterministic-delay skywave path).


My philosophy (and the one that wound up embedded in our design of a Software Defined Radio framework for space radios) is that software handles the *management* of timing-critical processing implemented in an FPGA. Sample-accurate timing is done basically in hardware, and software (running on whatever OS - we use RTEMS, which is pretty good real time, but...) talks to the hardware using a model which says, in effect: don't try to synchronize across this interface at resolutions finer than 1 millisecond. (For a non-real-time OS, I'd try for tens of milliseconds.)

For a simple instance, we latch a free-running counter driven from an OCXO with the GPS 1pps. That's in hardware. The software just has to guarantee that it reads the latch at least once a second, and we can collect the data we're interested in. If it can't guarantee that, it has to be able to figure out what happened: whether we missed a latch event (i.e. the count increased by twice the expected increment) or we read too often (the count didn't change).
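That sanity check on successive latch reads might look like the following sketch; the 10 MHz nominal step is an assumption standing in for whatever the OCXO actually runs at:

```python
def classify_latch_read(prev_count, new_count, nominal_step):
    """Classify a 1pps latch read against the previous latched value.
    nominal_step is the expected counter increment per second
    (e.g. 10_000_000 for a 10 MHz OCXO)."""
    delta = new_count - prev_count
    if delta == 0:
        return "reread"        # read again before a new 1pps latch arrived
    if abs(delta - nominal_step) < nominal_step // 2:
        return "ok"            # one tick elapsed, within tolerance
    if abs(delta - 2 * nominal_step) < nominal_step // 2:
        return "missed_one"    # we slept through exactly one latch event
    return "error"             # something worse happened; resynchronize

print(classify_latch_read(0, 10_000_005, 10_000_000))   # -> ok
print(classify_latch_read(0, 20_000_010, 10_000_000))   # -> missed_one
```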

Then, if we need an event to occur at a precise time, we can calculate (in our own sweet non-deterministic time) what counter value that time would correspond to. We load that into another register, which is compared with the free running counter, and the equality triggers the event.

Obviously, we need to schedule the future event sufficiently far into the future that we have time to do the calculations. But figuring out a "worst-case bounded maximum time" is a lot easier than a "must calculate every time within 1 millisecond" hard-real-time kind of constraint.
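The time-to-counter conversion and the scheduling margin can be sketched like so. The tick rate, calibration point, and worst-case calculation time are illustrative values, not numbers from our actual radio:

```python
def counter_value_for(target_time, ref_time, ref_count, ticks_per_sec):
    """Convert a desired event time to the free-running counter value to
    load into the compare register, from one (time, count) calibration
    point and an assumed constant tick rate."""
    return ref_count + round((target_time - ref_time) * ticks_per_sec)

def schedule(target_time, now, ref_time, ref_count, ticks_per_sec,
             worst_case_calc_time):
    """Only schedule if the event is far enough out to cover our own
    worst-case (bounded, but non-deterministic) calculation time."""
    if target_time - now < worst_case_calc_time:
        raise ValueError("event too soon to schedule safely")
    return counter_value_for(target_time, ref_time, ref_count, ticks_per_sec)

# Illustrative: 10 MHz counter, latch at t = 100.0 s read 1_000_000_000,
# and we allow ourselves 50 ms of slop to do the arithmetic.
print(schedule(100.25, 100.0, 100.0, 1_000_000_000, 10_000_000, 0.05))
# -> 1002500000
```

The hardware compare against the free-running counter then fires the event with sample accuracy, regardless of when (within the margin) the software got around to loading the register.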

We can also be very fancy in our model of "time vs counter reading". We can, for instance, use the last 10,000 latched 1pps tick values to estimate the future clock rate and drift (essentially what GPSDO algorithms do). Or we can be simple: use the delta between two successive ticks as the estimate of the clock rate, assume it's unchanging, and go from there.
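A middle ground between those two extremes is a least-squares fit of latched count against tick number; here is a minimal sketch, with a made-up OCXO running 3 ppm fast of 10 MHz as the example data:

```python
def fit_rate(latched_counts):
    """Least-squares estimate of counter ticks per second from a list of
    successive 1pps-latched counter values (one per second). Returns
    (rate, intercept). A real GPSDO-style filter would also model drift."""
    n = len(latched_counts)
    xs = range(n)
    xbar = sum(xs) / n
    ybar = sum(latched_counts) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, latched_counts))
    den = sum((x - xbar) ** 2 for x in xs)
    rate = num / den
    return rate, ybar - rate * xbar

# Illustrative data: an OCXO ticking 10_000_030 counts per 1pps interval.
ticks = [i * 10_000_030 for i in range(10)]
rate, _ = fit_rate(ticks)
print(round(rate))   # -> 10000030
```

Averaging over many ticks also washes out the 1pps jitter of the GPS receiver, which single-delta estimation cannot do.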

One of the fascinating problems that arises for us (and will arise for you) is that you might need to synchronize a good clock against a bad one. That is, in order to have events occur in multiple places at the same "time", you essentially need to predict the behavior of the worst clock in the system, and then have everyone else follow that.


I'd also comment that testing such a system is challenging: you need to generate a test signal that occurs at a precisely known time, and see whether your system captures it and it shows up at the right sample time. So the latency of your test equipment (hardware again) needs to be understood and characterized.



_______________________________________________
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.
