[time-nuts] Re: PPS latency? User vs kernel mode

2021-12-12 Thread Javier Herrero

Hello,

I have not tried this with a Raspberry Pi (and would not...), but in order 
to avoid interrupt latency and jitter, my approach, using a Zynq, is:


- Implement a counter in the FPGA for use as the Linux clock source, 
instead of the ARM timer (a rough sketch follows below)
- Implement hardware timestamping of the PPS, and generate the interrupt 
(and since I was there, I use an external clock source for the counter, 
like the GPSDO that also provides the PPS signal, instead of the usually 
crappy XO that drives the Zynq clocks)
- And then have a lot of fun convincing the kernel to use the FPGA 
counter as clock source, and converting the raw PPS timestamps to wall-clock 
time in the kernel, to be able to give a good timestamp value to ntp/chrony 
(see the second sketch below)
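
For the first step, a rough sketch of such a clocksource driver is below. 
This is not my actual code; the AXI base address, the 32-bit register 
layout and the 30.72 MHz rate are just placeholders:

    #include <linux/clocksource.h>
    #include <linux/io.h>
    #include <linux/module.h>

    #define FPGA_CNT_BASE 0x43c00000UL  /* AXI address of the counter (placeholder) */

    static void __iomem *cnt_reg;

    /* Read the free-running 32-bit counter implemented in the PL */
    static u64 fpga_counter_read(struct clocksource *cs)
    {
            return readl_relaxed(cnt_reg);
    }

    static struct clocksource fpga_cs = {
            .name   = "fpga-counter",
            .rating = 400,              /* higher than the ARM architected timer */
            .read   = fpga_counter_read,
            .mask   = CLOCKSOURCE_MASK(32),
            .flags  = CLOCK_SOURCE_IS_CONTINUOUS,
    };

    static int __init fpga_cs_init(void)
    {
            cnt_reg = ioremap(FPGA_CNT_BASE, 4);
            if (!cnt_reg)
                    return -ENOMEM;
            /* counter clocked at 30.72 MHz from the external source */
            return clocksource_register_hz(&fpga_cs, 30720000);
    }
    module_init(fpga_cs_init);

    MODULE_LICENSE("GPL");

The .rating field is what eventually convinces the kernel to prefer this 
counter over the ARM timer.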


I have implemented this approach both with a u-blox M8F (using its 
30.72 MHz signal as the source for the timer clock) and with a 10 MHz GPSDO.
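
For the second and third steps, the PPS interrupt handler ends up doing 
something like the sketch below. Again, this is not the real driver: 
fpga_read(), CNT_VALUE and CNT_PPS_LATCH are made-up names, and the real 
code also has to handle counter wrap and error paths:

    #include <linux/interrupt.h>
    #include <linux/math64.h>
    #include <linux/pps_kernel.h>
    #include <linux/time64.h>

    #define TICK_HZ 30720000ULL         /* counter clock, e.g. the M8F's 30.72 MHz */

    static irqreturn_t fpga_pps_irq(int irq, void *dev_id)
    {
            struct pps_device *pps = dev_id;  /* from pps_register_source() at probe */
            struct pps_event_time ts;
            u64 now, latched, delta_ns;

            pps_get_ts(&ts);            /* wall clock "now", grabbed as early as possible */

            now     = fpga_read(CNT_VALUE);     /* free-running counter, current value */
            latched = fpga_read(CNT_PPS_LATCH); /* value captured in hardware at the PPS edge */

            /* ticks elapsed between the PPS edge and this point in the handler */
            delta_ns = div64_u64((now - latched) * NSEC_PER_SEC, TICK_HZ);

            /* back-date the timestamp so it refers to the edge, not to the interrupt */
            pps_sub_ts(&ts, ns_to_timespec64(delta_ns));

            pps_event(pps, &ts, PPS_CAPTUREASSERT, NULL);
            return IRQ_HANDLED;
    }

chrony/ntpd then just sees a normal /dev/ppsN device whose timestamps refer 
to the edge rather than to whenever the interrupt happened to get serviced.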


Best regards,

Javier, EA1CRB

On 12/12/21 22:55, Poul-Henning Kamp wrote:



> Interrupts are no longer hardware phenomena, but bus transactions
> which must lay claim to one or more busses, send a formatted message
> which is received by some kind of "interrupt prioritizer" which,
> again, may or may not send another message on another kind of bus
> to the instruction sequencer in one or more CPU cores.
>
> Both of these message transmissions will very likely involve
> clock-domain-crossings.
>
> The good news is that the per-interrupt overhead is lower, thanks to
> interrupts being 'gently woven into' the instruction stream, instead
> of hitting it with a sledgehammer.
>
> But the latency and jitter are literally all over the place...
>
> Fortunately a lot of "counter-module" hardware can be used
> to hardware-timestamp signals, even if the design does not
> exactly support it.
>
> For instance, the code I wrote for the Soekris 4501 uses two
> hardware counters:
>
> The first one, free-running, is the "timecounter" which the system
> clock is based on.
>
> The second one starts counting at the same rate as the first
> when the PPS signal comes in.
>
> When the CPU eventually comes around to reading both counters, it
> subtracts the second from the first to figure out when the hardware
> signal happened.




[time-nuts] Re: PPS latency? User vs kernel mode

2021-12-12 Thread Poul-Henning Kamp


Interrupts are no longer hardware phenomena, but bus transactions
which must lay claim to one or more busses, send a formatted message
which is received by some kind of "interrupt prioritizer" which,
again, may or may not send another message on another kind of bus
to the instruction sequencer in one or more CPU cores.

Both of these message transmissions will very likely involve
clock-domain-crossings.

The good news is that the per-interrupt overhead is lower, thanks to
interrupts being 'gently woven into' the instruction stream, instead
of hitting it with a sledgehammer.

But the latency and jitter are literally all over the place...

Fortunately a lot of "counter-module" hardware can be used
to hardware-timestamp signals, even if the design does not
exactly support it.

For instance, the code I wrote for the Soekris 4501 uses two
hardware counters:

The first one, free-running, is the "timecounter" which the system
clock is based on.

The second one starts counting at the same rate as the first
when the PPS signal comes in.

When the CPU eventually comes around to reading both counters, it
subtracts the second from the first to figure out when the hardware
signal happened.
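
In other words (not the actual Soekris code, just the arithmetic), assuming
both counters run at the same rate:

    #include <stdint.h>

    /*
     * counter1: free-running, the system "timecounter"
     * counter2: held at zero until the PPS edge, then counts at the same rate
     *
     * Because both advance in lock-step after the edge, the latency before
     * the CPU gets around to servicing the interrupt cancels out; only the
     * tiny gap between the two reads is left as error.  The result is
     * counter1's value at the instant the PPS arrived.
     */
    static uint32_t timecounter_at_pps(uint32_t counter1, uint32_t counter2)
    {
            return counter1 - counter2; /* modular subtraction handles wrap */
    }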

-- 
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
p...@freebsd.org | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


[time-nuts] Re: PPS latency? User vs kernel mode

2021-12-12 Thread Jürgen Appel
Hej, 

On Sunday, 12 December 2021 02.48.53 CET Trent Piepho wrote:

> 3. The jitter in the latency of the timestamp's creation after the pulse.
 
> Of these, I think it's safe to assume that 3 is by far the greatest.  And
> at the very least we get an upper bound for that error.
 
> I think you can find some graphs I made in the list archives.  Switching
> from kernel GPIO PPS timestamping to a kernel driver for a hardware
> timestamper was a vast improvement.  I didn't even bother with userspace
> timestamping, it would surely be far worse than kernel mode, having all the
> same sources of error kernel mode does plus several significant other
> sources.

I have given up on using the Raspberry Pi for time-stamping external signals:

I also tried using pps-gpio, but with a graphical user interface present, I 
never managed to reliably keep the worst-case timing jitter below 500 µs.
Starting a Firefox instance and playing a YouTube video in particular provoked 
excessive delays and non-maskable interrupts. Without a graphical interface, 
these events were much rarer.

I recall that I also tried restricting timing-relevant tasks to a separate CPU 
core (this helped a bit, but not really enough). I am not completely sure 
whether fiddling with the timer IRQ registers of the interrupt controller 
really disabled all timer IRQs on that core; I just remember that the 
documentation on how to write the correct dts files for doing so was close to 
non-existent, so I might have done that incorrectly...

Without a hardware time stamper for inputs and a programmable hardware timer 
for outputs, I would not trust the RPi for timing purposes.
 
Cheers,
Jürgen