Hi!

From: "Magnus Danielson" <mag...@rubidium.dyndns.org>
You build two sums C and D: one is the sum of the phase samples, and the other is the sum of the phase samples scaled by their index n within the block. From these you can then, using the formulas I provided, calculate the least-squares phase and frequency, and from the least-squares frequency measures you can do PDEV. The up-front processing is thus cheap, and there are methods to combine measurement blocks into longer measurement blocks - that is, decimation - using relatively simple linear processing on the block sums C and D and their respective lengths. The end result is that you can very cheaply decimate data in HW/FW and then extend the properties to arbitrarily long observation intervals using cheap software processing, creating unbiased least-squares measurements along the way. Once the linear algebra of least-squares processing has vanished in a puff of logic, what remains is fairly simple processing with very little memory required. For multi-tau, you can reach O(N log N) type processing rather than O(N^2), which is pretty cool.
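For readers following along, here is roughly what that looks like in code - my own reconstruction from the standard least-squares normal equations, not code from Magnus's paper, and the names (block_t, block_accumulate, block_merge, block_lsq) are just mine:

#include <stdio.h>

typedef struct {
    double C;  /* C = sum over n of x[n]     (plain sum of phase samples) */
    double D;  /* D = sum over n of n * x[n] (index-weighted sum)         */
    long   N;  /* number of samples in the block                          */
} block_t;

/* Accumulate one block of N phase samples into the two running sums. */
static block_t block_accumulate(const double *x, long N)
{
    block_t b = { 0.0, 0.0, N };
    for (long n = 0; n < N; n++) {
        b.C += x[n];
        b.D += (double)n * x[n];
    }
    return b;
}

/* Merge block a with the block b that immediately follows it in time.
 * Samples of b get their indices shifted by a.N, hence the a.N * b.C term. */
static block_t block_merge(block_t a, block_t b)
{
    block_t m;
    m.N = a.N + b.N;
    m.C = a.C + b.C;
    m.D = a.D + b.D + (double)a.N * b.C;
    return m;
}

/* Least-squares fit x[n] ~= phase0 + slope * n over a whole block,
 * straight from the normal equations; divide slope by the sample
 * interval tau0 to get frequency. */
static void block_lsq(block_t b, double *phase0, double *slope)
{
    double N   = (double)b.N;
    double S1  = N * (N - 1.0) / 2.0;                   /* sum of n   */
    double S2  = N * (N - 1.0) * (2.0 * N - 1.0) / 6.0; /* sum of n^2 */
    double det = N * S2 - S1 * S1;
    *slope  = (N * b.D - S1 * b.C) / det;
    *phase0 = (S2 * b.C - S1 * b.D) / det;
}

int main(void)
{
    /* Two consecutive 4-sample blocks of a perfect ramp x[n] = 0.5*n:
     * the merged fit must recover slope 0.5 and phase0 0. */
    double x1[] = { 0.0, 0.5, 1.0, 1.5 };
    double x2[] = { 2.0, 2.5, 3.0, 3.5 };
    block_t m = block_merge(block_accumulate(x1, 4),
                            block_accumulate(x2, 4));
    double phase0, slope;
    block_lsq(m, &phase0, &slope);
    printf("phase0 = %g, slope = %g per sample\n", phase0, slope);
    return 0;
}

The key point is block_merge: concatenating two blocks only requires shifting the later block's indices by the earlier block's length, so D_total = D_a + D_b + N_a*C_b. That is the cheap linear processing that makes the decimation work.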

I had some free time today to study the document you suggested and to run
some experiments in MATLAB - very useful reading and experimenting,
thanks!

Thanks for the kind words!

It looks like the proposed decimation method can be realized
efficiently on the current HW.

I had some free time yesterday and today, so I decided to test the new algorithms on the real hardware (the HW is still the old "ugly construction" prototype, but I hope to find some time to build proper HW - I already have almost all the components I need).

I had to modify the original decimation scheme you propose in the paper so that it fits my HW better; the calculation precision and speed should also be higher now. A nice side effect is that I no longer need to care about phase unwrapping. I can prepare a short description of the modifications and post it here if there is interest.

It works like a charm!

The new algorithm (based on the C and D sum calculation and decimation) uses much less memory - less than 256KB for any gating time/sampling speed. The old one (direct linear regression calculation) was very memory hungry: it used 4 x Sampling_Rate bytes per second of gate time, i.e. 20MB per second at 5MSPS. Now I can fit all the data into the internal memory and have a single-chip digital part of the frequency counter - well, almost single chip ;)

The timestamping speed has also increased and is now limited by the bus/bus matrix switch/DMA unit at a bit more than 24MSPS with continuous real-time data processing. It looks like that is the limit for this chip (I expected slightly higher numbers). The calculation speed is much higher as well (approx. 23ns per timestamp, so up to 43MSPS can be processed in real time). I plan to stay at a 20MSPS rate, or 10MSPS with double time resolution (1.25ns). That will leave plenty of CPU time for the UI/communication/GPS/statistics stuff.
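To give an idea of why the memory stays flat, here is a simplified sketch of a merge cascade in the same spirit - it reuses block_t and block_merge from the sketch above, and it illustrates the principle only, not my actual FW (the names are made up):

#define LEVELS 32   /* 2^32 base blocks covers any gate time used here */

typedef struct {
    block_t pending[LEVELS];  /* at most one parked half-block per level */
    int     occupied[LEVELS];
} cascade_t;

/* Feed one finished base-size block into the cascade; whenever a level
 * already holds the earlier half of a pair, merge the two and carry the
 * doubled block up to the next level, like a binary counter. */
static void cascade_push(cascade_t *c, block_t b)
{
    for (int k = 0; k < LEVELS; k++) {
        if (!c->occupied[k]) {
            c->pending[k] = b;      /* park and wait for its sibling */
            c->occupied[k] = 1;
            return;
        }
        b = block_merge(c->pending[k], b);  /* earlier block comes first */
        c->occupied[k] = 0;                 /* carry to the next level   */
    }
}

Each level parks at most one pending block, so even 2^32 base blocks need only a few hundred bytes of state, and every merge is just a handful of additions and one multiply inside block_merge.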

I will probably throw out the power-hungry and expensive SDRAM chip or use a much smaller one :).

I have some plans to experiment with doubling the single-shot resolution to 1.25ns. I don't see much benefit in it, but it can be done with just a piece of coax and a couple of resistors, so it is interesting to try :).

All the best!
Oleg UR3IQO

