Hi,
This is very useful input and may well explain the situation and the variability. (Someone has also suggested that NTP is not the right protocol for 1-2 ms accuracy; that is a separate problem from what we are discussing here.) There has been growing dissatisfaction with the GRC-generated Python approach, and I would like to move to the rx_samples_to_file command approach and start modifying the associated .cpp files. Would you say that performance will be more stable with the latter approach?

However, I don't know how to use the command-line method. I suppose there would be many arguments given to the rx_samples_to_file command to make it do what I want, but the help output for this command is very limited and does not touch on what is needed with the set_time_now and set_start_time parameters in the data-gathering process. Basically, if I can run rx_samples_to_file and correctly apply the set_time_now and set_start_time options, there should be much less variability than with the Python approach. Does this look right?

Thanks,
LD

There's a profoundly variable and "jittery" amount of time that it takes to start a Python interpreter and "get things going" between any two serial invocations on the *same machine*, let alone on two different machines. They may well agree on what time it is (to a first-order approximation) when they both say "go", but after that, I can easily imagine the behaviour being not entirely deterministic.

--
Marcus Leech
Principal Investigator
Shirleys Bay Radio Astronomy Consortium
http://www.sbrac.org
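For what it's worth, the usual way to get a deterministic start in a modified rx_samples_to_file.cpp is to set the device clock with set_time_now() (or set_time_next_pps() against an external PPS) and then issue a stream command whose time_spec is in the future, so the radio itself starts streaming at a known device time instead of whenever the host gets around to it. A minimal sketch of that pattern follows; it assumes UHD is installed, and the device address, sample rate, and 2-second start delay are placeholder values, not anything from this thread:

```cpp
// Sketch of a timed-start receive, in the style of UHD's
// rx_samples_to_file example. Hardware-dependent: needs a USRP attached.
#include <uhd/usrp/multi_usrp.hpp>
#include <uhd/types/time_spec.hpp>
#include <uhd/types/stream_cmd.hpp>

int main()
{
    // Open the device (address is a placeholder).
    uhd::usrp::multi_usrp::sptr usrp =
        uhd::usrp::multi_usrp::make(std::string("addr=192.168.10.2"));

    usrp->set_rx_rate(1e6); // illustrative sample rate

    // Zero the device clock "now". For aligning two machines, you would
    // instead use set_time_next_pps() with a shared PPS/reference.
    usrp->set_time_now(uhd::time_spec_t(0.0));

    // Get a streamer and command it to start at a fixed device time,
    // rather than immediately -- this removes host-side start jitter.
    uhd::stream_args_t stream_args("fc32", "sc16");
    uhd::rx_streamer::sptr rx_stream = usrp->get_rx_stream(stream_args);

    uhd::stream_cmd_t cmd(uhd::stream_cmd_t::STREAM_MODE_START_CONTINUOUS);
    cmd.stream_now = false;                 // do NOT start right away
    cmd.time_spec  = uhd::time_spec_t(2.0); // start at t = 2 s device time
    rx_stream->issue_stream_cmd(cmd);

    // ... recv() loop writing samples to file, as in rx_samples_to_file ...
    return 0;
}
```

Because the start instant is fixed in device time, two invocations (or two machines sharing a 10 MHz/PPS reference) should see far less start-time variability than back-to-back Python interpreter launches.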
_______________________________________________
Discuss-gnuradio mailing list
Discuss-gnuradio@gnu.org
https://lists.gnu.org/mailman/listinfo/discuss-gnuradio