[ 
https://issues.apache.org/jira/browse/CASSANDRA-8732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14305547#comment-14305547
 ] 

Benedict commented on CASSANDRA-8732:
-------------------------------------

The simplest approach I was thinking of to bound this is to send the time 
remaining, as well as the expected wall clock expiry. These can both be used on 
the remote node to do something sensible, e.g. pick the one closest to half the 
timeout interval, so that we're conservative in both directions (i.e. never 
keeping a message too long, nor expiring it too aggressively).
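
As a quick sketch of that selection machinery (the class and parameter names 
here are hypothetical, not existing Cassandra code): the sender includes both 
the remaining time and its expected wall-clock expiry, and the receiver picks 
whichever implied deadline sits closest to half the timeout interval.

{code}
// Illustrative sketch only -- names are hypothetical, not existing Cassandra code.
final class ExpirySelection
{
    /**
     * @param remainingMillis  time left before the sender's timeout, as sent with the message
     * @param wallClockExpiry  the sender's expected expiry in its currentTimeMillis() domain
     * @param timeoutMillis    the configured timeout for this verb
     * @param localNowMillis   the receiver's currentTimeMillis()
     * @return how long the receiver should keep the message before dropping it
     */
    static long selectRemaining(long remainingMillis, long wallClockExpiry,
                                long timeoutMillis, long localNowMillis)
    {
        // Candidate 1: the relative delta; immune to clock skew, but blind to time already spent in flight.
        long fromDelta = remainingMillis;
        // Candidate 2: the absolute wall-clock expiry; accounts for in-flight time, but is skew-sensitive.
        long fromWallClock = wallClockExpiry - localNowMillis;

        // Pick whichever is closest to half the timeout interval, so a skewed clock can
        // neither keep the message alive too long nor expire it too aggressively.
        long half = timeoutMillis / 2;
        return Math.abs(fromDelta - half) <= Math.abs(fromWallClock - half) ? fromDelta : fromWallClock;
    }
}
{code}

In the common case the wall-clock candidate (which accounts for time already 
spent in flight) lands inside the timeout interval and wins; when skew pushes 
it outside, the delta bounds the damage in either direction.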

My biggest concern here is nodes being seen as down because clock skew 
temporarily grew large enough that messages were dropped too aggressively for 
responses to be returned.

I also agree it's not super duper pressing; I just wanted to log the ticket for 
discussion. But it's also pretty easy to introduce. Just send a delta along 
with the wall clock, and have some simple machinery on the other end to select 
which one to use.

> Make inter-node timeouts tolerate clock skew and drift
> ------------------------------------------------------
>
>                 Key: CASSANDRA-8732
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-8732
>             Project: Cassandra
>          Issue Type: Improvement
>            Reporter: Ariel Weisberg
>
> Right now internode timeouts rely on currentTimeMillis() (and NTP) to make 
> sure that tasks don't expire before they arrive.
> Every receiver needs to deduce the offset between its nanoTime and the remote 
> nanoTime. I don't think currentTimeMillis is a good choice because it is 
> designed to be manipulated by operators and NTP. I would probably be 
> comfortable assuming that nanoTime isn't going to move in significant ways 
> without something that could be classified as operator error happening.
> I suspect the one timing method you can rely on being accurate is nanoTime 
> within a node (on average) and that a node can report on its own scheduling 
> jitter (on average).
> Finding the offset requires knowing what the network latency is in one 
> direction.
> One way to do this would be to periodically send a ping request which 
> generates a series of ping responses at fixed intervals (maybe by UDP?). The 
> responses should be corrected for scheduling jitter, since the fixed intervals 
> may not be exactly achieved by the sender. By measuring the time deviation 
> between ping responses and their expected arrival time (based on the 
> interval) and correcting for the remotely reported scheduling jitter, you 
> should be able to measure latency in one direction.
> A weighted moving average (only correct for drift, not readjustment) of these 
> measurements would eventually converge on a close answer and would not be 
> impacted by outlier measurements. It may also make sense to drop the largest 
> N samples to improve accuracy.
> Once you know the network latency you can add it to the timestamp of each 
> ping, compare to the local clock, and know what the offset is.
> These measurements won't calculate the offset to be too small (timeouts fire 
> early), but could calculate the offset to be too large (timeouts fire late). 
> The conditions where the offset won't be accurate are the conditions where 
> you also want timeouts firing reliably. This, and bootstrapping in bad 
> conditions, is what I am most uncertain of.
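
A rough sketch of the ping-based estimation in the quoted description above. 
All names (the class, the Ping record, parameters) are hypothetical, not 
existing Cassandra code: each burst of pings yields latency samples from 
arrival-time deviations corrected for the sender's reported scheduling jitter, 
the largest samples are dropped, and a weighted moving average feeds the 
offset calculation.

{code}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Rough sketch only -- names are hypothetical, not existing Cassandra code.
final class PingOffsetEstimator
{
    private static final double ALPHA = 0.125;  // EWMA weight: tracks slow drift, not clock steps
    private static final int DROP_LARGEST = 2;  // discard the N largest samples of each burst as outliers

    private double smoothedLatencyNanos = Double.NaN;

    /** One received ping: the sender's nanoTime timestamp, its self-reported scheduling
     *  jitter for that ping, and the local nanoTime at which the ping arrived. */
    record Ping(long remoteSendNanos, long remoteJitterNanos, long localArrivalNanos) {}

    /**
     * Processes one burst of pings nominally sent {@code intervalNanos} apart and returns the
     * estimated offset such that remote nanoTime + offset is roughly local nanoTime.
     */
    long onBurst(List<Ping> pings, long intervalNanos)
    {
        // Latency samples: how far each arrival deviates from where the fixed interval says it
        // should have landed, corrected for the scheduling jitter the sender reports.
        List<Long> samples = new ArrayList<>();
        long firstArrival = pings.get(0).localArrivalNanos();
        for (int i = 0; i < pings.size(); i++)
        {
            long expectedArrival = firstArrival + i * intervalNanos;
            samples.add(Math.abs(pings.get(i).localArrivalNanos() - expectedArrival
                                 - pings.get(i).remoteJitterNanos()));
        }

        // Drop the largest N samples to blunt outliers, then average the rest.
        Collections.sort(samples);
        int keep = Math.max(1, samples.size() - DROP_LARGEST);
        double burstLatency = samples.subList(0, keep).stream()
                                     .mapToLong(Long::longValue).average().orElse(0);

        // Weighted moving average: converges slowly, so one bad burst cannot move it far.
        smoothedLatencyNanos = Double.isNaN(smoothedLatencyNanos)
                             ? burstLatency
                             : (1 - ALPHA) * smoothedLatencyNanos + ALPHA * burstLatency;

        // Offset: add the estimated one-way latency to the remote send timestamp and compare
        // with the local arrival time of the same ping.
        Ping last = pings.get(pings.size() - 1);
        return last.localArrivalNanos() - (last.remoteSendNanos() + (long) smoothedLatencyNanos);
    }
}
{code}

The weighted moving average here only tracks gradual drift; a stepped clock 
adjustment would need an explicit reset, which matches the "only correct for 
drift, not readjustment" caveat in the description.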



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
