[ 
https://issues.apache.org/jira/browse/CASSANDRA-3569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13163423#comment-13163423
 ] 

Sylvain Lebresne commented on CASSANDRA-3569:
---------------------------------------------

A couple of things:
* Streaming doesn't use the same threshold as 'live queries' as far as failure 
detection is concerned. So 'using a failure detector tuned to detecting when 
not to send real-time sensitive requests' is *not* what we do as far as 
streaming is concerned. Now maybe the threshold is still not right for 
streaming; I'm happy to discuss that.
* The initial goal was to fail repairs when a remote end died (or was 
restarted), because there have been boatloads of users complaining about 
repairs hanging doing nothing, and that is one case where that would happen. 
Please note that it is a real pain point for users. The proposal in this 
ticket doesn't solve that at all. It proposes to go back to the previous 
situation, maybe with the slight optimisation of adding a timeout to close 
the socket after a few hours of inactivity, but honestly nobody has ever 
complained about that. CASSANDRA-2433 has never been about releasing an OS 
socket.
* That being said, if we really don't trust our FD, I could be convinced to 
remove the 'FD breaks streams' behavior, as long as we keep the behavior of 
failing repair when we know a node has been restarted (which we know beyond 
doubt). But I still don't understand why we wouldn't trust the FD as long as 
we correctly tune it for long-running processes (see the sketch below).
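
For illustration, here is a minimal sketch of the phi-accrual idea the FD is 
based on and of what "tuning it" means: a higher conviction threshold makes 
the detector slower to convict, which is what a long-running process such as 
streaming would want. The class name, field names, weights and the 
exponential approximation below are assumptions for this example, not 
Cassandra's actual FailureDetector code.

    // Illustrative phi-accrual sketch (assumption: exponential approximation
    // of heartbeat inter-arrival times); not Cassandra's real implementation.
    public final class PhiSketch
    {
        private final double convictThreshold;    // higher = slower to convict
        private double meanIntervalMillis = 1000;  // running mean of heartbeat gaps
        private long lastHeartbeatMillis = System.currentTimeMillis();

        public PhiSketch(double convictThreshold)
        {
            this.convictThreshold = convictThreshold;
        }

        public void heartbeat(long nowMillis)
        {
            long gap = nowMillis - lastHeartbeatMillis;
            // simple exponentially weighted mean of the observed gaps
            meanIntervalMillis = 0.9 * meanIntervalMillis + 0.1 * gap;
            lastHeartbeatMillis = nowMillis;
        }

        public double phi(long nowMillis)
        {
            long elapsed = nowMillis - lastHeartbeatMillis;
            // P(next heartbeat arrives later than 'elapsed') ~ exp(-elapsed / mean),
            // so phi = -log10 of that probability = elapsed / (mean * ln 10)
            return elapsed / (meanIntervalMillis * Math.log(10));
        }

        public boolean shouldConvict(long nowMillis)
        {
            return phi(nowMillis) > convictThreshold;
        }
    }

The point of the sketch is only that the same detector can serve both cases 
by choosing a larger threshold for long-running work than for live queries.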

                
> Failure detector downs should not break streams
> -----------------------------------------------
>
>                 Key: CASSANDRA-3569
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-3569
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: Peter Schuller
>            Assignee: Peter Schuller
>
> CASSANDRA-2433 introduced this behavior just so that repairs don't sit 
> there waiting forever. In my opinion the correct fix to that problem is to 
> use TCP keep-alive. Unfortunately the TCP keep-alive period is insanely 
> high by default on modern Linux, so just doing that is not entirely good 
> either.
> But using the failure detector seems nonsensical to me. We have a 
> communication channel, the TCP transport, that we know is used for 
> long-running processes that you don't want killed for no good reason, and 
> we are using a failure detector tuned to detecting when not to send 
> real-time sensitive requests to nodes in order to actively kill a working 
> connection.
> So, rather than add complexity with protocol-based ping/pongs and such, I 
> propose that we simply use TCP keep-alive for streaming connections and 
> instruct operators of production clusters to tweak 
> net.ipv4.tcp_keepalive_{probes,intvl} as appropriate (or whatever the 
> equivalent is on their OS); a sketch of this follows the description below.
> I can submit the patch. Awaiting opinions.
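
Not part of the original report, but for concreteness: a minimal sketch of 
what the keep-alive proposal amounts to on the Java side, assuming a 
hypothetical openStreamSocket helper. The only API used is the standard 
java.net.Socket#setKeepAlive; the probing cadence itself is left to the 
kernel, which is what the net.ipv4.tcp_keepalive_* sysctls control.

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    // Illustrative sketch only: enable TCP keep-alive on a streaming socket.
    public final class StreamSocketSketch
    {
        public static Socket openStreamSocket(InetSocketAddress peer) throws IOException
        {
            Socket socket = new Socket();
            // SO_KEEPALIVE: the kernel probes an idle connection and resets
            // it if the peer stops answering, so a dead remote end eventually
            // breaks the stream without any failure detector involvement.
            socket.setKeepAlive(true);
            socket.connect(peer);
            return socket;
        }
    }

Operators would then shorten the kernel timers, for example 
net.ipv4.tcp_keepalive_time=60, tcp_keepalive_intvl=10 and 
tcp_keepalive_probes=3, so that a dead peer is detected in minutes rather 
than the two hours the Linux default (tcp_keepalive_time=7200) implies.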

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
