[ https://issues.apache.org/jira/browse/CASSANDRA-3569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13163764#comment-13163764 ]

Peter Schuller commented on CASSANDRA-3569:
-------------------------------------------

{quote}
Streaming doesn't use the same threshold as 'live queries' as far as failure 
detection is concerned. So 'using a failure detector tuned to detecting when 
not to send real-time sensitive requests' is not what we do as far as streaming 
is concerned. Now maybe the threshold is still not good for streaming; I'm 
happy to discuss that.
{quote}

Yes, it uses a different threshold, but it uses the same failure detection 
algorithm, which is, as far as I can tell, about as orthogonal as you can get 
to the concerns of streaming.

{quote}
The initial goal was to fail repairs when a remote end died (or was restarted), 
because there have been boatloads of users complaining about repair hanging 
doing nothing, and that is one case where that would happen. Please note that it 
is a real pain point for users. The proposition of this ticket doesn't solve 
that at all. This ticket proposes to go back to the preceding situation, only 
maybe with the slight optimisation of adding a timeout to close the socket 
after a few hours of inactivity, but honestly nobody ever complained about that. 
CASSANDRA-2433 has never been about releasing an OS socket.
{quote}

Like I said, if you do not do one of (1) use keep-alive, (2) use a socket 
timeout, or (3) use a per-I/O-operation timeout, TCP connections *will* hang, 
so it is not surprising that this was a problem. Since we did none of the 
three, we were utterly broken. Note also that in the normal case of a process 
crashing or whatnot, the TCP connection will die immediately. This is a 
problem when there is either a network/firewalling glitch causing a silent 
death of the connection, or e.g. the machine panicking and getting restarted.

Since I am suggesting moving to keep-alive, I am suggesting fixing the utterly 
and obviously broken old version with a new version which does one of (1), (2), 
or (3) instead of none of them.
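
To make those options concrete, here is a minimal sketch in Java (a 
hypothetical helper, not Cassandra's actual streaming code) of what (1) and 
(2) look like at the socket level; (3) would instead mean wrapping each 
read/write in the stream protocol with its own timeout:

{code}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class StreamSocketLiveness
{
    /**
     * Hypothetical helper: open a streaming socket configured so that a
     * silently dead peer is eventually detected instead of hanging forever.
     */
    public static Socket openStreamSocket(InetSocketAddress peer) throws IOException
    {
        Socket socket = new Socket();

        // (1) Keep-alive: the OS periodically probes an idle connection and
        // resets it if the peer is unreachable. Probe timing is controlled
        // by kernel tunables (net.ipv4.tcp_keepalive_* on Linux).
        socket.setKeepAlive(true);

        // (2) Socket timeout: a read blocking longer than this throws
        // SocketTimeoutException. Left commented out because a healthy
        // streaming peer may legitimately be silent for long stretches.
        // socket.setSoTimeout(60 * 1000);

        socket.connect(peer);
        return socket;
    }
}
{code}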

I am *very* much concerned about user behavior. If I have made a factual 
mistake please point it out, but my comments thus far seem to already 
adequately address what you are arguing here.

In what way, specifically, do you claim that my proposed solution would cause 
repairs not to fail?

Other than the fact that there will be a delay (around two hours by default on 
Linux, if you don't change it).
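
For reference, those two hours come from the stock Linux keep-alive tunables. 
Assuming typical kernel defaults, they look like this, giving a worst-case 
detection time of 7200 + 9 * 75 = 7875 seconds, i.e. a bit over two hours:

{noformat}
# idle seconds before the first keep-alive probe (the two hours)
net.ipv4.tcp_keepalive_time = 7200
# seconds between unanswered probes
net.ipv4.tcp_keepalive_intvl = 75
# unanswered probes before the connection is reset
net.ipv4.tcp_keepalive_probes = 9
{noformat}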

And *again*, how big a deal are these two hours compared to having severe 
production problems with a cluster because you've exploded node sizes up to 
levels where compaction won't even run, or having to wait another day or two 
for a large long-running repair to complete as you're trying to do cluster 
maintenance?

{quote}
That being said, if we really don't trust our FD, I could be convinced to 
remove the 'FD breaks streams' behavior, as long as we keep the behavior of 
failing repair when we know a node has been restarted (which we know without 
doubt). But I still don't understand why we wouldn't trust the FD as long as we 
correctly tune it for long-running processes.
{quote}

I think our way of doing failure detection is fundamentally broken in many 
ways, but that's kind of a different and wider discussion. Trying to tune it 
for long-running processes feels like a lot of patchwork to make something 
fundamentally unsuitable work, instead of just using well-known, fully working 
mechanisms provided by the OS.

I mean really, keeping a TCP connection open and using it in a way that 
doesn't get it forever stuck is not rocket science. Lots of software does this 
all the time, and Cassandra should be able to as well. The unfortunate 
situation for us is just that we can't slap a socket timeout on it (if it's 
not clear why, I can go into details), so we either have to make significant 
changes to the protocol to allow the use of timeouts, or use the 
transport-level option (which is keep-alive).


> Failure detector downs should not break streams
> -----------------------------------------------
>
>                 Key: CASSANDRA-3569
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-3569
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: Peter Schuller
>            Assignee: Peter Schuller
>
> CASSANDRA-2433 introduced this behavior just to keep repairs from sitting 
> there waiting forever. In my opinion the correct fix to that problem is to 
> use TCP keep-alive. Unfortunately the TCP keep-alive period is insanely high 
> by default on a modern Linux, so just doing that is not entirely good either.
> But using the failure detector seems nonsensical to me. We have a 
> communication method, the TCP transport, that we know is used for 
> long-running processes that you don't want incorrectly killed for no good 
> reason, and we are using a failure detector tuned to detecting when not to 
> send real-time-sensitive requests to nodes in order to actively kill a 
> working connection.
> So, rather than add complexity with protocol-based ping/pongs and such, I 
> propose that we simply use TCP keep-alive for streaming connections and 
> instruct operators of production clusters to tweak 
> net.ipv4.tcp_keepalive_{probes,intvl} as appropriate (or whatever the 
> equivalent is on their OS).
> I can submit the patch. Awaiting opinions.
