[ https://issues.apache.org/jira/browse/CASSANDRA-17411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17543158#comment-17543158 ]

Bret McGuire commented on CASSANDRA-17411:
------------------------------------------

Hey [~brandon.williams], I had to dig into this a bit for a related issue with 
the Python driver and came up with some additional info which might be useful. 
After looking into it, this sure looks like it might be a server-side issue 
after all... but obviously I'll defer to you on that part. :)

Based on my testing I confirmed the following about the load balancing 
policies (LBPs) in use by app.py when running against Cassandra 4.0.3 (a 
sketch of how these hooks can be observed follows the list):

 * When iptables is used to shut down access to one of the IP addresses, the 
LBP in use by the client is notified that the node has gone down
 * As expected, that LBP then removes the down node from its collection of 
"live" nodes for the given DC
 * The LBP no longer returns the down node when make_query_plan() is called
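
For the record, here is a rough sketch of how those hooks can be observed. 
This is purely illustrative (my reconstruction, not the actual instrumentation 
behind the logs below); it wraps the driver's real on_down() and 
make_query_plan() entry points on DCAwareRoundRobinPolicy:

{code:python}
# Illustrative only: wraps the stock DCAwareRoundRobinPolicy from the Python
# driver to log the two hooks discussed above. The class name and logging are
# my own; on_down() and make_query_plan() are the driver's real entry points.
import logging

from cassandra.policies import DCAwareRoundRobinPolicy

LOG = logging.getLogger(__name__)


class LoggingDCAwarePolicy(DCAwareRoundRobinPolicy):
    def on_down(self, host):
        LOG.info("on_down for host %s", host)
        # The parent implementation drops the host from its per-DC
        # collection of "live" hosts.
        super().on_down(host)

    def make_query_plan(self, working_keyspace=None, query=None):
        plan = list(super().make_query_plan(working_keyspace, query))
        # Once on_down() has fired, the down host no longer shows up here.
        LOG.info("query plan: %s", plan)
        return iter(plan)
{code}
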
Great, but by itself that doesn't prove anything.  So I decided to take a look 
at the exceptions that were actually getting thrown in this case, with the 
following change to app.py:

{code:java}
55c55,59
<         except (cassandra.OperationTimedOut, cassandra.WriteTimeout):
---
>         except cassandra.OperationTimedOut as exc:
>             LOG.info(exc)
>             timeout_counter += 1
>         except cassandra.WriteTimeout as exc:
>             LOG.info(exc) {code}
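
In context, the modified handler looks roughly like this. The loop body, 
statement, and variable names are my own sketch of what app.py is doing (a 
counter increment at CL=ONE, per the repro steps quoted below); only the 
except clauses come from the diff above:

{code:python}
# Sketch of app.py's request path after the change. Everything except the two
# except clauses is assumed for context (session setup, statement, names).
import logging
import uuid

import cassandra
from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

LOG = logging.getLogger(__name__)
timeout_counter = 0

cluster = Cluster(["127.0.0.1"], port=9042)
session = cluster.connect()
stmt = SimpleStatement(
    "UPDATE demo.demo SET count = count + 1 WHERE pk = %s",
    consistency_level=ConsistencyLevel.ONE,
)

try:
    session.execute(stmt, (uuid.uuid4(),))
except cassandra.OperationTimedOut as exc:
    # Client-side timeout: the driver gave up waiting for a response.
    LOG.info(exc)
    timeout_counter += 1
except cassandra.WriteTimeout as exc:
    # Server-side timeout reported by the coordinator (code=1100).
    LOG.info(exc)
{code}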

I see a sequence of two different error messages when I drop the iptables 
hammer.  The first is a set of "Client request timeout" messages (which is 
entirely expected since the LBP doesn't know about the down node yet), while 
the second looks to be a server-side issue:

{code:java}
2022-05-27 22:01:38,629 - target-dc1 - __main__ - INFO - errors={'127.0.0.1:9042': 'Client request timeout. See Session.execute[_async](timeout)'}, last_host=127.0.0.1:9042
2022-05-27 22:01:48,734 - target-dc1 - __main__ - INFO - errors={'127.0.0.1:9042': 'Client request timeout. See Session.execute[_async](timeout)'}, last_host=127.0.0.1:9042
2022-05-27 22:01:58,845 - target-dc1 - __main__ - INFO - errors={'127.0.0.1:9042': 'Client request timeout. See Session.execute[_async](timeout)'}, last_host=127.0.0.1:9042
2022-05-27 22:02:03,954 - target-dc1 - __main__ - INFO - Error from server: code=1100 [Coordinator node timed out waiting for replica nodes' responses] message="Operation timed out - received only 0 responses." info={'consistency': 'ONE', 'required_responses': 1, 'received_responses': 0, 'write_type': 'COUNTER'}
2022-05-27 22:02:14,172 - target-dc1 - __main__ - INFO - errors={'127.0.0.1:9042': 'Client request timeout. See Session.execute[_async](timeout)'}, last_host=127.0.0.1:9042
2022-05-27 22:02:22,545 - target-dc3 - __main__ - INFO - Got 0/579 (0.000000) timeouts/total_rqs in the last 1 minute
2022-05-27 22:02:22,548 - target-dc2 - __main__ - INFO - Got 0/579 (0.000000) timeouts/total_rqs in the last 1 minute
2022-05-27 22:02:24,275 - target-dc1 - __main__ - INFO - errors={'127.0.0.1:9042': 'Client request timeout. See Session.execute[_async](timeout)'}, last_host=127.0.0.1:9042
2022-05-27 22:02:24,376 - target-dc1 - __main__ - INFO - Got 6/66 (9.090909) timeouts/total_rqs in the last 1 minute
2022-05-27 22:02:34,387 - target-dc1 - __main__ - INFO - errors={'127.0.0.1:9042': 'Client request timeout. See Session.execute[_async](timeout)'}, last_host=127.0.0.1:9042
2022-05-27 22:02:44,695 - target-dc1 - __main__ - INFO - errors={'127.0.0.1:9042': 'Client request timeout. See Session.execute[_async](timeout)'}, last_host=127.0.0.1:9042
{code}

When the LBP gets notified of the down node, things change; the "Client 
request timeout" messages disappear and all I see is the server-side issue:

{code:java}
2022-05-27 22:02:52,482 - target-dc1 - cassandra.cluster - WARNING - Host 127.0.0.1:9042 has been marked down
2022-05-27 22:02:52,483 - target-dc1 - cassandra.policies - INFO - on_down for host 127.0.0.1:9042
2022-05-27 22:02:52,485 - target-dc1 - cassandra.policies - INFO - Setting hosts for dc dc1 to (<Host: 127.0.0.2:9042 dc1>,)
2022-05-27 22:02:52,486 - target-dc1 - cassandra.policies - INFO - on_down for host 127.0.0.1:9042
2022-05-27 22:02:52,488 - target-dc1 - cassandra.policies - INFO - on_down for host 127.0.0.1:9042
2022-05-27 22:02:52,488 - target-dc1 - cassandra.policies - INFO - on_down for host 127.0.0.1:9042
2022-05-27 22:02:57,507 - target-dc1 - cassandra.cluster - WARNING - [control connection] Error connecting to 127.0.0.1:9042:
Traceback (most recent call last):
  File "cassandra/cluster.py", line 3522, in cassandra.cluster.ControlConnection._reconnect_internal
  File "cassandra/cluster.py", line 3544, in cassandra.cluster.ControlConnection._try_connect
  File "cassandra/cluster.py", line 1620, in cassandra.cluster.Cluster.connection_factory
  File "cassandra/connection.py", line 831, in cassandra.connection.Connection.factory
  File "/home/automaton/les23/lib/python3.6/site-packages/cassandra/io/libevreactor.py", line 267, in __init__
    self._connect_socket()
  File "cassandra/connection.py", line 898, in cassandra.connection.Connection._connect_socket
OSError: [Errno None] Tried connecting to [('127.0.0.1', 9042)]. Last error: timed out
2022-05-27 22:02:57,609 - target-dc1 - __main__ - INFO - Error from server: code=1100 [Coordinator node timed out waiting for replica nodes' responses] message="Operation timed out - received only 0 responses." info={'consistency': 'ONE', 'required_responses': 1, 'received_responses': 0, 'write_type': 'COUNTER'}
2022-05-27 22:02:58,615 - target-dc1 - cassandra.pool - WARNING - Error attempting to reconnect to 127.0.0.1:9042, scheduling retry in 2.22 seconds: [Errno None] Tried connecting to [('127.0.0.1', 9042)]. Last error: timed out
2022-05-27 22:03:02,712 - target-dc1 - __main__ - INFO - Error from server: code=1100 [Coordinator node timed out waiting for replica nodes' responses] message="Operation timed out - received only 0 responses." info={'consistency': 'ONE', 'required_responses': 1, 'received_responses': 0, 'write_type': 'COUNTER'}
2022-05-27 22:03:05,925 - target-dc1 - cassandra.pool - WARNING - Error attempting to reconnect to 127.0.0.1:9042, scheduling retry in 4.24 seconds: [Errno None] Tried connecting to [('127.0.0.1', 9042)]. Last error: timed out
2022-05-27 22:03:07,917 - target-dc1 - __main__ - INFO - Error from server: code=1100 [Coordinator node timed out waiting for replica nodes' responses] message="Operation timed out - received only 0 responses." info={'consistency': 'ONE', 'required_responses': 1, 'received_responses': 0, 'write_type': 'COUNTER'}
{code}
It appears that these server-side exceptions are what account for the 
non-zero error counts returned by the app.
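
In case it helps with triage: those server-side errors surface in the driver 
as cassandra.WriteTimeout, whose attributes mirror the info={} dict in the 
logs above. A quick sketch (the helper function is my own; the attribute 
names are the driver's):

{code:python}
# Hypothetical helper; the exception attributes shown (consistency,
# required_responses, received_responses, write_type) are standard
# cassandra-driver fields on cassandra.WriteTimeout.
from cassandra import ConsistencyLevel, WriteTimeout


def describe_write_timeout(exc: WriteTimeout) -> str:
    return "consistency=%s required=%s received=%s write_type=%s" % (
        ConsistencyLevel.value_to_name.get(exc.consistency),
        exc.required_responses,
        exc.received_responses,
        exc.write_type,  # 'COUNTER' in the failures above
    )
{code}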
If I can do anything to help repro or debug this further please let me know!
> Network partition causes write ONE timeouts when using counters in Cassandra 4
> ------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-17411
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-17411
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Consistency/Coordination
>            Reporter: Pere Balaguer
>            Assignee: Brandon Williams
>            Priority: Normal
>             Fix For: 4.0.x
>
>         Attachments: app.py
>
>
> h5. Affected versions:
>  * 4.x
> h5. Observed behavior:
> When executing CL=ONE writes on a table with a counter column, if one of the 
> nodes is network partitioned from the others, clients keep sending requests 
> to it.
> Even though this may be a "driver" problem, I've been able to reproduce it 
> with both the Java and Python DataStax drivers using their latest available 
> versions, and given that the behavior only changes depending on the 
> Cassandra version, well, here I am.
> h5. Expected behavior:
> In Cassandra 3 after all inflight requests fail (expected), no new requests 
> are sent to the partitioned node. The expectation is that Cassandra 4 behaves 
> the same way.
> h5. How to reproduce:
> {noformat}
> # Create a cluster with the desired version, will go with 4.x for this example
> ccm create bug-report -v 4.0.3
> ccm populate -n 2:2:2
> ccm start
> # Create schemas and so on
> CQL=$(cat <<END
> CONSISTENCY ALL;
> DROP KEYSPACE IF EXISTS demo;
> CREATE KEYSPACE demo WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 
> 'dc1': 2, 'dc2': 2, 'dc3': 2};
> CREATE TABLE demo.demo (pk uuid PRIMARY KEY, count counter) WITH compaction = 
> {'class': 'LeveledCompactionStrategy'};
> END
> )
> ccm node1 cqlsh --verbose --exec="${CQL}"
> # Launch the attached app.py
> # requires cassandra-driver
> python3 app.py "127.0.0.1" "9042"
> # Wait a bit for the app to settle, proceed to next step once you see 3 
> messages in stdout like:
> # 2022-03-01 15:41:51,557 - target-dc2 - __main__ - INFO - Got 0/572 
> (0.000000) timeouts/total_rqs in the last 1 minute
> # Partition one node with iptables
> iptables -A INPUT -p tcp --destination 127.0.0.1 --destination-port 7000 -j 
> DROP; iptables -A INPUT -p tcp --destination 127.0.0.1 --destination-port 
> 9042 -j DROP
> {noformat}
> Some time after executing the iptables command, in cassandra-3 the output 
> should be similar to:
> {noformat}
> 2022-03-01 15:41:51,557 - target-dc2 - __main__ - INFO - Got 0/572 (0.000000) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:41:51,576 - target-dc3 - __main__ - INFO - Got 0/572 (0.000000) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:41:58,032 - target-dc1 - __main__ - INFO - Got 6/252 (2.380952) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:42:51,560 - target-dc2 - __main__ - INFO - Got 0/570 (0.000000) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:42:51,620 - target-dc3 - __main__ - INFO - Got 0/570 (0.000000) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:42:58,101 - target-dc1 - __main__ - INFO - Got 2/354 (0.564972) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:43:51,602 - target-dc2 - __main__ - INFO - Got 0/571 (0.000000) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:43:51,672 - target-dc3 - __main__ - INFO - Got 0/571 (0.000000) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:43:58,153 - target-dc1 - __main__ - INFO - Got 0/572 (0.000000) 
> timeouts/total_rqs in the last 1 minute
> {noformat}
> As the timeouts/total_rqs ratio shows, in about 2 minutes the partitioned 
> node stops receiving traffic, whereas in cassandra-4:
> {noformat}
> 2022-03-01 15:49:39,068 - target-dc3 - __main__ - INFO - Got 0/566 (0.000000) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:49:39,107 - target-dc2 - __main__ - INFO - Got 0/566 (0.000000) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:49:41,206 - target-dc1 - __main__ - INFO - Got 2/444 (0.450450) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:50:39,095 - target-dc3 - __main__ - INFO - Got 0/569 (0.000000) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:50:39,148 - target-dc2 - __main__ - INFO - Got 0/569 (0.000000) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:50:42,589 - target-dc1 - __main__ - INFO - Got 7/13 (53.846154) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:51:39,125 - target-dc3 - __main__ - INFO - Got 0/567 (0.000000) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:51:39,159 - target-dc2 - __main__ - INFO - Got 0/567 (0.000000) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:51:44,902 - target-dc1 - __main__ - INFO - Got 12/22 
> (54.545455) timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:52:39,188 - target-dc2 - __main__ - INFO - Got 0/567 (0.000000) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:52:39,202 - target-dc3 - __main__ - INFO - Got 0/568 (0.000000) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:52:46,994 - target-dc1 - __main__ - INFO - Got 12/20 
> (60.000000) timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:53:39,202 - target-dc2 - __main__ - INFO - Got 0/568 (0.000000) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:53:39,238 - target-dc3 - __main__ - INFO - Got 0/568 (0.000000) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:53:48,765 - target-dc1 - __main__ - INFO - Got 12/17 
> (70.588235) timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:54:39,203 - target-dc2 - __main__ - INFO - Got 0/568 (0.000000) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:54:39,291 - target-dc3 - __main__ - INFO - Got 0/568 (0.000000) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:54:51,291 - target-dc1 - __main__ - INFO - Got 12/24 
> (50.000000) timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:55:39,258 - target-dc2 - __main__ - INFO - Got 0/570 (0.000000) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:55:39,327 - target-dc3 - __main__ - INFO - Got 0/570 (0.000000) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:55:53,365 - target-dc1 - __main__ - INFO - Got 12/20 
> (60.000000) timeouts/total_rqs in the last 1 minute
> {noformat}
> As the timeouts/total_rqs ratio shows, the partitioned node keeps receiving 
> traffic until the iptables rules are removed.