[jira] [Commented] (CASSANDRA-15092) Add a new Snitch for Alibaba Cloud Platform

2019-05-23 Thread maxwellguo (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16847183#comment-16847183
 ] 

maxwellguo commented on CASSANDRA-15092:


[~djoshi3] Hi, can you help me take a look at this issue? :)

> Add a new Snitch for Alibaba Cloud Platform
> ---
>
> Key: CASSANDRA-15092
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15092
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Legacy/Core, Local/Config
>Reporter: maxwellguo
>Assignee: maxwellguo
>Priority: Normal
>  Labels: pull-request-available
> Attachments: trunk-15092-V1.txt, trunk-15092.txt
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Add a snitch for the Alibaba Cloud platform. We already have cloud platform 
> snitches for AWS and Google Cloud, and Alibaba ECS (Elastic Compute Service) 
> metadata can be fetched from here: 
> https://help.aliyun.com/document_detail/108460.html?spm=a2c4g.11186623.6.675.36684f8bLQrIMY
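
For illustration, a minimal sketch of how such a snitch could derive datacenter and rack strings from the ECS instance metadata service. The endpoint (http://100.100.100.200/latest/meta-data/) and the region-id/zone-id paths are assumptions taken from the linked Alibaba documentation, and the class name is hypothetical; this is not the attached patch.

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch: derive datacenter/rack strings from the Alibaba ECS instance
// metadata service, analogous to how the AWS and Google Cloud snitches query theirs.
public final class AlibabaCloudMetadata
{
    // Assumed endpoint; the linked Alibaba documentation is the authoritative source.
    private static final String BASE = "http://100.100.100.200/latest/meta-data/";

    public static String datacenter() throws IOException
    {
        return fetch("region-id");   // e.g. "cn-hangzhou"
    }

    public static String rack() throws IOException
    {
        return fetch("zone-id");     // e.g. "cn-hangzhou-f"
    }

    private static String fetch(String path) throws IOException
    {
        HttpURLConnection conn = (HttpURLConnection) new URL(BASE + path).openConnection();
        conn.setConnectTimeout(2000);
        conn.setReadTimeout(2000);
        try (InputStream in = conn.getInputStream())
        {
            return new String(in.readAllBytes(), StandardCharsets.UTF_8).trim();
        }
        finally
        {
            conn.disconnect();
        }
    }

    public static void main(String[] args) throws IOException
    {
        // Only works from inside an ECS instance, where the metadata service is reachable.
        System.out.println("dc=" + datacenter() + " rack=" + rack());
    }
}
{code}

A real snitch would feed these values into an AbstractNetworkTopologySnitch subclass, the same way Ec2Snitch and GoogleCloudSnitch do.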






[jira] [Created] (CASSANDRA-15139) Live traffic capture: Allow spiky replaces

2019-05-23 Thread Sumeet (JIRA)
Sumeet created CASSANDRA-15139:
--

 Summary: Live traffic capture: Allow spiky replaces
 Key: CASSANDRA-15139
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15139
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sumeet
Assignee: Vinay Chella


For live traffic replay, please consider allowing DML/queries to be replayed 
within temporal proximity to their neighboring queries/DML. This could be harder 
to do since the original queries may arrive at different coordinator nodes, but 
the value is immense since it would allow us to recreate spiky query/DML behavior 
in our non-prod environments.
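
For illustration, a minimal sketch of the timing idea, assuming hypothetical capture records that carry the original capture timestamp; the class and method names are made up and are not part of any existing replay tooling.

{code:java}
import java.util.List;
import java.util.function.Consumer;

// Hypothetical sketch: replay captured statements while preserving the recorded
// inter-arrival gaps, so that traffic spikes are reproduced rather than smoothed out.
public final class SpikyReplayer
{
    // Assumed capture record: the statement text plus its original capture time in millis.
    public static final class CapturedStatement
    {
        final long capturedAtMillis;
        final String cql;

        CapturedStatement(long capturedAtMillis, String cql)
        {
            this.capturedAtMillis = capturedAtMillis;
            this.cql = cql;
        }
    }

    // 'execute' stands in for whatever client call sends the statement to the test cluster.
    public static void replay(List<CapturedStatement> capture, Consumer<String> execute)
            throws InterruptedException
    {
        if (capture.isEmpty())
            return;

        long firstCaptured = capture.get(0).capturedAtMillis;
        long replayStart = System.currentTimeMillis();

        for (CapturedStatement stmt : capture)
        {
            // Fire each statement at the same offset from the start of the replay
            // as it had from the start of the capture.
            long targetOffset = stmt.capturedAtMillis - firstCaptured;
            long sleepFor = targetOffset - (System.currentTimeMillis() - replayStart);
            if (sleepFor > 0)
                Thread.sleep(sleepFor);
            execute.accept(stmt.cql);
        }
    }
}
{code}

Scheduling each statement against a single reference point, rather than sleeping for each gap individually, keeps drift from accumulating even if individual executions are slow.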





[jira] [Commented] (CASSANDRA-15086) Illegal column names make legacy sstables unreadable in 3.0/3.x

2019-05-23 Thread Cameron Zemek (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16847125#comment-16847125
 ] 

Cameron Zemek commented on CASSANDRA-15086:
---

[~samt] It appears I misread the patch on 15086; I thought I had detected an 
overlapping code change. My bad.

> Illegal column names make legacy sstables unreadable in 3.0/3.x
> ---
>
> Key: CASSANDRA-15086
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15086
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/SSTable
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Normal
> Fix For: 3.0.19, 3.11.5
>
>
> CASSANDRA-10608 adds extra validation when decoding a bytebuffer representing 
> a legacy cellname. If the table is not COMPACT and the column name component 
> of the cellname refers to a primary key column, an IllegalArgumentException 
> is thrown. It looks like the original intent of 10608 was to prevent Thrift 
> writes from inserting these invalid cells, but the same code path is 
> exercised on the read path. The problem is that cells of this kind may exist 
> in pre-3.0 sstables, either due to Thrift writes or through side-loading of 
> externally generated SSTables. Following an upgrade to 3.0, these partitions 
> become unreadable, breaking both the read and compaction paths (and so also 
> upgradesstables). Scrub in 2.1 does not help here as it blindly reproduces 
> the invalid cells.
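
For illustration only, a simplified sketch of the failure mode described above; the class and method names are hypothetical, and this is neither the actual Cassandra code nor the eventual patch.

{code:java}
// Simplified illustration: a check that is valid for rejecting incoming writes also runs
// while decoding cells that already exist in legacy sstables, which is what breaks reads.
final class LegacyCellNameCheck
{
    enum Path { WRITE, READ }

    static void validate(String column, boolean isPrimaryKeyColumn, Path path)
    {
        if (isPrimaryKeyColumn)
        {
            if (path == Path.WRITE)
                // Rejecting the write is the intent of the original validation.
                throw new IllegalArgumentException("Illegal cell name for PK column: " + column);
            // On the read/compaction path the cell is already on disk, so throwing here
            // makes the partition unreadable; it has to be tolerated instead.
        }
    }
}
{code}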






[jira] [Commented] (CASSANDRA-15138) A cluster (RF=3) not recovering after two nodes are stopped

2019-05-23 Thread Hiroyuki Yamada (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846779#comment-16846779
 ] 

Hiroyuki Yamada commented on CASSANDRA-15138:
-

[~jmeredithco] Yes, that is correct. Sorry for not stating it.

> A cluster (RF=3) not recovering after two nodes are stopped
> ---
>
> Key: CASSANDRA-15138
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15138
> Project: Cassandra
>  Issue Type: Bug
>  Components: Cluster/Membership
>Reporter: Hiroyuki Yamada
>Priority: Normal
>
> I faced a weird issue when recovering a cluster after two nodes are stopped.
> It is easily reproducible and looks like a bug, or at least an issue to fix.
> The following are the steps to reproduce it.
> === STEPS TO REPRODUCE ===
>  * Create a 3-node cluster with RF=3
>     - node1 (seed), node2, node3
>  * Start requests against the cluster with cassandra-stress (it keeps running
>    until the end of the test)
>     - what we did: cassandra-stress mixed cl=QUORUM duration=10m
>       -errors ignore -node node1,node2,node3 -rate threads>=16 threads<=256
>     - (It doesn't have to be this many threads; even 1 works.)
>  * Stop node3 normally (with systemctl stop or kill (without -9))
>     - the system is still available, as expected, because a quorum of nodes is
>       still available
>  * Stop node2 normally (with systemctl stop or kill (without -9))
>     - as expected, the system is NOT available after node2 is stopped
>     - the client gets `UnavailableException: Not enough replicas
>       available for query at consistency QUORUM`
>     - the client gets the errors right away (within a few ms)
>     - so far everything is as expected
>  * Wait for 1 minute
>  * Bring node2 back up
>     - The issue happens here.
>     - the client gets `ReadTimeoutException` or `WriteTimeoutException`,
>       depending on whether the request is a read or a write, even after node2
>       is up
>     - the client gets the errors after about 5000 ms or 2000 ms, which are the
>       request timeouts for write and read requests respectively
>     - what node1 reports with `nodetool status` and what node2 reports are not
>       consistent (node2 thinks node1 is down)
>     - it takes a very long time to recover from this state
> === END OF STEPS TO REPRODUCE ===
> Some additional important information to note:
>  * If we don't start cassandra-stress, the issue does not occur.
>  * Restarting node1 makes it recover its state right after the restart.
>  * Setting a lower value for dynamic_snitch_reset_interval_in_ms (to 6
>    or something) fixes the issue.
>  * If we `kill -9` the nodes, the issue does not occur.
>  * Hints seem unrelated; I tested with hints disabled and it didn't make any
>    difference.






[jira] [Commented] (CASSANDRA-15138) A cluster (RF=3) not recovering after two nodes are stopped

2019-05-23 Thread Jon Meredith (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846736#comment-16846736
 ] 

Jon Meredith commented on CASSANDRA-15138:
--

Just spotted your email to the user mailing list - looks like C* 3.11.4







[jira] [Commented] (CASSANDRA-15138) A cluster (RF=3) not recovering after two nodes are stopped

2019-05-23 Thread Jon Meredith (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846734#comment-16846734
 ] 

Jon Meredith commented on CASSANDRA-15138:
--

Thanks for the detailed steps in the report. Which versions of Cassandra have 
you reproduced the issue with?







[jira] [Commented] (CASSANDRA-15105) Flaky unit test AuditLoggerTest

2019-05-23 Thread Sumanth Pasupuleti (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846531#comment-16846531
 ] 

Sumanth Pasupuleti commented on CASSANDRA-15105:


[~eperott] Thanks for the latest patch; I have merged it into my branch.

[Patch|https://github.com/apache/cassandra/pull/323]

> Flaky unit test AuditLoggerTest
> ---
>
> Key: CASSANDRA-15105
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15105
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/CQL
>Reporter: Per Otterström
>Assignee: Per Otterström
>Priority: Normal
> Fix For: 4.0
>
>
> Depending on execution order some tests will fail in the AuditLoggerTest 
> class. Any test case that happens to execute after 
> testExcludeSystemKeyspaces() will typically fail.






[jira] [Commented] (CASSANDRA-15136) Incorrect error message in legacy reader

2019-05-23 Thread JIRA


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846525#comment-16846525
 ] 

Per Otterström commented on CASSANDRA-15136:


+1

> Incorrect error message in legacy reader
> 
>
> Key: CASSANDRA-15136
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15136
> Project: Cassandra
>  Issue Type: Bug
>  Components: Observability/Logging
>Reporter: Vincent White
>Assignee: Vincent White
>Priority: Normal
>
> Just fixes the order in the exception message.
> ||3.0.x||3.11.x||
> |[Patch|https://github.com/vincewhite/cassandra/commits/readLegacyAtom30]|[Patch|https://github.com/vincewhite/cassandra/commits/readLegacyAtom]|
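
Since the ticket does not quote the message text, the following is only a generic, hypothetical illustration of this class of bug: format arguments passed in an order that does not match the wording, so the error reports the expected and actual values swapped.

{code:java}
// Illustrative only; not the legacy reader's actual message or code.
final class VersionCheck
{
    static void checkVersion(int expected, int actual)
    {
        if (actual != expected)
            // Buggy form reversed the arguments:
            //   String.format("Expected %d but read %d", actual, expected)
            throw new IllegalStateException(
                String.format("Expected %d but read %d", expected, actual));
    }
}
{code}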






[jira] [Commented] (CASSANDRA-15105) Flaky unit test AuditLoggerTest

2019-05-23 Thread JIRA


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846511#comment-16846511
 ] 

Per Otterström commented on CASSANDRA-15105:


[~sumanth.pasupuleti], thanks for taking the time to review and test.

When looking at the logs of the 
[failing|https://circleci.com/gh/sumanth-pasupuleti/cassandra/508#tests/containers/14]
 test, I don't think that run was affected by the setup in 
{{StorageServiceServerTest}}, as these were the only test classes executed in 
that particular container:
{noformat}
org/apache/cassandra/audit/AuditLoggerTest.java
org/apache/cassandra/db/commitlog/CommitLogFailurePolicyTest.java
org/apache/cassandra/dht/RandomPartitionerTest.java
org/apache/cassandra/locator/NetworkTopologyStrategyTest.java
org/apache/cassandra/service/NativeTransportServiceTest.java
org/apache/cassandra/utils/NativeLibraryTest.java
{noformat}
Still, I think that disabling the audit logger whenever it has been enabled is 
a good strategy.
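
For illustration, a minimal sketch of that strategy as a JUnit teardown; the two hooks are placeholders for whatever the real test setup exposes, not the actual Cassandra audit-log API.

{code:java}
import org.junit.After;

// Hedged sketch of the "disable whenever it has been enabled" idea as a shared teardown.
public abstract class AuditLogTestBase
{
    // Placeholder hooks implemented by the concrete test setup.
    protected abstract boolean auditLogEnabled();

    protected abstract void disableAuditLogAndClearQueue();

    @After
    public void tearDownAuditLog()
    {
        // Ensures later tests in the same JVM never see a logger left enabled by this test,
        // regardless of execution order.
        if (auditLogEnabled())
            disableAuditLogAndClearQueue();
    }
}
{code}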

But regarding the still-flaky {{testExcludeSystemKeyspaces}}: I was able to 
reproduce the remaining issue consistently by adding a few seconds of delay in 
the test case just before checking the size of the 
{{InMemoryAuditLogger.inMemQueue}}. When printing the contents I get this:
{noformat}
user:anonymous|host:127.0.0.1:7010|source:/127.0.0.1|port:36856|timestamp:1558592418723|type:SELECT|category:QUERY|ks:system_schema|scope:keyspaces|operation:SELECT
 * FROM system_schema.keyspaces
user:anonymous|host:127.0.0.1:7010|source:/127.0.0.1|port:36848|timestamp:1558592418722|type:SELECT|category:QUERY|ks:system_schema|scope:keyspaces|operation:SELECT
 * FROM system_schema.keyspaces
user:anonymous|host:127.0.0.1:7010|source:/127.0.0.1|port:36852|timestamp:1558592418723|type:SELECT|category:QUERY|ks:system_schema|scope:keyspaces|operation:SELECT
 * FROM system_schema.keyspaces
user:anonymous|host:127.0.0.1:7010|source:/127.0.0.1|port:36856|timestamp:1558592418732|type:SELECT|category:QUERY|ks:system_schema|scope:types|operation:SELECT
 * FROM system_schema.types
user:anonymous|host:127.0.0.1:7010|source:/127.0.0.1|port:36848|timestamp:1558592418732|type:SELECT|category:QUERY|ks:system_schema|scope:types|operation:SELECT
 * FROM system_schema.types
user:anonymous|host:127.0.0.1:7010|source:/127.0.0.1|port:36852|timestamp:1558592418732|type:SELECT|category:QUERY|ks:system_schema|scope:types|operation:SELECT
 * FROM system_schema.types
user:anonymous|host:127.0.0.1:7010|source:/127.0.0.1|port:36852|timestamp:1558592418736|type:SELECT|category:QUERY|ks:system_schema|scope:tables|operation:SELECT
 * FROM system_schema.tables
user:anonymous|host:127.0.0.1:7010|source:/127.0.0.1|port:36848|timestamp:1558592418736|type:SELECT|category:QUERY|ks:system_schema|scope:tables|operation:SELECT
 * FROM system_schema.tables
user:anonymous|host:127.0.0.1:7010|source:/127.0.0.1|port:36856|timestamp:1558592418735|type:SELECT|category:QUERY|ks:system_schema|scope:tables|operation:SELECT
 * FROM system_schema.tables
user:anonymous|host:127.0.0.1:7010|source:/127.0.0.1|port:36848|timestamp:1558592418751|type:SELECT|category:QUERY|ks:system_schema|scope:columns|operation:SELECT
 * FROM system_schema.columns
user:anonymous|host:127.0.0.1:7010|source:/127.0.0.1|port:36852|timestamp:1558592418750|type:SELECT|category:QUERY|ks:system_schema|scope:columns|operation:SELECT
 * FROM system_schema.columns
user:anonymous|host:127.0.0.1:7010|source:/127.0.0.1|port:36856|timestamp:1558592418752|type:SELECT|category:QUERY|ks:system_schema|scope:columns|operation:SELECT
 * FROM system_schema.columns
user:anonymous|host:127.0.0.1:7010|source:/127.0.0.1|port:36848|timestamp:1558592418766|type:SELECT|category:QUERY|ks:system_schema|scope:indexes|operation:SELECT
 * FROM system_schema.indexes
user:anonymous|host:127.0.0.1:7010|source:/127.0.0.1|port:36852|timestamp:1558592418767|type:SELECT|category:QUERY|ks:system_schema|scope:indexes|operation:SELECT
 * FROM system_schema.indexes
user:anonymous|host:127.0.0.1:7010|source:/127.0.0.1|port:36856|timestamp:1558592418767|type:SELECT|category:QUERY|ks:system_schema|scope:indexes|operation:SELECT
 * FROM system_schema.indexes
user:anonymous|host:127.0.0.1:7010|source:/127.0.0.1|port:36848|timestamp:1558592418767|type:SELECT|category:QUERY|ks:system_schema|scope:views|operation:SELECT
 * FROM system_schema.views
user:anonymous|host:127.0.0.1:7010|source:/127.0.0.1|port:36852|timestamp:1558592418768|type:SELECT|category:QUERY|ks:system_schema|scope:views|operation:SELECT
 * FROM system_schema.views
user:anonymous|host:127.0.0.1:7010|source:/127.0.0.1|port:36856|timestamp:1558592418768|type:SELECT|category:QUERY|ks:system_schema|scope:views|operation:SELECT
 * FROM system_schema.views
user:anonymous|host:127.0.0.1:7010|source:/127.0.0.1|port:36848|timestamp:1558592418769|type:SELECT|category:QUERY|ks:system_schema|scope:functions|operation:SELECT
 * FROM 

[jira] [Updated] (CASSANDRA-15120) Nodes that join the ring while another node is MOVING build an invalid view of the token ring

2019-05-23 Thread Sam Tunnicliffe (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-15120:

Status: Review In Progress  (was: Patch Available)

> Nodes that join the ring while another node is MOVING build an invalid view 
> of the token ring
> -
>
> Key: CASSANDRA-15120
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15120
> Project: Cassandra
>  Issue Type: Bug
>  Components: Cluster/Gossip, Cluster/Membership
>Reporter: Benedict
>Assignee: Benedict
>Priority: Normal
>
> Gossip only updates the token metadata for nodes in the NORMAL, SHUTDOWN or 
> LEAVING* statuses.  MOVING and REMOVING_TOKEN nodes do not have their ring 
> information updated (nor do others, but these other states _should_ only be 
> taken by nodes that are not members of the ring).  
> If a node missed the most recent token-modifying events because it was not 
> a member of the ring when they happened (or because Gossip delivery to it was 
> delayed), it will retain an invalid view of the ring until the node enters 
> one of the NORMAL, SHUTDOWN or LEAVING states.
> *LEAVING is populated differently, however, and in a probably unsafe manner 
> that this work will also address.
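
For illustration only, a simplified sketch of the status gating described above; the enum and class are hypothetical stand-ins, not the actual gossip/StorageService code.

{code:java}
import java.util.EnumSet;
import java.util.Set;

// Simplified illustration: only some gossiped statuses cause the local token
// metadata to be refreshed.
final class TokenMetadataUpdateGate
{
    enum RingStatus { NORMAL, SHUTDOWN, LEAVING, MOVING, REMOVING_TOKEN, BOOTSTRAPPING, LEFT }

    private static final Set<RingStatus> UPDATES_TOKEN_METADATA =
            EnumSet.of(RingStatus.NORMAL, RingStatus.SHUTDOWN, RingStatus.LEAVING);

    static boolean shouldUpdateTokens(RingStatus gossipedStatus)
    {
        // MOVING and REMOVING_TOKEN fall outside this set, so a node that missed the
        // original move events keeps its stale ring view until the peer reaches
        // NORMAL, SHUTDOWN or LEAVING.
        return UPDATES_TOKEN_METADATA.contains(gossipedStatus);
    }
}
{code}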






[jira] [Updated] (CASSANDRA-15120) Nodes that join the ring while another node is MOVING build an invalid view of the token ring

2019-05-23 Thread Sam Tunnicliffe (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-15120:

Status: Changes Suggested  (was: Review In Progress)



