[jira] [Updated] (CASSANDRA-13900) Massive GC suspension increase after updating to 3.0.14 from 2.1.18

2020-11-18 Thread Thomas Steinmaurer (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Steinmaurer updated CASSANDRA-13900:
---
Resolution: Duplicate
Status: Resolved  (was: Open)

Duplicate of CASSANDRA-16201.

> Massive GC suspension increase after updating to 3.0.14 from 2.1.18
> ---
>
> Key: CASSANDRA-13900
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13900
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Core
>Reporter: Thomas Steinmaurer
>Priority: Urgent
> Attachments: cassandra2118_vs_3014.jpg, cassandra3014_jfr_5min.jpg, 
> cassandra_3.11.0_min_memory_utilization.jpg
>
>
> In short: after upgrading from 2.1.18 to 3.0.14, we are no longer able to 
> process the same incoming write load on the same infrastructure.
> We have a loadtest environment running 24x7, testing our software with 
> Cassandra as the backend. Both loadtest and production are hosted in AWS and 
> have the same spec on the Cassandra side, namely:
> * 9x m4.xlarge
> * 8G heap per node
> * CMS (400MB newgen)
> * 2TB EBS gp2 per node
> * Client requests are entirely CQL
> We have had a solid, constant baseline in loadtest at ~60% CPU cluster AVG, 
> with constant simulated load running against our cluster, using Cassandra 2.1 
> for > 2 years now.
> Recently we upgraded this 9-node loadtest environment to 3.0.14, and 
> basically, 3.0.14 cannot cope with the load anymore. There are no special 
> tweaks or memory settings/changes; everything is the same as with 2.1.18. We 
> also have not upgraded sstables yet, so the increase shown in the screenshot 
> is not related to any manually triggered maintenance operation after the 
> upgrade to 3.0.14.
> According to our monitoring, with 3.0.14 we see a *GC suspension time 
> increase by a factor of > 2*, directly correlating with a CPU increase 
> beyond 80%. See the attached screenshot "cassandra2118_vs_3014.jpg".
> All this means that 3.0.14 cannot handle the incoming load that 2.1.18 
> could. We would need to either scale up (e.g. m4.xlarge => m4.2xlarge) or 
> scale out to handle the same load, which is not an option cost-wise.
> Unfortunately I do not have Java Flight Recorder runs for 2.1.18 at the 
> mentioned load, but I can provide a JFR session for our current 3.0.14 
> setup. The attached 5min JFR memory allocation view 
> (cassandra3014_jfr_5min.jpg) shows compaction as the top contributor for the 
> captured 5min time-frame. This could be an accident of the captured window 
> (although the mentioned simulated client load was attached), but according to 
> the stack traces we see classes new in 3.0, e.g. BTreeRow.searchIterator(), 
> popping up as top contributors, so new classes / data structures are possibly 
> causing much more object churn now.
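For context, the heap setup described above corresponds to a cassandra-env.sh roughly like the following. This is a hypothetical sketch reconstructed from the reported figures (8G heap, CMS, 400MB newgen), not the reporter's actual file, and the GC-logging flag is an added assumption useful for measuring suspension time:

```shell
# Hypothetical cassandra-env.sh excerpt matching the reported setup.
# MAX_HEAP_SIZE / HEAP_NEWSIZE are the standard cassandra-env.sh knobs.
MAX_HEAP_SIZE="8G"        # total JVM heap per node
HEAP_NEWSIZE="400M"       # CMS young generation ("newgen")

JVM_OPTS="$JVM_OPTS -XX:+UseParNewGC"
JVM_OPTS="$JVM_OPTS -XX:+UseConcMarkSweepGC"
# Assumption: logging stopped time is one way to quantify GC suspension
JVM_OPTS="$JVM_OPTS -XX:+PrintGCApplicationStoppedTime"
```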



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15563) Backport removal of OpenJDK warning log

2020-11-18 Thread Thomas Steinmaurer (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Steinmaurer updated CASSANDRA-15563:
---
Fix Version/s: (was: 2.2.x)

> Backport removal of OpenJDK warning log
> ---
>
> Key: CASSANDRA-15563
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15563
> Project: Cassandra
>  Issue Type: Task
>Reporter: Thomas Steinmaurer
>Priority: Normal
> Fix For: 3.0.x
>
>
> As requested on Slack, creating this ticket for a backport of 
> CASSANDRA-13916, potentially to 2.2 and 3.0.






[jira] [Updated] (CASSANDRA-15563) Backport removal of OpenJDK warning log

2020-11-18 Thread Thomas Steinmaurer (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Steinmaurer updated CASSANDRA-15563:
---
Description: As requested on Slack, creating this ticket for a backport of 
CASSANDRA-13916 for 3.0.  (was: As requested on Slack, creating this ticket for 
a backport of CASSANDRA-13916, potentially to 2.2 and 3.0.)

> Backport removal of OpenJDK warning log
> ---
>
> Key: CASSANDRA-15563
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15563
> Project: Cassandra
>  Issue Type: Task
>Reporter: Thomas Steinmaurer
>Priority: Normal
> Fix For: 3.0.x
>
>
> As requested on Slack, creating this ticket for a backport of CASSANDRA-13916 
> for 3.0.






[jira] [Updated] (CASSANDRA-15584) 4.0 quality testing: Tooling - External Ecosystem

2020-11-18 Thread Alexander Dejanovski (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Dejanovski updated CASSANDRA-15584:
-
Description: 
Reference [doc from 
NGCC|https://docs.google.com/document/d/1uhUOp7wpE9ZXNDgxoCZHejHt5SO4Qw1dArZqqsJccyQ/edit#]
 for context.

*Shepherd: Benjamin Lerer*

Many users of Apache Cassandra employ open source tooling to automate Cassandra 
configuration, runtime management, and repair scheduling. Prior to release, we 
need to confirm that popular third-party tools function properly. 

Current list of tools:
|| Name || Status || Contact ||
| [Priam|http://netflix.github.io/Priam/] |{color:#00875A} *DONE WITH 
ALPHA*{color} (need to be tested with beta) | [~sumanth.pasupuleti]| 
| [sstabletools|https://github.com/instaclustr/cassandra-sstable-tools] | *NOT 
STARTED* | [~stefan.miklosovic]| 
| [cassandra-exporter|https://github.com/instaclustr/cassandra-exporter]| *NOT 
STARTED* | [~stefan.miklosovic]|
| [Instaclustr Cassandra 
operator|https://github.com/instaclustr/cassandra-operator]|  
{color:#00875A}*DONE*{color} | [~stefan.miklosovic]|
| [Instaclustr Esop | 
https://github.com/instaclustr/instaclustr-esop]|{color:#00875A}*DONE*{color} | 
[~stefan.miklosovic]|
| [Instaclustr Icarus | 
https://github.com/instaclustr/instaclustr-icarus]|{color:#00875A}*DONE*{color} 
| [~stefan.miklosovic]|
| [Cassandra SSTable generator | 
https://github.com/instaclustr/cassandra-sstable-generator]|{color:#00875A}*DONE*{color}|
 [~stefan.miklosovic]|
| [Cassandra TTL Remover | https://github.com/instaclustr/TTLRemover] | 
{color:#00875A}*DONE*{color} |  [~stefan.miklosovic]|
| [Cassandra Everywhere Strategy | 
https://github.com/instaclustr/cassandra-everywhere-strategy] | 
{color:#00875A}*DONE*{color} | [~stefan.miklosovic]|
| [Cassandra LDAP Authenticator | 
https://github.com/instaclustr/cassandra-ldap] | {color:#00875A}*DONE*{color} | 
[~stefan.miklosovic]|
| [Instaclustr Minotaur | https://github.com/instaclustr/instaclustr-minotaur] 
| {color:#00875A}*DONE*{color} | [~stefan.miklosovic]|
| [Reaper|http://cassandra-reaper.io/]| {color:#00875A}*AUTOMATIC*{color} | 
[~adejanovski]|
| [Medusa|https://github.com/thelastpickle/cassandra-medusa]|  
{color:#00875A}*DONE*{color}| [~adejanovski]|
| [Casskop|https://orange-opensource.github.io/casskop/]| *NOT STARTED*| Franck 
Dehay|
| 
[spark-cassandra-connector|https://github.com/datastax/spark-cassandra-connector]|
 {color:#00875A}*DONE*{color}| [~jtgrabowski]|
| [cass operator|https://github.com/datastax/cass-operator]| 
{color:#00875A}*DONE*{color}| [~jimdickinson]|
| [metric 
collector|https://github.com/datastax/metric-collector-for-apache-cassandra]| 
{color:#00875A}*DONE*{color}| [~tjake]|
| [management 
API|https://github.com/datastax/management-api-for-apache-cassandra]| 
{color:#00875A}*DONE*{color}| [~tjake]|  

Column descriptions:
* *Name*: Name and link to the tool's official page
* *Status*: {{NOT STARTED}}, {{IN PROGRESS}}, {{BLOCKED}} if you hit an issue 
and have to wait for it to be solved, {{DONE}}, or {{AUTOMATIC}} if testing 4.0 
is part of your CI process.
* *Contact*: The person acting as the contact point for that tool.


[jira] [Commented] (CASSANDRA-15584) 4.0 quality testing: Tooling - External Ecosystem

2020-11-18 Thread Alexander Dejanovski (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17234355#comment-17234355
 ] 

Alexander Dejanovski commented on CASSANDRA-15584:
--

CASSANDRA-16280 was committed to fix the sstableloader issues in Cassandra and 
the Medusa PR fixing the tests was merged.
We're done with Medusa (table updated).

> 4.0 quality testing: Tooling - External Ecosystem
> -
>
> Key: CASSANDRA-15584
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15584
> Project: Cassandra
>  Issue Type: Task
>  Components: Tool/external
>Reporter: Josh McKenzie
>Assignee: Benjamin Lerer
>Priority: Normal
> Fix For: 4.0-rc
>
>
> Reference [doc from 
> NGCC|https://docs.google.com/document/d/1uhUOp7wpE9ZXNDgxoCZHejHt5SO4Qw1dArZqqsJccyQ/edit#]
>  for context.
> *Shepherd: Benjamin Lerer*
> Many users of Apache Cassandra employ open source tooling to automate 
> Cassandra configuration, runtime management, and repair scheduling. Prior to 
> release, we need to confirm that popular third-party tools function properly. 
> Current list of tools:
> || Name || Status || Contact ||
> | [Priam|http://netflix.github.io/Priam/] |{color:#00875A} *DONE WITH 
> ALPHA*{color} (need to be tested with beta) | [~sumanth.pasupuleti]| 
> | [sstabletools|https://github.com/instaclustr/cassandra-sstable-tools] | 
> *NOT STARTED* | [~stefan.miklosovic]| 
> | [cassandra-exporter|https://github.com/instaclustr/cassandra-exporter]| 
> *NOT STARTED* | [~stefan.miklosovic]|
> | [Instaclustr Cassandra 
> operator|https://github.com/instaclustr/cassandra-operator]|  
> {color:#00875A}*DONE*{color} | [~stefan.miklosovic]|
> | [Instaclustr Esop | 
> https://github.com/instaclustr/instaclustr-esop]|{color:#00875A}*DONE*{color} 
> | [~stefan.miklosovic]|
> | [Instaclustr Icarus | 
> https://github.com/instaclustr/instaclustr-icarus]|{color:#00875A}*DONE*{color}
>  | [~stefan.miklosovic]|
> | [Cassandra SSTable generator | 
> https://github.com/instaclustr/cassandra-sstable-generator]|{color:#00875A}*DONE*{color}|
>  [~stefan.miklosovic]|
> | [Cassandra TTL Remover | https://github.com/instaclustr/TTLRemover] | 
> {color:#00875A}*DONE*{color} |  [~stefan.miklosovic]|
> | [Cassandra Everywhere Strategy | 
> https://github.com/instaclustr/cassandra-everywhere-strategy] | 
> {color:#00875A}*DONE*{color} | [~stefan.miklosovic]|
> | [Cassandra LDAP Authenticator | 
> https://github.com/instaclustr/cassandra-ldap] | {color:#00875A}*DONE*{color} 
> | [~stefan.miklosovic]|
> | [Instaclustr Minotaur | 
> https://github.com/instaclustr/instaclustr-minotaur] | 
> {color:#00875A}*DONE*{color} | [~stefan.miklosovic]|
> | [Reaper|http://cassandra-reaper.io/]| {color:#00875A}*AUTOMATIC*{color} | 
> [~adejanovski]|
> | [Medusa|https://github.com/thelastpickle/cassandra-medusa]|  
> {color:#00875A}*DONE*{color}| [~adejanovski]|
> | [Casskop|https://orange-opensource.github.io/casskop/]| *NOT STARTED*| 
> Franck Dehay|
> | 
> [spark-cassandra-connector|https://github.com/datastax/spark-cassandra-connector]|
>  {color:#00875A}*DONE*{color}| [~jtgrabowski]|
> | [cass operator|https://github.com/datastax/cass-operator]| 
> {color:#00875A}*DONE*{color}| [~jimdickinson]|
> | [metric 
> collector|https://github.com/datastax/metric-collector-for-apache-cassandra]| 
> {color:#00875A}*DONE*{color}| [~tjake]|
> | [management 
> API|https://github.com/datastax/management-api-for-apache-cassandra]| 
> {color:#00875A}*DONE*{color}| [~tjake]|  
> Column descriptions:
> * *Name*: Name and link to the tool's official page
> * *Status*: {{NOT STARTED}}, {{IN PROGRESS}}, {{BLOCKED}} if you hit an 
> issue and have to wait for it to be solved, {{DONE}}, or {{AUTOMATIC}} if 
> testing 4.0 is part of your CI process.
> * *Contact*: The person acting as the contact point for that tool.






[jira] [Commented] (CASSANDRA-16282) Fix STCS documentation (the header is currently LCS)

2020-11-18 Thread Berenguer Blasi (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17234374#comment-17234374
 ] 

Berenguer Blasi commented on CASSANDRA-16282:
-

LGTM +1

> Fix STCS documentation (the header is currently LCS)
> 
>
> Key: CASSANDRA-16282
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16282
> Project: Cassandra
>  Issue Type: Bug
>  Components: Documentation/Website
>Reporter: Miles Garnsey
>Assignee: Miles Garnsey
>Priority: Normal
> Fix For: 4.0
>
>
> Currently, the header in the [documentation for 
> STCS|https://cassandra.apache.org/doc/latest/operating/compaction/stcs.html] 
> refers to LCS, which also makes it hard to find the STCS documentation via 
> search.






[jira] [Updated] (CASSANDRA-16282) Fix STCS documentation (the header is currently LCS)

2020-11-18 Thread Berenguer Blasi (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Berenguer Blasi updated CASSANDRA-16282:

Reviewers: Berenguer Blasi

> Fix STCS documentation (the header is currently LCS)
> 
>
> Key: CASSANDRA-16282
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16282
> Project: Cassandra
>  Issue Type: Bug
>  Components: Documentation/Website
>Reporter: Miles Garnsey
>Assignee: Miles Garnsey
>Priority: Normal
> Fix For: 4.0
>
>
> Currently, the header in the [documentation for 
> STCS|https://cassandra.apache.org/doc/latest/operating/compaction/stcs.html] 
> refers to LCS, which also makes it hard to find the STCS documentation via 
> search.






[jira] [Commented] (CASSANDRA-16245) Implement repair quality test scenarios

2020-11-18 Thread Alexander Dejanovski (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17234384#comment-17234384
 ] 

Alexander Dejanovski commented on CASSANDRA-16245:
--

[~zvo], I'll write up the Gherkin files with the test scenarios so that you can 
implement the test steps.
As agreed, we can work in a separate repo for initial development and integrate 
the code into the Cassandra repo once we have something to show.

 

> Implement repair quality test scenarios
> ---
>
> Key: CASSANDRA-16245
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16245
> Project: Cassandra
>  Issue Type: Task
>  Components: Test/dtest/java
>Reporter: Alexander Dejanovski
>Assignee: Radovan Zvoncek
>Priority: Normal
> Fix For: 4.0-rc
>
>
> Implement the following test scenarios in a new test suite for repair 
> integration testing with significant load:
> Generate/restore a workload of ~100GB per node. Medusa should be considered 
> for creating the initial backup, which could then be restored from an S3 
> bucket to speed up node population.
>  Data should deliberately require repair and be generated accordingly.
> Perform repairs on a 3-node cluster with 4 cores each and 16GB-32GB RAM 
> (m5d.xlarge instances would be the most cost-efficient type).
>  Repaired keyspaces will use RF=3 or RF=2 in some cases (the latter is for 
> subranges with different sets of replicas).
> ||Mode||Version||Settings||Checks||
> |Full repair|trunk|Sequential + All token ranges|"No anticompaction 
> (repairedAt==0)
>  Out of sync ranges > 0
>  Subsequent run must show no out of sync range"|
> |Full repair|trunk|Parallel + Primary range|"No anticompaction (repairedAt==0)
>  Out of sync ranges > 0
>  Subsequent run must show no out of sync range"|
> |Full repair|trunk|Force terminate repair shortly after it was 
> triggered|Repair threads must be cleaned up|
> |Subrange repair|trunk|Sequential + single token range|"No anticompaction 
> (repairedAt==0)
>  Out of sync ranges > 0
>  Subsequent run must show no out of sync range"|
> |Subrange repair|trunk|Parallel + 10 token ranges which have the same 
> replicas|"No anticompaction (repairedAt == 0)
>  Out of sync ranges > 0
>  Subsequent run must show no out of sync range
> A single repair session will handle all subranges at once"|
> |Subrange repair|trunk|Parallel + 10 token ranges which have different 
> replicas|"No anticompaction (repairedAt==0)
>  Out of sync ranges > 0
>  Subsequent run must show no out of sync range
> More than one repair session is triggered to process all subranges"|
> |Subrange repair|trunk|"Single token range.
>  Force terminate repair shortly after it was triggered."|Repair threads must 
> be cleaned up|
> |Incremental repair|trunk|"Parallel (mandatory)
>  No compaction during repair"|"Anticompaction status (repairedAt != 0) on all 
> SSTables
>  No pending repair on SSTables after completion (could require to wait a bit 
> as this will happen asynchronously)
>  Out of sync ranges > 0 + Subsequent run must show no out of sync range"|
> |Incremental repair|trunk|"Parallel (mandatory)
>  Major compaction triggered during repair"|"Anticompaction status (repairedAt 
> != 0) on all SSTables
>  No pending repair on SSTables after completion (could require to wait a bit 
> as this will happen asynchronously)
>  Out of sync ranges > 0 + Subsequent run must show no out of sync range"|
> |Incremental repair|trunk|Force terminate repair shortly after it was 
> triggered.|Repair threads must be cleaned up|
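The modes in the table above map onto nodetool invocations roughly as follows. This is a hedged sketch: the keyspace name `ks1` and the token values are placeholders (not from the ticket), and flag spellings can vary slightly between versions:

```shell
# Illustrative nodetool commands for the repair modes listed above
# (keyspace "ks1" and the token values are placeholders).
nodetool repair --full -seq ks1   # full repair, sequential, all token ranges
nodetool repair --full -pr ks1    # full repair, parallel, primary range only
nodetool repair --full -st -9223372036854775808 -et -3074457345618258603 ks1  # single subrange
nodetool repair ks1               # incremental repair (parallel; the default since 2.2)
# Force-terminating running sessions goes through JMX
# (StorageService#forceTerminateAllRepairSessions) or, in 4.0, nodetool repair_admin.
```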






[jira] [Commented] (CASSANDRA-16245) Implement repair quality test scenarios

2020-11-18 Thread Alexander Dejanovski (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17234388#comment-17234388
 ] 

Alexander Dejanovski commented on CASSANDRA-16245:
--

Dev repo was created here for anyone interested: 
https://github.com/riptano/cassandra-rtest

> Implement repair quality test scenarios
> ---
>
> Key: CASSANDRA-16245
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16245
> Project: Cassandra
>  Issue Type: Task
>  Components: Test/dtest/java
>Reporter: Alexander Dejanovski
>Assignee: Radovan Zvoncek
>Priority: Normal
> Fix For: 4.0-rc
>
>
> Implement the following test scenarios in a new test suite for repair 
> integration testing with significant load:
> Generate/restore a workload of ~100GB per node. Medusa should be considered 
> for creating the initial backup, which could then be restored from an S3 
> bucket to speed up node population.
>  Data should deliberately require repair and be generated accordingly.
> Perform repairs on a 3-node cluster with 4 cores each and 16GB-32GB RAM 
> (m5d.xlarge instances would be the most cost-efficient type).
>  Repaired keyspaces will use RF=3 or RF=2 in some cases (the latter is for 
> subranges with different sets of replicas).
> ||Mode||Version||Settings||Checks||
> |Full repair|trunk|Sequential + All token ranges|"No anticompaction 
> (repairedAt==0)
>  Out of sync ranges > 0
>  Subsequent run must show no out of sync range"|
> |Full repair|trunk|Parallel + Primary range|"No anticompaction (repairedAt==0)
>  Out of sync ranges > 0
>  Subsequent run must show no out of sync range"|
> |Full repair|trunk|Force terminate repair shortly after it was 
> triggered|Repair threads must be cleaned up|
> |Subrange repair|trunk|Sequential + single token range|"No anticompaction 
> (repairedAt==0)
>  Out of sync ranges > 0
>  Subsequent run must show no out of sync range"|
> |Subrange repair|trunk|Parallel + 10 token ranges which have the same 
> replicas|"No anticompaction (repairedAt == 0)
>  Out of sync ranges > 0
>  Subsequent run must show no out of sync range
> A single repair session will handle all subranges at once"|
> |Subrange repair|trunk|Parallel + 10 token ranges which have different 
> replicas|"No anticompaction (repairedAt==0)
>  Out of sync ranges > 0
>  Subsequent run must show no out of sync range
> More than one repair session is triggered to process all subranges"|
> |Subrange repair|trunk|"Single token range.
>  Force terminate repair shortly after it was triggered."|Repair threads must 
> be cleaned up|
> |Incremental repair|trunk|"Parallel (mandatory)
>  No compaction during repair"|"Anticompaction status (repairedAt != 0) on all 
> SSTables
>  No pending repair on SSTables after completion (could require to wait a bit 
> as this will happen asynchronously)
>  Out of sync ranges > 0 + Subsequent run must show no out of sync range"|
> |Incremental repair|trunk|"Parallel (mandatory)
>  Major compaction triggered during repair"|"Anticompaction status (repairedAt 
> != 0) on all SSTables
>  No pending repair on SSTables after completion (could require to wait a bit 
> as this will happen asynchronously)
>  Out of sync ranges > 0 + Subsequent run must show no out of sync range"|
> |Incremental repair|trunk|Force terminate repair shortly after it was 
> triggered.|Repair threads must be cleaned up|






[jira] [Commented] (CASSANDRA-15580) 4.0 quality testing: Repair

2020-11-18 Thread Alexander Dejanovski (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17234403#comment-17234403
 ] 

Alexander Dejanovski commented on CASSANDRA-15580:
--

Sounds good [~marcuse], thanks for the notice.

Work started on CASSANDRA-16245 to implement the new test suite.
If anyone's interested in picking up CASSANDRA-16244 it would be greatly 
appreciated!

> 4.0 quality testing: Repair
> ---
>
> Key: CASSANDRA-15580
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15580
> Project: Cassandra
>  Issue Type: Task
>  Components: Test/dtest/python
>Reporter: Josh McKenzie
>Assignee: Alexander Dejanovski
>Priority: Normal
> Fix For: 4.0-rc
>
>
> Reference [doc from 
> NGCC|https://docs.google.com/document/d/1uhUOp7wpE9ZXNDgxoCZHejHt5SO4Qw1dArZqqsJccyQ/edit#]
>  for context.
> *Shepherd: Alexander Dejanovski*
> We aim for 4.0 to have the first fully functioning incremental repair 
> solution (CASSANDRA-9143)! Furthermore, we aim to verify that all types of 
> repair (full range, subrange, incremental) function as expected, as well as 
> to ensure community tools such as Reaper work. CASSANDRA-3200 adds an 
> experimental option to reduce the amount of data streamed during repair, we 
> should write more tests and see how it works with big nodes.






[jira] [Updated] (CASSANDRA-15563) Backport removal of OpenJDK warning log

2020-11-18 Thread Thomas Steinmaurer (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Steinmaurer updated CASSANDRA-15563:
---
Description: As requested on ASF Slack, creating this ticket for a backport 
of CASSANDRA-13916 for 3.0.  (was: As requested on Slack, creating this ticket 
for a backport of CASSANDRA-13916 for 3.0.)

> Backport removal of OpenJDK warning log
> ---
>
> Key: CASSANDRA-15563
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15563
> Project: Cassandra
>  Issue Type: Task
>Reporter: Thomas Steinmaurer
>Priority: Normal
> Fix For: 3.0.x
>
>
> As requested on ASF Slack, creating this ticket for a backport of 
> CASSANDRA-13916 for 3.0.






[jira] [Commented] (CASSANDRA-14477) The check of num_tokens against the length of inital_token in the yaml triggers unexpectedly

2020-11-18 Thread Michael Semb Wever (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17234419#comment-17234419
 ] 

Michael Semb Wever commented on CASSANDRA-14477:


bq. the Cassandra-devbranch jenkins pipeline should be including dtest-novnode

The patch for that is 
[here|https://github.com/apache/cassandra-builds/compare/trunk...thelastpickle:mck/14477].

> The check of num_tokens against the length of inital_token in the yaml 
> triggers unexpectedly
> 
>
> Key: CASSANDRA-14477
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14477
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Config
>Reporter: Vincent White
>Assignee: Stefan Miklosovic
>Priority: Low
> Fix For: 3.0.23, 3.11.9, 4.0-beta4
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> In CASSANDRA-10120 we added a check that compares num_tokens against the 
> number of tokens supplied in the yaml via initial_token. From my reading of 
> CASSANDRA-10120, it was to prevent Cassandra from starting if the yaml 
> contained contradictory values for num_tokens and initial_token, which should 
> help 
> prevent misconfiguration via human error. The current behaviour appears to 
> differ slightly in that it performs this comparison regardless of whether 
> num_tokens is included in the yaml or not. Below are proposed patches to only 
> perform the check if both options are present in the yaml.
> ||Branch||
> |[3.0.x|https://github.com/apache/cassandra/compare/cassandra-3.0...vincewhite:num_tokens_30]|
> |[3.x|https://github.com/apache/cassandra/compare/cassandra-3.11...vincewhite:num_tokens_test_1_311]|
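The guard described above can be sketched in shell terms. This is a hypothetical illustration of the intended logic only (Cassandra's real check lives in its Java configuration loader, and the token value below is a placeholder):

```shell
# Sketch: only compare num_tokens against the number of initial_token entries
# when BOTH options are actually set in cassandra.yaml.
num_tokens=""                          # unset in the yaml
initial_token="-9223372036854775808"   # single token: classic non-vnodes setup

# count comma-separated tokens in initial_token
token_count=$(printf '%s\n' "$initial_token" | tr ',' '\n' | grep -c .)

if [ -n "$num_tokens" ] && [ "$num_tokens" -ne "$token_count" ]; then
  echo "ERROR: num_tokens ($num_tokens) contradicts initial_token count ($token_count)"
else
  echo "OK: check passed or skipped (token_count=$token_count)"
fi
```

With num_tokens unset, the comparison is skipped entirely, which is the behaviour the patches above propose.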






[jira] [Updated] (CASSANDRA-16282) Fix STCS documentation (the header is currently LCS)

2020-11-18 Thread Michael Semb Wever (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Semb Wever updated CASSANDRA-16282:
---
Reviewers: Berenguer Blasi, Michael Semb Wever  (was: Berenguer Blasi)
   Status: Review In Progress  (was: Patch Available)

> Fix STCS documentation (the header is currently LCS)
> 
>
> Key: CASSANDRA-16282
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16282
> Project: Cassandra
>  Issue Type: Bug
>  Components: Documentation/Website
>Reporter: Miles Garnsey
>Assignee: Miles Garnsey
>Priority: Normal
> Fix For: 4.0
>
>
> Currently, the header in the [documentation for 
> STCS|https://cassandra.apache.org/doc/latest/operating/compaction/stcs.html] 
> refers to LCS in the header, which also makes it hard to find the STCS 
> documentation via search.






[jira] [Updated] (CASSANDRA-16282) Fix STCS documentation (the header is currently LCS)

2020-11-18 Thread Michael Semb Wever (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Semb Wever updated CASSANDRA-16282:
---
Status: Ready to Commit  (was: Review In Progress)

> Fix STCS documentation (the header is currently LCS)
> 
>
> Key: CASSANDRA-16282
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16282
> Project: Cassandra
>  Issue Type: Bug
>  Components: Documentation/Website
>Reporter: Miles Garnsey
>Assignee: Miles Garnsey
>Priority: Normal
> Fix For: 4.0
>
>
> Currently, the header in the [documentation for 
> STCS|https://cassandra.apache.org/doc/latest/operating/compaction/stcs.html] 
> refers to LCS in the header, which also makes it hard to find the STCS 
> documentation via search.






[jira] [Commented] (CASSANDRA-14477) The check of num_tokens against the length of inital_token in the yaml triggers unexpectedly

2020-11-18 Thread Michael Semb Wever (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17234528#comment-17234528
 ] 

Michael Semb Wever commented on CASSANDRA-14477:


bq. the check for num_tokens being defined should be skipped if initial_tokens 
defines only one token, as it is unlikely to be a typo; folk would rarely be 
configuring two tokens, and just one initial_token is the traditional 
non-vnodes configuration predating the use of num_tokens

[~stefan.miklosovic] has added fixes for each branch on the same PRs above.

> The check of num_tokens against the length of inital_token in the yaml 
> triggers unexpectedly
> 
>
> Key: CASSANDRA-14477
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14477
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Config
>Reporter: Vincent White
>Assignee: Stefan Miklosovic
>Priority: Low
> Fix For: 3.0.23, 3.11.9, 4.0-beta4
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> In CASSANDRA-10120 we added a check that compares num_tokens against the 
> number of tokens supplied in the yaml via initial_token. From my reading of 
> CASSANDRA-10120, it was meant to prevent Cassandra from starting if the yaml 
> contained contradictory values for num_tokens and initial_token, which should 
> help prevent misconfiguration via human error. The current behaviour differs 
> slightly in that it performs this comparison regardless of whether 
> num_tokens is included in the yaml or not. Below are proposed patches to only 
> perform the check if both options are present in the yaml.
> ||Branch||
> |[3.0.x|https://github.com/apache/cassandra/compare/cassandra-3.0...vincewhite:num_tokens_30]|
> |[3.x|https://github.com/apache/cassandra/compare/cassandra-3.11...vincewhite:num_tokens_test_1_311]|
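A minimal sketch of the check's intended behaviour after the fix discussed in this thread, assuming the single-initial_token exemption; this is a hypothetical, self-contained illustration (invented class and method names, not the actual DatabaseDescriptor code):

```java
import java.util.Arrays;
import java.util.List;

public class TokensConfigSketch
{
    // Rough stand-in for the fixed check (CASSANDRA-14477): returns the
    // effective num_tokens, or throws when the yaml values contradict
    // each other. Illustrative only; not the real DatabaseDescriptor code.
    static int effectiveNumTokens(String initialToken, Integer numTokens)
    {
        if (initialToken != null)
        {
            List<String> tokens = Arrays.asList(initialToken.split(","));
            if (numTokens == null)
            {
                if (tokens.size() == 1)
                    return 1; // traditional non-vnodes config: allowed after the fix
                throw new IllegalArgumentException("initial_token was set but num_tokens is not!");
            }
            if (tokens.size() != numTokens)
                throw new IllegalArgumentException(
                    "number of initial tokens (" + tokens.size() + ") differs from num_tokens (" + numTokens + ")");
        }
        return numTokens == null ? 1 : numTokens;
    }

    public static void main(String[] args)
    {
        System.out.println(effectiveNumTokens("0", null));       // single token, no num_tokens -> 1
        System.out.println(effectiveNumTokens("0,256,1024", 3)); // counts match -> 3
    }
}
```

Compiled alone, the single-token case yields 1, mirroring the traditional non-vnodes configuration, while mismatched counts still fail fast.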






[jira] [Comment Edited] (CASSANDRA-14477) The check of num_tokens against the length of inital_token in the yaml triggers unexpectedly

2020-11-18 Thread Michael Semb Wever (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17234528#comment-17234528
 ] 

Michael Semb Wever edited comment on CASSANDRA-14477 at 11/18/20, 11:55 AM:


bq. the check for num_tokens being defined should be skipped if initial_tokens 
defines only one token, as it is unlikely to be a typo; folk would rarely be 
configuring two tokens, and just one initial_token is the traditional 
non-vnodes configuration predating the use of num_tokens

[~stefan.miklosovic] has added fixes for each branch on the same PRs above.

CI run 
[here|https://ci-cassandra.apache.org/blue/organizations/jenkins/Cassandra-devbranch/detail/Cassandra-devbranch/216/pipeline/].


was (Author: michaelsembwever):
bq. he check for num_tokens being defined should be skipped if initial_tokens 
defines only one token, as it unlikely to be a typo, folk would rarely be 
configuring two tokens, and just one initial_token is the traditional 
non-vnodes configuration predating the use of num_tokens

[~stefan.miklosovic] has added fixes for each branch on the same PRs above.

CI run 
[here|https://ci-cassandra.apache.org/blue/organizations/jenkins/Cassandra-devbranch/detail/Cassandra-devbranch/216/pipeline/].







[jira] [Comment Edited] (CASSANDRA-14477) The check of num_tokens against the length of inital_token in the yaml triggers unexpectedly

2020-11-18 Thread Michael Semb Wever (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17234528#comment-17234528
 ] 

Michael Semb Wever edited comment on CASSANDRA-14477 at 11/18/20, 11:55 AM:


bq. the check for num_tokens being defined should be skipped if initial_tokens 
defines only one token, as it is unlikely to be a typo; folk would rarely be 
configuring two tokens, and just one initial_token is the traditional 
non-vnodes configuration predating the use of num_tokens

[~stefan.miklosovic] has added fixes for each branch on the same PRs above.

CI run 
[here|https://ci-cassandra.apache.org/blue/organizations/jenkins/Cassandra-devbranch/detail/Cassandra-devbranch/216/pipeline/].


was (Author: michaelsembwever):
bq. he check for num_tokens being defined should be skipped if initial_tokens 
defines only one token, as it unlikely to be a typo, folk would rarely be 
configuring two tokens, and just one initial_token is the traditional 
non-vnodes configuration predating the use of num_tokens

[~stefan.miklosovic] has added fixes for each branch on the same PRs above.







[jira] [Commented] (CASSANDRA-16273) nodetool status owns (effective) question mark

2020-11-18 Thread Yakir Gibraltar (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17234540#comment-17234540
 ] 

Yakir Gibraltar commented on CASSANDRA-16273:
-

Same issue for me with Cassandra 4:
{code:java}
cqlsh> DESC KEYSPACE test;
CREATE KEYSPACE test WITH replication = {'class': 'NetworkTopologyStrategy', 
'V4CH': '3'}  AND durable_writes = true; {code}
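As a side note on the `?` values: effective ownership is computed per keyspace, so `nodetool status` can only report it when a keyspace is given (or when all non-system keyspaces share the same replication settings); otherwise the Owns (effective) column is shown as `?`. A minimal illustration, with a hypothetical keyspace name:

```shell
# Ask for ownership of a specific keyspace; 'test' is illustrative.
nodetool status test
```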

> nodetool status owns (effective) question mark
> --
>
> Key: CASSANDRA-16273
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16273
> Project: Cassandra
>  Issue Type: Bug
>Reporter: AaronTrazona
>Priority: Normal
> Fix For: 3.11.x
>
> Attachments: image-2020-11-13-13-55-12-609.png, 
> image-2020-11-16-08-21-08-580.png, image-2020-11-16-08-23-18-474.png, 
> image-2020-11-16-09-30-14-287.png, image-2020-11-16-09-34-23-103.png, 
> image-2020-11-16-09-41-12-804.png, image-2020-11-16-12-42-56-725.png, 
> image-2020-11-16-13-58-08-395.png
>
>
> !image-2020-11-13-13-55-12-609.png!
> I'm wondering why the Owns (effective) column became a question mark.
> I already enabled
> JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=public name"
>
> Let me know if I missed something.
>
> Thanks






[cassandra] branch cassandra-3.11 updated (d9e1af8 -> e8c1af2)

2020-11-18 Thread mck
This is an automated email from the ASF dual-hosted git repository.

mck pushed a change to branch cassandra-3.11
in repository https://gitbox.apache.org/repos/asf/cassandra.git.


from d9e1af8  Merge branch 'cassandra-3.0' into cassandra-3.11
 new bfd5d20  Check between num_tokens and initial_token only applies to 
vnodes usage
 new e8c1af2  Merge branch 'cassandra-3.0' into cassandra-3.11

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 NEWS.txt   |  2 +-
 .../cassandra/config/DatabaseDescriptor.java   |  5 -
 .../cassandra/config/DatabaseDescriptorTest.java   | 25 --
 3 files changed, 24 insertions(+), 8 deletions(-)





[cassandra] branch cassandra-3.0 updated: Check between num_tokens and initial_token only applies to vnodes usage

2020-11-18 Thread mck
This is an automated email from the ASF dual-hosted git repository.

mck pushed a commit to branch cassandra-3.0
in repository https://gitbox.apache.org/repos/asf/cassandra.git


The following commit(s) were added to refs/heads/cassandra-3.0 by this push:
 new bfd5d20  Check between num_tokens and initial_token only applies to 
vnodes usage
bfd5d20 is described below

commit bfd5d20a13501d897d8d34acce9b0394fa1cf00b
Author: Stefan Miklosovic 
AuthorDate: Wed Nov 18 10:21:11 2020 +0100

Check between num_tokens and initial_token only applies to vnodes usage

 patch by Stefan Miklosovic; reviewed by Mick Semb Wever for CASSANDRA-14477
---
 NEWS.txt   |  2 +-
 .../cassandra/config/DatabaseDescriptor.java   |  5 +++-
 .../cassandra/config/DatabaseDescriptorTest.java   | 34 ++
 3 files changed, 33 insertions(+), 8 deletions(-)

diff --git a/NEWS.txt b/NEWS.txt
index 42fbf63..7034c2c 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -47,7 +47,7 @@ using the provided 'sstableupgrade' tool.
 
 Upgrading
 -
-- In cassandra.yaml, num_tokens must be defined if initial_token is 
defined.
+- In cassandra.yaml, when using vnodes num_tokens must be defined if 
initial_token is defined.
   If it is not defined, or not equal to the numbers of tokens defined in 
initial_tokens,
   the node will not start. See CASSANDRA-14477 for details.
 
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index 04293fb..3f9aa96 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -300,7 +300,10 @@ public class DatabaseDescriptor
 Collection<String> tokens = tokensFromString(config.initial_token);
 if (config.num_tokens == null)
 {
-throw new ConfigurationException("initial_token was set but 
num_tokens is not!", false);
+if (tokens.size() == 1)
+config.num_tokens = 1;
+else
+throw new ConfigurationException("initial_token was set 
but num_tokens is not!", false);
 }
 
 if (tokens.size() != config.num_tokens)
diff --git a/test/unit/org/apache/cassandra/config/DatabaseDescriptorTest.java 
b/test/unit/org/apache/cassandra/config/DatabaseDescriptorTest.java
index 7614e02..0dcc7f7 100644
--- a/test/unit/org/apache/cassandra/config/DatabaseDescriptorTest.java
+++ b/test/unit/org/apache/cassandra/config/DatabaseDescriptorTest.java
@@ -315,7 +315,7 @@ public class DatabaseDescriptorTest
 }
 
 @Test
-public void 
testApplyInitialTokensInitialTokensSetNumTokensSetAndDoesMatch() throws 
Exception
+public void 
testApplyTokensConfigInitialTokensSetNumTokensSetAndDoesMatch() throws Exception
 {
 Config config = DatabaseDescriptor.loadConfig();
 config.initial_token = "0,256,1024";
@@ -337,7 +337,7 @@ public class DatabaseDescriptorTest
 }
 
 @Test
-public void 
testApplyInitialTokensInitialTokensSetNumTokensSetAndDoesntMatch() throws 
Exception
+public void 
testApplyTokensConfigInitialTokensSetNumTokensSetAndDoesntMatch() throws 
Exception
 {
 Config config = DatabaseDescriptor.loadConfig();
 config.initial_token = "0,256,1024";
@@ -349,7 +349,7 @@ public class DatabaseDescriptorTest
 {
 DatabaseDescriptor.applyTokensConfig(config);
 
-Assert.fail("initial_token = 0,256,1024 and num_tokens = 10 but 
applyInitialTokens() did not fail!");
+Assert.fail("initial_token = 0,256,1024 and num_tokens = 10 but 
applyTokensConfig() did not fail!");
 }
 catch (ConfigurationException ex)
 {
@@ -363,7 +363,7 @@ public class DatabaseDescriptorTest
 }
 
 @Test
-public void testApplyInitialTokensInitialTokensSetNumTokensNotSet() throws 
Exception
+public void testApplyTokensConfigInitialTokensSetNumTokensNotSet() throws 
Exception
 {
 Config config = DatabaseDescriptor.loadConfig();
 
@@ -387,7 +387,7 @@ public class DatabaseDescriptorTest
 }
 
 @Test
-public void testApplyInitialTokensInitialTokensNotSetNumTokensSet() throws 
Exception
+public void testApplyTokensConfigInitialTokensNotSetNumTokensSet() throws 
Exception
 {
 Config config = DatabaseDescriptor.loadConfig();
 config.num_tokens = 3;
@@ -408,7 +408,7 @@ public class DatabaseDescriptorTest
 }
 
 @Test
-public void testApplyInitialTokensInitialTokensNotSetNumTokensNotSet() 
throws Exception
+public void testApplyTokensConfigInitialTokensNotSetNumTokensNotSet() 
throws Exception
 {
 Config config = DatabaseDescriptor.loadConfig();
 
@@ -427,6 +427,28 @@ public class DatabaseDescriptorTest
 
Assert.assertTrue(DatabaseDescriptor.tokensFromString(config.initial_token).isEm

[cassandra] 01/01: Merge branch 'cassandra-3.0' into cassandra-3.11

2020-11-18 Thread mck
This is an automated email from the ASF dual-hosted git repository.

mck pushed a commit to branch cassandra-3.11
in repository https://gitbox.apache.org/repos/asf/cassandra.git

commit e8c1af26c5d07c12109d51c2d89bc3967141dc55
Merge: d9e1af8 bfd5d20
Author: Mick Semb Wever 
AuthorDate: Wed Nov 18 12:57:14 2020 +0100

Merge branch 'cassandra-3.0' into cassandra-3.11

 NEWS.txt   |  2 +-
 .../cassandra/config/DatabaseDescriptor.java   |  5 -
 .../cassandra/config/DatabaseDescriptorTest.java   | 25 --
 3 files changed, 24 insertions(+), 8 deletions(-)

diff --cc NEWS.txt
index d16bcce,7034c2c..99d589d
--- a/NEWS.txt
+++ b/NEWS.txt
@@@ -42,11 -42,12 +42,11 @@@ restore snapshots created with the prev
  'sstableloader' tool. You can upgrade the file format of your snapshots
  using the provided 'sstableupgrade' tool.
  
 -3.0.24
 -==
 -
 +3.11.10
 +=
  Upgrading
  -
- - In cassandra.yaml, num_tokens must be defined if initial_token is 
defined.
+ - In cassandra.yaml, when using vnodes num_tokens must be defined if 
initial_token is defined.
If it is not defined, or not equal to the numbers of tokens defined in 
initial_tokens,
the node will not start. See CASSANDRA-14477 for details.
  
diff --cc src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index cbf42b9,3f9aa96..c88a0e7
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@@ -940,126 -756,60 +940,129 @@@ public class DatabaseDescripto
  }
  if (seedProvider.getSeeds().size() == 0)
  throw new ConfigurationException("The seed provider lists no 
seeds.", false);
 +}
  
 -if (conf.user_defined_function_fail_timeout < 0)
 -throw new 
ConfigurationException("user_defined_function_fail_timeout must not be 
negative", false);
 -if (conf.user_defined_function_warn_timeout < 0)
 -throw new 
ConfigurationException("user_defined_function_warn_timeout must not be 
negative", false);
 +public static void applyTokensConfig()
 +{
 +applyTokensConfig(conf);
 +}
  
 -if (conf.user_defined_function_fail_timeout < 
conf.user_defined_function_warn_timeout)
 -throw new 
ConfigurationException("user_defined_function_warn_timeout must less than 
user_defined_function_fail_timeout", false);
 +static void applyTokensConfig(Config conf)
 +{
 +if (conf.initial_token != null)
 +{
 +Collection<String> tokens = tokensFromString(conf.initial_token);
 +if (conf.num_tokens == null)
 +{
- throw new ConfigurationException("initial_token was set but 
num_tokens is not!", false);
++if (tokens.size() == 1)
++conf.num_tokens = 1;
++else
++throw new ConfigurationException("initial_token was set 
but num_tokens is not!", false);
 +}
  
 -if (conf.commitlog_segment_size_in_mb <= 0)
 -throw new ConfigurationException("commitlog_segment_size_in_mb 
must be positive, but was "
 -+ conf.commitlog_segment_size_in_mb, false);
 -else if (conf.commitlog_segment_size_in_mb >= 2048)
 -throw new ConfigurationException("commitlog_segment_size_in_mb 
must be smaller than 2048, but was "
 -+ conf.commitlog_segment_size_in_mb, false);
 +if (tokens.size() != conf.num_tokens)
 +{
 +throw new ConfigurationException(String.format("The number of 
initial tokens (by initial_token) specified (%s) is different from num_tokens 
value (%s)",
 +   tokens.size(),
 +   
conf.num_tokens),
 + false);
 +}
 +
 +for (String token : tokens)
 +partitioner.getTokenFactory().validate(token);
 +}
 +else if (conf.num_tokens == null)
 +{
 +conf.num_tokens = 1;
 +}
 +}
 +
 +// Maybe safe for clients + tools
 +public static void applyRequestScheduler()
 +{
 +/* Request Scheduler setup */
 +requestSchedulerOptions = conf.request_scheduler_options;
 +if (conf.request_scheduler != null)
 +{
 +try
 +{
 +if (requestSchedulerOptions == null)
 +{
 +requestSchedulerOptions = new RequestSchedulerOptions();
 +}
 +Class<?> cls = Class.forName(conf.request_scheduler);
 +requestScheduler = (IRequestScheduler) 
cls.getConstructor(RequestSchedulerOptions.class).newInstance(requestSchedulerOptions);
 +}
 +catch (ClassNotFoundException e)
 +

[cassandra] 01/01: Merge branch 'cassandra-3.11' into trunk

2020-11-18 Thread mck
This is an automated email from the ASF dual-hosted git repository.

mck pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra.git

commit 8bcd6ab9bb94d150527ff185caa3d7402c4d9159
Merge: 1122fcf e8c1af2
Author: Mick Semb Wever 
AuthorDate: Wed Nov 18 13:02:46 2020 +0100

Merge branch 'cassandra-3.11' into trunk

 NEWS.txt   |  2 +-
 .../cassandra/config/DatabaseDescriptor.java   |  5 -
 .../cassandra/config/DatabaseDescriptorTest.java   | 25 --
 3 files changed, 24 insertions(+), 8 deletions(-)

diff --cc NEWS.txt
index cf33df9,99d589d..99c8af4
--- a/NEWS.txt
+++ b/NEWS.txt
@@@ -33,230 -42,11 +33,230 @@@ restore snapshots created with the prev
  'sstableloader' tool. You can upgrade the file format of your snapshots
  using the provided 'sstableupgrade' tool.
  
 -3.11.10
 -=
 +4.0
 +===
 +
 +New features
 +
 +- Nodes will now bootstrap all intra-cluster connections at startup by 
default and wait
 +  10 seconds for the all but one node in the local data center to be 
connected and marked
 +  UP in gossip. This prevents nodes from coordinating requests and 
failing because they
 +  aren't able to connect to the cluster fast enough. 
block_for_peers_timeout_in_secs in
 +  cassandra.yaml can be used to configure how long to wait (or whether to 
wait at all)
 +  and block_for_peers_in_remote_dcs can be used to also block on all but 
one node in
 +  each remote DC as well. See CASSANDRA-14297 and CASSANDRA-13993 for 
more information.
 +- *Experimental* support for Transient Replication and Cheap Quorums 
introduced by CASSANDRA-14404
 +  The intended audience for this functionality is expert users of 
Cassandra who are prepared
 +  to validate every aspect of the database for their application and 
deployment practices. Future
 +  releases of Cassandra will make this feature suitable for a wider 
audience.
 +- *Experimental* support for Java 11 has been added. JVM options that 
differ between or are
 +  specific for Java 8 and 11 have been moved from jvm.options into 
jvm8.options and jvm11.options.
 +  IMPORTANT: Running C* on Java 11 is *experimental* and do it at your 
own risk.
 +- LCS now respects the max_threshold parameter when compacting - this was 
hard coded to 32
 +  before, but now it is possible to do bigger compactions when compacting 
from L0 to L1.
 +  This also applies to STCS-compactions in L0 - if there are more than 32 
sstables in L0
 +  we will compact at most max_threshold sstables in an L0 STCS 
compaction. See CASSANDRA-14388
 +  for more information.
 +- There is now an option to automatically upgrade sstables after 
Cassandra upgrade, enable
 +  either in `cassandra.yaml:automatic_sstable_upgrade` or via JMX during 
runtime. See
 +  CASSANDRA-14197.
 +- `nodetool refresh` has been deprecated in favour of `nodetool import` - 
see CASSANDRA-6719
 +  for details
 +- An experimental option to compare all merkle trees together has been 
added - for example, in
 +  a 3 node cluster with 2 replicas identical and 1 out-of-date, with this 
option enabled, the
 +  out-of-date replica will only stream a single copy from up-to-date 
replica. Enable it by adding
 +  "-os" to nodetool repair. See CASSANDRA-3200.
 +- The currentTimestamp, currentDate, currentTime and currentTimeUUID 
functions have been added.
 +  See CASSANDRA-13132
 +- Support for arithmetic operations between `timestamp`/`date` and 
`duration` has been added.
 +  See CASSANDRA-11936
 +- Support for arithmetic operations on number has been added. See 
CASSANDRA-11935
 +- Preview expected streaming required for a repair (nodetool repair 
--preview), and validate the
 +  consistency of repaired data between nodes (nodetool repair 
--validate). See CASSANDRA-13257
 +- Support for selecting Map values and Set elements has been added for 
SELECT queries. See CASSANDRA-7396
 +- Change-Data-Capture has been modified to make CommitLogSegments 
available
 +  immediately upon creation via hard-linking the files. This means that 
incomplete
 +  segments will be available in cdc_raw rather than fully flushed. See 
documentation
 +  and CASSANDRA-12148 for more detail.
 +- The initial build of materialized views can be parallelized. The number 
of concurrent builder
 +  threads is specified by the property 
`cassandra.yaml:concurrent_materialized_view_builders`.
 +  This property can be modified at runtime through both JMX and the new 
`setconcurrentviewbuilders`
 +  and `getconcurrentviewbuilders` nodetool commands. See CASSANDRA-12245 
for more details.
 +- There is now a binary full query log based on Chronicle Queue that can 
be controlled using
 +  nodetool enablefullquerylog, disablefullquerylog, and 
resetfullquerylog. The log
 + 

[cassandra] branch trunk updated (1122fcf -> 8bcd6ab)

2020-11-18 Thread mck
This is an automated email from the ASF dual-hosted git repository.

mck pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra.git.


from 1122fcf  Merge branch 'cassandra-3.11' into trunk
 new bfd5d20  Check between num_tokens and initial_token only applies to 
vnodes usage
 new e8c1af2  Merge branch 'cassandra-3.0' into cassandra-3.11
 new 8bcd6ab  Merge branch 'cassandra-3.11' into trunk

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 NEWS.txt   |  2 +-
 .../cassandra/config/DatabaseDescriptor.java   |  5 -
 .../cassandra/config/DatabaseDescriptorTest.java   | 25 --
 3 files changed, 24 insertions(+), 8 deletions(-)





[jira] [Commented] (CASSANDRA-14477) The check of num_tokens against the length of inital_token in the yaml triggers unexpectedly

2020-11-18 Thread Michael Semb Wever (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17234544#comment-17234544
 ] 

Michael Semb Wever commented on CASSANDRA-14477:


bq. the check for num_tokens being defined should be skipped if initial_tokens 
defines only one token, as it is unlikely to be a typo; folk would rarely be 
configuring two tokens, and just one initial_token is the traditional 
non-vnodes configuration predating the use of num_tokens
 
Committed as 
[bfd5d20a13501d897d8d34acce9b0394fa1cf00b|https://github.com/apache/cassandra/commit/bfd5d20a13501d897d8d34acce9b0394fa1cf00b].







[cassandra] branch trunk updated: Correct header for STCS documentation

2020-11-18 Thread mck
This is an automated email from the ASF dual-hosted git repository.

mck pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 872751a  Correct header for STCS documentation
872751a is described below

commit 872751a0501561f43c4360aa9a8bedfe7f180234
Author: Miles Garnsey <11435896+miles-garn...@users.noreply.github.com>
AuthorDate: Wed Nov 18 16:54:29 2020 +1100

Correct header for STCS documentation

In the STCS documentation switch the header "Levelled Compaction Strategy" 
to the corrected "Size Tiered Compaction Strategy" to make this documentation 
easier to find via search.

 patch by Miles Garnsey; reviewed by Berenguer Blasi for CASSANDRA-16282
---
 doc/source/operating/compaction/stcs.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/doc/source/operating/compaction/stcs.rst 
b/doc/source/operating/compaction/stcs.rst
index 6589337..c749a59 100644
--- a/doc/source/operating/compaction/stcs.rst
+++ b/doc/source/operating/compaction/stcs.rst
@@ -17,7 +17,7 @@
 
 .. _STCS:
 
-Leveled Compaction Strategy
+Size Tiered Compaction Strategy
 ^^^
 
 The basic idea of ``SizeTieredCompactionStrategy`` (STCS) is to merge sstables 
of approximately the same size. All





[jira] [Updated] (CASSANDRA-16282) Fix STCS documentation (the header is currently LCS)

2020-11-18 Thread Michael Semb Wever (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Semb Wever updated CASSANDRA-16282:
---
  Fix Version/s: 4.0-beta4
  Since Version: 4.0-alpha4
Source Control Link: 
https://github.com/apache/cassandra/commit/872751a0501561f43c4360aa9a8bedfe7f180234
 Resolution: Fixed
 Status: Resolved  (was: Ready to Commit)

Committed as 
[872751a0501561f43c4360aa9a8bedfe7f180234|https://github.com/apache/cassandra/commit/872751a0501561f43c4360aa9a8bedfe7f180234].







[jira] [Updated] (CASSANDRA-16255) Update jctools dependency

2020-11-18 Thread Michael Semb Wever (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Semb Wever updated CASSANDRA-16255:
---
Reviewers: Michael Semb Wever
   Status: Review In Progress  (was: Patch Available)

> Update jctools dependency
> -
>
> Key: CASSANDRA-16255
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16255
> Project: Cassandra
>  Issue Type: Task
>  Components: Local/Other
>Reporter: Marcus Eriksson
>Assignee: Berenguer Blasi
>Priority: Normal
> Fix For: 4.0-beta4
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> CASSANDRA-15880 started using {{MpmcArrayQueue}} from jctools - before that 
> we only used it in cassandra-stress, we should probably update the dependency 
> as jctools-1.2.1 is more than 4 years old






[cassandra] 01/01: Merge branch 'cassandra-3.11' into trunk

2020-11-18 Thread mck
This is an automated email from the ASF dual-hosted git repository.

mck pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra.git

commit fcf293a61a642281c8a4efe57ed08a55199fa515
Merge: 872751a ad9b715
Author: Mick Semb Wever 
AuthorDate: Wed Nov 18 13:27:03 2020 +0100

Merge branch 'cassandra-3.11' into trunk

 CHANGES.txt  |  1 +
 NEWS.txt | 12 
 src/java/org/apache/cassandra/index/sasi/conf/IndexMode.java |  8 +++-
 3 files changed, 20 insertions(+), 1 deletion(-)

diff --cc CHANGES.txt
index 69ccf55,e16f3e5..29100fc
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,14 -1,5 +1,15 @@@
 -3.11.10
 +4.0-beta4
 + * Upgrade JNA to 5.6.0, dropping support for <=glibc-2.6 systems 
(CASSANDRA-16212)
 + * Add saved Host IDs to TokenMetadata at startup (CASSANDRA-16246)
 + * Ensure that CacheMetrics.requests is picked up by the metric reporter 
(CASSANDRA-16228)
 + * Add a ratelimiter to snapshot creation and deletion (CASSANDRA-13019)
 + * Produce consistent tombstone for reads to avoid digest mistmatch 
(CASSANDRA-15369)
 + * Fix SSTableloader issue when restoring a table named backups 
(CASSANDRA-16235)
 + * Invalid serialized size for responses caused by increasing message time by 
1ms which caused extra bytes in size calculation (CASSANDRA-16103)
 + * Throw BufferOverflowException from DataOutputBuffer for better visibility 
(CASSANDRA-16214)
 + * TLS connections to the storage port on a node without server encryption 
configured causes java.io.IOException accessing missing keystore 
(CASSANDRA-16144)
 +Merged from 3.11:
+  * SASI's `max_compaction_flush_memory_in_mb` settings over 100GB revert to 
default of 1GB (CASSANDRA-16071)
  Merged from 3.0:
   * Improved check of num_tokens against the length of initial_token 
(CASSANDRA-14477)
   * Fix a race condition on ColumnFamilyStore and TableMetrics 
(CASSANDRA-16228)
diff --cc NEWS.txt
index 99c8af4,3af2150..498fbbb
--- a/NEWS.txt
+++ b/NEWS.txt
@@@ -259,27 -49,29 +259,39 @@@ Upgradin
  - In cassandra.yaml, when using vnodes num_tokens must be defined if 
initial_token is defined.
If it is not defined, or not equal to the numbers of tokens defined in 
initial_tokens,
the node will not start. See CASSANDRA-14477 for details.
 -- SASI's `max_compaction_flush_memory_in_mb` setting was previously 
getting interpreted in bytes. From 3.11.8
 -  it is correctly interpreted in megabytes, but prior to 3.11.10 previous 
configurations of this setting will
 -  lead to nodes OOM during compaction. From 3.11.10 previous 
configurations will be detected as incorrect,
 -  logged, and the setting reverted to the default value of 1GB. It is up 
to the user to correct the setting
 -  after an upgrade, via dropping and recreating the index. See 
CASSANDRA-16071 for details.
  
 -3.11.9
 -==
 -Upgrading
 --
 -   - Custom compaction strategies must handle getting sstables added/removed 
notifications for
 - sstables already added/removed - see CASSANDRA-14103 for details. This 
has been a requirement
 - for correct operation since 3.11.0 due to an issue in 
CompactionStrategyManager.
  
 -3.11.7
 +Deprecation
 +---
 +
 +- The JMX MBean org.apache.cassandra.db:type=BlacklistedDirectories has 
been
 +  deprecated in favor of 
org.apache.cassandra.db:type=DisallowedDirectories
 +  and will be removed in a subsequent major version.
 +
 +
 +Materialized Views
 +---
 +- Following a discussion regarding concerns about the design and safety 
of Materialized Views, the C* development
 +  community no longer recommends them for production use, and considers 
them experimental. Warnings messages will
 +  now be logged when they are created. (See 
https://www.mail-archive.com/dev@cassandra.apache.org/msg11511.html)
 +- An 'enable_materialized_views' flag has been added to cassandra.yaml to 
allow operators to prevent creation of
 +  views
 +- CREATE MATERIALIZED VIEW syntax has become stricter. Partition key 
columns are no longer implicitly considered
 +  to be NOT NULL, and no base primary key columns get automatically 
included in view definition. You have to
 +  specify them explicitly now.
 +
++3.11.10
+ ==
+ 
+ Upgrading
+ -
 -- Nothing specific to this release, but please see previous upgrading 
sections,
 -  especially if you are upgrading from 3.0.
++- SASI's `max_compaction_flush_memory_in_mb` setting was previously 
getting interpreted in bytes. From 3.11.8
++  it is correctly interpreted in megabytes, but prior to 3.11.10 previous 
configurations of this setting will
++  lead to nodes OOM during compaction. From 3.11.10 previous 
configurations will be detected as incorrect,
++  logged, and the setting reverted to the default value of 1GB. It is up 
to the user to correct the setting after an upgrade, via dropping and recreating the index. See CASSANDRA-16071 for details.

[cassandra] branch cassandra-3.11 updated: Protect against max_compaction_flush_memory_in_mb configurations configured still in bytes

2020-11-18 Thread mck
This is an automated email from the ASF dual-hosted git repository.

mck pushed a commit to branch cassandra-3.11
in repository https://gitbox.apache.org/repos/asf/cassandra.git


The following commit(s) were added to refs/heads/cassandra-3.11 by this push:
 new ad9b715  Protect against max_compaction_flush_memory_in_mb 
configurations configured still in bytes
ad9b715 is described below

commit ad9b7156bd3df143c1d090a3d77f9479d906e0ec
Author: Mick Semb Wever 
AuthorDate: Sun Nov 15 13:13:19 2020 +0100

Protect against max_compaction_flush_memory_in_mb configurations configured 
still in bytes

 patch by Mick Semb Wever; reviewed by Zhao Yang for CASSANDRA-16071
---
 CHANGES.txt  | 1 +
 NEWS.txt | 6 ++
 src/java/org/apache/cassandra/index/sasi/conf/IndexMode.java | 8 +++-
 3 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/CHANGES.txt b/CHANGES.txt
index 80b1532..e16f3e5 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.11.10
+ * SASI's `max_compaction_flush_memory_in_mb` settings over 100GB revert to 
default of 1GB (CASSANDRA-16071)
 Merged from 3.0:
  * Improved check of num_tokens against the length of initial_token 
(CASSANDRA-14477)
  * Fix a race condition on ColumnFamilyStore and TableMetrics (CASSANDRA-16228)
diff --git a/NEWS.txt b/NEWS.txt
index 99d589d..3af2150 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -49,6 +49,11 @@ Upgrading
 - In cassandra.yaml, when using vnodes num_tokens must be defined if 
initial_token is defined.
   If it is not defined, or not equal to the numbers of tokens defined in 
initial_tokens,
   the node will not start. See CASSANDRA-14477 for details.
+- SASI's `max_compaction_flush_memory_in_mb` setting was previously 
getting interpreted in bytes. From 3.11.8
+  it is correctly interpreted in megabytes, but prior to 3.11.10 previous 
configurations of this setting will
+  lead to nodes OOM during compaction. From 3.11.10 previous 
configurations will be detected as incorrect,
+  logged, and the setting reverted to the default value of 1GB. It is up 
to the user to correct the setting
+  after an upgrade, via dropping and recreating the index. See 
CASSANDRA-16071 for details.
 
 3.11.9
 ==
@@ -66,6 +71,7 @@ Upgrading
 - Nothing specific to this release, but please see previous upgrading 
sections,
   especially if you are upgrading from 3.0.
 
+
 3.11.6
 ==
 
diff --git a/src/java/org/apache/cassandra/index/sasi/conf/IndexMode.java 
b/src/java/org/apache/cassandra/index/sasi/conf/IndexMode.java
index 8d76bb0..60a19a6 100644
--- a/src/java/org/apache/cassandra/index/sasi/conf/IndexMode.java
+++ b/src/java/org/apache/cassandra/index/sasi/conf/IndexMode.java
@@ -56,6 +56,7 @@ public class IndexMode
 private static final String INDEX_IS_LITERAL_OPTION = "is_literal";
 private static final String INDEX_MAX_FLUSH_MEMORY_OPTION = 
"max_compaction_flush_memory_in_mb";
 private static final double INDEX_MAX_FLUSH_DEFAULT_MULTIPLIER = 0.15;
+private static final long DEFAULT_MAX_MEM_BYTES = (long) (1073741824 * 
INDEX_MAX_FLUSH_DEFAULT_MULTIPLIER); // 1G default for memtable
 
 public final Mode mode;
 public final boolean isAnalyzed, isLiteral;
@@ -187,9 +188,14 @@ public class IndexMode
 }
 
 long maxMemBytes = indexOptions.get(INDEX_MAX_FLUSH_MEMORY_OPTION) == 
null
-? (long) (1073741824 * INDEX_MAX_FLUSH_DEFAULT_MULTIPLIER) // 
1G default for memtable
+? DEFAULT_MAX_MEM_BYTES
 : 1048576L * 
Long.parseLong(indexOptions.get(INDEX_MAX_FLUSH_MEMORY_OPTION));
 
+if (maxMemBytes > 100L * 1073741824)
+{
+logger.error("{} configured as {} is above 100GB, reverting to 
default 1GB", INDEX_MAX_FLUSH_MEMORY_OPTION, maxMemBytes);
+maxMemBytes = DEFAULT_MAX_MEM_BYTES;
+}
 return new IndexMode(mode, isLiteral, isAnalyzed, analyzerClass, 
maxMemBytes);
 }
 

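The guard committed in IndexMode.java above can be distilled into a small, self-contained sketch. The class and method names below are illustrative only (not the real Cassandra API); the logic follows the diff: the option is read in megabytes, converted to bytes, and any value whose byte total exceeds 100GB is treated as a legacy bytes-valued configuration and reverted to the default.

```java
// Sketch of the CASSANDRA-16071 protection, with hypothetical names.
public class FlushMemorySketch {
    static final long MB = 1048576L;
    static final long GB = 1073741824L;
    // default: 1GB memtable ceiling times SASI's 0.15 multiplier
    static final long DEFAULT_MAX_MEM_BYTES = (long) (GB * 0.15);

    static long maxMemBytes(String configuredMb) {
        if (configuredMb == null)
            return DEFAULT_MAX_MEM_BYTES;       // option not set
        long bytes = MB * Long.parseLong(configuredMb);
        if (bytes > 100L * GB)                  // almost certainly a legacy value in bytes
            return DEFAULT_MAX_MEM_BYTES;
        return bytes;
    }

    public static void main(String[] args) {
        System.out.println(maxMemBytes("512"));        // 512MB expressed in bytes
        System.out.println(maxMemBytes("536870912"));  // legacy bytes value reverts to default
    }
}
```

A pre-3.11.8 operator who had set the option to 536870912 (meaning 512MB, in bytes) trips the 100GB guard and falls back to the default rather than OOMing during compaction.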




[cassandra] branch trunk updated (872751a -> fcf293a)

2020-11-18 Thread mck
This is an automated email from the ASF dual-hosted git repository.

mck pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra.git.


from 872751a  Correct header for STCS documentation
 new ad9b715  Protect against max_compaction_flush_memory_in_mb 
configurations configured still in bytes
 new fcf293a  Merge branch 'cassandra-3.11' into trunk

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 CHANGES.txt  |  1 +
 NEWS.txt | 12 
 src/java/org/apache/cassandra/index/sasi/conf/IndexMode.java |  8 +++-
 3 files changed, 20 insertions(+), 1 deletion(-)





[jira] [Commented] (CASSANDRA-16071) max_compaction_flush_memory_in_mb is interpreted as bytes

2020-11-18 Thread Michael Semb Wever (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17234568#comment-17234568
 ] 

Michael Semb Wever commented on CASSANDRA-16071:


Committed upgrade protection as 
[ad9b7156bd3df143c1d090a3d77f9479d906e0ec|https://github.com/apache/cassandra/commit/ad9b7156bd3df143c1d090a3d77f9479d906e0ec].

> max_compaction_flush_memory_in_mb is interpreted as bytes
> -
>
> Key: CASSANDRA-16071
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16071
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/SASI
>Reporter: Michael Semb Wever
>Assignee: Michael Semb Wever
>Priority: Normal
> Fix For: 4.0, 3.11.8, 4.0-beta2
>
>
> In CASSANDRA-12662, [~scottcarey] 
> [reported|https://issues.apache.org/jira/browse/CASSANDRA-12662?focusedCommentId=17070055&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17070055]
>  that the {{max_compaction_flush_memory_in_mb}} setting gets incorrectly 
> interpreted in bytes rather than megabytes as its name implies.
> {quote}
> 1.  the setting 'max_compaction_flush_memory_in_mb' is a misnomer, it is 
> actually memory in BYTES.  If you take it at face value, and set it to say, 
> '512' thinking that means 512MB,  you will produce a million temp files 
> rather quickly in a large compaction, which will exhaust even large values of 
> max_map_count rapidly, and get the OOM: Map Error issue above and possibly 
> have a very difficult situation to get a cluster back into a place where 
> nodes aren't crashing while initializing or soon after.  This issue is minor 
> if you know about it in advance and set the value IN BYTES.
> {quote}






[jira] [Updated] (CASSANDRA-16071) max_compaction_flush_memory_in_mb is interpreted as bytes

2020-11-18 Thread Michael Semb Wever (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Semb Wever updated CASSANDRA-16071:
---
Fix Version/s: 3.11.10
   4.0-beta4

> max_compaction_flush_memory_in_mb is interpreted as bytes
> -
>
> Key: CASSANDRA-16071
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16071
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/SASI
>Reporter: Michael Semb Wever
>Assignee: Michael Semb Wever
>Priority: Normal
> Fix For: 4.0, 3.11.8, 4.0-beta2, 4.0-beta4, 3.11.10
>
>
> In CASSANDRA-12662, [~scottcarey] 
> [reported|https://issues.apache.org/jira/browse/CASSANDRA-12662?focusedCommentId=17070055&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17070055]
>  that the {{max_compaction_flush_memory_in_mb}} setting gets incorrectly 
> interpreted in bytes rather than megabytes as its name implies.
> {quote}
> 1.  the setting 'max_compaction_flush_memory_in_mb' is a misnomer, it is 
> actually memory in BYTES.  If you take it at face value, and set it to say, 
> '512' thinking that means 512MB,  you will produce a million temp files 
> rather quickly in a large compaction, which will exhaust even large values of 
> max_map_count rapidly, and get the OOM: Map Error issue above and possibly 
> have a very difficult situation to get a cluster back into a place where 
> nodes aren't crashing while initializing or soon after.  This issue is minor 
> if you know about it in advance and set the value IN BYTES.
> {quote}






[jira] [Updated] (CASSANDRA-14477) The check of num_tokens against the length of inital_token in the yaml triggers unexpectedly

2020-11-18 Thread Michael Semb Wever (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Semb Wever updated CASSANDRA-14477:
---
Resolution: Fixed
Status: Resolved  (was: Open)

> The check of num_tokens against the length of inital_token in the yaml 
> triggers unexpectedly
> 
>
> Key: CASSANDRA-14477
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14477
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Config
>Reporter: Vincent White
>Assignee: Stefan Miklosovic
>Priority: Low
> Fix For: 3.0.23, 3.11.9, 4.0-beta4
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> In CASSANDRA-10120 we added a check that compares num_tokens against the 
> number of tokens supplied in the yaml via initial_token. From my reading of 
> CASSANDRA-10120 it was to prevent cassandra starting if the yaml contained 
> contradictory values for num_tokens and initial_tokens which should help 
> prevent misconfiguration via human error. The current behaviour appears to 
> differ slightly in that it performs this comparison regardless of whether 
> num_tokens is included in the yaml or not. Below are proposed patches to only 
> perform the check if both options are present in the yaml.
> ||Branch||
> |[3.0.x|https://github.com/apache/cassandra/compare/cassandra-3.0...vincewhite:num_tokens_30]|
> |[3.x|https://github.com/apache/cassandra/compare/cassandra-3.11...vincewhite:num_tokens_test_1_311]|
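The proposed behaviour can be sketched in a few lines. Names below are hypothetical stand-ins for the real yaml parsing code: the point is that num_tokens is only compared against initial_token when both options are actually present, so a lone initial_token no longer trips the check.

```java
// Illustrative sketch of the CASSANDRA-14477 startup check.
public class TokenConfigCheck {
    static void validate(Integer numTokens, String initialToken) {
        if (numTokens == null || initialToken == null)
            return;                               // only cross-check when both are set
        int supplied = initialToken.split(",").length;
        if (numTokens != supplied)
            throw new IllegalStateException(
                "num_tokens=" + numTokens + " but initial_token lists "
                + supplied + " token(s)");
    }

    public static void main(String[] args) {
        validate(null, "-9223372036854775808");   // ok: num_tokens absent from yaml
        validate(3, "a,b,c");                     // ok: counts agree
        try {
            validate(2, "a,b,c");                 // contradictory yaml: refuse to start
        } catch (IllegalStateException expected) {
            System.out.println(expected.getMessage());
        }
    }
}
```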






[jira] [Commented] (CASSANDRA-14477) The check of num_tokens against the length of inital_token in the yaml triggers unexpectedly

2020-11-18 Thread Michael Semb Wever (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17234571#comment-17234571
 ] 

Michael Semb Wever commented on CASSANDRA-14477:


bq. the Cassandra-devbranch jenkins pipeline should be including dtest-novnode

Committed as 
[69cfcb31078dd9d79d19d29d5c4543832fa00ffa|https://github.com/apache/cassandra-builds/commit/69cfcb31078dd9d79d19d29d5c4543832fa00ffa].

> The check of num_tokens against the length of inital_token in the yaml 
> triggers unexpectedly
> 
>
> Key: CASSANDRA-14477
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14477
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Config
>Reporter: Vincent White
>Assignee: Stefan Miklosovic
>Priority: Low
> Fix For: 3.0.23, 3.11.9, 4.0-beta4
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> In CASSANDRA-10120 we added a check that compares num_tokens against the 
> number of tokens supplied in the yaml via initial_token. From my reading of 
> CASSANDRA-10120 it was to prevent cassandra starting if the yaml contained 
> contradictory values for num_tokens and initial_tokens which should help 
> prevent misconfiguration via human error. The current behaviour appears to 
> differ slightly in that it performs this comparison regardless of whether 
> num_tokens is included in the yaml or not. Below are proposed patches to only 
> perform the check if both options are present in the yaml.
> ||Branch||
> |[3.0.x|https://github.com/apache/cassandra/compare/cassandra-3.0...vincewhite:num_tokens_30]|
> |[3.x|https://github.com/apache/cassandra/compare/cassandra-3.11...vincewhite:num_tokens_test_1_311]|






[jira] [Updated] (CASSANDRA-14477) The check of num_tokens against the length of inital_token in the yaml triggers unexpectedly

2020-11-18 Thread Michael Semb Wever (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Semb Wever updated CASSANDRA-14477:
---
Fix Version/s: 4.0

> The check of num_tokens against the length of inital_token in the yaml 
> triggers unexpectedly
> 
>
> Key: CASSANDRA-14477
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14477
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Config
>Reporter: Vincent White
>Assignee: Stefan Miklosovic
>Priority: Low
> Fix For: 4.0, 3.0.23, 3.11.9, 4.0-beta4
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> In CASSANDRA-10120 we added a check that compares num_tokens against the 
> number of tokens supplied in the yaml via initial_token. From my reading of 
> CASSANDRA-10120 it was to prevent cassandra starting if the yaml contained 
> contradictory values for num_tokens and initial_tokens which should help 
> prevent misconfiguration via human error. The current behaviour appears to 
> differ slightly in that it performs this comparison regardless of whether 
> num_tokens is included in the yaml or not. Below are proposed patches to only 
> perform the check if both options are present in the yaml.
> ||Branch||
> |[3.0.x|https://github.com/apache/cassandra/compare/cassandra-3.0...vincewhite:num_tokens_30]|
> |[3.x|https://github.com/apache/cassandra/compare/cassandra-3.11...vincewhite:num_tokens_test_1_311]|






[jira] [Updated] (CASSANDRA-16283) Incorrect output in "nodetool status -r"

2020-11-18 Thread Yakir Gibraltar (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yakir Gibraltar updated CASSANDRA-16283:

Description: 
nodetool status -r is not working well on C* 4.
 Version:
{code:java}
[root@foo001 ~]# nodetool version
ReleaseVersion: 4.0-beta3
{code}
Without resolving:
{code:java}
[root@foo001 ~]# nodetool status
Datacenter: V4CH

Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address LoadTokens  Owns(effective) Host ID
Rack
UN  1.2.3.4 363.68 KiB  128 ? 92ae4c39-edb3-4e67-8623-b49fd8301b66 
RAC1
UN  1.2.3.5 109.71 KiB  128 ? d80647a8-32b2-4a8f-8022-f5ae3ce8fbb2 
RAC1
{code}
With resolving:
{code:java}
[root@foo001 ~]# nodetool status -r
Datacenter: V4CH

Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address  Load  Tokens  Owns (effective)  Host ID  Rack
?N  foo001.tab.com   ? 128 ?  RAC1
?N  foo002.tab.com   ? 128 ?  RAC1
{code}

I only changed the IPs and hostnames here.


  was:
nodetool status -r not working well on C* 4,
 Version:
{code:java}
[root@foo001 ~]# nodetool version
ReleaseVersion: 4.0-beta3
{code}
Without resolving:
{code:java}
[root@foo001 ~]# nodetool status
Datacenter: V4CH

Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address LoadTokens  Owns(effective) Host ID
Rack
UN  1.2.3.4 363.68 KiB  128 ? 92ae4c39-edb3-4e67-8623-b49fd8301b66 
RAC1
UN  1.2.3.5 109.71 KiB  128 ? d80647a8-32b2-4a8f-8022-f5ae3ce8fbb2 
RAC1
{code}
With resolving:
{code:java}
[root@foo001 ~]# nodetool status -r
Datacenter: V4CH

Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address  Load  Tokens  Owns (effective)  Host ID  Rack
?N  foo001.tab.com   ? 128 ?  RAC1
?N  foo002.tab.com   ? 128 ?  RAC1
{code}

I changed IPs and hostnames.



> Incorrect output in "nodetool status -r"
> 
>
> Key: CASSANDRA-16283
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16283
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Yakir Gibraltar
>Priority: Normal
>
> nodetool status -r is not working well on C* 4.
>  Version:
> {code:java}
> [root@foo001 ~]# nodetool version
> ReleaseVersion: 4.0-beta3
> {code}
> Without resolving:
> {code:java}
> [root@foo001 ~]# nodetool status
> Datacenter: V4CH
> 
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address LoadTokens  Owns(effective) Host ID   
>  Rack
> UN  1.2.3.4 363.68 KiB  128 ? 
> 92ae4c39-edb3-4e67-8623-b49fd8301b66 RAC1
> UN  1.2.3.5 109.71 KiB  128 ? 
> d80647a8-32b2-4a8f-8022-f5ae3ce8fbb2 RAC1
> {code}
> With resolving:
> {code:java}
> [root@foo001 ~]# nodetool status -r
> Datacenter: V4CH
> 
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address  Load  Tokens  Owns (effective)  Host ID  Rack
> ?N  foo001.tab.com   ? 128 ?  RAC1
> ?N  foo002.tab.com   ? 128 ?  RAC1
> {code}
> I only changed the IPs and hostnames here.






[jira] [Created] (CASSANDRA-16283) Incorrect output in "nodetool status -r"

2020-11-18 Thread Yakir Gibraltar (Jira)
Yakir Gibraltar created CASSANDRA-16283:
---

 Summary: Incorrect output in "nodetool status -r"
 Key: CASSANDRA-16283
 URL: https://issues.apache.org/jira/browse/CASSANDRA-16283
 Project: Cassandra
  Issue Type: Bug
Reporter: Yakir Gibraltar


nodetool status -r is not working well on C* 4.
 Version:
{code:java}
[root@foo001 ~]# nodetool version
ReleaseVersion: 4.0-beta3
{code}
Without resolving:
{code:java}
[root@foo001 ~]# nodetool status
Datacenter: V4CH

Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address LoadTokens  Owns(effective) Host ID
Rack
UN  1.2.3.4 363.68 KiB  128 ? 92ae4c39-edb3-4e67-8623-b49fd8301b66 
RAC1
UN  1.2.3.5 109.71 KiB  128 ? d80647a8-32b2-4a8f-8022-f5ae3ce8fbb2 
RAC1
{code}
With resolving:
{code:java}

I changed IPs and hostnames.
[root@foo001 ~]# nodetool status -r
Datacenter: V4CH

Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address  Load  Tokens  Owns (effective)  Host ID  Rack
?N  foo001.tab.com   ? 128 ?  RAC1
?N  foo002.tab.com   ? 128 ?  RAC1
{code}






[jira] [Updated] (CASSANDRA-16283) Incorrect output in "nodetool status -r"

2020-11-18 Thread Yakir Gibraltar (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yakir Gibraltar updated CASSANDRA-16283:

Description: 
nodetool status -r is not working well on C* 4.
 Version:
{code:java}
[root@foo001 ~]# nodetool version
ReleaseVersion: 4.0-beta3
{code}
Without resolving:
{code:java}
[root@foo001 ~]# nodetool status
Datacenter: V4CH

Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address LoadTokens  Owns(effective) Host ID
Rack
UN  1.2.3.4 363.68 KiB  128 ? 92ae4c39-edb3-4e67-8623-b49fd8301b66 
RAC1
UN  1.2.3.5 109.71 KiB  128 ? d80647a8-32b2-4a8f-8022-f5ae3ce8fbb2 
RAC1
{code}
With resolving:
{code:java}
[root@foo001 ~]# nodetool status -r
Datacenter: V4CH

Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address  Load  Tokens  Owns (effective)  Host ID  Rack
?N  foo001.tab.com   ? 128 ?  RAC1
?N  foo002.tab.com   ? 128 ?  RAC1
{code}

I changed IPs and hostnames.


  was:
nodetool status -r not working well on C* 4,
 Version:
{code:java}
[root@foo001 ~]# nodetool version
ReleaseVersion: 4.0-beta3
{code}
Without resolving:
{code:java}
[root@foo001 ~]# nodetool status
Datacenter: V4CH

Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address LoadTokens  Owns(effective) Host ID
Rack
UN  1.2.3.4 363.68 KiB  128 ? 92ae4c39-edb3-4e67-8623-b49fd8301b66 
RAC1
UN  1.2.3.5 109.71 KiB  128 ? d80647a8-32b2-4a8f-8022-f5ae3ce8fbb2 
RAC1
{code}
With resolving:
{code:java}

I changed IPs and hostnames.
[root@foo001 ~]# nodetool status -r
Datacenter: V4CH

Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address  Load  Tokens  Owns (effective)  Host ID  Rack
?N  foo001.tab.com   ? 128 ?  RAC1
?N  foo002.tab.com   ? 128 ?  RAC1
{code}


> Incorrect output in "nodetool status -r"
> 
>
> Key: CASSANDRA-16283
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16283
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Yakir Gibraltar
>Priority: Normal
>
> nodetool status -r is not working well on C* 4.
>  Version:
> {code:java}
> [root@foo001 ~]# nodetool version
> ReleaseVersion: 4.0-beta3
> {code}
> Without resolving:
> {code:java}
> [root@foo001 ~]# nodetool status
> Datacenter: V4CH
> 
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address LoadTokens  Owns(effective) Host ID   
>  Rack
> UN  1.2.3.4 363.68 KiB  128 ? 
> 92ae4c39-edb3-4e67-8623-b49fd8301b66 RAC1
> UN  1.2.3.5 109.71 KiB  128 ? 
> d80647a8-32b2-4a8f-8022-f5ae3ce8fbb2 RAC1
> {code}
> With resolving:
> {code:java}
> [root@foo001 ~]# nodetool status -r
> Datacenter: V4CH
> 
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address  Load  Tokens  Owns (effective)  Host ID  Rack
> ?N  foo001.tab.com   ? 128 ?  RAC1
> ?N  foo002.tab.com   ? 128 ?  RAC1
> {code}
> I changed IPs and hostnames.






[cassandra-builds] branch trunk updated: Include dtest-novnode and dtest-large in the pre-commit (Cassandra-devbranch) jenkins pipeline build (CASSANDRA-14477)

2020-11-18 Thread mck
This is an automated email from the ASF dual-hosted git repository.

mck pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra-builds.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 69cfcb3  Include dtest-novnode and dtest-large in the pre-commit 
(Cassandra-devbranch) jenkins pipeline build (CASSANDRA-14477)
69cfcb3 is described below

commit 69cfcb31078dd9d79d19d29d5c4543832fa00ffa
Author: Mick Semb Wever 
AuthorDate: Wed Nov 18 10:25:03 2020 +0100

Include dtest-novnode and dtest-large in the pre-commit 
(Cassandra-devbranch) jenkins pipeline build (CASSANDRA-14477)
---
 jenkins-dsl/cassandra_pipeline.groovy | 36 +++
 1 file changed, 36 insertions(+)

diff --git a/jenkins-dsl/cassandra_pipeline.groovy 
b/jenkins-dsl/cassandra_pipeline.groovy
index e6547e5..1f1d807 100644
--- a/jenkins-dsl/cassandra_pipeline.groovy
+++ b/jenkins-dsl/cassandra_pipeline.groovy
@@ -228,7 +228,43 @@ pipeline {
 }
   }
 }
+  stage('dtest-large') {
+steps {
+  script {
+dtest_large = build job: "${env.JOB_NAME}-dtest-large", 
parameters: [string(name: 'REPO', value: params.REPO), string(name: 'BRANCH', 
value: params.BRANCH), string(name: 'DTEST_REPO', value: params.DTEST_REPO), 
string(name: 'DTEST_BRANCH', value: params.DTEST_BRANCH), string(name: 
'DOCKER_IMAGE', value: params.DOCKER_IMAGE)], propagate: false
+if (dtest_large.result != 'SUCCESS') unstable('dtest-large 
failures')
+if (dtest_large.result == 'FAILURE') 
currentBuild.result='FAILURE'
+  }
+}
+post {
+  always {
+warnError('missing test xml files') {
+script {
+copyTestResults('dtest-large', dtest_large.getNumber())
+}
+}
+  }
+}
+  }
+  stage('dtest-novnode') {
+steps {
+  script {
+dtest_novnode = build job: "${env.JOB_NAME}-dtest-novnode", 
parameters: [string(name: 'REPO', value: params.REPO), string(name: 'BRANCH', 
value: params.BRANCH), string(name: 'DTEST_REPO', value: params.DTEST_REPO), 
string(name: 'DTEST_BRANCH', value: params.DTEST_BRANCH), string(name: 
'DOCKER_IMAGE', value: params.DOCKER_IMAGE)], propagate: false
+if (dtest_novnode.result != 'SUCCESS') unstable('dtest-novnode 
failures')
+if (dtest_novnode.result == 'FAILURE') 
currentBuild.result='FAILURE'
+  }
+}
+post {
+  always {
+warnError('missing test xml files') {
+script {
+copyTestResults('dtest-novnode', 
dtest_novnode.getNumber())
+}
+}
+  }
+}
   }
+}
   }
   stage('Summary') {
 steps {






[jira] [Commented] (CASSANDRA-16276) Drain and/or shutdown might throw because of slow messaging service shutdown

2020-11-18 Thread Alex Petrov (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17234589#comment-17234589
 ] 

Alex Petrov commented on CASSANDRA-16276:
-

+1

> Drain and/or shutdown might throw because of slow messaging service shutdown
> 
>
> Key: CASSANDRA-16276
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16276
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Startup and Shutdown
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
>Priority: Normal
> Fix For: 4.0-beta
>
>
> If we invoke nodetool drain before shutdown, it sometimes fails to shut down 
> messaging service in time (in this case - timing out the shutdown of the 
> eventloopgroup by Netty). But, not before we manage to set isShutdown of 
> StorageService to true, despite aborting further drain logic (including 
> shutting down mutation stages).
> Then, via on shutdown hook, we invoke drain() method again, implicitly. We 
> see that the mutation stage is not shutdown and proceed to assert that 
> isShutdown == false, failing that assertion and triggering a second error log 
> message.
> The patch merely ensures that any exception thrown by MS shutdown is captured 
> so that drain logic can complete the first time around.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Updated] (CASSANDRA-16283) Incorrect output in "nodetool status -r"

2020-11-18 Thread Paulo Motta (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-16283:

 Bug Category: Parent values: Correctness(12982), Level 1 values: API / Semantic Implementation(12988)
   Complexity: Normal
Discovered By: User Report
Fix Version/s: 4.0-beta
 Severity: Low

> Incorrect output in "nodetool status -r"
> 
>
> Key: CASSANDRA-16283
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16283
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Yakir Gibraltar
>Priority: Low
> Fix For: 4.0-beta
>
>
> nodetool status -r is not working well on C* 4.
>  Version:
> {code:java}
> [root@foo001 ~]# nodetool version
> ReleaseVersion: 4.0-beta3
> {code}
> Without resolving:
> {code:java}
> [root@foo001 ~]# nodetool status
> Datacenter: V4CH
> ================
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address  Load        Tokens  Owns (effective)  Host ID                               Rack
> UN  1.2.3.4  363.68 KiB  128     ?                 92ae4c39-edb3-4e67-8623-b49fd8301b66  RAC1
> UN  1.2.3.5  109.71 KiB  128     ?                 d80647a8-32b2-4a8f-8022-f5ae3ce8fbb2  RAC1
> With resolving:
> {code:java}
> [root@foo001 ~]# nodetool status -r
> Datacenter: V4CH
> ================
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address  Load  Tokens  Owns (effective)  Host ID  Rack
> ?N  foo001.tab.com   ? 128 ?  RAC1
> ?N  foo002.tab.com   ? 128 ?  RAC1
> {code}
> I only changed the IPs and hostnames here.






[cassandra] branch trunk updated: Drain and/or shutdown might throw because of slow messaging service shutdown

2020-11-18 Thread aleksey
This is an automated email from the ASF dual-hosted git repository.

aleksey pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 7ca997b  Drain and/or shutdown might throw because of slow messaging service shutdown
7ca997b is described below

commit 7ca997ba3514e19864d53b8ca56a1e4f5c26208f
Author: Aleksey Yeshchenko 
AuthorDate: Tue Oct 20 18:02:45 2020 +0100

Drain and/or shutdown might throw because of slow messaging service shutdown

patch by Aleksey Yeschenko; reviewed by Marcus Eriksson and Alex Petrov
for CASSANDRA-16276
---
 CHANGES.txt|  1 +
 .../org/apache/cassandra/service/StorageService.java   | 18 +++---
 2 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/CHANGES.txt b/CHANGES.txt
index 29100fc..176bc04 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 4.0-beta4
+ * Drain and/or shutdown might throw because of slow messaging service shutdown (CASSANDRA-16276)
  * Upgrade JNA to 5.6.0, dropping support for <=glibc-2.6 systems (CASSANDRA-16212)
  * Add saved Host IDs to TokenMetadata at startup (CASSANDRA-16246)
  * Ensure that CacheMetrics.requests is picked up by the metric reporter (CASSANDRA-16228)
diff --git a/src/java/org/apache/cassandra/service/StorageService.java b/src/java/org/apache/cassandra/service/StorageService.java
index 7d27163..9c6499a 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -4746,15 +4746,27 @@ public class StorageService extends NotificationBroadcasterSupport implements IE
         if (!isFinalShutdown)
             setMode(Mode.DRAINING, "shutting down MessageService", false);
 
-        // In-progress writes originating here could generate hints to be written, so shut down MessagingService
-        // before mutation stage, so we can get all the hints saved before shutting down
-        MessagingService.instance().shutdown();
+        // In-progress writes originating here could generate hints to be written,
+        // which is currently scheduled on the mutation stage. So shut down MessagingService
+        // before mutation stage, so we can get all the hints saved before shutting down.
+        try
+        {
+            MessagingService.instance().shutdown();
+        }
+        catch (Throwable t)
+        {
+            // prevent messaging service timing out shutdown from aborting
+            // drain process; otherwise drain and/or shutdown might throw
+            logger.error("Messaging service timed out shutting down", t);
+        }
 
         if (!isFinalShutdown)
             setMode(Mode.DRAINING, "clearing mutation stage", false);
         viewMutationStage.shutdown();
         counterMutationStage.shutdown();
         mutationStage.shutdown();
+
+        // FIXME? should these *really* take up to one hour?
         viewMutationStage.awaitTermination(3600, TimeUnit.SECONDS);
         counterMutationStage.awaitTermination(3600, TimeUnit.SECONDS);
         mutationStage.awaitTermination(3600, TimeUnit.SECONDS);
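The essence of the patch above is that a throwing shutdown call must not abort the rest of the drain sequence. The following is a minimal, self-contained sketch of that pattern; it does not use Cassandra's actual classes — shutdownMessaging() and drain() are hypothetical stand-ins for MessagingService.instance().shutdown() and StorageService.drain():

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class DrainSketch {
    // Hypothetical stand-in for MessagingService.instance().shutdown(),
    // simulating the slow-shutdown timeout described in the ticket.
    static void shutdownMessaging() {
        throw new RuntimeException("messaging service timed out shutting down");
    }

    // Returns true if drain ran to completion despite the messaging failure.
    static boolean drain() {
        try {
            shutdownMessaging();
        } catch (Throwable t) {
            // As in the patch: capture the throwable and log it, so the
            // remaining drain steps still run
            System.err.println("Messaging service timed out shutting down: " + t.getMessage());
        }
        // Reached even though messaging shutdown threw; the mutation stage is
        // drained, so the implicit second drain() from the shutdown hook finds
        // it already shut down and the assertion failure no longer occurs.
        ExecutorService mutationStage = Executors.newSingleThreadExecutor();
        mutationStage.shutdown();
        try {
            mutationStage.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return mutationStage.isShutdown();
    }

    public static void main(String[] args) {
        System.out.println("drain completed: " + drain());
    }
}
```

Without the try/catch, the exception would propagate out of drain() after isShutdown had already been set, which is exactly the half-drained state the ticket describes.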





[jira] [Commented] (CASSANDRA-16276) Drain and/or shutdown might throw because of slow messaging service shutdown

2020-11-18 Thread Aleksey Yeschenko (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17234647#comment-17234647
 ] 

Aleksey Yeschenko commented on CASSANDRA-16276:
---

Cheers, committed as 
[7ca997ba3514e19864d53b8ca56a1e4f5c26208f|https://github.com/apache/cassandra/commit/7ca997ba3514e19864d53b8ca56a1e4f5c26208f]
 to trunk.

> Drain and/or shutdown might throw because of slow messaging service shutdown
> 
>
> Key: CASSANDRA-16276
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16276
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Startup and Shutdown
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
>Priority: Normal
> Fix For: 4.0-beta
>
>
> If we invoke nodetool drain before shutdown, it sometimes fails to shut down 
> messaging service in time (in this case - timing out the shutdown of the 
> eventloopgroup by Netty). But, not before we manage to set isShutdown of 
> StorageService to true, despite aborting further drain logic (including 
> shutting down mutation stages).
> Then, via on shutdown hook, we invoke drain() method again, implicitly. We 
> see that the mutation stage is not shutdown and proceed to assert that 
> isShutdown == false, failing that assertion and triggering a second error log 
> message.
> The patch merely ensures that any exception thrown by MS shutdown is captured 
> so that drain logic can complete the first time around.






[jira] [Updated] (CASSANDRA-16276) Drain and/or shutdown might throw because of slow messaging service shutdown

2020-11-18 Thread Aleksey Yeschenko (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-16276:
--
Reviewers: Alex Petrov, Marcus Eriksson, Aleksey Yeschenko  (was: Aleksey 
Yeschenko, Alex Petrov, Marcus Eriksson)
   Alex Petrov, Marcus Eriksson, Aleksey Yeschenko  (was: Alex 
Petrov, Marcus Eriksson)
   Status: Review In Progress  (was: Patch Available)

> Drain and/or shutdown might throw because of slow messaging service shutdown
> 
>
> Key: CASSANDRA-16276
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16276
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Startup and Shutdown
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
>Priority: Normal
> Fix For: 4.0-beta
>
>
> If we invoke nodetool drain before shutdown, it sometimes fails to shut down 
> messaging service in time (in this case - timing out the shutdown of the 
> eventloopgroup by Netty). But, not before we manage to set isShutdown of 
> StorageService to true, despite aborting further drain logic (including 
> shutting down mutation stages).
> Then, via on shutdown hook, we invoke drain() method again, implicitly. We 
> see that the mutation stage is not shutdown and proceed to assert that 
> isShutdown == false, failing that assertion and triggering a second error log 
> message.
> The patch merely ensures that any exception thrown by MS shutdown is captured 
> so that drain logic can complete the first time around.






[jira] [Updated] (CASSANDRA-16276) Drain and/or shutdown might throw because of slow messaging service shutdown

2020-11-18 Thread Aleksey Yeschenko (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-16276:
--
Status: Ready to Commit  (was: Review In Progress)

> Drain and/or shutdown might throw because of slow messaging service shutdown
> 
>
> Key: CASSANDRA-16276
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16276
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Startup and Shutdown
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
>Priority: Normal
> Fix For: 4.0-beta
>
>
> If we invoke nodetool drain before shutdown, it sometimes fails to shut down 
> messaging service in time (in this case - timing out the shutdown of the 
> eventloopgroup by Netty). But, not before we manage to set isShutdown of 
> StorageService to true, despite aborting further drain logic (including 
> shutting down mutation stages).
> Then, via on shutdown hook, we invoke drain() method again, implicitly. We 
> see that the mutation stage is not shutdown and proceed to assert that 
> isShutdown == false, failing that assertion and triggering a second error log 
> message.
> The patch merely ensures that any exception thrown by MS shutdown is captured 
> so that drain logic can complete the first time around.






[jira] [Updated] (CASSANDRA-16276) Drain and/or shutdown might throw because of slow messaging service shutdown

2020-11-18 Thread Aleksey Yeschenko (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-16276:
--
  Fix Version/s: (was: 4.0-beta)
 4.0-beta4
  Since Version: 4.0-alpha1
Source Control Link: 
[7ca997ba3514e19864d53b8ca56a1e4f5c26208f|https://github.com/apache/cassandra/commit/7ca997ba3514e19864d53b8ca56a1e4f5c26208f]
 Resolution: Fixed
 Status: Resolved  (was: Ready to Commit)

> Drain and/or shutdown might throw because of slow messaging service shutdown
> 
>
> Key: CASSANDRA-16276
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16276
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Startup and Shutdown
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
>Priority: Normal
> Fix For: 4.0-beta4
>
>
> If we invoke nodetool drain before shutdown, it sometimes fails to shut down 
> messaging service in time (in this case - timing out the shutdown of the 
> eventloopgroup by Netty). But, not before we manage to set isShutdown of 
> StorageService to true, despite aborting further drain logic (including 
> shutting down mutation stages).
> Then, via on shutdown hook, we invoke drain() method again, implicitly. We 
> see that the mutation stage is not shutdown and proceed to assert that 
> isShutdown == false, failing that assertion and triggering a second error log 
> message.
> The patch merely ensures that any exception thrown by MS shutdown is captured 
> so that drain logic can complete the first time around.






[jira] [Commented] (CASSANDRA-16240) Having issues creating a table with name profiles

2020-11-18 Thread Anuj Kulkarni (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17234649#comment-17234649
 ] 

Anuj Kulkarni commented on CASSANDRA-16240:
---

Yes [~aholmber], there is no functional impact from this issue.
What's strange is that we aren't observing this issue on the production environment, but encountered it on an earlier environment that's running the same Cassandra version.

> Having issues creating a table with name profiles
> -
>
> Key: CASSANDRA-16240
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16240
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Anuj Kulkarni
>Priority: Normal
> Attachments: image-2020-11-02-12-13-16-999.png
>
>
> Whenever I try to create a table with name profiles, it always gets created 
> with additional quotes surrounding it. Attaching the screenshot.
> I am on Cassandra 3.7
> I tried creating the table in another keyspace. I also tried creating new 
> virtual machines with the same AMI and same Cassandra version, but to no 
> avail.
> If I try to create a table with any other name, there are no issues at all. 
> It's just with the name profiles.
> I am on Ubuntu 18.04 by the way.
> !image-2020-11-02-12-13-16-999.png!






[jira] [Updated] (CASSANDRA-16277) 'SSLEngine closed already' exception on failed outbound connection

2020-11-18 Thread Aleksey Yeschenko (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-16277:
--
Status: Ready to Commit  (was: Review In Progress)

> 'SSLEngine closed already' exception on failed outbound connection
> --
>
> Key: CASSANDRA-16277
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16277
> Project: Cassandra
>  Issue Type: Bug
>  Components: Messaging/Internode
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
>Priority: Normal
>
> Occasionally Netty will invoke 
> {{OutboundConnectionInitiator#exceptionCaught()}} handler to process an 
> exception of the following kind:
> {code}
> io.netty.channel.unix.Errors$NativeIoException: readAddress(..) failed: 
> Connection reset by peer
> {code}
> When we invoke {{ctx.close()}} later in that method, the listener, set up in 
> {{channelActive()}}, might be
> failed with an {{SSLException("SSLEngine closed already")}} by Netty, and 
> {{exceptionCaught()}} will be invoked
> once again, this time to handle the {{SSLException}} triggered by 
> {{ctx.close()}}.
> The exception at this stage is benign, and we shouldn't be double-logging the 
> failure to connect.
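The fix committed for this ticket guards the handler with an isClosed flag so the benign, re-entrant exception raised by our own close() is not logged a second time. A minimal sketch of that guard pattern follows; the class and method names here are illustrative analogues, not Cassandra's or Netty's actual API:

```java
public class ExceptionOnceSketch {
    private boolean isClosed;
    private int timesLogged;

    // Hypothetical analogue of exceptionCaught(): the first failure is
    // recorded and triggers close(); the benign exception raised by our own
    // close() re-enters this handler and is suppressed via the isClosed flag.
    void exceptionCaught(Throwable cause) {
        if (isClosed)
            return;              // already closing: don't double-log
        isClosed = true;
        timesLogged++;           // stands in for logger.error(...)
        close();
    }

    void close() {
        // closing a half-open TLS channel can surface a second, benign error
        exceptionCaught(new RuntimeException("SSLEngine closed already"));
    }

    int timesLogged() {
        return timesLogged;
    }

    public static void main(String[] args) {
        ExceptionOnceSketch handler = new ExceptionOnceSketch();
        handler.exceptionCaught(new RuntimeException("Connection reset by peer"));
        System.out.println("logged " + handler.timesLogged() + " time(s)");
    }
}
```

Only the original "Connection reset by peer" failure is counted; the follow-up exception from close() is swallowed, matching the ticket's goal of logging the connection failure exactly once.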






[jira] [Updated] (CASSANDRA-16277) 'SSLEngine closed already' exception on failed outbound connection

2020-11-18 Thread Aleksey Yeschenko (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-16277:
--
Reviewers: Alex Petrov, Norman Maurer, Aleksey Yeschenko  (was: Aleksey 
Yeschenko, Alex Petrov, Norman Maurer)
   Alex Petrov, Norman Maurer, Aleksey Yeschenko  (was: Alex 
Petrov, Norman Maurer)
   Status: Review In Progress  (was: Patch Available)

> 'SSLEngine closed already' exception on failed outbound connection
> --
>
> Key: CASSANDRA-16277
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16277
> Project: Cassandra
>  Issue Type: Bug
>  Components: Messaging/Internode
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
>Priority: Normal
>
> Occasionally Netty will invoke 
> {{OutboundConnectionInitiator#exceptionCaught()}} handler to process an 
> exception of the following kind:
> {code}
> io.netty.channel.unix.Errors$NativeIoException: readAddress(..) failed: 
> Connection reset by peer
> {code}
> When we invoke {{ctx.close()}} later in that method, the listener, set up in 
> {{channelActive()}}, might be
> failed with an {{SSLException("SSLEngine closed already")}} by Netty, and 
> {{exceptionCaught()}} will be invoked
> once again, this time to handle the {{SSLException}} triggered by 
> {{ctx.close()}}.
> The exception at this stage is benign, and we shouldn't be double-logging the 
> failure to connect.






[jira] [Commented] (CASSANDRA-16277) 'SSLEngine closed already' exception on failed outbound connection

2020-11-18 Thread Aleksey Yeschenko (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17234667#comment-17234667
 ] 

Aleksey Yeschenko commented on CASSANDRA-16277:
---

Cheers. Committed as 
[cefe43b06bb1c0dd3bd362638cc56e0fc1f78ddb|https://github.com/apache/cassandra/commit/cefe43b06bb1c0dd3bd362638cc56e0fc1f78ddb]
 and 
[e572c8fca0c5cd68229b8db8d4915817d5d49daf|https://github.com/apache/cassandra/commit/e572c8fca0c5cd68229b8db8d4915817d5d49daf]
 to trunk.

> 'SSLEngine closed already' exception on failed outbound connection
> --
>
> Key: CASSANDRA-16277
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16277
> Project: Cassandra
>  Issue Type: Bug
>  Components: Messaging/Internode
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
>Priority: Normal
>
> Occasionally Netty will invoke 
> {{OutboundConnectionInitiator#exceptionCaught()}} handler to process an 
> exception of the following kind:
> {code}
> io.netty.channel.unix.Errors$NativeIoException: readAddress(..) failed: 
> Connection reset by peer
> {code}
> When we invoke {{ctx.close()}} later in that method, the listener, set up in 
> {{channelActive()}}, might be
> failed with an {{SSLException("SSLEngine closed already")}} by Netty, and 
> {{exceptionCaught()}} will be invoked
> once again, this time to handle the {{SSLException}} triggered by 
> {{ctx.close()}}.
> The exception at this stage is benign, and we shouldn't be double-logging the 
> failure to connect.






[jira] [Updated] (CASSANDRA-16277) 'SSLEngine closed already' exception on failed outbound connection

2020-11-18 Thread Aleksey Yeschenko (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-16277:
--
  Fix Version/s: 4.0-beta4
  Since Version: 4.0-alpha1
Source Control Link: 
[cefe43b06bb1c0dd3bd362638cc56e0fc1f78ddb|https://github.com/apache/cassandra/commit/cefe43b06bb1c0dd3bd362638cc56e0fc1f78ddb]
 and 
[e572c8fca0c5cd68229b8db8d4915817d5d49daf|https://github.com/apache/cassandra/commit/e572c8fca0c5cd68229b8db8d4915817d5d49d
 Resolution: Fixed
 Status: Resolved  (was: Ready to Commit)

> 'SSLEngine closed already' exception on failed outbound connection
> --
>
> Key: CASSANDRA-16277
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16277
> Project: Cassandra
>  Issue Type: Bug
>  Components: Messaging/Internode
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
>Priority: Normal
> Fix For: 4.0-beta4
>
>
> Occasionally Netty will invoke 
> {{OutboundConnectionInitiator#exceptionCaught()}} handler to process an 
> exception of the following kind:
> {code}
> io.netty.channel.unix.Errors$NativeIoException: readAddress(..) failed: 
> Connection reset by peer
> {code}
> When we invoke {{ctx.close()}} later in that method, the listener, set up in 
> {{channelActive()}}, might be
> failed with an {{SSLException("SSLEngine closed already")}} by Netty, and 
> {{exceptionCaught()}} will be invoked
> once again, this time to handle the {{SSLException}} triggered by 
> {{ctx.close()}}.
> The exception at this stage is benign, and we shouldn't be double-logging the 
> failure to connect.






[jira] [Updated] (CASSANDRA-16277) 'SSLEngine closed already' exception on failed outbound connection

2020-11-18 Thread Aleksey Yeschenko (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-16277:
--
Source Control Link:   (was: 
[cefe43b06bb1c0dd3bd362638cc56e0fc1f78ddb|https://github.com/apache/cassandra/commit/cefe43b06bb1c0dd3bd362638cc56e0fc1f78ddb]
 and 
[e572c8fca0c5cd68229b8db8d4915817d5d49daf|https://github.com/apache/cassandra/commit/e572c8fca0c5cd68229b8db8d4915817d5d49d)

> 'SSLEngine closed already' exception on failed outbound connection
> --
>
> Key: CASSANDRA-16277
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16277
> Project: Cassandra
>  Issue Type: Bug
>  Components: Messaging/Internode
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
>Priority: Normal
> Fix For: 4.0-beta4
>
>
> Occasionally Netty will invoke 
> {{OutboundConnectionInitiator#exceptionCaught()}} handler to process an 
> exception of the following kind:
> {code}
> io.netty.channel.unix.Errors$NativeIoException: readAddress(..) failed: 
> Connection reset by peer
> {code}
> When we invoke {{ctx.close()}} later in that method, the listener, set up in 
> {{channelActive()}}, might be
> failed with an {{SSLException("SSLEngine closed already")}} by Netty, and 
> {{exceptionCaught()}} will be invoked
> once again, this time to handle the {{SSLException}} triggered by 
> {{ctx.close()}}.
> The exception at this stage is benign, and we shouldn't be double-logging the 
> failure to connect.






[cassandra] 01/02: Upgrade netty to 4.1.54 and netty-tcnative to 2.0.34

2020-11-18 Thread aleksey
This is an automated email from the ASF dual-hosted git repository.

aleksey pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra.git

commit cefe43b06bb1c0dd3bd362638cc56e0fc1f78ddb
Author: Aleksey Yeshchenko 
AuthorDate: Mon Nov 16 17:18:45 2020 +

Upgrade netty to 4.1.54 and netty-tcnative to 2.0.34
---
 .../{netty-4.1.50.txt => netty-4.1.54.txt} |   0
 ...native-2.0.31.txt => netty-tcnative-2.0.34.txt} |   0
 ...4.1.50.Final.jar => netty-all-4.1.54.Final.jar} | Bin 4213710 -> 4318380 bytes
 ...etty-tcnative-boringssl-static-2.0.31.Final.jar | Bin 3953120 -> 0 bytes
 ...etty-tcnative-boringssl-static-2.0.34.Final.jar | Bin 0 -> 4018015 bytes
 5 files changed, 0 insertions(+), 0 deletions(-)

diff --git a/lib/licenses/netty-4.1.50.txt b/lib/licenses/netty-4.1.54.txt
similarity index 100%
rename from lib/licenses/netty-4.1.50.txt
rename to lib/licenses/netty-4.1.54.txt
diff --git a/lib/licenses/netty-tcnative-2.0.31.txt b/lib/licenses/netty-tcnative-2.0.34.txt
similarity index 100%
rename from lib/licenses/netty-tcnative-2.0.31.txt
rename to lib/licenses/netty-tcnative-2.0.34.txt
diff --git a/lib/netty-all-4.1.50.Final.jar b/lib/netty-all-4.1.54.Final.jar
similarity index 67%
rename from lib/netty-all-4.1.50.Final.jar
rename to lib/netty-all-4.1.54.Final.jar
index f8b1557..5b9d4d9 100644
Binary files a/lib/netty-all-4.1.50.Final.jar and b/lib/netty-all-4.1.54.Final.jar differ
diff --git a/lib/netty-tcnative-boringssl-static-2.0.31.Final.jar b/lib/netty-tcnative-boringssl-static-2.0.31.Final.jar
deleted file mode 100644
index 582c582..000
Binary files a/lib/netty-tcnative-boringssl-static-2.0.31.Final.jar and /dev/null differ
diff --git a/lib/netty-tcnative-boringssl-static-2.0.34.Final.jar b/lib/netty-tcnative-boringssl-static-2.0.34.Final.jar
new file mode 100644
index 000..ae902f5
Binary files /dev/null and b/lib/netty-tcnative-boringssl-static-2.0.34.Final.jar differ





[cassandra] branch trunk updated (7ca997b -> e572c8f)

2020-11-18 Thread aleksey
This is an automated email from the ASF dual-hosted git repository.

aleksey pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra.git.


from 7ca997b  Drain and/or shutdown might throw because of slow messaging service shutdown
 new cefe43b  Upgrade netty to 4.1.54 and netty-tcnative to 2.0.34
 new e572c8f  'SSLEngine closed already' exception on failed outbound connection

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 CHANGES.txt|   1 +
 .../{netty-4.1.50.txt => netty-4.1.54.txt} |   0
 ...native-2.0.31.txt => netty-tcnative-2.0.34.txt} |   0
 ...4.1.50.Final.jar => netty-all-4.1.54.Final.jar} | Bin 4213710 -> 4318380 bytes
 ...etty-tcnative-boringssl-static-2.0.31.Final.jar | Bin 3953120 -> 0 bytes
 ...etty-tcnative-boringssl-static-2.0.34.Final.jar | Bin 0 -> 4018015 bytes
 .../cassandra/net/OutboundConnectionInitiator.java |  18 ++
 7 files changed, 19 insertions(+)
 rename lib/licenses/{netty-4.1.50.txt => netty-4.1.54.txt} (100%)
 rename lib/licenses/{netty-tcnative-2.0.31.txt => netty-tcnative-2.0.34.txt} (100%)
 rename lib/{netty-all-4.1.50.Final.jar => netty-all-4.1.54.Final.jar} (67%)
 delete mode 100644 lib/netty-tcnative-boringssl-static-2.0.31.Final.jar
 create mode 100644 lib/netty-tcnative-boringssl-static-2.0.34.Final.jar





[cassandra] 02/02: 'SSLEngine closed already' exception on failed outbound connection

2020-11-18 Thread aleksey
This is an automated email from the ASF dual-hosted git repository.

aleksey pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra.git

commit e572c8fca0c5cd68229b8db8d4915817d5d49daf
Author: Aleksey Yeshchenko 
AuthorDate: Mon Nov 16 17:36:24 2020 +

'SSLEngine closed already' exception on failed outbound connection

patch by Aleksey Yeschenko; reviewed by Alex Petrov and Norman Maurer for
(CASSANDRA-16277)
---
 CHANGES.txt|  1 +
 .../cassandra/net/OutboundConnectionInitiator.java | 18 ++
 2 files changed, 19 insertions(+)

diff --git a/CHANGES.txt b/CHANGES.txt
index 176bc04..7d904a9 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 4.0-beta4
+ * 'SSLEngine closed already' exception on failed outbound connection (CASSANDRA-16277)
  * Drain and/or shutdown might throw because of slow messaging service shutdown (CASSANDRA-16276)
  * Upgrade JNA to 5.6.0, dropping support for <=glibc-2.6 systems (CASSANDRA-16212)
  * Add saved Host IDs to TokenMetadata at startup (CASSANDRA-16246)
diff --git a/src/java/org/apache/cassandra/net/OutboundConnectionInitiator.java b/src/java/org/apache/cassandra/net/OutboundConnectionInitiator.java
index 4a5585a..2c26005 100644
--- a/src/java/org/apache/cassandra/net/OutboundConnectionInitiator.java
+++ b/src/java/org/apache/cassandra/net/OutboundConnectionInitiator.java
@@ -42,6 +42,7 @@ import io.netty.handler.codec.ByteToMessageDecoder;
 
 import io.netty.handler.logging.LogLevel;
 import io.netty.handler.logging.LoggingHandler;
+import io.netty.handler.ssl.SslClosedEngineException;
 import io.netty.handler.ssl.SslContext;
 import io.netty.handler.ssl.SslHandler;
 import io.netty.util.concurrent.FailedFuture;
@@ -86,6 +87,7 @@ public class OutboundConnectionInitiator
     private final Promise<Result<SuccessType>> resultPromise;
+    private boolean isClosed;
 
     private OutboundConnectionInitiator(ConnectionType type, OutboundConnectionSettings settings,
                                         int requestMessagingVersion, Promise<Result<SuccessType>> resultPromise)
@@ -363,6 +365,21 @@ public class OutboundConnectionInitiator

[cassandra] branch trunk updated (7ca997b -> e572c8f)

2020-11-18 Thread aleksey
This is an automated email from the ASF dual-hosted git repository.

aleksey pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra.git.


from 7ca997b  Drain and/or shutdown might throw because of slow messaging 
service shutdown
 new cefe43b  Upgrade netty to 5.1.54 and netty-tcnative to 2.0.34
 new e572c8f  'SSLEngine closed already' exception on failed outbound 
connection

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 CHANGES.txt|   1 +
 .../{netty-4.1.50.txt => netty-4.1.54.txt} |   0
 ...native-2.0.31.txt => netty-tcnative-2.0.34.txt} |   0
 ...4.1.50.Final.jar => netty-all-4.1.54.Final.jar} | Bin 4213710 -> 4318380 
bytes
 ...etty-tcnative-boringssl-static-2.0.31.Final.jar | Bin 3953120 -> 0 bytes
 ...etty-tcnative-boringssl-static-2.0.34.Final.jar | Bin 0 -> 4018015 bytes
 .../cassandra/net/OutboundConnectionInitiator.java |  18 ++
 7 files changed, 19 insertions(+)
 rename lib/licenses/{netty-4.1.50.txt => netty-4.1.54.txt} (100%)
 rename lib/licenses/{netty-tcnative-2.0.31.txt => netty-tcnative-2.0.34.txt} 
(100%)
 rename lib/{netty-all-4.1.50.Final.jar => netty-all-4.1.54.Final.jar} (67%)
 delete mode 100644 lib/netty-tcnative-boringssl-static-2.0.31.Final.jar
 create mode 100644 lib/netty-tcnative-boringssl-static-2.0.34.Final.jar


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[cassandra] 02/02: 'SSLEngine closed already' exception on failed outbound connection

2020-11-18 Thread aleksey
This is an automated email from the ASF dual-hosted git repository.

aleksey pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra.git

commit e572c8fca0c5cd68229b8db8d4915817d5d49daf
Author: Aleksey Yeshchenko 
AuthorDate: Mon Nov 16 17:36:24 2020 +

'SSLEngine closed already' exception on failed outbound connection

patch by Aleksey Yeschenko; reviewed by Alex Petrov and Norman Maurer for
(CASSANDRA-16277)
---
 CHANGES.txt|  1 +
 .../cassandra/net/OutboundConnectionInitiator.java | 18 ++
 2 files changed, 19 insertions(+)

diff --git a/CHANGES.txt b/CHANGES.txt
index 176bc04..7d904a9 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 4.0-beta4
+ * 'SSLEngine closed already' exception on failed outbound connection 
(CASSANDRA-16277)
  * Drain and/or shutdown might throw because of slow messaging service 
shutdown (CASSANDRA-16276)
  * Upgrade JNA to 5.6.0, dropping support for <=glibc-2.6 systems 
(CASSANDRA-16212)
  * Add saved Host IDs to TokenMetadata at startup (CASSANDRA-16246)
diff --git a/src/java/org/apache/cassandra/net/OutboundConnectionInitiator.java 
b/src/java/org/apache/cassandra/net/OutboundConnectionInitiator.java
index 4a5585a..2c26005 100644
--- a/src/java/org/apache/cassandra/net/OutboundConnectionInitiator.java
+++ b/src/java/org/apache/cassandra/net/OutboundConnectionInitiator.java
@@ -42,6 +42,7 @@ import io.netty.handler.codec.ByteToMessageDecoder;
 
 import io.netty.handler.logging.LogLevel;
 import io.netty.handler.logging.LoggingHandler;
+import io.netty.handler.ssl.SslClosedEngineException;
 import io.netty.handler.ssl.SslContext;
 import io.netty.handler.ssl.SslHandler;
 import io.netty.util.concurrent.FailedFuture;
@@ -86,6 +87,7 @@ public class OutboundConnectionInitiator> resultPromise;
+private boolean isClosed;
 
 private OutboundConnectionInitiator(ConnectionType type, 
OutboundConnectionSettings settings,
 int requestMessagingVersion, 
Promise> resultPromise)
@@ -363,6 +365,21 @@ public class OutboundConnectionInitiator

[cassandra] 01/02: Upgrade netty to 4.1.54 and netty-tcnative to 2.0.34

2020-11-18 Thread aleksey
This is an automated email from the ASF dual-hosted git repository.

aleksey pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra.git

commit cefe43b06bb1c0dd3bd362638cc56e0fc1f78ddb
Author: Aleksey Yeshchenko 
AuthorDate: Mon Nov 16 17:18:45 2020 +

Upgrade netty to 4.1.54 and netty-tcnative to 2.0.34
---
 .../{netty-4.1.50.txt => netty-4.1.54.txt} |   0
 ...native-2.0.31.txt => netty-tcnative-2.0.34.txt} |   0
 ...4.1.50.Final.jar => netty-all-4.1.54.Final.jar} | Bin 4213710 -> 4318380 
bytes
 ...etty-tcnative-boringssl-static-2.0.31.Final.jar | Bin 3953120 -> 0 bytes
 ...etty-tcnative-boringssl-static-2.0.34.Final.jar | Bin 0 -> 4018015 bytes
 5 files changed, 0 insertions(+), 0 deletions(-)

diff --git a/lib/licenses/netty-4.1.50.txt b/lib/licenses/netty-4.1.54.txt
similarity index 100%
rename from lib/licenses/netty-4.1.50.txt
rename to lib/licenses/netty-4.1.54.txt
diff --git a/lib/licenses/netty-tcnative-2.0.31.txt 
b/lib/licenses/netty-tcnative-2.0.34.txt
similarity index 100%
rename from lib/licenses/netty-tcnative-2.0.31.txt
rename to lib/licenses/netty-tcnative-2.0.34.txt
diff --git a/lib/netty-all-4.1.50.Final.jar b/lib/netty-all-4.1.54.Final.jar
similarity index 67%
rename from lib/netty-all-4.1.50.Final.jar
rename to lib/netty-all-4.1.54.Final.jar
index f8b1557..5b9d4d9 100644
Binary files a/lib/netty-all-4.1.50.Final.jar and 
b/lib/netty-all-4.1.54.Final.jar differ
diff --git a/lib/netty-tcnative-boringssl-static-2.0.31.Final.jar 
b/lib/netty-tcnative-boringssl-static-2.0.31.Final.jar
deleted file mode 100644
index 582c582..000
Binary files a/lib/netty-tcnative-boringssl-static-2.0.31.Final.jar and 
/dev/null differ
diff --git a/lib/netty-tcnative-boringssl-static-2.0.34.Final.jar 
b/lib/netty-tcnative-boringssl-static-2.0.34.Final.jar
new file mode 100644
index 000..ae902f5
Binary files /dev/null and 
b/lib/netty-tcnative-boringssl-static-2.0.34.Final.jar differ





[jira] [Updated] (CASSANDRA-16277) 'SSLEngine closed already' exception on failed outbound connection

2020-11-18 Thread Aleksey Yeschenko (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-16277:
--
Source Control Link: 
[e572c8fca0c5cd68229b8db8d4915817d5d49daf|https://github.com/apache/cassandra/commit/e572c8fca0c5cd68229b8db8d4915817d5d49daf]

> 'SSLEngine closed already' exception on failed outbound connection
> --
>
> Key: CASSANDRA-16277
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16277
> Project: Cassandra
>  Issue Type: Bug
>  Components: Messaging/Internode
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
>Priority: Normal
> Fix For: 4.0-beta4
>
>
> Occasionally Netty will invoke 
> {{OutboundConnectionInitiator#exceptionCaught()}} handler to process an 
> exception of the following kind:
> {code}
> io.netty.channel.unix.Errors$NativeIoException: readAddress(..) failed: 
> Connection reset by peer
> {code}
> When we invoke {{ctx.close()}} later in that method, the listener, set up in 
> {{channelActive()}}, might be
> failed with an {{SSLException("SSLEngine closed already")}} by Netty, and 
> {{exceptionCaught()}} will be invoked
> once again, this time to handle the {{SSLException}} triggered by 
> {{ctx.close()}}.
> The exception at this stage is benign, and we shouldn't be double-logging the 
> failure to connect.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Created] (CASSANDRA-16284) Too defensive check when picking sstables for preview repair

2020-11-18 Thread Marcus Eriksson (Jira)
Marcus Eriksson created CASSANDRA-16284:
---

 Summary: Too defensive check when picking sstables for preview 
repair
 Key: CASSANDRA-16284
 URL: https://issues.apache.org/jira/browse/CASSANDRA-16284
 Project: Cassandra
  Issue Type: Bug
  Components: Consistency/Repair
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson


We fail to start any preview repair if any sstable is marked as pending repair 
while its session is not yet finalized. The current check does not care whether 
the range we are previewing intersects with the sstable marked pending - this 
means that we abort too many preview repairs.
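The intended behaviour can be sketched as follows. Range, SSTable, and canPreview are hypothetical simplifications of Cassandra's real types, assuming half-open token ranges; the point is that the pending-repair predicate is only applied to sstables whose range intersects the previewed range:

```java
import java.util.List;

public class PreviewRepairCheck {
    // Simplified half-open token range [left, right)
    record Range(long left, long right) {
        boolean intersects(Range other) {
            return left < other.right && other.left < right;
        }
    }

    // Simplified sstable: its range plus whether it is pending an
    // unfinalized repair session
    record SSTable(Range range, boolean pendingUnfinalized) {}

    // true if a preview repair over 'previewed' may proceed
    static boolean canPreview(List<SSTable> sstables, Range previewed) {
        return sstables.stream()
                       .filter(s -> s.range().intersects(previewed)) // range check first
                       .noneMatch(SSTable::pendingUnfinalized);      // then the predicate
    }

    public static void main(String[] args) {
        SSTable pendingOutside = new SSTable(new Range(0, 10), true);
        SSTable cleanInside = new SSTable(new Range(20, 30), false);
        // A pending sstable outside the previewed range no longer aborts the repair.
        System.out.println(canPreview(List.of(pendingOutside, cleanInside),
                                      new Range(15, 35))); // prints true
    }
}
```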






[jira] [Updated] (CASSANDRA-16284) Too defensive check when picking sstables for preview repair

2020-11-18 Thread Marcus Eriksson (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-16284:

 Bug Category: Parent values: Degradation(12984)Level 1 values: Other 
Exception(12998)
   Complexity: Low Hanging Fruit
Discovered By: Code Inspection
Fix Version/s: 4.0-beta4
 Severity: Low
   Status: Open  (was: Triage Needed)

> Too defensive check when picking sstables for preview repair
> 
>
> Key: CASSANDRA-16284
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16284
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Repair
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Normal
> Fix For: 4.0-beta4
>
>
> We fail starting any preview repair if any sstable is marked pending but the 
> session not being finalized. The current check does not care if the range we 
> are previewing intersects with the sstable marked pending - this means that 
> we abort too many preview repairs.






[jira] [Updated] (CASSANDRA-16284) Too defensive check when picking sstables for preview repair

2020-11-18 Thread Marcus Eriksson (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-16284:

Test and Documentation Plan: cci run, new jvm dtest
 Status: Patch Available  (was: Open)

patch to apply the predicate after checking range intersection:

patch: https://github.com/krummas/cassandra/commits/marcuse/16284
cci: (on its way, seems circleci is down)

> Too defensive check when picking sstables for preview repair
> 
>
> Key: CASSANDRA-16284
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16284
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Repair
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Normal
> Fix For: 4.0-beta4
>
>
> We fail starting any preview repair if any sstable is marked pending but the 
> session not being finalized. The current check does not care if the range we 
> are previewing intersects with the sstable marked pending - this means that 
> we abort too many preview repairs.






[jira] [Comment Edited] (CASSANDRA-16284) Too defensive check when picking sstables for preview repair

2020-11-18 Thread Marcus Eriksson (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17234679#comment-17234679
 ] 

Marcus Eriksson edited comment on CASSANDRA-16284 at 11/18/20, 2:57 PM:


patch to apply the predicate after checking range intersection:

patch: https://github.com/krummas/cassandra/commits/marcuse/16284
cci: 
https://app.circleci.com/pipelines/github/krummas/cassandra?branch=marcuse%2F16284


was (Author: krummas):
patch to apply the predicate after checking range intersection:

patch: https://github.com/krummas/cassandra/commits/marcuse/16284
cci: (on its way, seems circleci is down)

> Too defensive check when picking sstables for preview repair
> 
>
> Key: CASSANDRA-16284
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16284
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Repair
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Normal
> Fix For: 4.0-beta4
>
>
> We fail starting any preview repair if any sstable is marked pending but the 
> session not being finalized. The current check does not care if the range we 
> are previewing intersects with the sstable marked pending - this means that 
> we abort too many preview repairs.






[jira] [Commented] (CASSANDRA-16275) Update python driver used by cassandra-dtest

2020-11-18 Thread Adam Holmberg (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17234699#comment-17234699
 ] 

Adam Holmberg commented on CASSANDRA-16275:
---

Apologies, I thought I commented here...

This sounds like a fine plan to me. Just let me know when you would like the 
driver branch updated.

I was not aware that a docker image update was involved. Is the driver 
installed there just to save time during the build? "back in the day" it was 
just a driver branch update followed by a dtest merge. I'm not familiar with 
how step three fits into things.

Also curious whether INFRA-21103 is a strict prerequisite, or if the images could 
keep being published under the same individual account(s) as they are now?

> Update python driver used by cassandra-dtest
> 
>
> Key: CASSANDRA-16275
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16275
> Project: Cassandra
>  Issue Type: Task
>  Components: Test/dtest/python
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Normal
>
> In order to commit CASSANDRA-15299, the python driver used by the dtests 
> needs to include PYTHON-1258, support for V5 framing. 
> Updating the python driver's cassandra-test branch to latest trunk causes 1 
> additional dtest failure in 
> {{auth_test.py::TestAuth::test_handle_corrupt_role_data}} because the 
> {{ServerError}} response is now subject to the configured {{retry_policy}}. 
> This means the error ultimately returned from the driver is 
> {{NoHostAvailable}}, rather than {{ServerError}}. 
> I'll open a dtest pr to change the expectation in the test and we can commit 
> that when the cassandra-test branch is updated.
> cc [~aholmber] [~aboudreault]






[jira] [Commented] (CASSANDRA-16240) Having issues creating a table with name profiles

2020-11-18 Thread Adam Holmberg (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17234702#comment-17234702
 ] 

Adam Holmberg commented on CASSANDRA-16240:
---

That is a little strange. Are you using cqlsh from different versions of 
Cassandra? Or maybe an environment with a different Python driver installed?

> Having issues creating a table with name profiles
> -
>
> Key: CASSANDRA-16240
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16240
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Anuj Kulkarni
>Priority: Normal
> Attachments: image-2020-11-02-12-13-16-999.png
>
>
> Whenever I try to create a table with name profiles, it always gets created 
> with additional quotes surrounding it. Attaching the screenshot.
> I am on Cassandra 3.7
> I tried creating the table in another keyspace. I also tried creating new 
> virtual machines with the same AMI and same Cassandra version, but to no 
> avail.
> If I try to create a table with any other name, there are no issues at all. 
> It's just with the name profiles.
> I am on Ubuntu 18.04 by the way.
> !image-2020-11-02-12-13-16-999.png!






[jira] [Commented] (CASSANDRA-16275) Update python driver used by cassandra-dtest

2020-11-18 Thread Sam Tunnicliffe (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17234711#comment-17234711
 ] 

Sam Tunnicliffe commented on CASSANDRA-16275:
-

No worries, thanks [~aholmber].

Yes, the drivers are installed during the docker image build (I guess to save 
time when starting the containers). They do pull the driver branch to get the 
latest update every time the container is launched, the thought being that this 
will be minimal I suppose. 

I tried just updating the requirements.txt so that pip would pull a version 
with the V5 stuff, but the {{-w}} flag on pip update doesn't seem to remove the 
compiled artifacts ({{.c}} & {{.so}} objects) that were installed at build time, 
and there are some incompatibilities there with the latest src. 

You're right that it doesn't strictly require INFRA-21103 though, it might just 
be a good time to start publishing to an asf namespace. If there's no movement 
on that ticket in the next day or two we could just go ahead and publish to an 
individual account.

> Update python driver used by cassandra-dtest
> 
>
> Key: CASSANDRA-16275
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16275
> Project: Cassandra
>  Issue Type: Task
>  Components: Test/dtest/python
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Normal
>
> In order to commit CASSANDRA-15299, the python driver used by the dtests 
> needs to include PYTHON-1258, support for V5 framing. 
> Updating the python driver's cassandra-test branch to latest trunk causes 1 
> additional dtest failure in 
> {{auth_test.py::TestAuth::test_handle_corrupt_role_data}} because the 
> {{ServerError}} response is now subject to the configured {{retry_policy}}. 
> This means the error ultimately returned from the driver is 
> {{NoHostAvailable}}, rather than {{ServerError}}. 
> I'll open a dtest pr to change the expectation in the test and we can commit 
> that when the cassandra-test branch is updated.
> cc [~aholmber] [~aboudreault]






[jira] [Comment Edited] (CASSANDRA-16275) Update python driver used by cassandra-dtest

2020-11-18 Thread Sam Tunnicliffe (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17234711#comment-17234711
 ] 

Sam Tunnicliffe edited comment on CASSANDRA-16275 at 11/18/20, 3:21 PM:


No worries, thanks [~aholmber].

Yes, the drivers are installed during the docker image build (I guess to save 
time when starting the containers). They do pull the driver branch to get the 
latest update every time the container is launched, the thought being that this 
will be minimal I suppose.

I tried just updating the requirements.txt so that pip would pull a version 
with the V5 stuff, but the {{-w}} flag on pip update doesn't seem to remove the 
compiled artifacts ({{.c}} & {{.so}} objects) that were installed at build 
time, and there are some incompatibilities there with the latest src.

You're right that it doesn't strictly require INFRA-21103 though, it might just 
be a good time to start publishing to an asf namespace. If there's no movement 
on that ticket in the next day or two we could just go ahead and publish to an 
individual account.


was (Author: beobal):
No worries, thanks [~aholmber].

Yes, the drivers are installed during the docker image build (I guess to save 
time when starting the containers). They do pull the driver branch to get the 
latest update every time the container is launched, the thought being that this 
will be minimal I suppose. 

I tried just updating the requirements.txt so that pip would pull a version 
with the V5 stuff, but the {{-w}} flag on pip update doesn't seem to remove the 
compiled artifacts ({.c} & {.so} objects) that were installed at build time, 
and there are some incompatibilities there with the latest src. 

You're right that it doesn't strictly require INFRA-21103 though, it might just 
be a good time to start publishing to an asf namespace. If there's no movement 
on that ticket in the next day or two we could just go ahead and publish to an 
individual account.

> Update python driver used by cassandra-dtest
> 
>
> Key: CASSANDRA-16275
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16275
> Project: Cassandra
>  Issue Type: Task
>  Components: Test/dtest/python
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Normal
>
> In order to commit CASSANDRA-15299, the python driver used by the dtests 
> needs to include PYTHON-1258, support for V5 framing. 
> Updating the python driver's cassandra-test branch to latest trunk causes 1 
> additional dtest failure in 
> {{auth_test.py::TestAuth::test_handle_corrupt_role_data}} because the 
> {{ServerError}} response is now subject to the configured {{retry_policy}}. 
> This means the error ultimately returned from the driver is 
> {{NoHostAvailable}}, rather than {{ServerError}}. 
> I'll open a dtest pr to change the expectation in the test and we can commit 
> that when the cassandra-test branch is updated.
> cc [~aholmber] [~aboudreault]






[jira] [Updated] (CASSANDRA-16261) Prevent unbounded number of flushing tasks

2020-11-18 Thread Jira


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andres de la Peña updated CASSANDRA-16261:
--
Reviewers: Andres de la Peña
   Status: Review In Progress  (was: Patch Available)

> Prevent unbounded number of flushing tasks
> --
>
> Key: CASSANDRA-16261
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16261
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Local Write-Read Paths
>Reporter: Ekaterina Dimitrova
>Assignee: Ekaterina Dimitrova
>Priority: Normal
> Fix For: 3.11.x, 4.0-beta4
>
>
> The cleaner thread is not prevented from queueing an unbounded number of 
> flushing tasks for memtables that are almost empty.






[jira] [Commented] (CASSANDRA-16255) Update jctools dependency

2020-11-18 Thread Michael Semb Wever (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17234763#comment-17234763
 ] 

Michael Semb Wever commented on CASSANDRA-16255:


[~Bereng], since you touched the netbeans project file, would you mind 
reviewing CASSANDRA-16234 (it's a one-liner and not so easy to find reviewers 
for…), as it would make sense to push these out together.

> Update jctools dependency
> -
>
> Key: CASSANDRA-16255
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16255
> Project: Cassandra
>  Issue Type: Task
>  Components: Local/Other
>Reporter: Marcus Eriksson
>Assignee: Berenguer Blasi
>Priority: Normal
> Fix For: 4.0-beta4
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> CASSANDRA-15880 started using {{MpmcArrayQueue}} from jctools - before that 
> we only used it in cassandra-stress. We should probably update the dependency, 
> as jctools-1.2.1 is more than 4 years old.






[jira] [Created] (CASSANDRA-16285) Change Dynamic Snitch Default Badness Threshold to 1.0

2020-11-18 Thread Jordan West (Jira)
Jordan West created CASSANDRA-16285:
---

 Summary: Change Dynamic Snitch Default Badness Threshold to 1.0
 Key: CASSANDRA-16285
 URL: https://issues.apache.org/jira/browse/CASSANDRA-16285
 Project: Cassandra
  Issue Type: Improvement
  Components: Consistency/Coordination
Reporter: Jordan West
Assignee: Jordan West
 Attachments: readcount-0.1.png, readcount-1.0.png, 
readlatency-0.1.png, readlatency-1.0.png

With the removal of compaction and IO from the DynamicEndpointSnitch score 
calculation, the default badness threshold of 10% (0.1) is, in our experience 
with production clusters, too small a margin. When compaction and IO values 
were included, the resulting scores were dominated by them and 10% was a much 
more noticeable difference. When relying solely on latency, the 
DynamicEndpointSnitch can prefer nodes that are performing only marginally 
better than their peers. This results in a lopsided request distribution among 
the replicas despite similar performance. 

Some graphs are attached from a production cluster showing the read count and 
latency among the nodes with the default of 0.1 and with the badness threshold 
set to 1.
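The effect of the threshold can be illustrated with a simplified model. The order method below is a hedged sketch, not the actual DynamicEndpointSnitch#sortedByProximity logic, and the scores are made-up latency values:

```java
import java.util.Arrays;

public class BadnessThresholdDemo {
    // Re-sort replicas by latency score only when the worst replica's score
    // exceeds the best by more than the badness threshold; otherwise keep the
    // static (topology-based) order so reads stay spread across replicas.
    static double[] order(double[] scores, double threshold) {
        double best = Arrays.stream(scores).min().orElse(0);
        double worst = Arrays.stream(scores).max().orElse(0);
        if (best > 0 && (worst - best) / best > threshold) {
            double[] sorted = scores.clone();
            Arrays.sort(sorted);
            return sorted;  // latency order: the fastest node absorbs most reads
        }
        return scores;      // scores are close enough: leave the order alone
    }

    public static void main(String[] args) {
        double[] scores = {1.15, 1.0}; // replicas within 15% of each other
        // threshold 0.1: 15% > 10%, so traffic skews to the faster node
        System.out.println(Arrays.toString(order(scores, 0.1))); // [1.0, 1.15]
        // threshold 1.0: a node must be 2x worse before re-sorting kicks in
        System.out.println(Arrays.toString(order(scores, 1.0))); // [1.15, 1.0]
    }
}
```

With the threshold raised to 1.0, two replicas with similar latencies keep the static order, which matches the more even read distribution shown in the attached graphs.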






[jira] [Updated] (CASSANDRA-16285) Change Dynamic Snitch Default Badness Threshold to 1.0

2020-11-18 Thread Jordan West (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jordan West updated CASSANDRA-16285:

Change Category: Performance
 Complexity: Low Hanging Fruit
 Status: Open  (was: Triage Needed)

> Change Dynamic Snitch Default Badness Threshold to 1.0
> --
>
> Key: CASSANDRA-16285
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16285
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Consistency/Coordination
>Reporter: Jordan West
>Assignee: Jordan West
>Priority: Normal
> Attachments: readcount-0.1.png, readcount-1.0.png, 
> readlatency-0.1.png, readlatency-1.0.png
>
>
> With the removal of compaction and IO from the DynamicEndpointSnitch score 
> calculation, the default badness threshold of 10% (0.1) is too small of a 
> margin from experience with production clusters. When compaction and IO 
> values were included, the resulting scores were dominated by them and 10% was 
> a much more noticeable difference. When relying solely on latency, the 
> DynamicEndpointSnitch can rely on nodes that are performing only marginally 
> better than their peers. This results in a lopsided request distribution 
> among the replicas despite similar performance. 
> Some graphs are attached from a production cluster showing the read count and 
> latency among the nodes with the default of 0.1 and with the badness 
> threshold set to 1.






[jira] [Updated] (CASSANDRA-16285) Change Dynamic Snitch Default Badness Threshold to 1.0

2020-11-18 Thread Jordan West (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jordan West updated CASSANDRA-16285:

Test and Documentation Plan: Run existing test suites
 Status: Patch Available  (was: Open)

[branch| https://github.com/jrwest/cassandra/tree/jwest/16285] 
[tests| 
https://app.circleci.com/pipelines/github/jrwest/cassandra?branch=jwest%2F16285]

> Change Dynamic Snitch Default Badness Threshold to 1.0
> --
>
> Key: CASSANDRA-16285
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16285
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Consistency/Coordination
>Reporter: Jordan West
>Assignee: Jordan West
>Priority: Normal
> Attachments: readcount-0.1.png, readcount-1.0.png, 
> readlatency-0.1.png, readlatency-1.0.png
>
>
> With the removal of compaction and IO from the DynamicEndpointSnitch score 
> calculation, the default badness threshold of 10% (0.1) is too small of a 
> margin from experience with production clusters. When compaction and IO 
> values were included, the resulting scores were dominated by them and 10% was 
> a much more noticeable difference. When relying solely on latency, the 
> DynamicEndpointSnitch can rely on nodes that are performing only marginally 
> better than their peers. This results in a lopsided request distribution 
> among the replicas despite similar performance. 
> Some graphs are attached from a production cluster showing the read count and 
> latency among the nodes with the default of 0.1 and with the badness 
> threshold set to 1.






[jira] [Commented] (CASSANDRA-16285) Change Dynamic Snitch Default Badness Threshold to 1.0

2020-11-18 Thread David Capwell (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17234876#comment-17234876
 ] 

David Capwell commented on CASSANDRA-16285:
---

+1

> Change Dynamic Snitch Default Badness Threshold to 1.0
> --
>
> Key: CASSANDRA-16285
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16285
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Consistency/Coordination
>Reporter: Jordan West
>Assignee: Jordan West
>Priority: Normal
> Attachments: readcount-0.1.png, readcount-1.0.png, 
> readlatency-0.1.png, readlatency-1.0.png
>
>
> With the removal of compaction and IO from the DynamicEndpointSnitch score 
> calculation, the default badness threshold of 10% (0.1) is too small of a 
> margin from experience with production clusters. When compaction and IO 
> values were included, the resulting scores were dominated by them and 10% was 
> a much more noticeable difference. When relying solely on latency, the 
> DynamicEndpointSnitch can rely on nodes that are performing only marginally 
> better than their peers. This results in a lopsided request distribution 
> among the replicas despite similar performance. 
> Some graphs are attached from a production cluster showing the read count and 
> latency among the nodes with the default of 0.1 and with the badness 
> threshold set to 1.






[jira] [Updated] (CASSANDRA-16285) Change Dynamic Snitch Default Badness Threshold to 1.0

2020-11-18 Thread David Capwell (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Capwell updated CASSANDRA-16285:
--
Reviewers: David Capwell
   Status: Review In Progress  (was: Patch Available)

> Change Dynamic Snitch Default Badness Threshold to 1.0
> --
>
> Key: CASSANDRA-16285
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16285
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Consistency/Coordination
>Reporter: Jordan West
>Assignee: Jordan West
>Priority: Normal
> Attachments: readcount-0.1.png, readcount-1.0.png, 
> readlatency-0.1.png, readlatency-1.0.png
>
>
> With the removal of compaction and IO from the DynamicEndpointSnitch score 
> calculation, the default badness threshold of 10% (0.1) is too small of a 
> margin from experience with production clusters. When compaction and IO 
> values were included, the resulting scores were dominated by them and 10% was 
> a much more noticeable difference. When relying solely on latency, the 
> DynamicEndpointSnitch can rely on nodes that are performing only marginally 
> better than their peers. This results in a lopsided request distribution 
> among the replicas despite similar performance. 
> Some graphs are attached from a production cluster showing the read count and 
> latency among the nodes with the default of 0.1 and with the badness 
> threshold set to 1.






[jira] [Commented] (CASSANDRA-16285) Change Dynamic Snitch Default Badness Threshold to 1.0

2020-11-18 Thread Jordan West (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17234892#comment-17234892
 ] 

Jordan West commented on CASSANDRA-16285:
-

There was a failure in a dynamic endpoint snitch test, looking at that before 
merge. 

> Change Dynamic Snitch Default Badness Threshold to 1.0
> --
>
> Key: CASSANDRA-16285
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16285
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Consistency/Coordination
>Reporter: Jordan West
>Assignee: Jordan West
>Priority: Normal
> Attachments: readcount-0.1.png, readcount-1.0.png, 
> readlatency-0.1.png, readlatency-1.0.png
>
>
> With the removal of compaction and IO from the DynamicEndpointSnitch score 
> calculation, the default badness threshold of 10% (0.1) is too small of a 
> margin from experience with production clusters. When compaction and IO 
> values were included, the resulting scores were dominated by them and 10% was 
> a much more noticeable difference. When relying solely on latency, the 
> DynamicEndpointSnitch can rely on nodes that are performing only marginally 
> better than their peers. This results in a lopsided request distribution 
> among the replicas despite similar performance. 
> Some graphs are attached from a production cluster showing the read count and 
> latency among the nodes with the default of 0.1 and with the badness 
> threshold set to 1.






[jira] [Commented] (CASSANDRA-16285) Change Dynamic Snitch Default Badness Threshold to 1.0

2020-11-18 Thread Jordan West (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17234913#comment-17234913
 ] 

Jordan West commented on CASSANDRA-16285:
-

Pushed an update to {{DynamicEndpointSnitchTest}} to use the old default (0.1). 

> Change Dynamic Snitch Default Badness Threshold to 1.0
> --
>
> Key: CASSANDRA-16285
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16285
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Consistency/Coordination
>Reporter: Jordan West
>Assignee: Jordan West
>Priority: Normal
> Attachments: readcount-0.1.png, readcount-1.0.png, 
> readlatency-0.1.png, readlatency-1.0.png
>
>






[jira] [Comment Edited] (CASSANDRA-15897) Dropping compact storage with 2.1-sstables on disk make them unreadable

2020-11-18 Thread Ekaterina Dimitrova (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17234925#comment-17234925
 ] 

Ekaterina Dimitrova edited comment on CASSANDRA-15897 at 11/18/20, 6:52 PM:


Talked to [~ifesdjeen] on Slack. 

Considering the scope left until GA, and that this issue fixes behavior in case 
someone didn't follow the instructions to upgrade sstables before dropping 
compact storage and upgrading, this patch will be put on hold for later (for 
sure a great improvement that should go in) in favor of solid documentation and 
NEWS.txt. 

Anyone against moving it to 4.x for example and posting a patch to update 
NEWS.txt and docs for now?


was (Author: e.dimitrova):
Talked to Alex on Slack. 

Considering the scope left until GA and that this issue fixes behavior in case 
someone didn't follow the instructions to upgrade sstables before dropping 
compact storage and upgrade, this patch will be put on hold for later(for sure 
great improvement that should go in) in favor of solid documentation and 
NEWS.txt. 

Anyone against moving it to 4.x for example and posting a patch to update 
NEWS.txt and docs for now?

> Dropping compact storage with 2.1-sstables on disk make them unreadable
> ---
>
> Key: CASSANDRA-15897
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15897
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Local Write-Read Paths
>Reporter: Marcus Eriksson
>Assignee: Sylvain Lebresne
>Priority: Normal
> Fix For: 3.0.x, 4.0-beta
>
>
> Test reproducing: 
> https://github.com/krummas/cassandra/commits/marcuse/dropcompactstorage






[jira] [Commented] (CASSANDRA-15897) Dropping compact storage with 2.1-sstables on disk make them unreadable

2020-11-18 Thread Ekaterina Dimitrova (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17234925#comment-17234925
 ] 

Ekaterina Dimitrova commented on CASSANDRA-15897:
-

Talked to Alex on Slack. 

Considering the scope left until GA, and that this issue fixes behavior in case 
someone didn't follow the instructions to upgrade sstables before dropping 
compact storage and upgrading, this patch will be put on hold for later (for 
sure a great improvement that should go in) in favor of solid documentation and 
NEWS.txt. 

Anyone against moving it to 4.x for example and posting a patch to update 
NEWS.txt and docs for now?

> Dropping compact storage with 2.1-sstables on disk make them unreadable
> ---
>
> Key: CASSANDRA-15897
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15897
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Local Write-Read Paths
>Reporter: Marcus Eriksson
>Assignee: Sylvain Lebresne
>Priority: Normal
> Fix For: 3.0.x, 4.0-beta
>
>
> Test reproducing: 
> https://github.com/krummas/cassandra/commits/marcuse/dropcompactstorage






[jira] [Comment Edited] (CASSANDRA-15897) Dropping compact storage with 2.1-sstables on disk make them unreadable

2020-11-18 Thread Ekaterina Dimitrova (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17234925#comment-17234925
 ] 

Ekaterina Dimitrova edited comment on CASSANDRA-15897 at 11/18/20, 6:58 PM:


Talked to [~ifesdjeen] on Slack. 

Considering the scope left until GA, and that this issue fixes behavior in case 
someone didn't follow the instructions to upgrade sstables before dropping 
compact storage, this patch will be put on hold for later (for sure a great 
improvement that should go in) in favor of solid documentation and NEWS.txt. 

Anyone against moving it to 4.x for example and posting a patch to update 
NEWS.txt and docs for now?


was (Author: e.dimitrova):
Talked to [~ifesdjeen] on Slack. 

Considering the scope left until GA and that this issue fixes behavior in case 
someone didn't follow the instructions to upgrade sstables before dropping 
compact storage and upgrade, this patch will be put on hold for later(for sure 
great improvement that should go in) in favor of solid documentation and 
NEWS.txt. 

Anyone against moving it to 4.x for example and posting a patch to update 
NEWS.txt and docs for now?

> Dropping compact storage with 2.1-sstables on disk make them unreadable
> ---
>
> Key: CASSANDRA-15897
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15897
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Local Write-Read Paths
>Reporter: Marcus Eriksson
>Assignee: Sylvain Lebresne
>Priority: Normal
> Fix For: 3.0.x, 4.0-beta
>
>
> Test reproducing: 
> https://github.com/krummas/cassandra/commits/marcuse/dropcompactstorage






[jira] [Commented] (CASSANDRA-16161) Validation Compactions causing Java GC pressure

2020-11-18 Thread Michael Semb Wever (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17234978#comment-17234978
 ] 

Michael Semb Wever commented on CASSANDRA-16161:


Thanks for the (long) unit tests in the patch [~stefan.miklosovic]! 

CI run is 
[here|https://ci-cassandra.apache.org/blue/organizations/jenkins/Cassandra-devbranch/detail/Cassandra-devbranch/219/pipeline].

> Validation Compactions causing Java GC pressure
> ---
>
> Key: CASSANDRA-16161
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16161
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Compaction, Local/Config, Tool/nodetool
>Reporter: Cameron Zemek
>Assignee: Stefan Miklosovic
>Priority: Normal
> Fix For: 3.11.x
>
> Attachments: 16161.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Validation Compactions are not rate limited which can cause Java GC pressure 
> and result in spikes in latency.
> PR https://github.com/apache/cassandra/pull/814
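The fix is to throttle validation reads the same way regular compaction reads are throttled. A minimal token-bucket sketch of that idea; the names `RateLimiter` and `validation_read` are illustrative, not the patch's actual code:

```python
import time

# Hedged sketch: throttle validation-compaction reads with a token bucket,
# the same idea as applying a compaction throughput limit to validation.
class RateLimiter:
    def __init__(self, bytes_per_sec):
        self.rate = bytes_per_sec
        self.allowance = bytes_per_sec       # start with a full bucket
        self.last = time.monotonic()

    def acquire(self, nbytes):
        now = time.monotonic()
        # Refill the bucket proportionally to elapsed time, capped at `rate`.
        self.allowance = min(self.rate,
                             self.allowance + (now - self.last) * self.rate)
        self.last = now
        if self.allowance < nbytes:
            # Sleep long enough to earn the missing tokens, smoothing IO
            # and allocation rate (and hence GC pressure).
            time.sleep((nbytes - self.allowance) / self.rate)
            self.allowance = 0
        else:
            self.allowance -= nbytes

limiter = RateLimiter(bytes_per_sec=16 * 1024 * 1024)

def validation_read(chunk):
    # Each sstable chunk read during validation pays into the limiter first.
    limiter.acquire(len(chunk))
    return chunk
```

Without such a limit, validation can read sstables at full speed, producing the allocation spikes and GC pauses the ticket describes.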






[jira] [Updated] (CASSANDRA-16161) Validation Compactions causing Java GC pressure

2020-11-18 Thread Michael Semb Wever (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Semb Wever updated CASSANDRA-16161:
---
Status: Review In Progress  (was: Patch Available)

> Validation Compactions causing Java GC pressure
> ---
>
> Key: CASSANDRA-16161
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16161
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Compaction, Local/Config, Tool/nodetool
>Reporter: Cameron Zemek
>Assignee: Stefan Miklosovic
>Priority: Normal
> Fix For: 3.11.x
>
> Attachments: 16161.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Validation Compactions are not rate limited which can cause Java GC pressure 
> and result in spikes in latency.
> PR https://github.com/apache/cassandra/pull/814






[jira] [Commented] (CASSANDRA-16071) max_compaction_flush_memory_in_mb is interpreted as bytes

2020-11-18 Thread Scott Carey (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17235004#comment-17235004
 ] 

Scott Carey commented on CASSANDRA-16071:
-

I still have to do a rolling re-index, which is not very nice. If it just 
interpreted the large values as bytes it would be OK; sure, log a loud warning 
or something. I don't comprehend why interpreting such values as bytes is 
problematic. Maybe set a floor of some sort if 100k is too small. But 1GB is 
useless for anyone who already set this lower than the default 1GB on purpose.

 

The default 1GB isn't safe either, due to the bugs I listed in the other 
ticket. Large compactions with multiple output files are 1GB _per output file_ 
per index in the worst case. So a compaction that outputs 40 files from LCS is 
DOA in my environment at 1GB, no different than setting it to 1TB. Anyone who 
set the value smaller than the default most likely did so to avoid going OOM.

I suppose the patch here will help some people, but it is not helpful for me. 
It does highlight the issue in the logs, which is a big improvement.

To compound issues, the Cassandra yum repo does not store older versions, so 
rolling back to 3.11.7 is non-trivial.

 

RE: the upgrade process

 

In no way is it acceptable in most environments using SASI to drop the old 
index and only then build the new one. Most likely, there are queries that will 
not function without the index. I have to build a new index with the new 
settings (but a different name), then drop the old one.

I also have to carefully build them in the correct order, since the query 
planner depends on the order of creation of the indexes.

 

If this interpreted the value as bytes when it is huge, I wouldn't have to 
create a new index. If addressing the error log message were as simple as 
dividing the value by 2^20 and nothing else, it would probably even be 
reasonable to halt start-up and correct it. But as long as the fix requires an 
index rebuild that can take a LONG time on a large table, I think it should be 
more sensitive to the operational cost incurred; after all, this is a minor 
patch release, and it seems unusual to require data rebuilds in such a patch.

> max_compaction_flush_memory_in_mb is interpreted as bytes
> -
>
> Key: CASSANDRA-16071
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16071
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/SASI
>Reporter: Michael Semb Wever
>Assignee: Michael Semb Wever
>Priority: Normal
> Fix For: 4.0, 3.11.8, 4.0-beta2, 4.0-beta4, 3.11.10
>
>
> In CASSANDRA-12662, [~scottcarey] 
> [reported|https://issues.apache.org/jira/browse/CASSANDRA-12662?focusedCommentId=17070055&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17070055]
>  that the {{max_compaction_flush_memory_in_mb}} setting gets incorrectly 
> interpreted in bytes rather than megabytes as its name implies.
> {quote}
> 1.  the setting 'max_compaction_flush_memory_in_mb' is a misnomer, it is 
> actually memory in BYTES.  If you take it at face value, and set it to say, 
> '512' thinking that means 512MB,  you will produce a million temp files 
> rather quickly in a large compaction, which will exhaust even large values of 
> max_map_count rapidly, and get the OOM: Map Error issue above and possibly 
> have a very difficult situation to get a cluster back into a place where 
> nodes aren't crashing while initializing or soon after.  This issue is minor 
> if you know about it in advance and set the value IN BYTES.
> {quote}
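The bug boils down to a missing MB-to-bytes conversion. A hedged sketch of the mismatch; the function names are hypothetical, not SASI's actual code:

```python
# A setting named *_in_mb should be scaled to bytes before use. Treating the
# raw number as bytes turns an intended 512 MB budget into a 512-byte one,
# forcing a flood of tiny temp files during large compactions.
def flush_memory_bytes(max_compaction_flush_memory_in_mb):
    # Correct interpretation: megabytes -> bytes.
    return max_compaction_flush_memory_in_mb * 1024 * 1024

def flush_memory_bytes_buggy(max_compaction_flush_memory_in_mb):
    # Buggy interpretation: the value is used as raw bytes.
    return max_compaction_flush_memory_in_mb

setting = 512  # operator intends 512 MB
print(flush_memory_bytes(setting))        # 536870912 bytes (512 MB)
print(flush_memory_bytes_buggy(setting))  # 512 bytes: a million temp files
```

The workaround reported above is to set the value "in bytes" so the buggy path lands on the intended budget.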






[jira] [Commented] (CASSANDRA-16071) max_compaction_flush_memory_in_mb is interpreted as bytes

2020-11-18 Thread Scott Carey (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17235006#comment-17235006
 ] 

Scott Carey commented on CASSANDRA-16071:
-

On the other hand, maybe I'm the only one crazy enough to use this feature on 
an LCS table with 500GB of data per node.

> max_compaction_flush_memory_in_mb is interpreted as bytes
> -
>
> Key: CASSANDRA-16071
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16071
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/SASI
>Reporter: Michael Semb Wever
>Assignee: Michael Semb Wever
>Priority: Normal
> Fix For: 4.0, 3.11.8, 4.0-beta2, 4.0-beta4, 3.11.10
>
>






[jira] [Comment Edited] (CASSANDRA-16071) max_compaction_flush_memory_in_mb is interpreted as bytes

2020-11-18 Thread Scott Carey (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17235006#comment-17235006
 ] 

Scott Carey edited comment on CASSANDRA-16071 at 11/18/20, 9:15 PM:


On the other hand, maybe I'm the only one crazy enough to use this feature on 
an LCS table with 500GB of data per node.

 

Yet it absolutely obliterates ordinary secondary indexing on a low-to-moderate 
cardinality index and opens up use cases that are impossible without it. I'm 
looking forward to the new secondary indexing feature being designed that 
likewise uses a local index alongside an SSTable.

 

Sorry for the delay replying, I'm not getting email notifications at the moment.

 

Thanks for listening! I thought about submitting a patch, but I don't have a 
Cassandra dev environment set up.


was (Author: scott_carey):
On the other hand, maybe I'm the only one crazy enough to use this feature on 
an LCS table with 500GB of data per node.

> max_compaction_flush_memory_in_mb is interpreted as bytes
> -
>
> Key: CASSANDRA-16071
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16071
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/SASI
>Reporter: Michael Semb Wever
>Assignee: Michael Semb Wever
>Priority: Normal
> Fix For: 4.0, 3.11.8, 4.0-beta2, 4.0-beta4, 3.11.10
>
>






[jira] [Created] (CASSANDRA-16286) Make TokenMetadata's ring version increments atomic

2020-11-18 Thread Caleb Rackliffe (Jira)
Caleb Rackliffe created CASSANDRA-16286:
---

 Summary: Make TokenMetadata's ring version increments atomic
 Key: CASSANDRA-16286
 URL: https://issues.apache.org/jira/browse/CASSANDRA-16286
 Project: Cassandra
  Issue Type: Bug
  Components: Cluster/Gossip
Reporter: Caleb Rackliffe


The update semantics of the ring version in {{TokenMetadata}} are not clear. 
The instance variable itself is {{volatile}}, but it is still incremented by a 
non-atomic check-and-set, and not all codepaths do that while holding the 
{{TokenMetadata}} write lock. We could make this more intelligible by forcing 
the external callers to use both the write when invalidating the ring and read 
lock when reading the current ring version. Most of the readers of the ring 
version (ex. compaction) don't need it to be fast, but it shouldn't be a 
problem even if they do. If we do this, we should be able to avoid a situation 
where concurrent invalidations don't produce two distinct version increments.
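The hazard can be sketched with a simplified Python model (not the actual TokenMetadata code): a plain read-modify-write on a shared counter is not atomic, while incrementing under the write lock guarantees every invalidation produces a distinct version.

```python
import threading

# Illustrative model of the ring-version hazard. `invalidate_unlocked` mirrors
# a non-atomic check-and-set on a volatile field; `invalidate_locked` mirrors
# bumping the version while holding the write lock. Names are hypothetical.
class TokenMetadataSketch:
    def __init__(self):
        self.ring_version = 0
        self.lock = threading.Lock()

    def invalidate_unlocked(self):
        # Racy: the read and the write are separate steps, so two concurrent
        # invalidations can observe the same value and produce one increment.
        self.ring_version = self.ring_version + 1

    def invalidate_locked(self):
        # Safe: the increment happens entirely under the write lock.
        with self.lock:
            self.ring_version += 1

tm = TokenMetadataSketch()
threads = [threading.Thread(
               target=lambda: [tm.invalidate_locked() for _ in range(1000)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(tm.ring_version)  # 4000: no lost updates with the locked variant
```

Readers that only need a consistent snapshot of the version would take the read lock, which stays cheap even for frequent callers such as compaction.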






[jira] [Updated] (CASSANDRA-16286) Make TokenMetadata's ring version increments atomic

2020-11-18 Thread Caleb Rackliffe (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caleb Rackliffe updated CASSANDRA-16286:

 Bug Category: Parent values: Correctness(12982)Level 1 values: Recoverable 
Corruption / Loss(12986)
   Complexity: Normal
Discovered By: Code Inspection
Fix Version/s: 4.0-beta
   3.11.x
   3.0.x
 Severity: Normal
   Status: Open  (was: Triage Needed)

> Make TokenMetadata's ring version increments atomic
> ---
>
> Key: CASSANDRA-16286
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16286
> Project: Cassandra
>  Issue Type: Bug
>  Components: Cluster/Gossip
>Reporter: Caleb Rackliffe
>Priority: Normal
> Fix For: 3.0.x, 3.11.x, 4.0-beta
>
>
> The update semantics of the ring version in {{TokenMetadata}} are not clear. 
> The instance variable itself is {{volatile}}, but it is still incremented by 
> a non-atomic check-and-set, and not all codepaths do that while holding the 
> {{TokenMetadata}} write lock. We could make this more intelligible by forcing 
> the external callers to use both the write when invalidating the ring and 
> read lock when reading the current ring version. Most of the readers of the 
> ring version (ex. compaction) don't need it to be fast, but it shouldn't be a 
> problem even if they do. If we do this, we should be able to avoid a 
> situation where concurrent invalidations don't produce two distinct version 
> increments.






[cassandra] branch trunk updated: Set dynamic snitch default badness threshold to 1.0

2020-11-18 Thread jwest
This is an automated email from the ASF dual-hosted git repository.

jwest pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra.git


The following commit(s) were added to refs/heads/trunk by this push:
 new fae1f88  Set dynamic snitch default badness threshold to 1.0
fae1f88 is described below

commit fae1f883541f329f7575d6ff4117b230e371293b
Author: Jordan West 
AuthorDate: Wed Nov 18 08:32:44 2020 -0800

Set dynamic snitch default badness threshold to 1.0

Patch by Jordan West; Reviewed by David Capwell for CASSANDRA-16285
---
 CHANGES.txt   | 1 +
 conf/cassandra.yaml   | 2 +-
 src/java/org/apache/cassandra/config/Config.java  | 2 +-
 test/unit/org/apache/cassandra/locator/DynamicEndpointSnitchTest.java | 1 +
 4 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/CHANGES.txt b/CHANGES.txt
index 7d904a9..0d93b85 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -18,6 +18,7 @@ Merged from 3.0:
  * Remove the SEPExecutor blocking behavior (CASSANDRA-16186)
  * Wait for schema agreement when bootstrapping (CASSANDRA-15158)
  * Prevent invoking enable/disable gossip when not in NORMAL (CASSANDRA-16146)
+ * Raise Dynamic Snitch Default Badness Threshold to 1.0 (CASSANDRA-16285)
 
 4.0-beta3
  * Segregate Network and Chunk Cache BufferPools and Recirculate Partially 
Freed Chunks (CASSANDRA-15229)
diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml
index 11e54ad..066d22e 100644
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@ -1062,7 +1062,7 @@ dynamic_snitch_reset_interval_in_ms: 60
 # expressed as a double which represents a percentage.  Thus, a value of
 # 0.2 means Cassandra would continue to prefer the static snitch values
 # until the pinned host was 20% worse than the fastest.
-dynamic_snitch_badness_threshold: 0.1
+dynamic_snitch_badness_threshold: 1.0
 
 # Configure server-to-server internode encryption
 #
diff --git a/src/java/org/apache/cassandra/config/Config.java 
b/src/java/org/apache/cassandra/config/Config.java
index cde7d53..464f8ad 100644
--- a/src/java/org/apache/cassandra/config/Config.java
+++ b/src/java/org/apache/cassandra/config/Config.java
@@ -266,7 +266,7 @@ public class Config
 public boolean dynamic_snitch = true;
 public int dynamic_snitch_update_interval_in_ms = 100;
 public int dynamic_snitch_reset_interval_in_ms = 60;
-public double dynamic_snitch_badness_threshold = 0.1;
+public double dynamic_snitch_badness_threshold = 1.0;
 
 public EncryptionOptions.ServerEncryptionOptions server_encryption_options 
= new EncryptionOptions.ServerEncryptionOptions();
 public EncryptionOptions client_encryption_options = new 
EncryptionOptions();
diff --git 
a/test/unit/org/apache/cassandra/locator/DynamicEndpointSnitchTest.java 
b/test/unit/org/apache/cassandra/locator/DynamicEndpointSnitchTest.java
index 069c222..b7d4243 100644
--- a/test/unit/org/apache/cassandra/locator/DynamicEndpointSnitchTest.java
+++ b/test/unit/org/apache/cassandra/locator/DynamicEndpointSnitchTest.java
@@ -66,6 +66,7 @@ public class DynamicEndpointSnitchTest
 public void testSnitch() throws InterruptedException, IOException, 
ConfigurationException
 {
 // do this because SS needs to be initialized before DES can work 
properly.
+DatabaseDescriptor.setDynamicBadnessThreshold(0.1);
 StorageService.instance.unsafeInitialize();
 SimpleSnitch ss = new SimpleSnitch();
 DynamicEndpointSnitch dsnitch = new DynamicEndpointSnitch(ss, 
String.valueOf(ss.hashCode()));





[jira] [Updated] (CASSANDRA-16285) Change Dynamic Snitch Default Badness Threshold to 1.0

2020-11-18 Thread Jordan West (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jordan West updated CASSANDRA-16285:

Status: Ready to Commit  (was: Review In Progress)

> Change Dynamic Snitch Default Badness Threshold to 1.0
> --
>
> Key: CASSANDRA-16285
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16285
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Consistency/Coordination
>Reporter: Jordan West
>Assignee: Jordan West
>Priority: Normal
> Attachments: readcount-0.1.png, readcount-1.0.png, 
> readlatency-0.1.png, readlatency-1.0.png
>
>






[jira] [Updated] (CASSANDRA-16285) Change Dynamic Snitch Default Badness Threshold to 1.0

2020-11-18 Thread Jordan West (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jordan West updated CASSANDRA-16285:

  Fix Version/s: 4.0-beta4
Source Control Link: 
https://github.com/apache/cassandra/commit/fae1f883541f329f7575d6ff4117b230e371293b
 Resolution: Fixed
 Status: Resolved  (was: Ready to Commit)

Committed as 
https://github.com/apache/cassandra/commit/fae1f883541f329f7575d6ff4117b230e371293b

> Change Dynamic Snitch Default Badness Threshold to 1.0
> --
>
> Key: CASSANDRA-16285
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16285
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Consistency/Coordination
>Reporter: Jordan West
>Assignee: Jordan West
>Priority: Normal
> Fix For: 4.0-beta4
>
> Attachments: readcount-0.1.png, readcount-1.0.png, 
> readlatency-0.1.png, readlatency-1.0.png
>
>






[jira] [Commented] (CASSANDRA-16282) Fix STCS documentation (the header is currently LCS)

2020-11-18 Thread Miles Garnsey (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17235077#comment-17235077
 ] 

Miles Garnsey commented on CASSANDRA-16282:
---

:D Awesome, thanks [~mck]!

> Fix STCS documentation (the header is currently LCS)
> 
>
> Key: CASSANDRA-16282
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16282
> Project: Cassandra
>  Issue Type: Bug
>  Components: Documentation/Website
>Reporter: Miles Garnsey
>Assignee: Miles Garnsey
>Priority: Normal
> Fix For: 4.0, 4.0-beta4
>
>
> Currently, the header in the [documentation for 
> STCS|https://cassandra.apache.org/doc/latest/operating/compaction/stcs.html] 
> refers to LCS in the header, which also makes it hard to find the STCS 
> documentation via search.






[jira] [Commented] (CASSANDRA-16240) Having issues creating a table with name profiles

2020-11-18 Thread Anuj Kulkarni (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17235173#comment-17235173
 ] 

Anuj Kulkarni commented on CASSANDRA-16240:
---

Good point, Adam. I will check the Python driver on these two environments.

> Having issues creating a table with name profiles
> -
>
> Key: CASSANDRA-16240
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16240
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Anuj Kulkarni
>Priority: Normal
> Attachments: image-2020-11-02-12-13-16-999.png
>
>
> Whenever I try to create a table with name profiles, it always gets created 
> with additional quotes surrounding it. Attaching the screenshot.
> I am on Cassandra 3.7
> I tried creating the table in another keyspace. I also tried creating new 
> virtual machines with the same AMI and same Cassandra version, but to no 
> avail.
> If I try to create a table with any other name, there are no issues at all. 
> It's just with the name profiles.
> I am on Ubuntu 18.04 by the way.
> !image-2020-11-02-12-13-16-999.png!






[jira] [Updated] (CASSANDRA-16234) Update NetBeans project file for dependency changes since 11th Feb 2020

2020-11-18 Thread Berenguer Blasi (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Berenguer Blasi updated CASSANDRA-16234:

Reviewers: Berenguer Blasi

> Update NetBeans project file for dependency changes since 11th Feb 2020
> ---
>
> Key: CASSANDRA-16234
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16234
> Project: Cassandra
>  Issue Type: Bug
>  Components: Build
>Reporter: Michael Semb Wever
>Assignee: Michael Semb Wever
>Priority: Normal
>
> A number of dependencies have been added/removed/updated in the project.
> The NetBeans project file needs an update to stay in sync.
> Causing tickets:
>  - CASSANDRA-15677
>  - CASSANDRA-16064
>  - CASSANDRA-12995
>  - CASSANDRA-15867
>  - CASSANDRA-12197
>  - CASSANDRA-15556
>  - CASSANDRA-15868
>  - CASSANDRA-16150
>  - CASSANDRA-15631
>  - CASSANDRA-15851
>  - CASSANDRA-16148
>  - CASSANDRA-14655
>  - CASSANDRA-15867
>  - CASSANDRA-15388
>  - CASSANDRA-15564
>  - CASSANDRA-16127






[jira] [Commented] (CASSANDRA-16255) Update jctools dependency

2020-11-18 Thread Berenguer Blasi (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17235194#comment-17235194
 ] 

Berenguer Blasi commented on CASSANDRA-16255:
-

[~mck] CASSANDRA-16234 LGTM. For full disclosure, I opened and built the 
project even though I am not a NetBeans user... So yes, you will have to 
somehow 'merge' the project.xml files from both tickets together...

> Update jctools dependency
> -
>
> Key: CASSANDRA-16255
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16255
> Project: Cassandra
>  Issue Type: Task
>  Components: Local/Other
>Reporter: Marcus Eriksson
>Assignee: Berenguer Blasi
>Priority: Normal
> Fix For: 4.0-beta4
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> CASSANDRA-15880 started using {{MpmcArrayQueue}} from jctools; before that 
> we only used it in cassandra-stress. We should probably update the 
> dependency, as jctools-1.2.1 is more than 4 years old.






[jira] [Commented] (CASSANDRA-16234) Update NetBeans project file for dependency changes since 11th Feb 2020

2020-11-18 Thread Berenguer Blasi (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17235195#comment-17235195
 ] 

Berenguer Blasi commented on CASSANDRA-16234:
-

LGTM, +1

> Update NetBeans project file for dependency changes since 11th Feb 2020
> ---
>
> Key: CASSANDRA-16234
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16234
> Project: Cassandra
>  Issue Type: Bug
>  Components: Build
>Reporter: Michael Semb Wever
>Assignee: Michael Semb Wever
>Priority: Normal
>
> A number of dependencies have been added/removed/updated in the project.
> The NetBeans project file needs an update to stay in sync.
> Causing tickets:
>  - CASSANDRA-15677
>  - CASSANDRA-16064
>  - CASSANDRA-12995
>  - CASSANDRA-15867
>  - CASSANDRA-12197
>  - CASSANDRA-15556
>  - CASSANDRA-15868
>  - CASSANDRA-16150
>  - CASSANDRA-15631
>  - CASSANDRA-15851
>  - CASSANDRA-16148
>  - CASSANDRA-14655
>  - CASSANDRA-15867
>  - CASSANDRA-15388
>  - CASSANDRA-15564
>  - CASSANDRA-16127


