[jira] [Commented] (CASSANDRA-14358) OutboundTcpConnection can hang for many minutes when nodes restart

2018-04-10 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433171#comment-16433171
 ] 

Ariel Weisberg commented on CASSANDRA-14358:


30 seconds and a hot prop sounds excellent.

> OutboundTcpConnection can hang for many minutes when nodes restart
> --
>
> Key: CASSANDRA-14358
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14358
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
> Environment: Cassandra 2.1.19 (also reproduced on 3.0.15), running 
> with {{internode_encryption: all}} and the EC2 multi region snitch on Linux 
> 4.13 within the same AWS region. The smallest cluster I've seen the problem 
> on is 12 nodes; it reproduces more reliably on 40+ nodes, and 300-node 
> clusters consistently reproduce it on at least one node.
> All the connections are SSL and we're connecting on the internal IP 
> addresses (not the public endpoint ones).
> Potentially relevant sysctls:
> {noformat}
> /proc/sys/net/ipv4/tcp_syn_retries = 2
> /proc/sys/net/ipv4/tcp_synack_retries = 5
> /proc/sys/net/ipv4/tcp_keepalive_time = 7200
> /proc/sys/net/ipv4/tcp_keepalive_probes = 9
> /proc/sys/net/ipv4/tcp_keepalive_intvl = 75
> /proc/sys/net/ipv4/tcp_retries2 = 15
> {noformat}
>Reporter: Joseph Lynch
>Assignee: Joseph Lynch
>Priority: Major
> Attachments: 10 Minute Partition.pdf
>
>
> I've been trying to debug nodes not being able to see each other during 
> longer (~5 minute+) Cassandra restarts in 3.0.x and 2.1.x, which can 
> contribute to {{UnavailableExceptions}} for us during rolling restarts. I 
> think I finally have a lead. It appears that prior to trunk (with the awesome 
> Netty refactor) we do not set socket connect timeouts on SSL connections (in 
> 2.1.x, 3.0.x, or 3.11.x), nor, as far as I can tell, do we set {{SO_TIMEOUT}} 
> on outbound connections. I believe this means we could block forever on 
> {{connect}} or {{recv}} syscalls, as well as on the SSL handshake. I 
> think that the OS will protect us somewhat (and that may be what's causing 
> the eventual timeout) but I think that given the right network conditions our 
> {{OutboundTCPConnection}} threads can just be stuck never making any progress 
> until the OS intervenes.
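[Editorial aside] The eventual OS intervention described above is plausibly governed by the {{tcp_retries2 = 15}} sysctl listed in the environment. A back-of-the-envelope sketch, assuming the usual Linux defaults of a 200 ms minimum retransmission timeout and a 120 s cap (the kernel roughly doubles the RTO per unacked retransmit); the real stall depends on the measured RTT, so the reporter's ~10 minutes is in the same ballpark rather than exact:

```java
public class TcpRetries2Budget
{
    public static void main(String[] args)
    {
        // Assumed Linux defaults: TCP_RTO_MIN = 200 ms, TCP_RTO_MAX = 120 s.
        // Each unacked retransmission doubles the RTO up to the cap;
        // tcp_retries2 = 15 bounds the number of retransmissions attempted.
        double rto = 0.2, total = 0.0;
        for (int retry = 0; retry <= 15; retry++)
        {
            total += rto;                    // wait this long before the next retransmit
            rto = Math.min(rto * 2, 120.0);  // exponential backoff, capped
        }
        System.out.printf("worst-case stall ~= %.1f s (~%.1f min)%n", total, total / 60);
    }
}
```

With these assumptions the worst-case stall on an established-but-dead connection works out to roughly 15 minutes, consistent with a thread stuck for many minutes before a {{SocketException}} finally surfaces.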
> I have attached some logs of such a network partition during a rolling 
> restart where an old node in the cluster has a completely foobarred 
> {{OutboundTcpConnection}} for ~10 minutes before finally getting a 
> {{java.net.SocketException: Connection timed out (Write failed)}} and 
> immediately successfully reconnecting. I conclude that the old node is the 
> problem because the new node (the one that restarted) is sending ECHOs to the 
> old node, and the old node is sending ECHOs and REQUEST_RESPONSES to the new 
> node's ECHOs, but the new node is never getting the ECHOs. This appears, to 
> me, to indicate that the old node's {{OutboundTcpConnection}} thread is just 
> stuck and can't make any forward progress. By the time we could notice this 
> and slap TRACE logging on, the only thing we see is ~10 minutes later a 
> {{SocketException}} inside {{writeConnected}}'s flush and an immediate 
> recovery. It is interesting to me that the exception happens in 
> {{writeConnected}} and it's a _connection timeout_ (and since we see {{Write 
> failure}} I believe that this can't be a connection reset), because my 
> understanding is that we should have a fully handshaked SSL connection at 
> that point in the code.
> Current theory:
>  # "New" node restarts,  "Old" node calls 
> [newSocket|https://github.com/apache/cassandra/blob/6f30677b28dcbf82bcd0a291f3294ddf87dafaac/src/java/org/apache/cassandra/net/OutboundTcpConnection.java#L433]
>  # Old node starts [creating a 
> new|https://github.com/apache/cassandra/blob/6f30677b28dcbf82bcd0a291f3294ddf87dafaac/src/java/org/apache/cassandra/net/OutboundTcpConnectionPool.java#L141]
>  SSL socket 
>  # SSLSocket calls 
> [createSocket|https://github.com/apache/cassandra/blob/cassandra-3.11/src/java/org/apache/cassandra/security/SSLFactory.java#L98],
>  which conveniently calls connect with a default timeout of "forever". We 
> could hang here forever until the OS kills us.
>  # If we continue, we get to 
> [writeConnected|https://github.com/apache/cassandra/blob/6f30677b28dcbf82bcd0a291f3294ddf87dafaac/src/java/org/apache/cassandra/net/OutboundTcpConnection.java#L263]
>  which eventually calls 
> [flush|https://github.com/apache/cassandra/blob/6f30677b28dcbf82bcd0a291f3294ddf87dafaac/src/java/org/apache/cassandra/net/OutboundTcpConnection.java#L341]
>  on the output stream and also can hang forever. I think the pr
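[Editorial aside] For reference, a minimal sketch of the kind of fix the theory above points at: bounding {{connect}}, the SSL handshake, and subsequent blocking reads with explicit timeouts instead of the JDK default of "forever". This is an illustration only, not the actual Cassandra patch; the helper name and the idea of reusing one timeout value for both settings are invented here:

```java
import java.net.InetSocketAddress;
import java.net.Socket;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class BoundedSslConnect
{
    // Hypothetical helper: open an SSL socket so the calling thread can never
    // block indefinitely in connect(), the handshake, or a later read.
    public static Socket open(String host, int port, int timeoutMillis) throws Exception
    {
        // createSocket() with no arguments returns an unconnected socket;
        // unlike createSocket(host, port), an explicit connect() takes a timeout.
        SSLSocket socket = (SSLSocket) SSLSocketFactory.getDefault().createSocket();
        socket.connect(new InetSocketAddress(host, port), timeoutMillis);
        // SO_TIMEOUT bounds blocking reads, including those performed
        // while the SSL handshake is in progress.
        socket.setSoTimeout(timeoutMillis);
        socket.startHandshake();
        return socket;
    }
}
```

Without both settings, a connect or handshake against a peer that silently drops packets can park the {{OutboundTcpConnection}} thread until the OS-level retransmission budget is exhausted.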

[jira] [Created] (CASSANDRA-14376) Limiting a clustering column with a range not allowed when using "group by"

2018-04-10 Thread Chris mildebrandt (JIRA)
Chris mildebrandt created CASSANDRA-14376:
-

 Summary: Limiting a clustering column with a range not allowed 
when using "group by"
 Key: CASSANDRA-14376
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14376
 Project: Cassandra
  Issue Type: Bug
  Components: CQL
 Environment: Cassandra 3.11.1
Reporter: Chris mildebrandt


I’m trying to use a range to limit a clustering column while at the same time 
using `group by` and running into issues. Here’s a sample table:

{{create table if not exists samples (name text, partition int, sample int, 
city text, state text, count counter, primary key ((name, partition), sample, 
city, state)) with clustering order by (sample desc);}}

When I filter `sample` by a range, I get an error:

{{select city, state, sum(count) from samples where name='bob' and partition=1 
and sample>=1 and sample<=3 group by city, state;}}
{{{color:#FF}InvalidRequest: Error from server: code=2200 [Invalid query] 
message="Group by currently only support groups of columns following their 
declared order in the PRIMARY KEY"{color}}}

However, it allows the query when I change from a range to an equals:

{{select city, state, sum(count) from samples where name='bob' and partition=1 
and sample=1 group by city, state;}}

{{city | state | system.sum(count)}}
{{+---+---}}
{{ Austin | TX | 2}}
{{ Denver | CO | 1}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14376) Limiting a clustering column with a range not allowed when using "group by"

2018-04-10 Thread Chris mildebrandt (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris mildebrandt updated CASSANDRA-14376:
--
Description: 
I’m trying to use a range to limit a clustering column while at the same time 
using `group by` and running into issues. Here’s a sample table:

{{create table if not exists samples (name text, partition int, sample int, 
city text, state text, count counter, primary key ((name, partition), sample, 
city, state)) with clustering order by (sample desc);}}

When I filter `sample` by a range, I get an error:

{{select city, state, sum(count) from samples where name='bob' and partition=1 
and sample>=1 and sample<=3 group by city, state;}}
 {{{color:#ff}InvalidRequest: Error from server: code=2200 [Invalid query] 
message="Group by currently only support groups of columns following their 
declared order in the PRIMARY KEY"{color}}}

However, it allows the query when I change from a range to an equals:

{{select city, state, sum(count) from samples where name='bob' and partition=1 
and sample=1 group by city, state;}}

{{city | state | system.sum(count)}}
{{++--}}
{{ Austin | TX | 2}}
{{ Denver | CO | 1}}

  was:
I’m trying to use a range to limit a clustering column while at the same time 
using `group by` and running into issues. Here’s a sample table:

{{create table if not exists samples (name text, partition int, sample int, 
city text, state text, count counter, primary key ((name, partition), sample, 
city, state)) with clustering order by (sample desc);}}

When I filter `sample` by a range, I get an error:

{{select city, state, sum(count) from samples where name='bob' and partition=1 
and sample>=1 and sample<=3 group by city, state;}}
{{{color:#FF}InvalidRequest: Error from server: code=2200 [Invalid query] 
message="Group by currently only support groups of columns following their 
declared order in the PRIMARY KEY"{color}}}

However, it allows the query when I change from a range to an equals:

{{select city, state, sum(count) from samples where name='bob' and partition=1 
and sample=1 group by city, state;}}

{{city | state | system.sum(count)}}
{{+---+---}}
{{ Austin | TX | 2}}
{{ Denver | CO | 1}}


> Limiting a clustering column with a range not allowed when using "group by"
> ---
>
> Key: CASSANDRA-14376
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14376
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Cassandra 3.11.1
>Reporter: Chris mildebrandt
>Priority: Major
>
> I’m trying to use a range to limit a clustering column while at the same time 
> using `group by` and running into issues. Here’s a sample table:
> {{create table if not exists samples (name text, partition int, sample int, 
> city text, state text, count counter, primary key ((name, partition), sample, 
> city, state)) with clustering order by (sample desc);}}
> When I filter `sample` by a range, I get an error:
> {{select city, state, sum(count) from samples where name='bob' and 
> partition=1 and sample>=1 and sample<=3 group by city, state;}}
>  {{{color:#ff}InvalidRequest: Error from server: code=2200 [Invalid 
> query] message="Group by currently only support groups of columns following 
> their declared order in the PRIMARY KEY"{color}}}
> However, it allows the query when I change from a range to an equals:
> {{select city, state, sum(count) from samples where name='bob' and 
> partition=1 and sample=1 group by city, state;}}
> {{city | state | system.sum(count)}}
> {{++--}}
> {{ Austin | TX | 2}}
> {{ Denver | CO | 1}}






[jira] [Commented] (CASSANDRA-14376) Limiting a clustering column with a range not allowed when using "group by"

2018-04-10 Thread Chris mildebrandt (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433183#comment-16433183
 ] 

Chris mildebrandt commented on CASSANDRA-14376:
---

Sample for reproduction:

{{create table if not exists samples (name text, partition int, sample int, 
city text, state text, count counter, primary key ((name, partition), sample, 
city, state)) with clustering order by (sample desc);}}

{{update samples set count=count+1 where name='bob' and partition=1 and 
sample=1 and city='Denver' and state='CO';}}
{{update samples set count=count+1 where name='bob' and partition=1 and 
sample=2 and city='Denver' and state='CO';}}
{{update samples set count=count+1 where name='bob' and partition=1 and 
sample=3 and city='Denver' and state='CO';}}
{{update samples set count=count+1 where name='bob' and partition=1 and 
sample=3 and city='Denver' and state='CO';}}
{{update samples set count=count+1 where name='bob' and partition=1 and 
sample=1 and city='Austin' and state='TX';}}
{{update samples set count=count+1 where name='bob' and partition=1 and 
sample=1 and city='Austin' and state='TX';}}
{{update samples set count=count+1 where name='bob' and partition=1 and 
sample=2 and city='Austin' and state='TX';}}
{{update samples set count=count+1 where name='bob' and partition=1 and 
sample=2 and city='Austin' and state='TX';}}
{{update samples set count=count+1 where name='bob' and partition=1 and 
sample=2 and city='Austin' and state='TX';}}

{{select city, state, sum(count) from samples where name='bob' and partition=1 
and sample>=1 and sample<=3 group by city, state;}}
{{select city, state, sum(count) from samples where name='bob' and partition=1 
and sample=1 group by city, state;}}

> Limiting a clustering column with a range not allowed when using "group by"
> ---
>
> Key: CASSANDRA-14376
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14376
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Cassandra 3.11.1
>Reporter: Chris mildebrandt
>Priority: Major
>
> I’m trying to use a range to limit a clustering column while at the same time 
> using `group by` and running into issues. Here’s a sample table:
> {{create table if not exists samples (name text, partition int, sample int, 
> city text, state text, count counter, primary key ((name, partition), sample, 
> city, state)) with clustering order by (sample desc);}}
> When I filter `sample` by a range, I get an error:
> {{select city, state, sum(count) from samples where name='bob' and 
> partition=1 and sample>=1 and sample<=3 group by city, state;}}
>  {{{color:#ff}InvalidRequest: Error from server: code=2200 [Invalid 
> query] message="Group by currently only support groups of columns following 
> their declared order in the PRIMARY KEY"{color}}}
> However, it allows the query when I change from a range to an equals:
> {{select city, state, sum(count) from samples where name='bob' and 
> partition=1 and sample=1 group by city, state;}}
> {{city | state | system.sum(count)}}
> {{++--}}
> {{ Austin | TX | 2}}
> {{ Denver | CO | 1}}






cassandra git commit: renamed ColumnFamilyStoreCQLHelper to TableCQLHelper

2018-04-10 Thread rustyrazorblade
Repository: cassandra
Updated Branches:
  refs/heads/trunk 4991ca26a -> e75c51719


renamed ColumnFamilyStoreCQLHelper to TableCQLHelper

Patch by Venkata+Harikrishna, reviewed by Jon Haddad for CASSANDRA-14354


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e75c5171
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e75c5171
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e75c5171

Branch: refs/heads/trunk
Commit: e75c5171964b3211776136c50f0d8514b85d6295
Parents: 4991ca2
Author: Venkata+Harikrishna Nukala 
Authored: Sat Mar 31 04:16:27 2018 +0530
Committer: Jon Haddad 
Committed: Tue Apr 10 16:29:14 2018 -0700

--
 CHANGES.txt |   2 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |   2 +-
 .../db/ColumnFamilyStoreCQLHelper.java  | 428 --
 .../org/apache/cassandra/db/TableCQLHelper.java | 428 ++
 .../db/ColumnFamilyStoreCQLHelperTest.java  | 447 ---
 .../apache/cassandra/db/TableCQLHelperTest.java | 447 +++
 6 files changed, 878 insertions(+), 876 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e75c5171/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 650f740..bb8c731 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 4.0
+ * Rename internals to reflect CQL vocabulary
+   (CASSANDRA-14354)
  * Add support for hybrid MIN(), MAX() speculative retry policies
(CASSANDRA-14293, CASSANDRA-14338, CASSANDRA-14352)
  * Fix some regressions caused by 14058 (CASSANDRA-14353)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e75c5171/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index e4b84fe..34535e5 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -1824,7 +1824,7 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 
 try (PrintStream out = new PrintStream(schemaFile))
 {
-for (String s: 
ColumnFamilyStoreCQLHelper.dumpReCreateStatements(metadata()))
+for (String s: 
TableCQLHelper.dumpReCreateStatements(metadata()))
 out.println(s);
 }
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e75c5171/src/java/org/apache/cassandra/db/ColumnFamilyStoreCQLHelper.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStoreCQLHelper.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStoreCQLHelper.java
deleted file mode 100644
index 740ef3f..000
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStoreCQLHelper.java
+++ /dev/null
@@ -1,428 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.cassandra.db;
-
-import java.nio.ByteBuffer;
-import java.util.*;
-import java.util.concurrent.atomic.*;
-import java.util.function.*;
-
-import com.google.common.annotations.VisibleForTesting;
-import com.google.common.collect.Iterables;
-
-import org.apache.cassandra.cql3.statements.*;
-import org.apache.cassandra.db.marshal.*;
-import org.apache.cassandra.schema.*;
-import org.apache.cassandra.utils.*;
-
-/**
- * Helper methods to represent TableMetadata and related objects in CQL format
- */
-public class ColumnFamilyStoreCQLHelper
-{
-public static List dumpReCreateStatements(TableMetadata metadata)
-{
-List l = new ArrayList<>();
-// Types come first, as table can't be created without them
-l.addAll(ColumnFamilyStoreCQLHelper.getUserTypesAsCQL(metadata));
-// Record re-create schema statements
-l.add(ColumnFamilyStoreCQLHelper.getTableMetadataAsCQL(

[jira] [Updated] (CASSANDRA-14354) rename ColumnFamilyStoreCQLHelper to TableCQLHelper

2018-04-10 Thread Jon Haddad (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jon Haddad updated CASSANDRA-14354:
---
   Resolution: Fixed
Fix Version/s: 4.0
   Status: Resolved  (was: Patch Available)

Committed to trunk as e75c517196.

> rename ColumnFamilyStoreCQLHelper to TableCQLHelper
> ---
>
> Key: CASSANDRA-14354
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14354
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jon Haddad
>Assignee: Venkata Harikrishna Nukala
>Priority: Major
> Fix For: 4.0
>
> Attachments: 14354-trunk.txt
>
>
> Seems like a simple 1:1 rename.






[jira] [Commented] (CASSANDRA-14218) Deprecate Throwables.propagate usage

2018-04-10 Thread Kirk True (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433196#comment-16433196
 ] 

Kirk True commented on CASSANDRA-14218:
---

I know this is minor, but I'd love a review on this. Thanks!

> Deprecate Throwables.propagate usage
> 
>
> Key: CASSANDRA-14218
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14218
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Libraries
>Reporter: Romain Hardouin
>Assignee: Kirk True
>Priority: Minor
>  Labels: lhf
> Fix For: 4.x
>
> Attachments: 14218-trunk.txt
>
>
> Google decided to deprecate Guava's {{Throwables.propagate}} method:
>  * [Why we deprecated 
> Throwables.propagate|https://github.com/google/guava/wiki/Why-we-deprecated-Throwables.propagate]
>  * [Documentation 
> update|https://github.com/google/guava/wiki/ThrowablesExplained/_compare/92190ee7e37d334fa5fcdb6db8d0f43a2fdf02e1...226a3060445716d479981e606f589c99eee517ca]
> We have 35 occurrences in trunk:
> {code:java}
> $ rg -c 'Throwables.propagate' *
> src/java/org/apache/cassandra/streaming/StreamReader.java:1
> src/java/org/apache/cassandra/streaming/StreamTransferTask.java:1
> src/java/org/apache/cassandra/db/SnapshotDetailsTabularData.java:1
> src/java/org/apache/cassandra/db/Memtable.java:1
> src/java/org/apache/cassandra/db/ColumnFamilyStore.java:4
> src/java/org/apache/cassandra/cache/ChunkCache.java:2
> src/java/org/apache/cassandra/utils/WrappedRunnable.java:1
> src/java/org/apache/cassandra/hints/Hint.java:1
> src/java/org/apache/cassandra/tools/LoaderOptions.java:1
> src/java/org/apache/cassandra/tools/SSTableOfflineRelevel.java:1
> src/java/org/apache/cassandra/streaming/management/ProgressInfoCompositeData.java:3
> src/java/org/apache/cassandra/streaming/management/StreamStateCompositeData.java:2
> src/java/org/apache/cassandra/streaming/management/StreamSummaryCompositeData.java:2
> src/java/org/apache/cassandra/streaming/compress/CompressedStreamReader.java:1
> src/java/org/apache/cassandra/db/compaction/Scrubber.java:1
> src/java/org/apache/cassandra/db/compaction/Verifier.java:1
> src/java/org/apache/cassandra/db/compaction/CompactionHistoryTabularData.java:1
> src/java/org/apache/cassandra/db/compaction/Upgrader.java:1
> src/java/org/apache/cassandra/io/compress/CompressionMetadata.java:1
> src/java/org/apache/cassandra/streaming/management/SessionCompleteEventCompositeData.java:2
> src/java/org/apache/cassandra/io/sstable/SSTableSimpleWriter.java:1
> src/java/org/apache/cassandra/io/sstable/ISSTableScanner.java:1
> src/java/org/apache/cassandra/streaming/management/SessionInfoCompositeData.java:3
> src/java/org/apache/cassandra/io/sstable/SSTableSimpleUnsortedWriter.java:1
> {code}
> I don't know if we want to remove all usages, but we should at least check 
> the author's intention for each usage and refactor if needed.
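[Editorial aside] For context, the replacement Guava's documentation suggests for {{Throwables.propagate(t)}} is {{throwIfUnchecked(t)}} followed by wrapping in a {{RuntimeException}}. A self-contained sketch of that pattern, written without Guava so it runs standalone (the class and method names here are invented for illustration):

```java
import java.io.IOException;

public class PropagateMigration
{
    // Equivalent of:  Throwables.throwIfUnchecked(t); throw new RuntimeException(t);
    // RuntimeExceptions and Errors are rethrown as-is; checked throwables are
    // wrapped, preserving the original as the cause.
    static RuntimeException rethrow(Throwable t)
    {
        if (t instanceof RuntimeException)
            throw (RuntimeException) t;
        if (t instanceof Error)
            throw (Error) t;
        throw new RuntimeException(t);
    }

    public static void main(String[] args)
    {
        try
        {
            throw rethrow(new IOException("checked"));
        }
        catch (RuntimeException e)
        {
            // The checked IOException arrives wrapped, with its cause intact.
            System.out.println(e.getCause().getClass().getSimpleName());
        }
    }
}
```

Declaring the return type as {{RuntimeException}} lets call sites write {{throw rethrow(t);}} so the compiler knows the branch cannot fall through, which is the same trick {{Throwables.propagate}} relied on.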






[jira] [Commented] (CASSANDRA-13010) nodetool compactionstats should say which disk a compaction is writing to

2018-04-10 Thread Jon Haddad (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433197#comment-16433197
 ] 

Jon Haddad commented on CASSANDRA-13010:


Hey [~alourie], sorry for the delay.  The patch no longer applies cleanly.  
Would you mind taking care of the conflicts?  I'll review it immediately.

> nodetool compactionstats should say which disk a compaction is writing to
> -
>
> Key: CASSANDRA-13010
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13010
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Compaction, Tools
>Reporter: Jon Haddad
>Assignee: Alex Lourie
>Priority: Major
>  Labels: lhf
> Attachments: 13010.patch, cleanup.png, multiple operations.png
>
>







[jira] [Updated] (CASSANDRA-14375) Digest mismatch Exception when sending raw hints in cluster

2018-04-10 Thread Vineet Ghatge (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Ghatge updated CASSANDRA-14375:
--
Description: 
We have a 14-node cluster where we have seen a hints file getting corrupted, 
resulting in the following error:

ERROR [HintsDispatcher:1] 2018-04-06 16:26:44,423 CassandraDaemon.java:228 - 
Exception in thread Thread[HintsDispatcher:1,1,main]
 org.apache.cassandra.io.FSReadError: java.io.IOException: Digest mismatch 
exception
 at 
org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:298)
 ~[apache-cassandra-3.11.1.jar:3.11.1-SNAPSHOT]
 at 
org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:263)
 ~[apache-cassandra-3.11.1.jar:3.11.1-SNAPSHOT]
 at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
~[apache-cassandra-3.11.1.jar:3.11.1-SNAPSHOT]
 at 
org.apache.cassandra.hints.HintsDispatcher.sendHints(HintsDispatcher.java:169) 
~[apache-cassandra-3.11.1.jar:3.11.1-SNAPSHOT]
 at 
org.apache.cassandra.hints.HintsDispatcher.sendHintsAndAwait(HintsDispatcher.java:128)
 ~[apache-cassandra-3.11.1.jar:3.11.1-SNAPSHOT]
 at 
org.apache.cassandra.hints.HintsDispatcher.dispatch(HintsDispatcher.java:113) 
~[apache-cassandra-3.11.1.jar:3.11.1-SNAPSHOT]
 at 
org.apache.cassandra.hints.HintsDispatcher.dispatch(HintsDispatcher.java:94) 
~[apache-cassandra-3.11.1.jar:3.11.1-SNAPSHOT]
 at 
org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.deliver(HintsDispatchExecutor.java:278)
 ~[apache-cassandra-3.11.1.jar:3.11.1-SNAPSHOT]
 at 
org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:260)
 ~[apache-cassandra-3.11.1.jar:3.11.1-SNAPSHOT]
 at 
org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:238)
 ~[apache-cassandra-3.11.1.jar:3.11.1-SNAPSHOT]
 at 
org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.run(HintsDispatchExecutor.java:217)
 ~[apache-cassandra-3.11.1.jar:3.11.1-SNAPSHOT]
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_141]
 at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_141]
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
~[na:1.8.0_141]
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
[na:1.8.0_141]
 at 
org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81)
 [apache-cassandra-3.11.1.jar:3.11.1-SNAPSHOT]
 at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_141]
 Caused by: java.io.IOException: Digest mismatch exception
 at 
org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNextInternal(HintsReader.java:315)
 ~[apache-cassandra-3.11.1.jar:3.11.1-SNAPSHOT]
 at 
org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:289)
 ~[apache-cassandra-3.11.1.jar:3.11.1-SNAPSHOT]
 ... 16 common frames omitted

Notes on the cluster and the investigation done so far:
 1. The Cassandra used here is built locally from the 3.11.1 branch along with 
the following patch from CASSANDRA-14080:
 
[https://github.com/apache/cassandra/commit/68079e4b2ed4e58dbede70af45414b3d4214e195]
 2. The bootstrap of the 14 nodes happens in the following way:
 - Out of the 14 nodes, only 3 are picked as seed nodes.
 - Only 1 of the 3 seed nodes is started, and the schema is created if it was 
not created previously.
 - After this, the rest of the nodes are bootstrapped.
 - In the failure scenario, only 5 of the 14 nodes successfully formed the 
Cassandra cluster. The failed nodes include two seed nodes.
 3. We confirmed the following patch from CASSANDRA-13696 has been applied. 
Jay Zhuang confirmed that this is a different issue from what was previously 
fixed:
 "this should be a different issue, as HintsDispatcher.java:128 sends hints 
with \{{buffer}}s, this patch is only to fix the digest mismatch for 
HintsDispatcher.java:129, which sends hints one by one."
 4. The application uses the Java driver with a quorum consistency setting.
 5. We saw this issue on a 7-node cluster too (different from the 14-node 
cluster).
 6. We are able to work around it by running nodetool truncatehints on the 
failed nodes and restarting Cassandra.

  was:
We have 14 nodes cluster where we seen hints file getting corrupted and 
resulting in the following error

[04/06/18 12:21 PM] Kotkar, Shantanu: ERROR [HintsDispatcher:1] 2018-04-06 
16:26:44,423 CassandraDaemon.java:228 - Exception in thread 
Thread[HintsDispatcher:1,1,main]
org.apache.cassandra.io.FSReadError: java.io.IOException: Digest mismatch 
exception
 at 
org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:298)
 ~[apache-cassandra-3.11.1.jar:3.11.1-SNAPSHOT]
 at 
org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:263)
 ~[apache-c

[jira] [Commented] (CASSANDRA-13889) cfstats should take sorting and limit parameters

2018-04-10 Thread Jon Haddad (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433242#comment-16433242
 ] 

Jon Haddad commented on CASSANDRA-13889:


The patch looks good.  Nice job on the unit tests.

Running it through 
[CircleCI|https://circleci.com/gh/rustyrazorblade/cassandra/17].  

> cfstats should take sorting and limit parameters
> 
>
> Key: CASSANDRA-13889
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13889
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Jon Haddad
>Assignee: Patrick Bannister
>Priority: Major
> Fix For: 4.0
>
> Attachments: 13889-trunk.txt, sample_output_normal.txt, 
> sample_output_sorted.txt, sample_output_sorted_top3.txt
>
>
> When looking at a problematic node I'm not familiar with, one of the first 
> things I do is check cfstats to identify the tables with the most reads, 
> writes, and data.  This is fine as long as there aren't a lot of tables but 
> once it goes above a dozen it's quite difficult.  cfstats should allow me to 
> sort the results and limit to top K tables.
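[Editorial aside] The requested behavior, sorting tables by a metric and keeping the top K, is essentially a comparator plus a limit. A hedged sketch of the idea (the record, method, and metric names here are invented for illustration and are not the patch's actual classes):

```java
import java.util.Comparator;
import java.util.List;

public class TopKTables
{
    // Minimal stand-in for one table's stats.
    record TableStat(String name, long localReadCount) {}

    // Sort descending by read count and keep the top k entries, as a
    // "sort by reads, limit to top K" option would.
    static List<TableStat> topByReads(List<TableStat> stats, int k)
    {
        return stats.stream()
                    .sorted(Comparator.comparingLong(TableStat::localReadCount).reversed())
                    .limit(k)
                    .toList();
    }

    public static void main(String[] args)
    {
        List<TableStat> stats = List.of(
            new TableStat("a", 10), new TableStat("b", 500), new TableStat("c", 42));
        System.out.println(topByReads(stats, 2));
    }
}
```

The real implementation (per the commit below) also has to compare mixed metric types such as human-readable sizes, which is why it grew a dedicated {{StatsTableComparator}} rather than a one-line lambda.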






[2/2] cassandra git commit: sort and limit output with nodetool tablestats

2018-04-10 Thread rustyrazorblade
sort and limit output with nodetool tablestats

Patch by Patrick Bannister, reviewed by Jon Haddad for CASSANDRA-13889


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c90b0d62
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c90b0d62
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c90b0d62

Branch: refs/heads/trunk
Commit: c90b0d62bc32c91c3a98ab691cb36c8177f12871
Parents: e75c517
Author: Patrick Bannister 
Authored: Tue Apr 3 23:58:50 2018 +
Committer: Jon Haddad 
Committed: Tue Apr 10 17:44:15 2018 -0700

--
 CHANGES.txt |   1 +
 NEWS.txt|   1 +
 .../org/apache/cassandra/io/util/FileUtils.java |  39 ++
 .../cassandra/tools/nodetool/TableStats.java|  43 +-
 .../tools/nodetool/stats/StatsTable.java|   4 +-
 .../nodetool/stats/StatsTableComparator.java| 336 +++
 .../tools/nodetool/stats/TableStatsHolder.java  | 184 +---
 .../tools/nodetool/stats/TableStatsPrinter.java | 130 +++---
 .../apache/cassandra/io/util/FileUtilsTest.java |  29 ++
 .../stats/StatsTableComparatorTest.java | 311 +
 .../nodetool/stats/TableStatsPrinterTest.java   | 366 
 .../nodetool/stats/TableStatsTestBase.java  | 432 +++
 12 files changed, 1755 insertions(+), 121 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c90b0d62/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index bb8c731..0059ce0 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 4.0
+ * Add options to nodetool tablestats to sort and limit output (CASSANDRA-13889)
  * Rename internals to reflect CQL vocabulary (CASSANDRA-14354)
  * Add support for hybrid MIN(), MAX() speculative retry policies

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c90b0d62/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index f8e3ca6..df4c32a 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -69,6 +69,7 @@ New features
- SSTableDump now supports the -l option to output each partition as its own json object
  See CASSANDRA-13848 for more detail
- Metric for coordinator writes per table has been added. See CASSANDRA-14232
+   - Nodetool cfstats now has options to sort by various metrics as well as limit results.
 
 Upgrading
 -

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c90b0d62/src/java/org/apache/cassandra/io/util/FileUtils.java
--
diff --git a/src/java/org/apache/cassandra/io/util/FileUtils.java b/src/java/org/apache/cassandra/io/util/FileUtils.java
index b8be84f..a085813 100644
--- a/src/java/org/apache/cassandra/io/util/FileUtils.java
+++ b/src/java/org/apache/cassandra/io/util/FileUtils.java
@@ -400,6 +400,45 @@ public final class FileUtils
 ScheduledExecutors.nonPeriodicTasks.execute(runnable);
 }
 
+public static long parseFileSize(String value)
+{
+long result;
+if (!value.matches("\\d+(\\.\\d+)? (GiB|KiB|MiB|TiB|bytes)"))
+{
+throw new IllegalArgumentException(
+String.format("value %s is not a valid human-readable file size", value));
+}
+if (value.endsWith(" TiB"))
+{
+result = Math.round(Double.valueOf(value.replace(" TiB", "")) * ONE_TB);
+return result;
+}
+else if (value.endsWith(" GiB"))
+{
+result = Math.round(Double.valueOf(value.replace(" GiB", "")) * ONE_GB);
+return result;
+}
+else if (value.endsWith(" KiB"))
+{
+result = Math.round(Double.valueOf(value.replace(" KiB", "")) * ONE_KB);
+return result;
+}
+else if (value.endsWith(" MiB"))
+{
+result = Math.round(Double.valueOf(value.replace(" MiB", "")) * ONE_MB);
+return result;
+}
+else if (value.endsWith(" bytes"))
+{
+result = Math.round(Double.valueOf(value.replace(" bytes", "")));
+return result;
+}
+else
+{
+throw new IllegalStateException(String.format("FileUtils.parseFileSize() reached an illegal state parsing %s", value));
+}
+}
+
 public static String stringifyFileSize(double value)
 {
 double d;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c90b0d62/src/java/org/apache/cassandra/tools/nodetool/TableStats.java
--
diff --git a/src/java/org/apache/cassandra/tools/nodetool/TableStats.java 
b/src/ja

[1/2] cassandra git commit: sort and limit output with nodetool tablestats

2018-04-10 Thread rustyrazorblade
Repository: cassandra
Updated Branches:
  refs/heads/trunk e75c51719 -> c90b0d62b


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c90b0d62/test/unit/org/apache/cassandra/tools/nodetool/stats/TableStatsTestBase.java
--
diff --git a/test/unit/org/apache/cassandra/tools/nodetool/stats/TableStatsTestBase.java b/test/unit/org/apache/cassandra/tools/nodetool/stats/TableStatsTestBase.java
new file mode 100644
index 000..bb56ef8
--- /dev/null
+++ b/test/unit/org/apache/cassandra/tools/nodetool/stats/TableStatsTestBase.java
@@ -0,0 +1,432 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.cassandra.tools.nodetool.stats;
+
+import java.util.ArrayList;
+import java.util.List;
+
+import static org.junit.Assert.assertEquals;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+/**
+ * Create a test vector for unit testing of TableStats features.
+ */
+public class TableStatsTestBase
+{
+
+/**
+ * A test vector of StatsKeyspace and StatsTable objects loaded with human readable stats.
+ */
+protected static List<StatsKeyspace> humanReadableKeyspaces;
+
+/**
+ * A test vector of StatsTable objects loaded with human readable statistics.
+ */
+protected static List<StatsTable> humanReadableTables;
+
+/**
+ * A test vector of StatsKeyspace and StatsTable objects.
+ */
+protected static List<StatsKeyspace> testKeyspaces;
+
+/**
+ * A test vector of StatsTable objects.
+ */
+protected static List<StatsTable> testTables;
+
+/**
+ * @returns StatsKeyspace an instance of StatsKeyspace preset with values for use in a test vector
+ */
+private static StatsKeyspace createStatsKeyspaceTemplate(String keyspaceName)
+{
+return new StatsKeyspace(null, keyspaceName);
+}
+
+/**
+ * @returns StatsTable an instance of StatsTable preset with values for use in a test vector
+ */
+private static StatsTable createStatsTableTemplate(String keyspaceName, String tableName)
+{
+StatsTable template = new StatsTable();
+template.fullName = keyspaceName + "." + tableName;
+template.keyspaceName = new String(keyspaceName);
+template.tableName = new String(tableName);
+template.isIndex = false;
+template.sstableCount = 0L;
+template.spaceUsedLive = "0";
+template.spaceUsedTotal = "0";
+template.spaceUsedBySnapshotsTotal = "0";
+template.percentRepaired = 1.0D;
+template.bytesRepaired = 0L;
+template.bytesUnrepaired = 0L;
+template.bytesPendingRepair = 0L;
+template.sstableCompressionRatio = -1.0D;
+template.numberOfPartitionsEstimate = 0L;
+template.memtableCellCount = 0L;
+template.memtableDataSize = "0";
+template.memtableSwitchCount = 0L;
+template.localReadCount = 0L;
+template.localReadLatencyMs = Double.NaN;
+template.localWriteCount = 0L;
+template.localWriteLatencyMs = 0D;
+template.pendingFlushes = 0L;
+template.bloomFilterFalsePositives = 0L;
+template.bloomFilterFalseRatio = 0D;
+template.bloomFilterSpaceUsed = "0";
+template.indexSummaryOffHeapMemoryUsed = "0";
+template.compressionMetadataOffHeapMemoryUsed = "0";
+template.compactedPartitionMinimumBytes = 0L;
+template.compactedPartitionMaximumBytes = 0L;
+template.compactedPartitionMeanBytes = 0L;
+template.bytesRepaired = 0L;
+template.bytesUnrepaired = 0L;
+template.bytesPendingRepair = 0L;
+template.averageLiveCellsPerSliceLastFiveMinutes = Double.NaN;
+template.maximumLiveCellsPerSliceLastFiveMinutes = 0L;
+template.averageTombstonesPerSliceLastFiveMinutes = Double.NaN;
+template.maximumTombstonesPerSliceLastFiveMinutes = 0L;
+template.droppedMutations = "0";
+return template;
+}
+
+@BeforeClass
+public static void createTestVector()
+{
+// create test tables from templates
+StatsTable table1 = createStatsTableTemplate("keyspace1", "table1");
+StatsTable table2 = createStatsTableTemplate("keyspace1"

[jira] [Updated] (CASSANDRA-13889) cfstats should take sorting and limit parameters

2018-04-10 Thread Jon Haddad (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jon Haddad updated CASSANDRA-13889:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed as c90b0d62bc, thanks!

> cfstats should take sorting and limit parameters
> 
>
> Key: CASSANDRA-13889
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13889
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Jon Haddad
>Assignee: Patrick Bannister
>Priority: Major
> Fix For: 4.0
>
> Attachments: 13889-trunk.txt, sample_output_normal.txt, 
> sample_output_sorted.txt, sample_output_sorted_top3.txt
>
>
> When looking at a problematic node I'm not familiar with, one of the first 
> things I do is check cfstats to identify the tables with the most reads, 
> writes, and data.  This is fine as long as there aren't a lot of tables but 
> once it goes above a dozen it's quite difficult.  cfstats should allow me to 
> sort the results and limit to top K tables.
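The requested behavior boils down to a comparator plus a limit. A minimal standalone sketch of that idea follows; TableRow and topK are hypothetical stand-ins for illustration, not the StatsTableComparator committed for this ticket:

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical stand-in for a row of per-table stats; the real nodetool
// classes (StatsTable, StatsTableComparator) are more elaborate.
class TableRow {
    final String name;
    final long reads;
    final long bytes;
    TableRow(String name, long reads, long bytes) {
        this.name = name; this.reads = reads; this.bytes = bytes;
    }
}

public class TopTables {
    // Sort descending by the chosen metric, then keep only the top K rows.
    static List<TableRow> topK(List<TableRow> rows, Comparator<TableRow> byMetric, int k) {
        return rows.stream()
                   .sorted(byMetric.reversed())
                   .limit(k)
                   .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<TableRow> rows = List.of(
                new TableRow("ks.a", 100, 5_000),
                new TableRow("ks.b", 900, 1_000),
                new TableRow("ks.c", 300, 9_000));
        // Top 2 by read count: ks.b (900 reads), then ks.c (300 reads).
        for (TableRow t : topK(rows, Comparator.<TableRow>comparingLong(r -> r.reads), 2))
            System.out.println(t.name);
    }
}
```

Swapping the comparator switches the sort key (reads, writes, disk usage, and so on) without touching the limiting logic.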






[jira] [Commented] (CASSANDRA-13010) nodetool compactionstats should say which disk a compaction is writing to

2018-04-10 Thread Alex Lourie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433267#comment-16433267
 ] 

Alex Lourie commented on CASSANDRA-13010:
-

[~rustyrazorblade] Fixed. If any more updates are needed, just let me know; I'm interested in pulling this in, so I'll be online for quite some time :)

Thanks for the review!

> nodetool compactionstats should say which disk a compaction is writing to
> -
>
> Key: CASSANDRA-13010
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13010
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Compaction, Tools
>Reporter: Jon Haddad
>Assignee: Alex Lourie
>Priority: Major
>  Labels: lhf
> Attachments: 13010.patch, cleanup.png, multiple operations.png
>
>







[jira] [Commented] (CASSANDRA-13910) Remove read_repair_chance/dclocal_read_repair_chance

2018-04-10 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433273#comment-16433273
 ] 

Jeremiah Jordan commented on CASSANDRA-13910:
-

The WARN on using old things is how we have done this in the past, like when we renamed the row and key cache settings.

> Remove read_repair_chance/dclocal_read_repair_chance
> 
>
> Key: CASSANDRA-13910
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13910
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
>Assignee: Aleksey Yeschenko
>Priority: Minor
> Fix For: 4.0
>
>
> First, let me clarify so this is not misunderstood that I'm not *at all* 
> suggesting to remove the read-repair mechanism of detecting and repairing 
> inconsistencies between read responses: that mechanism is imo fine and 
> useful.  But {{read_repair_chance}} and {{dclocal_read_repair_chance}} 
> have never been about _enabling_ that mechanism; they are about querying all 
> replicas (even when this is not required by the consistency level) for the 
> sole purpose of maybe read-repairing some of the replicas that wouldn't have 
> been queried otherwise. Which, btw, brings me to reason 1 for considering their 
> removal: their naming/behavior is super confusing. Over the years, I've seen 
> countless users (and not only newbies) misunderstand what those options 
> do, and as a consequence misunderstand when read-repair itself was happening.
> But my 2nd reason for suggesting this is that I suspect 
> {{read_repair_chance}}/{{dclocal_read_repair_chance}} are, especially 
> nowadays, more harmful than anything else when enabled. When those options 
> kick in, what you trade off is additional resource consumption (all nodes 
> have to execute the read) for a _fairly remote chance_ of having some 
> inconsistencies repaired on _some_ replica _a bit faster_ than they would 
> otherwise be. To justify that last part, let's recall that:
> # most inconsistencies are actually fixed by hints in practice; and in the 
> case where a node stays dead for so long that hints end up timing out, 
> you really should repair the node when it comes back (if not simply 
> re-bootstrapping it).  Read-repair probably doesn't fix _that_ much stuff in 
> the first place.
> # again, read-repair does happen without those options kicking in. If you do 
> reads at {{QUORUM}}, inconsistencies will eventually get read-repaired all 
> the same, just a tiny bit less quickly.
> # I suspect almost everyone uses a low "chance" for those options at best 
> (because the extra resource consumption is real), so at the end of the day, 
> it's up to chance how much faster this fixes inconsistencies.
> Overall, I'm having a hard time imagining real cases where that trade-off 
> really makes sense. Don't get me wrong, those options had their place a long 
> time ago when hints weren't working all that well, but I think they bring 
> more confusion than benefit now.
> And I think it's sane to reconsider things every once in a while, and to 
> clean up anything that may not make all that much sense anymore, which I 
> think is the case here.
> Tl;dr, I feel the benefits brought by those options are very slim at best, 
> well overshadowed by the confusion they bring, and not worth maintaining the 
> code that supports them (which, to be fair, isn't huge, but getting rid of 
> {{ReadCallback.AsyncRepairRunner}} wouldn't hurt, for instance).
> Lastly, if the consensus here ends up being that they can have their use in 
> weird cases and that we feel supporting those cases is worth confusing 
> everyone else and maintaining that code, I would still suggest disabling them 
> entirely by default.






[jira] [Commented] (CASSANDRA-13010) nodetool compactionstats should say which disk a compaction is writing to

2018-04-10 Thread Jon Haddad (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433320#comment-16433320
 ] 

Jon Haddad commented on CASSANDRA-13010:


I'm done with my day over here, can't read any more code.  I'll get it reviewed 
tomorrow, thanks for the quick turnaround!

> nodetool compactionstats should say which disk a compaction is writing to
> -
>
> Key: CASSANDRA-13010
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13010
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Compaction, Tools
>Reporter: Jon Haddad
>Assignee: Alex Lourie
>Priority: Major
>  Labels: lhf
> Attachments: 13010.patch, cleanup.png, multiple operations.png
>
>







[jira] [Commented] (CASSANDRA-14218) Deprecate Throwables.propagate usage

2018-04-10 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433365#comment-16433365
 ] 

Jeff Jirsa commented on CASSANDRA-14218:


Appreciate your patience, Kirk; will try to nudge some people to get you some 
eyeballs in the near future. 

 

> Deprecate Throwables.propagate usage
> 
>
> Key: CASSANDRA-14218
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14218
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Libraries
>Reporter: Romain Hardouin
>Assignee: Kirk True
>Priority: Minor
>  Labels: lhf
> Fix For: 4.x
>
> Attachments: 14218-trunk.txt
>
>
> Google decided to deprecate the guava {{Throwables.propagate}} method:
>  * [Why we deprecated 
> Throwables.propagate|https://github.com/google/guava/wiki/Why-we-deprecated-Throwables.propagate]
>  * [Documentation 
> update|https://github.com/google/guava/wiki/ThrowablesExplained/_compare/92190ee7e37d334fa5fcdb6db8d0f43a2fdf02e1...226a3060445716d479981e606f589c99eee517ca]
> We have 35 occurrences in the trunk:
> {code:java}
> $ rg -c 'Throwables.propagate' *
> src/java/org/apache/cassandra/streaming/StreamReader.java:1
> src/java/org/apache/cassandra/streaming/StreamTransferTask.java:1
> src/java/org/apache/cassandra/db/SnapshotDetailsTabularData.java:1
> src/java/org/apache/cassandra/db/Memtable.java:1
> src/java/org/apache/cassandra/db/ColumnFamilyStore.java:4
> src/java/org/apache/cassandra/cache/ChunkCache.java:2
> src/java/org/apache/cassandra/utils/WrappedRunnable.java:1
> src/java/org/apache/cassandra/hints/Hint.java:1
> src/java/org/apache/cassandra/tools/LoaderOptions.java:1
> src/java/org/apache/cassandra/tools/SSTableOfflineRelevel.java:1
> src/java/org/apache/cassandra/streaming/management/ProgressInfoCompositeData.java:3
> src/java/org/apache/cassandra/streaming/management/StreamStateCompositeData.java:2
> src/java/org/apache/cassandra/streaming/management/StreamSummaryCompositeData.java:2
> src/java/org/apache/cassandra/streaming/compress/CompressedStreamReader.java:1
> src/java/org/apache/cassandra/db/compaction/Scrubber.java:1
> src/java/org/apache/cassandra/db/compaction/Verifier.java:1
> src/java/org/apache/cassandra/db/compaction/CompactionHistoryTabularData.java:1
> src/java/org/apache/cassandra/db/compaction/Upgrader.java:1
> src/java/org/apache/cassandra/io/compress/CompressionMetadata.java:1
> src/java/org/apache/cassandra/streaming/management/SessionCompleteEventCompositeData.java:2
> src/java/org/apache/cassandra/io/sstable/SSTableSimpleWriter.java:1
> src/java/org/apache/cassandra/io/sstable/ISSTableScanner.java:1
> src/java/org/apache/cassandra/streaming/management/SessionInfoCompositeData.java:3
> src/java/org/apache/cassandra/io/sstable/SSTableSimpleUnsortedWriter.java:1
> {code}
> I don't know if we want to remove all usages, but we should at least check 
> the author's intention for each usage and refactor if needed.
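For most call sites, the refactor Guava's wiki suggests is mechanical: rethrow unchecked throwables as-is and wrap checked ones. The sketch below shows that pattern written without the Guava dependency so it compiles standalone; with Guava present, the equivalent is {{Throwables.throwIfUnchecked(e); throw new RuntimeException(e);}}.

```java
// The replacement pattern for the deprecated Throwables.propagate():
// unchecked throwables pass through unchanged, checked exceptions get
// wrapped in a RuntimeException that preserves the cause.
public class PropagateDemo {
    static RuntimeException propagate(Throwable t) {
        if (t instanceof RuntimeException) throw (RuntimeException) t;
        if (t instanceof Error) throw (Error) t;
        throw new RuntimeException(t);
    }

    public static void main(String[] args) {
        try {
            try {
                throw new java.io.IOException("disk gone");
            } catch (Exception e) {
                // "throw" satisfies the compiler's control-flow analysis,
                // even though propagate() never actually returns normally.
                throw propagate(e);
            }
        } catch (RuntimeException e) {
            System.out.println(e.getCause().getMessage()); // prints "disk gone"
        }
    }
}
```

The declared RuntimeException return type exists only so callers can write {{throw propagate(e)}} in positions where the compiler requires a statement that completes abruptly.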






cassandra git commit: Yaml comments: data_file_directories distributes data evenly by partitioning its token ranges.

2018-04-10 Thread jjirsa
Repository: cassandra
Updated Branches:
  refs/heads/trunk c90b0d62b -> 42827e6a6


Yaml comments: data_file_directories distributes data evenly by partitioning 
its token ranges.

Patch by Venkata Harikrishna Nukala; Reviewed by Jeff Jirsa for CASSANDRA-14372


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/42827e6a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/42827e6a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/42827e6a

Branch: refs/heads/trunk
Commit: 42827e6a6709c4ba031e0a137a3bab257f88b54f
Parents: c90b0d6
Author: nvharikrishna 
Authored: Wed Apr 11 01:56:37 2018 +0530
Committer: Jeff Jirsa 
Committed: Tue Apr 10 21:22:19 2018 -0700

--
 conf/cassandra.yaml | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/42827e6a/conf/cassandra.yaml
--
diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml
index 0a954b4..1be6feb 100644
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@ -183,9 +183,9 @@ credentials_validity_in_ms: 2000
 #
 partitioner: org.apache.cassandra.dht.Murmur3Partitioner
 
-# Directories where Cassandra should store data on disk.  Cassandra
-# will spread data evenly across them, subject to the granularity of
-# the configured compaction strategy.
+# Directories where Cassandra should store data on disk. If multiple
+# directories are specified, Cassandra will spread data evenly across 
+# them by partitioning the token ranges.
 # If not set, the default directory is $CASSANDRA_HOME/data/data.
 # data_file_directories:
 # - /var/lib/cassandra/data
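What "partitioning the token ranges" means can be pictured as follows: Murmur3 tokens span the full signed-long range, and with N data directories each directory owns one contiguous N-th of that range. The sketch below is a hypothetical illustration of that idea only, not Cassandra's actual disk-boundary logic.

```java
// Illustration: map a Murmur3 token to the data directory whose slice of
// the token range contains it, assuming N equal contiguous slices.
public class DiskSlices {
    static int directoryFor(long token, int numDirs) {
        // Normalize the token to [0, 1] over the signed-long range, then scale.
        double pos = ((double) token - (double) Long.MIN_VALUE)
                   / ((double) Long.MAX_VALUE - (double) Long.MIN_VALUE);
        return Math.min((int) (pos * numDirs), numDirs - 1);
    }

    public static void main(String[] args) {
        System.out.println(directoryFor(Long.MIN_VALUE, 3)); // 0
        System.out.println(directoryFor(0L, 3));             // 1
        System.out.println(directoryFor(Long.MAX_VALUE, 3)); // 2
    }
}
```

Because a token always lands in the same slice, each partition's data stays on one directory, which is why the distribution is even by token range rather than by the granularity of the compaction strategy.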





[jira] [Updated] (CASSANDRA-14372) data_file_directories config - update documentation in cassandra.yaml

2018-04-10 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-14372:
---
Status: Ready to Commit  (was: Patch Available)

> data_file_directories config - update documentation in cassandra.yaml
> -
>
> Key: CASSANDRA-14372
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14372
> Project: Cassandra
>  Issue Type: Bug
>  Components: Documentation and Website
>Reporter: Venkata Harikrishna Nukala
>Assignee: Venkata Harikrishna Nukala
>Priority: Minor
> Attachments: 14372-trunk.txt
>
>
> If "data_file_directories" configuration is enabled with multiple 
> directories, data is partitioned by token range so that data gets distributed 
> evenly. But the current documentation says that "Cassandra will spread data 
> evenly across them, subject to the granularity of the configured compaction 
> strategy". Need to update this comment to reflect the correct behavior.






[jira] [Updated] (CASSANDRA-14372) data_file_directories config - update documentation in cassandra.yaml

2018-04-10 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-14372:
---
   Resolution: Fixed
 Reviewer: Jeff Jirsa
Fix Version/s: 4.0
   Status: Resolved  (was: Ready to Commit)

Thanks! Committed as 42827e6a6709c4ba031e0a137a3bab257f88b54f

 

> data_file_directories config - update documentation in cassandra.yaml
> -
>
> Key: CASSANDRA-14372
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14372
> Project: Cassandra
>  Issue Type: Bug
>  Components: Documentation and Website
>Reporter: Venkata Harikrishna Nukala
>Assignee: Venkata Harikrishna Nukala
>Priority: Minor
> Fix For: 4.0
>
> Attachments: 14372-trunk.txt
>
>
> If "data_file_directories" configuration is enabled with multiple 
> directories, data is partitioned by token range so that data gets distributed 
> evenly. But the current documentation says that "Cassandra will spread data 
> evenly across them, subject to the granularity of the configured compaction 
> strategy". Need to update this comment to reflect the correct behavior.






[jira] [Commented] (CASSANDRA-13010) nodetool compactionstats should say which disk a compaction is writing to

2018-04-10 Thread Alex Lourie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433399#comment-16433399
 ] 

Alex Lourie commented on CASSANDRA-13010:
-

No worries [~rustyrazorblade] , there's no rush. Thank you for reviewing!

> nodetool compactionstats should say which disk a compaction is writing to
> -
>
> Key: CASSANDRA-13010
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13010
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Compaction, Tools
>Reporter: Jon Haddad
>Assignee: Alex Lourie
>Priority: Major
>  Labels: lhf
> Attachments: 13010.patch, cleanup.png, multiple operations.png
>
>







[jira] [Updated] (CASSANDRA-14239) OutOfMemoryError when bootstrapping with less than 100GB RAM

2018-04-10 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jürgen Albersdorfer updated CASSANDRA-14239:

Attachment: gc.log.0.current.zip

> OutOfMemoryError when bootstrapping with less than 100GB RAM
> 
>
> Key: CASSANDRA-14239
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14239
> Project: Cassandra
>  Issue Type: Bug
> Environment: Details of the bootstrapping Node
>  * ProLiant BL460c G7
>  * 56GB RAM
>  * 2x 146GB 10K HDD (One dedicated for Commitlog, one for Data, Hints and 
> saved_caches)
>  * CentOS 7.4 on SD-Card
>  * /tmp and /var/log on tmpfs
>  * Oracle JDK 1.8.0_151
>  * Cassandra 3.11.1
> Cluster
>  * 10 existing Nodes (Up and Normal)
>Reporter: Jürgen Albersdorfer
>Priority: Major
> Attachments: Objects-by-class.csv, 
> Objects-with-biggest-retained-size.csv, cassandra-env.sh, cassandra.yaml, 
> gc.log.0.current.zip, jvm.options, jvm_opts.txt, stack-traces.txt
>
>
> Hi, I face an issue when bootstrapping a Node having less than 100GB RAM on 
> our 10 Node C* 3.11.1 Cluster.
> During bootstrap, when I watch the cassandra.log I observe a growth in JVM 
> Heap Old Gen which does not get significantly freed up any more.
> I know that the JVM collects on Old Gen only when really needed. I can see 
> collections, but there is always a remainder which seems to grow forever 
> without ever getting freed.
> After the Node successfully joined the Cluster, I can remove the extra RAM I 
> had given it for bootstrapping without any further effect.
> It feels like Cassandra never forgets a single byte streamed over 
> the Network during bootstrapping, which would be a memory leak 
> and a major problem, too.
> I was able to produce a HeapDumpOnOutOfMemoryError from a 56GB Node (40 GB 
> assigned JVM Heap). The YourKit Profiler shows huge amounts of Memory allocated 
> for org.apache.cassandra.db.Memtable (22 GB), 
> org.apache.cassandra.db.rows.BufferCell (19 GB) and java.nio.HeapByteBuffer 
> (11 GB).






[jira] [Commented] (CASSANDRA-14239) OutOfMemoryError when bootstrapping with less than 100GB RAM

2018-04-10 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433463#comment-16433463
 ] 

Jürgen Albersdorfer commented on CASSANDRA-14239:
-

I had to join a new node again - giving it 72GB of Heap - and it again caused an OOM.

I have a GC Log this time. For me, this smells strongly like a Memory Leak.

Throw the attached [^gc.log.0.current.zip] against [http://gceasy.io|http://gceasy.io/] and you will immediately see what I mean.

This Node has a fast 1TB SSD. I didn't change
# memtable_flush_writers: 2
and also left
# memtable_heap_space_in_mb: 1048
# memtable_offheap_space_in_mb: 1048
defaulting to 25% of Heap.

I cannot see any IO Pressure on the System during the whole bootstrap Process:
{code:java}
system ---load-avg--- ---procs--- --memory-usage- ---paging-- -dsk/total- ---system-- total-cpu-usage --io/total- -net/total-
 time | 1m   5m  15m |run blk new| used  buff  cach  free|  in   out | read  writ| int   csw |usr sys idl wai hiq siq| read  writ| recv  send
10-04 16:32:07| 118  128  133| 44 8.3 0.8|78.2G    0  15.9G  314M|   0 0 | 200k 4458k|  23k   11k| 59   1  40   0   0   0|31.7  14.0 |3198k   79k
10-04 16:32:17|99.7  124  132|1.0   0 0.9|78.2G    0  15.9G  310M|   0 0 |   0  3123B|1509   214 |  6   0  94   0   0   0|   0  0.50 | 176k 3337B
10-04 16:32:27|84.5  120  130|1.0   0 0.8|78.2G    0  15.9G  315M|   0 0 |   0 0 |2312   203 |  6   0  94   0   0   0|   0 0 | 905k   10k
10-04 16:32:37|71.7  116  129|1.0   0 0.8|78.2G    0  15.9G  316M|   0 0 |   0   121k|1259   198 |  6   0  94   0   0   0|   0  1.20 |1737B  505B
10-04 16:32:47|60.8  112  127|1.0   0 0.8|78.2G    0  15.9G  316M|   0 0 |   0    37k|1240   184 |  6   0  94   0   0   0|   0  2.20 |1450B  308B
10-04 16:32:57|51.6  109  126|1.1   0 0.8|78.2G    0  15.9G  315M|   0 0 |   0 0 |1240   175 |  6   0  94   0   0   0|   0 0 |1541B  308B
10-04 16:33:07|43.8  105  125|1.0   0 0.8|78.2G    0  15.9G  316M|   0 0 |   0 0 |1218   153 |  6   0  94   0   0   0|   0 0 |1791B  593B
10-04 16:33:17|37.2  102  123|1.0   0 0.8|78.2G    0  15.9G  316M|   0 0 |   0    21k|1198   141 |  6   0  94   0   0   0|   0  1.40 |1496B  389B
10-04 16:33:27|31.7 98.5  122|1.0   0 0.8|78.2G    0  15.9G  316M|   0 0 |   0 0 |1188   122 |  6   0  94   0   0   0|   0 0 |1610B  425B
10-04 16:33:37|27.0 95.3  121|1.0   0 0.8|78.2G    0  15.9G  316M|   0 0 |   0 0 |1176   121 |  6   0  94   0   0   0|   0 0 |1723B  313B
10-04 16:33:47|23.0 92.2  119|1.0   0 0.9|78.2G    0  15.9G  317M|   0 0 |   0   307B|1165   120 |  6   0  94   0   0   0|   0  0.40 |1515B  276B
10-04 16:33:57|19.6 89.2  118|1.1   0 0.8|78.2G    0  15.9G  317M|   0 0 |   0 0 |1166   116 |  6   0  94   0   0   0|   0 0 |1543B  384B
10-04 16:34:07|16.7 86.3  117|1.0   0 0.8|78.2G    0  15.9G  317M|   0 0 |   0 0 |1169   114 |  6   0  94   0   0   0|   0 0 |1635B  582B
10-04 16:34:17|15.3 83.7  116| 12   0 1.7|78.2G    0  15.9G  312M|   0 0 |  20k 1382B|  20k 1648 | 58   0  42   0   0   0|1.50  0.50 | 102k 7651B
10-04 16:34:27|29.9 84.5  116| 87   0 5.7|78.2G    0  15.9G  315M|   0 0 | 248k 5055k|  40k   27k| 96   1   3   0   0   0|37.1  18.3 |4296k  424k
10-04 16:34:37|47.9 86.6  116|148 0.3 0.8|78.2G    0  15.9G  309M|   0 0 | 232k 2647k|  35k   29k| 98   1   1   0   0   0|33.3  7.20 |2510k  207k
10-04 16:34:47|44.6 84.6  115| 24   0 1.3|78.2G    0  15.9G  310M|   0 0 | 894k   17M|  80k   83k| 91   4   4   0   0   2| 119  59.8 |  15M 3217k
10-04 16:34:57|41.0 82.5  114| 19   0 1.0|78.2G    0  15.9G  301M|   0 0 | 304k   19M|  35k 5311 | 95   2   2   0   0   1|40.4  56.1 |  17M  146k
10-04 16:35:07|37.9 80.5  113| 21   0 1.1|78.2G    0  15.9G  320M|   0 0 | 342k   18M|  39k 5805 | 96   2   1   0   0   1|43.6  56.2 |  20M  179k
10-04 16:35:17|35.4 78.5  112| 20   0 0.9|78.2G    0  15.9G  315M|   0 0 | 334k   18M|  34k 5770 | 96   2   2   0   0   0|42.5  54.2 |  17M   79k
10-04 16:35:27|33.3 76.7  111| 20   0 1.0|78.2G    0  15.9G  303M|   0 0 | 290k   19M|  36k 6144 | 96   2   2   0   0   0|38.0  55.1 |  19M   83k
10-04 16:35:37|31.0 74.8  110| 18   0 0.8|78.2G    0  15.9G  305M|   0 0 | 813k   23M|  42k 6870 | 94   2   3   0   0   1| 104  62.3 |  23M   90k
10-04 16:35:47|29.5 73.0  109| 21   0 0.8|78.2G    0  15.9G  323M|   0 0 | 360k   18M|  35k 5955 | 96   2   2   0   0   0|45.8  51.4 |  18M   55k
10-04 16:35:57|28.4 71.3  108| 20 0.1 0.8|78.2G    0  15.9G  313M|   0 0 | 325k   19M|  36k 6081 | 96   2   2   0   0   0|41.3  52.2 |  18M   54k
10-04 16:36:07|27.2 69.7  107| 21   0 0.8|78.2G    0  15.9G  304M|   0 0 | 358k   18M|  36k 6036 | 95   2   3   0   0   0|45.5  50.7 |  18M   56k
10-04 16:36:17|26.3 68.1  106| 21   0 0.8|78.2G    0  15.9G  305M|   0 0 | 3

[jira] [Comment Edited] (CASSANDRA-14239) OutOfMemoryError when bootstrapping with less than 100GB RAM

2018-04-10 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433463#comment-16433463
 ] 

Jürgen Albersdorfer edited comment on CASSANDRA-14239 at 4/11/18 6:29 AM:
--

I had to join a new node again - giving it 72GB of Heap - and it again caused an OOM.

I have a GC Log this time. For me, this smells strongly like a Memory Leak.

Throw the attached [^gc.log.0.current.zip] against [http://gceasy.io|http://gceasy.io/] and you will immediately see what I mean.

This Node has a fast 1TB SSD. I didn't change
 # memtable_flush_writers: 2
 and also left
 # memtable_heap_space_in_mb: 1048
 # memtable_offheap_space_in_mb: 1048
 defaulting to 25% of Heap.

I cannot see any IO Pressure on the System during the whole bootstrap Process:
{code:java}
-dsk/total- ---system-- total-cpu-usage --io/total-
 read  writ| int   csw |usr sys idl wai hiq siq| read  writ
 200k 4458k|  23k   11k| 59   1  40   0   0   0|31.7  14.0
   0  3123B|1509   214 |  6   0  94   0   0   0|   0  0.50
   0 0 |2312   203 |  6   0  94   0   0   0|   0 0
   0   121k|1259   198 |  6   0  94   0   0   0|   0  1.20
   0    37k|1240   184 |  6   0  94   0   0   0|   0  2.20
   0 0 |1240   175 |  6   0  94   0   0   0|   0 0
   0 0 |1218   153 |  6   0  94   0   0   0|   0 0
   0    21k|1198   141 |  6   0  94   0   0   0|   0  1.40
   0 0 |1188   122 |  6   0  94   0   0   0|   0 0
   0 0 |1176   121 |  6   0  94   0   0   0|   0 0
   0   307B|1165   120 |  6   0  94   0   0   0|   0  0.40
   0 0 |1166   116 |  6   0  94   0   0   0|   0 0
   0 0 |1169   114 |  6   0  94   0   0   0|   0 0
  20k 1382B|  20k 1648 | 58   0  42   0   0   0|1.50  0.50
 248k 5055k|  40k   27k| 96   1   3   0   0   0|37.1  18.3
 232k 2647k|  35k   29k| 98   1   1   0   0   0|33.3  7.20
 894k   17M|  80k   83k| 91   4   4   0   0   2| 119  59.8
 304k   19M|  35k 5311 | 95   2   2   0   0   1|40.4  56.1
 342k   18M|  39k 5805 | 96   2   1   0   0   1|43.6  56.2
 334k   18M|  34k 5770 | 96   2   2   0   0   0|42.5  54.2
 290k   19M|  36k 6144 | 96   2   2   0   0   0|38.0  55.1
 813k   23M|  42k 6870 | 94   2   3   0   0   1| 104  62.3
 360k   18M|  35k 5955 | 96   2   2   0   0   0|45.8  51.4
 325k   19M|  36k 6081 | 96   2   2   0   0   0|41.3  52.2
 358k   18M|  36k 6036 | 95   2   3   0   0   0|45.5  50.7
 344k   19M|  35k 6063 | 96   2   2   0   0   0|45.5  52.9
 380k   17M|  36k 5980 | 95   2   3   0   0   0|48.7  46.0
 685k   21M|  39k 6163 | 94   2   4   0   0   1|87.5  57.8
 632k   18M|  34k 5885 | 95   2   3   0   0   0|63.8  53.1
 795k   19M|  34k 5634 | 95   2   2   0   0   0|75.7  53.4
 869k   15M|  40k   13k| 94   2   4   0   0   1|91.6  47.8
 730k   16M|  54k   30k| 93   2   5   0   0   1|81.6  48.3
 651k   15M|  61k   40k| 89   3   7   0   0   1|74.3  47.1
 782k   15M|  78k   76k| 87   4   8   0   0   1|57.6  41.8
1284k   18M|  67k   47k| 94   3   2   0   0   1| 128  58.6
1279k   19M|  40k 5963 | 96   2   2   0   0   0| 107  56.3
1110k   18M|  38k 5986 | 96   2   2   0   0   0| 114  49.2
1286k   21M|  39k 5773 | 96   2   1   0   0   0| 109  58.0
2701k   21M|  50k 6534 | 91   2   5   0   0   1| 282  68.3
1760k   17M|  40k 5498 | 94   2   3   0   0   1| 234  48.3
1295k   18M|  42k 5610 | 95   2   3   0   0   0| 136  53.1
1315k   19M|  44k 5387 | 96   2   2   0   0   0|97.4  55.1
 214k 2818k|7171  6043 | 20   0  79   0   0   0|13.8  7.80
  16k 4864B|1263   200 |  6   0  94   0   0   0|0.50  0.60
   0 0 |1226   166 |  6   0  94   0   0   0|   0 0
   0   449k|1217   162 |  6   0  94   0   0   0|   0  1.80
   0    12k|1213   155 |  6   0  94   0   0   0|   0  0.90
   0 0 |1237   170 |  6   0  94   0   0   0|   0 0
 239k    0 |1305   278 |  6   0  94   0   0   0|8.30 0
   0    16k|1202   147 |  6   0  94   0   0   0|   0  1.30
{code}
I will try again nevertheless.



[jira] [Comment Edited] (CASSANDRA-14239) OutOfMemoryError when bootstrapping with less than 100GB RAM

2018-04-10 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433463#comment-16433463
 ] 

Jürgen Albersdorfer edited comment on CASSANDRA-14239 at 4/11/18 6:30 AM:
--

I again had to join a new node - giving it 72 GB of heap - and it again caused an OOM.

I have a GC log this time. To me, this smells strongly like a memory leak.

Throw the attached [^gc.log.0.current.zip] at
[http://gceasy.io|http://gceasy.io/] and you will immediately see what I mean.

This node has a fast 1 TB SSD. I didn't change
 # memtable_flush_writers: 2
 and also left
 # memtable_heap_space_in_mb: 1048
 # memtable_offheap_space_in_mb: 1048
 defaulting to 25% of heap.

I cannot see any IO Pressure on the System during the whole bootstrap Process:
{code:java}
-dsk/total- ---system-- total-cpu-usage --io/total-
 read  writ| int   csw |usr sys idl wai hiq siq| read  writ
 200k 4458k|  23k   11k| 59   1  40   0   0   0|31.7  14.0
   0  3123B|1509   214 |  6   0  94   0   0   0|   0  0.50
   0 0 |2312   203 |  6   0  94   0   0   0|   0 0
   0   121k|1259   198 |  6   0  94   0   0   0|   0  1.20
   0    37k|1240   184 |  6   0  94   0   0   0|   0  2.20
   0 0 |1240   175 |  6   0  94   0   0   0|   0 0
   0 0 |1218   153 |  6   0  94   0   0   0|   0 0
   0    21k|1198   141 |  6   0  94   0   0   0|   0  1.40
   0 0 |1188   122 |  6   0  94   0   0   0|   0 0
   0 0 |1176   121 |  6   0  94   0   0   0|   0 0
   0   307B|1165   120 |  6   0  94   0   0   0|   0  0.40
   0 0 |1166   116 |  6   0  94   0   0   0|   0 0
   0 0 |1169   114 |  6   0  94   0   0   0|   0 0
  20k 1382B|  20k 1648 | 58   0  42   0   0   0|1.50  0.50
 248k 5055k|  40k   27k| 96   1   3   0   0   0|37.1  18.3
 232k 2647k|  35k   29k| 98   1   1   0   0   0|33.3  7.20
 894k   17M|  80k   83k| 91   4   4   0   0   2| 119  59.8
 304k   19M|  35k 5311 | 95   2   2   0   0   1|40.4  56.1
 342k   18M|  39k 5805 | 96   2   1   0   0   1|43.6  56.2
 334k   18M|  34k 5770 | 96   2   2   0   0   0|42.5  54.2
 290k   19M|  36k 6144 | 96   2   2   0   0   0|38.0  55.1
 813k   23M|  42k 6870 | 94   2   3   0   0   1| 104  62.3
 360k   18M|  35k 5955 | 96   2   2   0   0   0|45.8  51.4
 325k   19M|  36k 6081 | 96   2   2   0   0   0|41.3  52.2
 358k   18M|  36k 6036 | 95   2   3   0   0   0|45.5  50.7
 344k   19M|  35k 6063 | 96   2   2   0   0   0|45.5  52.9
 380k   17M|  36k 5980 | 95   2   3   0   0   0|48.7  46.0
 685k   21M|  39k 6163 | 94   2   4   0   0   1|87.5  57.8
 632k   18M|  34k 5885 | 95   2   3   0   0   0|63.8  53.1
 795k   19M|  34k 5634 | 95   2   2   0   0   0|75.7  53.4
 869k   15M|  40k   13k| 94   2   4   0   0   1|91.6  47.8
 730k   16M|  54k   30k| 93   2   5   0   0   1|81.6  48.3
 651k   15M|  61k   40k| 89   3   7   0   0   1|74.3  47.1
 782k   15M|  78k   76k| 87   4   8   0   0   1|57.6  41.8
1284k   18M|  67k   47k| 94   3   2   0   0   1| 128  58.6
1279k   19M|  40k 5963 | 96   2   2   0   0   0| 107  56.3
1110k   18M|  38k 5986 | 96   2   2   0   0   0| 114  49.2
1286k   21M|  39k 5773 | 96   2   1   0   0   0| 109  58.0
2701k   21M|  50k 6534 | 91   2   5   0   0   1| 282  68.3
1760k   17M|  40k 5498 | 94   2   3   0   0   1| 234  48.3
1295k   18M|  42k 5610 | 95   2   3   0   0   0| 136  53.1
1315k   19M|  44k 5387 | 96   2   2   0   0   0|97.4  55.1
 214k 2818k|7171  6043 | 20   0  79   0   0   0|13.8  7.80
  16k 4864B|1263   200 |  6   0  94   0   0   0|0.50  0.60
   0 0 |1226   166 |  6   0  94   0   0   0|   0 0
   0   449k|1217   162 |  6   0  94   0   0   0|   0  1.80
   0    12k|1213   155 |  6   0  94   0   0   0|   0  0.90
   0 0 |1237   170 |  6   0  94   0   0   0|   0 0
 239k    0 |1305   278 |  6   0  94   0   0   0|8.30 0
   0    16k|1202   147 |  6   0  94   0   0   0|   0  1.30
{code}
I will try again with changed settings nevertheless.



[jira] [Comment Edited] (CASSANDRA-14239) OutOfMemoryError when bootstrapping with less than 100GB RAM

2018-04-10 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433463#comment-16433463
 ] 

Jürgen Albersdorfer edited comment on CASSANDRA-14239 at 4/11/18 6:34 AM:
--

I again had to join a new node - giving it 72 GB of heap - and it again caused an OOM.

I have a GC log this time. To me, this smells strongly like a memory leak.

Throw the attached [^gc.log.0.current.zip] at
[http://gceasy.io|http://gceasy.io/] and you will immediately see what I mean.

This node has a fast 10K spinning disk for the commitlog and a 1 TB SSD for data,
hints and saved_caches. I didn't change:
{code:java}
disk_optimization_strategy: spinning # was wrong for SSD, I will change!
memtable_allocation_type: heap_buffers
#memtable_flush_writers: 2
# memtable_heap_space_in_mb: 2048
# memtable_offheap_space_in_mb: 2048
{code}
I cannot see any IO Pressure on the System during the whole bootstrap Process:
{code:java}
-dsk/total- ---system-- total-cpu-usage --io/total-
 read  writ| int   csw |usr sys idl wai hiq siq| read  writ
 200k 4458k|  23k   11k| 59   1  40   0   0   0|31.7  14.0
   0  3123B|1509   214 |  6   0  94   0   0   0|   0  0.50
   0 0 |2312   203 |  6   0  94   0   0   0|   0 0
   0   121k|1259   198 |  6   0  94   0   0   0|   0  1.20
   0    37k|1240   184 |  6   0  94   0   0   0|   0  2.20
   0 0 |1240   175 |  6   0  94   0   0   0|   0 0
   0 0 |1218   153 |  6   0  94   0   0   0|   0 0
   0    21k|1198   141 |  6   0  94   0   0   0|   0  1.40
   0 0 |1188   122 |  6   0  94   0   0   0|   0 0
   0 0 |1176   121 |  6   0  94   0   0   0|   0 0
   0   307B|1165   120 |  6   0  94   0   0   0|   0  0.40
   0 0 |1166   116 |  6   0  94   0   0   0|   0 0
   0 0 |1169   114 |  6   0  94   0   0   0|   0 0
  20k 1382B|  20k 1648 | 58   0  42   0   0   0|1.50  0.50
 248k 5055k|  40k   27k| 96   1   3   0   0   0|37.1  18.3
 232k 2647k|  35k   29k| 98   1   1   0   0   0|33.3  7.20
 894k   17M|  80k   83k| 91   4   4   0   0   2| 119  59.8
 304k   19M|  35k 5311 | 95   2   2   0   0   1|40.4  56.1
 342k   18M|  39k 5805 | 96   2   1   0   0   1|43.6  56.2
 334k   18M|  34k 5770 | 96   2   2   0   0   0|42.5  54.2
 290k   19M|  36k 6144 | 96   2   2   0   0   0|38.0  55.1
 813k   23M|  42k 6870 | 94   2   3   0   0   1| 104  62.3
 360k   18M|  35k 5955 | 96   2   2   0   0   0|45.8  51.4
 325k   19M|  36k 6081 | 96   2   2   0   0   0|41.3  52.2
 358k   18M|  36k 6036 | 95   2   3   0   0   0|45.5  50.7
 344k   19M|  35k 6063 | 96   2   2   0   0   0|45.5  52.9
 380k   17M|  36k 5980 | 95   2   3   0   0   0|48.7  46.0
 685k   21M|  39k 6163 | 94   2   4   0   0   1|87.5  57.8
 632k   18M|  34k 5885 | 95   2   3   0   0   0|63.8  53.1
 795k   19M|  34k 5634 | 95   2   2   0   0   0|75.7  53.4
 869k   15M|  40k   13k| 94   2   4   0   0   1|91.6  47.8
 730k   16M|  54k   30k| 93   2   5   0   0   1|81.6  48.3
 651k   15M|  61k   40k| 89   3   7   0   0   1|74.3  47.1
 782k   15M|  78k   76k| 87   4   8   0   0   1|57.6  41.8
1284k   18M|  67k   47k| 94   3   2   0   0   1| 128  58.6
1279k   19M|  40k 5963 | 96   2   2   0   0   0| 107  56.3
1110k   18M|  38k 5986 | 96   2   2   0   0   0| 114  49.2
1286k   21M|  39k 5773 | 96   2   1   0   0   0| 109  58.0
2701k   21M|  50k 6534 | 91   2   5   0   0   1| 282  68.3
1760k   17M|  40k 5498 | 94   2   3   0   0   1| 234  48.3
1295k   18M|  42k 5610 | 95   2   3   0   0   0| 136  53.1
1315k   19M|  44k 5387 | 96   2   2   0   0   0|97.4  55.1
 214k 2818k|7171  6043 | 20   0  79   0   0   0|13.8  7.80
  16k 4864B|1263   200 |  6   0  94   0   0   0|0.50  0.60
   0 0 |1226   166 |  6   0  94   0   0   0|   0 0
   0   449k|1217   162 |  6   0  94   0   0   0|   0  1.80
   0    12k|1213   155 |  6   0  94   0   0   0|   0  0.90
   0 0 |1237   170 |  6   0  94   0   0   0|   0 0
 239k    0 |1305   278 |  6   0  94   0   0   0|8.30 0
   0    16k|1202   147 |  6   0  94   0   0   0|   0  1.30
{code}
I will try again with changed settings nevertheless.



[jira] [Comment Edited] (CASSANDRA-14239) OutOfMemoryError when bootstrapping with less than 100GB RAM

2018-04-10 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433463#comment-16433463
 ] 

Jürgen Albersdorfer edited comment on CASSANDRA-14239 at 4/11/18 6:37 AM:
--

I again had to join a new node - giving it 72 GB of heap - and it again caused an OOM.

I have a GC log this time. To me, this smells strongly like a memory leak.

Throw the attached [^gc.log.0.current.zip] at
[http://gceasy.io|http://gceasy.io/] and you will immediately see what I mean.

This node has a fast 10K spinning disk for the commitlog and a 1 TB SSD for data,
hints and saved_caches. I didn't change:
{code:java}
disk_optimization_strategy: spinning # was wrong for SSD, I will change!
memtable_allocation_type: heap_buffers
#memtable_flush_writers: 2
# memtable_heap_space_in_mb: 2048
# memtable_offheap_space_in_mb: 2048
{code}
I cannot see any IO Pressure on the System during the whole bootstrap Process:
{code:java}
-dsk/total- ---system-- total-cpu-usage --io/total-
 read  writ| int   csw |usr sys idl wai hiq siq| read  writ
 200k 4458k|  23k   11k| 59   1  40   0   0   0|31.7  14.0
   0  3123B|1509   214 |  6   0  94   0   0   0|   0  0.50
   0 0 |2312   203 |  6   0  94   0   0   0|   0 0
   0   121k|1259   198 |  6   0  94   0   0   0|   0  1.20
   0    37k|1240   184 |  6   0  94   0   0   0|   0  2.20
   0 0 |1240   175 |  6   0  94   0   0   0|   0 0
   0 0 |1218   153 |  6   0  94   0   0   0|   0 0
   0    21k|1198   141 |  6   0  94   0   0   0|   0  1.40
   0 0 |1188   122 |  6   0  94   0   0   0|   0 0
   0 0 |1176   121 |  6   0  94   0   0   0|   0 0
   0   307B|1165   120 |  6   0  94   0   0   0|   0  0.40
   0 0 |1166   116 |  6   0  94   0   0   0|   0 0
   0 0 |1169   114 |  6   0  94   0   0   0|   0 0
  20k 1382B|  20k 1648 | 58   0  42   0   0   0|1.50  0.50
 248k 5055k|  40k   27k| 96   1   3   0   0   0|37.1  18.3
 232k 2647k|  35k   29k| 98   1   1   0   0   0|33.3  7.20
 894k   17M|  80k   83k| 91   4   4   0   0   2| 119  59.8
 304k   19M|  35k 5311 | 95   2   2   0   0   1|40.4  56.1
 342k   18M|  39k 5805 | 96   2   1   0   0   1|43.6  56.2
 334k   18M|  34k 5770 | 96   2   2   0   0   0|42.5  54.2
 290k   19M|  36k 6144 | 96   2   2   0   0   0|38.0  55.1
 813k   23M|  42k 6870 | 94   2   3   0   0   1| 104  62.3
 360k   18M|  35k 5955 | 96   2   2   0   0   0|45.8  51.4
 325k   19M|  36k 6081 | 96   2   2   0   0   0|41.3  52.2
 358k   18M|  36k 6036 | 95   2   3   0   0   0|45.5  50.7
 344k   19M|  35k 6063 | 96   2   2   0   0   0|45.5  52.9
 380k   17M|  36k 5980 | 95   2   3   0   0   0|48.7  46.0
 685k   21M|  39k 6163 | 94   2   4   0   0   1|87.5  57.8
 632k   18M|  34k 5885 | 95   2   3   0   0   0|63.8  53.1
 795k   19M|  34k 5634 | 95   2   2   0   0   0|75.7  53.4
 869k   15M|  40k   13k| 94   2   4   0   0   1|91.6  47.8
 730k   16M|  54k   30k| 93   2   5   0   0   1|81.6  48.3
 651k   15M|  61k   40k| 89   3   7   0   0   1|74.3  47.1
 782k   15M|  78k   76k| 87   4   8   0   0   1|57.6  41.8
1284k   18M|  67k   47k| 94   3   2   0   0   1| 128  58.6
1279k   19M|  40k 5963 | 96   2   2   0   0   0| 107  56.3
1110k   18M|  38k 5986 | 96   2   2   0   0   0| 114  49.2
1286k   21M|  39k 5773 | 96   2   1   0   0   0| 109  58.0
2701k   21M|  50k 6534 | 91   2   5   0   0   1| 282  68.3
1760k   17M|  40k 5498 | 94   2   3   0   0   1| 234  48.3
1295k   18M|  42k 5610 | 95   2   3   0   0   0| 136  53.1
1315k   19M|  44k 5387 | 96   2   2   0   0   0|97.4  55.1
 214k 2818k|7171  6043 | 20   0  79   0   0   0|13.8  7.80
  16k 4864B|1263   200 |  6   0  94   0   0   0|0.50  0.60
   0 0 |1226   166 |  6   0  94   0   0   0|   0 0
   0   449k|1217   162 |  6   0  94   0   0   0|   0  1.80
   0    12k|1213   155 |  6   0  94   0   0   0|   0  0.90
   0 0 |1237   170 |  6   0  94   0   0   0|   0 0
 239k    0 |1305   278 |  6   0  94   0   0   0|8.30 0
   0    16k|1202   147 |  6   0  94   0   0   0|   0  1.30
{code}
I will try again with some other settings nevertheless.

GC was G1GC with the following Settings:
{code:java}
-XX:+UseG1GC
-XX:MaxGCPauseMillis=500
-XX:ParallelGCThreads=10  # have 16 logical CPUs
-XX:ConcGCThreads=5
-XX:+UseStringDeduplication
-XX:+UseCompressedClassPointers
-XX:+UseCompressedOops
-XX:+ExplicitGCInvokesConcurrent
-XX:MetaspaceSize=500M
-XX:+ParallelRefProcEnabled
-XX:SoftRefLRUPolicyMSPerMB=100
-XX:+UnlockDiagnosticVMOptions
-XX:+UnlockExperimentalVMOptions

{code}



[jira] [Commented] (CASSANDRA-14167) IndexOutOfBoundsException when selecting column counter and consistency quorum

2018-04-10 Thread Orga Shih (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433473#comment-16433473
 ] 

Orga Shih commented on CASSANDRA-14167:
---

Hi, I got the same WARN log with Cassandra 3.11.1; it also comes with another
ERROR log which seems to be related.
{code:java}
ERROR [ReadRepairStage:112269] 2018-04-11 05:44:46,563 CassandraDaemon.java:228 
- Exception in thread Thread[ReadRepairStage:112269,5,main]
java.lang.IndexOutOfBoundsException: null
at java.nio.Buffer.checkIndex(Buffer.java:546) ~[na:1.8.0_151]
at java.nio.HeapByteBuffer.getShort(HeapByteBuffer.java:314) 
~[na:1.8.0_151]
at 
org.apache.cassandra.db.context.CounterContext.headerLength(CounterContext.java:173)
 ~[apache-cassandra-3.11.1.jar:3.11.1]
at 
org.apache.cassandra.db.context.CounterContext.updateDigest(CounterContext.java:696)
 ~[apache-cassandra-3.11.1.jar:3.11.1]
at 
org.apache.cassandra.db.rows.AbstractCell.digest(AbstractCell.java:126) 
~[apache-cassandra-3.11.1.jar:3.11.1]
at org.apache.cassandra.db.rows.AbstractRow.digest(AbstractRow.java:73) 
~[apache-cassandra-3.11.1.jar:3.11.1]
at 
org.apache.cassandra.db.rows.UnfilteredRowIterators.digest(UnfilteredRowIterators.java:181)
 ~[apache-cassandra-3.11.1.jar:3.11.1]
at 
org.apache.cassandra.db.partitions.UnfilteredPartitionIterators.digest(UnfilteredPartitionIterators.java:263)
 ~[apache-cassandra-3.11.1.jar:3.11.1]
at 
org.apache.cassandra.db.ReadResponse.makeDigest(ReadResponse.java:120) 
~[apache-cassandra-3.11.1.jar:3.11.1]
at 
org.apache.cassandra.db.ReadResponse$DataResponse.digest(ReadResponse.java:225) 
~[apache-cassandra-3.11.1.jar:3.11.1]
at 
org.apache.cassandra.service.DigestResolver.compareResponses(DigestResolver.java:87)
 ~[apache-cassandra-3.11.1.jar:3.11.1]
at 
org.apache.cassandra.service.ReadCallback$AsyncRepairRunner.run(ReadCallback.java:233)
 ~[apache-cassandra-3.11.1.jar:3.11.1]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
~[na:1.8.0_151]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
~[na:1.8.0_151]
at 
org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81)
 ~[apache-cassandra-3.11.1.jar:3.11.1]
at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_151]
{code}
The environment Info:
 * Cluster nodes: 3
 * RF: 3
 * Some counter tables
 * Consistency Level
 ** Read: ONE
 ** Write: LOCAL_QUORUM

The problem is that so far I can't find the exact query to reproduce this issue,
and I didn't get any error at the application level.
Is there any way to locate the query? Could it be triggered by Cassandra-internal
jobs or by a write action?

Any suggestion is appreciated.
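The failure mode in the trace above can be shown in isolation. The trace runs through {{java.nio.Buffer.checkIndex}} from {{HeapByteBuffer.getShort}} inside {{CounterContext.headerLength}}; the sketch below (illustrative only, not Cassandra code) mirrors just that first step - reading the leading short of a counter context buffer - and shows that an empty or truncated buffer produces exactly this {{IndexOutOfBoundsException}}:

```java
import java.nio.ByteBuffer;

public class TruncatedHeader {
    // Illustrative stand-in for the first step of
    // CounterContext.headerLength: read the leading short of the counter
    // context. ByteBuffer.getShort(index) throws IndexOutOfBoundsException
    // when fewer than two bytes remain at that index.
    static short leadingShort(ByteBuffer context) {
        return context.getShort(context.position());
    }

    public static void main(String[] args) {
        try {
            leadingShort(ByteBuffer.allocate(0)); // empty counter context
        } catch (IndexOutOfBoundsException e) {
            System.out.println("IndexOutOfBoundsException, as in the trace");
        }
    }
}
```

This suggests the digest comparison is being fed a counter cell whose context buffer is empty or truncated, rather than a problem with the query itself.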

 

> IndexOutOfBoundsException when selecting column counter and consistency quorum
> --
>
> Key: CASSANDRA-14167
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14167
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.11.1
> Ubuntu 14-04
>Reporter: Tristan Last
>Priority: Major
>
> This morning I upgraded my cluster from 3.11.0 to 3.11.1 and it appears when 
> I perform a query on a counter specifying the column name cassandra throws 
> the following exception:
> {code:java}
> WARN [ReadStage-1] 2018-01-15 10:58:30,121 
> AbstractLocalAwareExecutorService.java:167 - Uncaught exception on thread 
> Thread[ReadStage-1,5,main]: {}
> java.lang.IndexOutOfBoundsException: null
> java.nio.Buffer.checkIndex(Buffer.java:546) ~[na:1.8.0_144]
> java.nio.HeapByteBuffer.getShort(HeapByteBuffer.java:314) ~[na:1.8.0_144]
> org.apache.cassandra.db.context.CounterContext.headerLength(CounterContext.java:173)
>  ~[apache-cassandra-3.11.1.jar:3.11.1]
> org.apache.cassandra.db.context.CounterContext.updateDigest(CounterContext.java:696)
>  ~[apache-cassandra-3.11.1.jar:3.11.1]
> org.apache.cassandra.db.rows.AbstractCell.digest(AbstractCell.java:126) 
> ~[apache-cassandra-3.11.1.jar:3.11.1]
> org.apache.cassandra.db.rows.AbstractRow.digest(AbstractRow.java:73) 
> ~[apache-cassandra-3.11.1.jar:3.11.1]
> org.apache.cassandra.db.rows.UnfilteredRowIterators.digest(UnfilteredRowIterators.java:181)
>  ~[apache-cassandra-3.11.1.jar:3.11.1]
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators.digest(UnfilteredPartitionIterators.java:263)
>  ~[apache-cassandra-3.11.1.jar:3.11.1]
> org.apache.cassandra.db.ReadResponse.makeDigest(ReadResponse.java:120) 
> ~[apache-cassandra-3.11.1.jar:3.11.1]
> org.apache.cassandra.db.ReadResponse.createDigestResponse(ReadResponse.java:87)
>  ~[apache-cassandra-3.11.1.jar:3.11.1]
> org.apache.cassandra.db.ReadCommand.createRespo

[jira] [Commented] (CASSANDRA-14350) RHEL 7.4 compatibilty with: Apache cassandra 2.x and 3.x version Apache Zookeeper 3.x version Apache spark 1.x and 2.x version spark cassandra connector 1.x an

2018-04-10 Thread Michael Burman (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16431810#comment-16431810
 ] 

Michael Burman commented on CASSANDRA-14350:


This isn't the place for support questions; please use the user mailing list for
such purposes. If you have questions regarding your RHEL subscription's
compatibility, you should probably create a ticket at access.redhat.com (which is
unrelated to this project).

> RHEL 7.4 compatibilty with:  Apache cassandra 2.x and 3.x version  Apache 
> Zookeeper 3.x version  Apache spark 1.x and 2.x version  spark cassandra 
> connector 1.x and 2.x version
> 
>
> Key: CASSANDRA-14350
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14350
> Project: Cassandra
>  Issue Type: Task
>Reporter: Apoorva Maheshwari
>Priority: Critical
>
> RHEL 7.4 compatibilty with:
> Apache cassandra 2.x and 3.x version
> Apache Zookeeper 3.x version
> Apache spark 1.x and 2.x version
> spark cassandra connector 1.x and 2.x version  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14350) RHEL 7.4 compatibilty with: Apache cassandra 2.x and 3.x version Apache Zookeeper 3.x version Apache spark 1.x and 2.x version spark cassandra connector 1.x an

2018-04-10 Thread Dinesh Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16431820#comment-16431820
 ] 

Dinesh Joshi commented on CASSANDRA-14350:
--

Hi [~Apoorva17], if you have questions or comments regarding Apache Cassandra's
compatibility with specific operating systems, either email the user list or ask
on IRC. Here's how: http://cassandra.apache.org/community/ Please do not
open JIRAs.

> RHEL 7.4 compatibilty with:  Apache cassandra 2.x and 3.x version  Apache 
> Zookeeper 3.x version  Apache spark 1.x and 2.x version  spark cassandra 
> connector 1.x and 2.x version
> 
>
> Key: CASSANDRA-14350
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14350
> Project: Cassandra
>  Issue Type: Task
>Reporter: Apoorva Maheshwari
>Priority: Critical
>
> RHEL 7.4 compatibilty with:
> Apache cassandra 2.x and 3.x version
> Apache Zookeeper 3.x version
> Apache spark 1.x and 2.x version
> spark cassandra connector 1.x and 2.x version  






[jira] [Resolved] (CASSANDRA-14350) RHEL 7.4 compatibilty with: Apache cassandra 2.x and 3.x version Apache Zookeeper 3.x version Apache spark 1.x and 2.x version spark cassandra connector 1.x and

2018-04-10 Thread Dinesh Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Joshi resolved CASSANDRA-14350.
--
Resolution: Invalid

> RHEL 7.4 compatibilty with:  Apache cassandra 2.x and 3.x version  Apache 
> Zookeeper 3.x version  Apache spark 1.x and 2.x version  spark cassandra 
> connector 1.x and 2.x version
> 
>
> Key: CASSANDRA-14350
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14350
> Project: Cassandra
>  Issue Type: Task
>Reporter: Apoorva Maheshwari
>Priority: Critical
>
> RHEL 7.4 compatibilty with:
> Apache cassandra 2.x and 3.x version
> Apache Zookeeper 3.x version
> Apache spark 1.x and 2.x version
> spark cassandra connector 1.x and 2.x version  






[jira] [Commented] (CASSANDRA-10496) Make DTCS/TWCS split partitions based on time during compaction

2018-04-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16431831#comment-16431831
 ] 

ASF GitHub Bot commented on CASSANDRA-10496:


Github user iksaif commented on the issue:

https://github.com/apache/cassandra/pull/147
  
I likely won't have time to finish this, and 
`unsafe_aggressive_sstable_expiration` is good enough for our use case now.


> Make DTCS/TWCS split partitions based on time during compaction
> ---
>
> Key: CASSANDRA-10496
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10496
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Priority: Major
>  Labels: dtcs
> Fix For: 4.x
>
>
> To avoid getting old data in new time windows with DTCS (or related, like 
> [TWCS|CASSANDRA-9666]), we need to split out old data into its own sstable 
> during compaction.
> My initial idea is to just create two sstables, when we create the compaction 
> task we state the start and end times for the window, and any data older than 
> the window will be put in its own sstable.
> By creating a single sstable with old data, we will incrementally get the 
> windows correct - say we have an sstable with these timestamps:
> {{[100, 99, 98, 97, 75, 50, 10]}}
> and we are compacting in window {{[100, 80]}} - we would create two sstables:
> {{[100, 99, 98, 97]}}, {{[75, 50, 10]}}, and the first window is now 
> 'correct'. The next compaction would compact in window {{[80, 60]}} and 
> create sstables {{[75]}}, {{[50, 10]}} etc.
> We will probably also want to base the windows on the newest data in the 
> sstables so that we actually have older data than the window.






[jira] [Commented] (CASSANDRA-14310) Don't allow nodetool refresh before cfs is opened

2018-04-10 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16431830#comment-16431830
 ] 

Marcus Eriksson commented on CASSANDRA-14310:
-

bq.  is it necessary for ColumnFamilyStore.loadNewSSTables to be synchronized 
on ColumnFamilyStore.class?
Hmm, no, it looks like it isn't - nice catch. I pushed a commit that removes it.

I'm keeping the initialized check - I don't think we should allow refresh before
the node has fully started.

tests: https://circleci.com/gh/krummas/cassandra/tree/marcuse%2F14310
and a dtest: https://github.com/krummas/cassandra-dtest/commits/marcuse/14310
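The initialized check being kept boils down to a guard like the following - a minimal sketch under assumed names (this is not the actual Cassandra code): refresh is rejected until the store has finished opening its sstables, which avoids the startup deadlock the ticket describes.

```java
public class RefreshGuard {
    // Flipped once startup has finished opening all sstables; volatile so
    // a refresh on another thread sees the update without locking.
    private volatile boolean initialized = false;

    public void markInitialized() { initialized = true; }

    // Stand-in for loadNewSSTables(): refuse to run before startup is done.
    public void loadNewSSTables() {
        if (!initialized)
            throw new IllegalStateException(
                "refresh called before the node has fully started");
        // ... scan the data directory and open any new sstables ...
    }
}
```

The point of the guard is that the pre-startup call fails fast with a clear error instead of racing the sstable-opening code.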

> Don't allow nodetool refresh before cfs is opened
> -
>
> Key: CASSANDRA-14310
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14310
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Major
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> There is a potential deadlock in during startup if nodetool refresh is called 
> while sstables are being opened. We should not allow refresh to be called 
> before everything is initialized.






[jira] [Created] (CASSANDRA-14373) Allow using custom script for chronicle queue BinLog archival

2018-04-10 Thread Stefan Podkowinski (JIRA)
Stefan Podkowinski created CASSANDRA-14373:
--

 Summary: Allow using custom script for chronicle queue BinLog 
archival
 Key: CASSANDRA-14373
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14373
 Project: Cassandra
  Issue Type: Improvement
Reporter: Stefan Podkowinski
 Fix For: 4.x


It would be nice to allow the user to configure an archival script that will be 
executed in {{BinLog.onReleased(cycle, file)}} for every deleted bin log, just 
as we do in {{CommitLogArchiver}}. The script should be able to copy the 
released file to an external location or do whatever the author has in mind. 
Deleting the log file should be delegated to the script as well.

See CASSANDRA-13983, CASSANDRA-12151 for use cases.
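A hook of that shape could look roughly like the following. This is a hypothetical sketch of the proposal, not an existing Cassandra API - the class name, the {{%path}} placeholder, and the constructor are all made up for illustration; only {{BinLog.onReleased(cycle, file)}} comes from the ticket text:

```java
import java.io.File;
import java.io.IOException;

public class BinLogArchiver {
    // User-configured command; %path is a made-up placeholder substituted
    // with the released segment's path, as CommitLogArchiver does with its
    // own placeholders for commit log segments.
    private final String archiveCommand;

    public BinLogArchiver(String archiveCommand) {
        this.archiveCommand = archiveCommand;
    }

    // Would be invoked from BinLog.onReleased(cycle, file).
    public void onReleased(int cycle, File file)
            throws IOException, InterruptedException {
        if (archiveCommand == null || archiveCommand.isEmpty()) {
            file.delete(); // no archiver configured: keep current behaviour
            return;
        }
        String cmd = archiveCommand.replace("%path", file.getAbsolutePath());
        // The script copies the segment elsewhere and is responsible for
        // deleting it afterwards, as the proposal delegates deletion too.
        new ProcessBuilder("/bin/sh", "-c", cmd).inheritIO().start().waitFor();
    }
}
```

Delegating deletion to the script means a failed archive run leaves the segment on disk for retry instead of silently losing it.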

 






[11/15] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0

2018-04-10 Thread blerer
Merge branch cassandra-2.2 into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/73ca0e1e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/73ca0e1e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/73ca0e1e

Branch: refs/heads/cassandra-3.11
Commit: 73ca0e1e131bdf14177c026a60f19e33c379ffd4
Parents: 41f3b96 b3ac793
Author: Benjamin Lerer 
Authored: Tue Apr 10 09:54:27 2018 +0200
Committer: Benjamin Lerer 
Committed: Tue Apr 10 09:57:43 2018 +0200

--
 CHANGES.txt |  1 +
 .../compress/CompressedRandomAccessReader.java  | 58 ++--
 2 files changed, 30 insertions(+), 29 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/73ca0e1e/CHANGES.txt
--
diff --cc CHANGES.txt
index 7917712,5221b1e..1564fa3
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,52 -1,12 +1,53 @@@
 -2.2.13
 - * CQL fromJson(null) throws NullPointerException (CASSANDRA-13891)
 - * Fix query pager DEBUG log leak causing hit in paged reads throughput 
(CASSANDRA-14318)
 +3.0.17
 + * Handle incompletely written hint descriptors during startup 
(CASSANDRA-14080)
 + * Handle repeat open bound from SRP in read repair (CASSANDRA-14330)
 + * Use zero as default score in DynamicEndpointSnitch (CASSANDRA-14252)
 + * Respect max hint window when hinting for LWT (CASSANDRA-14215)
 + * Adding missing WriteType enum values to v3, v4, and v5 spec 
(CASSANDRA-13697)
 + * Don't regenerate bloomfilter and summaries on startup (CASSANDRA-11163)
 + * Fix NPE when performing comparison against a null frozen in LWT 
(CASSANDRA-14087)
 + * Log when SSTables are deleted (CASSANDRA-14302)
 + * Fix batch commitlog sync regression (CASSANDRA-14292)
 + * Write to pending endpoint when view replica is also base replica 
(CASSANDRA-14251)
 + * Chain commit log marker potential performance regression in batch commit 
mode (CASSANDRA-14194)
 + * Fully utilise specified compaction threads (CASSANDRA-14210)
 + * Pre-create deletion log records to finish compactions quicker 
(CASSANDRA-12763)
 +Merged from 2.2:
   * Backport circleci yaml (CASSANDRA-14240)
  Merged from 2.1:
+  * Check checksum before decompressing data (CASSANDRA-14284)
   * CVE-2017-5929 Security vulnerability in Logback warning in NEWS.txt 
(CASSANDRA-14183)
  
 -2.2.12
 +
 +3.0.16
 + * Fix unit test failures in ViewComplexTest (CASSANDRA-14219)
 + * Add MinGW uname check to start scripts (CASSANDRA-12940)
 + * Protect against overflow of local expiration time (CASSANDRA-14092)
 + * Use the correct digest file and reload sstable metadata in nodetool verify 
(CASSANDRA-14217)
 + * Handle failure when mutating repaired status in Verifier (CASSANDRA-13933)
 + * Close socket on error during connect on OutboundTcpConnection 
(CASSANDRA-9630)
 + * Set encoding for javadoc generation (CASSANDRA-14154)
 + * Fix index target computation for dense composite tables with dropped 
compact storage (CASSANDRA-14104)
 + * Improve commit log chain marker updating (CASSANDRA-14108)
 + * Extra range tombstone bound creates double rows (CASSANDRA-14008)
 + * Fix SStable ordering by max timestamp in SinglePartitionReadCommand 
(CASSANDRA-14010)
 + * Accept role names containing forward-slash (CASSANDRA-14088)
 + * Optimize CRC check chance probability calculations (CASSANDRA-14094)
 + * Fix cleanup on keyspace with no replicas (CASSANDRA-13526)
 + * Fix updating base table rows with TTL not removing materialized view 
entries (CASSANDRA-14071)
 + * Reduce garbage created by DynamicSnitch (CASSANDRA-14091)
 + * More frequent commitlog chained markers (CASSANDRA-13987)
 + * Fix serialized size of DataLimits (CASSANDRA-14057)
 + * Add flag to allow dropping oversized read repair mutations 
(CASSANDRA-13975)
 + * Fix SSTableLoader logger message (CASSANDRA-14003)
 + * Fix repair race that caused gossip to block (CASSANDRA-13849)
 + * Tracing interferes with digest requests when using RandomPartitioner 
(CASSANDRA-13964)
 + * Add flag to disable materialized views, and warnings on creation 
(CASSANDRA-13959)
 + * Don't let user drop or generally break tables in system_distributed 
(CASSANDRA-13813)
 + * Provide a JMX call to sync schema with local storage (CASSANDRA-13954)
 + * Mishandling of cells for removed/dropped columns when reading legacy files 
(CASSANDRA-13939)
 + * Deserialise sstable metadata in nodetool verify (CASSANDRA-13922)
 +Merged from 2.2:
   * Fix the inspectJvmOptions startup check (CASSANDRA-14112)
   * Fix race that prevents submitting compaction for a table when executor is 
full (CASSANDRA-13801)
   * Rely on the JVM to handle OutOfMemoryErrors (CASSANDRA-13006)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/73ca0e1e/src/java/

[08/15] cassandra git commit: Merge branch cassandra-2.1 into cassandra-2.2

2018-04-10 Thread blerer
Merge branch cassandra-2.1 into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b3ac7937
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b3ac7937
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b3ac7937

Branch: refs/heads/cassandra-3.0
Commit: b3ac7937edce41a341d1d01c7f3201592e1caa8f
Parents: 2e5e11d 34a1d5d
Author: Benjamin Lerer 
Authored: Tue Apr 10 09:51:02 2018 +0200
Committer: Benjamin Lerer 
Committed: Tue Apr 10 09:52:18 2018 +0200

--
 CHANGES.txt |  1 +
 .../compress/CompressedRandomAccessReader.java  | 52 ++--
 2 files changed, 27 insertions(+), 26 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b3ac7937/CHANGES.txt
--
diff --cc CHANGES.txt
index 527975c,aeb3009..5221b1e
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,16 -1,8 +1,17 @@@
 -2.1.21
 +2.2.13
 + * CQL fromJson(null) throws NullPointerException (CASSANDRA-13891)
 + * Fix query pager DEBUG log leak causing hit in paged reads throughput 
(CASSANDRA-14318)
 + * Backport circleci yaml (CASSANDRA-14240)
 +Merged from 2.1:
+  * Check checksum before decompressing data (CASSANDRA-14284)
   * CVE-2017-5929 Security vulnerability in Logback warning in NEWS.txt 
(CASSANDRA-14183)
  
 -2.1.20
 +2.2.12
 + * Fix the inspectJvmOptions startup check (CASSANDRA-14112)
 + * Fix race that prevents submitting compaction for a table when executor is 
full (CASSANDRA-13801)
 + * Rely on the JVM to handle OutOfMemoryErrors (CASSANDRA-13006)
 + * Grab refs during scrub/index redistribution/cleanup (CASSANDRA-13873)
 +Merged from 2.1:
   * Protect against overflow of local expiration time (CASSANDRA-14092)
   * More PEP8 compliance for cqlsh (CASSANDRA-14021)
   * RPM package spec: fix permissions for installed jars and config files 
(CASSANDRA-14181)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b3ac7937/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
--
diff --cc 
src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
index ccfa5e7,fe90cc9..0fc96ed
--- 
a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
+++ 
b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
@@@ -99,54 -77,7 +99,54 @@@ public class CompressedRandomAccessRead
  {
  try
  {
 -decompressChunk(metadata.chunkFor(current));
 +long position = current();
 +assert position < metadata.dataLength;
 +
 +CompressionMetadata.Chunk chunk = metadata.chunkFor(position);
 +
 +if (compressed.capacity() < chunk.length)
 +compressed = allocateBuffer(chunk.length, 
metadata.compressor().preferredBufferType());
 +else
 +compressed.clear();
 +compressed.limit(chunk.length);
 +
 +if (channel.read(compressed, chunk.offset) != chunk.length)
 +throw new CorruptBlockException(getPath(), chunk);
 +compressed.flip();
 +buffer.clear();
 +
++if (metadata.parameters.getCrcCheckChance() > 
ThreadLocalRandom.current().nextDouble())
++{
++FBUtilities.directCheckSum(checksum, compressed);
++
++if (checksum(chunk) != (int) checksum.getValue())
++throw new CorruptBlockException(getPath(), chunk);
++
++// reset checksum object back to the original (blank) state
++checksum.reset();
++compressed.rewind();
++}
++
 +try
 +{
 +metadata.compressor().uncompress(compressed, buffer);
 +}
 +catch (IOException e)
 +{
 +throw new CorruptBlockException(getPath(), chunk);
 +}
 +finally
 +{
 +buffer.flip();
 +}
 +
- if (metadata.parameters.getCrcCheckChance() > 
ThreadLocalRandom.current().nextDouble())
- {
- compressed.rewind();
- FBUtilities.directCheckSum(checksum, compressed);
- 
- if (checksum(chunk) != (int) checksum.getValue())
- throw new CorruptBlockException(getPath(), chunk);
- 
- // reset checksum object back to the original (blank) state
- checksum.reset();
- }
- 
 +// buffer offset is always aligned
 +bufferOffset = position & ~(buffer.capacity() - 1);
 +buffer.position((int) (position - bufferOffset));
 +// the leng
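The merged reader above computes {{bufferOffset = position & ~(buffer.capacity() - 1)}}, relying on the chunk length being a power of two (asserted elsewhere in the reader). A standalone sketch of that masking arithmetic, with made-up values:

```java
// Standalone illustration of the power-of-two alignment trick used above;
// the values are made up for demonstration.
public class AlignSketch {
    public static void main(String[] args) {
        int chunkLength = 65536;          // must be a power of two
        long position = 200_000;          // arbitrary uncompressed offset
        // round position down to the start of its chunk: clearing the low
        // bits is equivalent to (position / chunkLength) * chunkLength
        long bufferOffset = position & ~(chunkLength - 1);
        int inBuffer = (int) (position - bufferOffset);
        System.out.println(bufferOffset); // 196608 (= 3 * 65536)
        System.out.println(inBuffer);     // 3392
    }
}
```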

[12/15] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0

2018-04-10 Thread blerer
Merge branch cassandra-2.2 into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/73ca0e1e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/73ca0e1e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/73ca0e1e

Branch: refs/heads/cassandra-3.0
Commit: 73ca0e1e131bdf14177c026a60f19e33c379ffd4
Parents: 41f3b96 b3ac793
Author: Benjamin Lerer 
Authored: Tue Apr 10 09:54:27 2018 +0200
Committer: Benjamin Lerer 
Committed: Tue Apr 10 09:57:43 2018 +0200

--
 CHANGES.txt |  1 +
 .../compress/CompressedRandomAccessReader.java  | 58 ++--
 2 files changed, 30 insertions(+), 29 deletions(-)
--



[07/15] cassandra git commit: Merge branch cassandra-2.1 into cassandra-2.2

2018-04-10 Thread blerer
Merge branch cassandra-2.1 into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b3ac7937
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b3ac7937
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b3ac7937

Branch: refs/heads/cassandra-2.2
Commit: b3ac7937edce41a341d1d01c7f3201592e1caa8f
Parents: 2e5e11d 34a1d5d
Author: Benjamin Lerer 
Authored: Tue Apr 10 09:51:02 2018 +0200
Committer: Benjamin Lerer 
Committed: Tue Apr 10 09:52:18 2018 +0200

--
 CHANGES.txt |  1 +
 .../compress/CompressedRandomAccessReader.java  | 52 ++--
 2 files changed, 27 insertions(+), 26 deletions(-)
--



[10/15] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0

2018-04-10 Thread blerer
Merge branch cassandra-2.2 into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/73ca0e1e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/73ca0e1e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/73ca0e1e

Branch: refs/heads/trunk
Commit: 73ca0e1e131bdf14177c026a60f19e33c379ffd4
Parents: 41f3b96 b3ac793
Author: Benjamin Lerer 
Authored: Tue Apr 10 09:54:27 2018 +0200
Committer: Benjamin Lerer 
Committed: Tue Apr 10 09:57:43 2018 +0200

--
 CHANGES.txt |  1 +
 .../compress/CompressedRandomAccessReader.java  | 58 ++--
 2 files changed, 30 insertions(+), 29 deletions(-)
--



[13/15] cassandra git commit: Merge branch cassandra-3.0 into cassandra-3.11

2018-04-10 Thread blerer
Merge branch cassandra-3.0 into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c1020d62
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c1020d62
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c1020d62

Branch: refs/heads/cassandra-3.11
Commit: c1020d62ed05f7fa5735af6f09915cdc6850dbeb
Parents: b3e9908 73ca0e1
Author: Benjamin Lerer 
Authored: Tue Apr 10 10:02:36 2018 +0200
Committer: Benjamin Lerer 
Committed: Tue Apr 10 10:03:32 2018 +0200

--
 CHANGES.txt |  1 +
 .../io/util/CompressedChunkReader.java  | 65 +++-
 2 files changed, 38 insertions(+), 28 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c1020d62/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c1020d62/src/java/org/apache/cassandra/io/util/CompressedChunkReader.java
--
diff --cc src/java/org/apache/cassandra/io/util/CompressedChunkReader.java
index 0919c29,000..177afb0
mode 100644,00..100644
--- a/src/java/org/apache/cassandra/io/util/CompressedChunkReader.java
+++ b/src/java/org/apache/cassandra/io/util/CompressedChunkReader.java
@@@ -1,229 -1,0 +1,238 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +
 +package org.apache.cassandra.io.util;
 +
 +import java.io.IOException;
 +import java.nio.ByteBuffer;
 +import java.util.concurrent.ThreadLocalRandom;
 +
 +import com.google.common.annotations.VisibleForTesting;
 +import com.google.common.primitives.Ints;
 +
 +import org.apache.cassandra.io.compress.BufferType;
 +import org.apache.cassandra.io.compress.CompressionMetadata;
 +import org.apache.cassandra.io.compress.CorruptBlockException;
 +import org.apache.cassandra.io.sstable.CorruptSSTableException;
 +
 +public abstract class CompressedChunkReader extends AbstractReaderFileProxy 
implements ChunkReader
 +{
 +final CompressionMetadata metadata;
 +
 +protected CompressedChunkReader(ChannelProxy channel, CompressionMetadata 
metadata)
 +{
 +super(channel, metadata.dataLength);
 +this.metadata = metadata;
 +assert Integer.bitCount(metadata.chunkLength()) == 1; //must be a 
power of two
 +}
 +
 +@VisibleForTesting
 +public double getCrcCheckChance()
 +{
 +return metadata.parameters.getCrcCheckChance();
 +}
 +
++protected final boolean shouldCheckCrc()
++{
++return getCrcCheckChance() >= 1d || getCrcCheckChance() > 
ThreadLocalRandom.current().nextDouble();
++}
++
 +@Override
 +public String toString()
 +{
 +return String.format("CompressedChunkReader.%s(%s - %s, chunk length 
%d, data length %d)",
 + getClass().getSimpleName(),
 + channel.filePath(),
 + metadata.compressor().getClass().getSimpleName(),
 + metadata.chunkLength(),
 + metadata.dataLength);
 +}
 +
 +@Override
 +public int chunkSize()
 +{
 +return metadata.chunkLength();
 +}
 +
 +@Override
 +public BufferType preferredBufferType()
 +{
 +return metadata.compressor().preferredBufferType();
 +}
 +
 +@Override
 +public Rebufferer instantiateRebufferer()
 +{
 +return new BufferManagingRebufferer.Aligned(this);
 +}
 +
 +public static class Standard extends CompressedChunkReader
 +{
 +// we read the raw compressed bytes into this buffer, then 
uncompressed them into the provided one.
 +private final ThreadLocal<ByteBuffer> compressedHolder;
 +
 +public Standard(ChannelProxy channel, CompressionMetadata metadata)
 +{
 +super(channel, metadata);
 +compressedHolder = ThreadLocal.withInitial(this::allocateBuffer);
 +}
 +
 +public ByteBuff
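The new {{shouldCheckCrc()}} helper above turns the configured {{crc_check_chance}} into a per-read decision; a standalone sketch of the same sampling logic (outside Cassandra, illustrative only):

```java
import java.util.concurrent.ThreadLocalRandom;

// Standalone illustration of the crc_check_chance sampling logic shown in the
// diff above; not the actual Cassandra class.
public class CrcChanceSketch {
    static boolean shouldCheckCrc(double crcCheckChance) {
        // chance >= 1.0 always checks; otherwise check probabilistically
        return crcCheckChance >= 1d
            || crcCheckChance > ThreadLocalRandom.current().nextDouble();
    }

    public static void main(String[] args) {
        // 1.0 must always check; 0.0 must never, since nextDouble() is in [0, 1)
        System.out.println(shouldCheckCrc(1.0));   // true
        boolean anyAtZero = false;
        for (int i = 0; i < 10_000; i++)
            anyAtZero |= shouldCheckCrc(0.0);
        System.out.println(anyAtZero);             // false
    }
}
```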

[02/15] cassandra git commit: Check checksum before decompressing data

2018-04-10 Thread blerer
Check checksum before decompressing data

patch by Benjamin Lerer; reviewed by Branimir Lambov and  Gil Tene for 
CASSANDRA-14284


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/34a1d5da
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/34a1d5da
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/34a1d5da

Branch: refs/heads/cassandra-2.2
Commit: 34a1d5da58fb8edcad39633084541bb4162f5ede
Parents: 19d26bc
Author: Benjamin Lerer 
Authored: Tue Apr 10 09:42:52 2018 +0200
Committer: Benjamin Lerer 
Committed: Tue Apr 10 09:42:52 2018 +0200

--
 CHANGES.txt |  1 +
 .../compress/CompressedRandomAccessReader.java  | 37 ++--
 2 files changed, 20 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/34a1d5da/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 0c25388..aeb3009 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.21
+ * Check checksum before decompressing data (CASSANDRA-14284)
  * CVE-2017-5929 Security vulnerability in Logback warning in NEWS.txt 
(CASSANDRA-14183)
 
 2.1.20

http://git-wip-us.apache.org/repos/asf/cassandra/blob/34a1d5da/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
--
diff --git 
a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java 
b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
index 184db9c..fe90cc9 100644
--- 
a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
+++ 
b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
@@ -29,7 +29,6 @@ import 
org.apache.cassandra.io.sstable.CorruptSSTableException;
 import org.apache.cassandra.io.util.CompressedPoolingSegmentedFile;
 import org.apache.cassandra.io.util.PoolingSegmentedFile;
 import org.apache.cassandra.io.util.RandomAccessReader;
-import org.apache.cassandra.utils.FBUtilities;
 
 /**
  * CRAR extends RAR to transparently uncompress blocks from the file into 
RAR.buffer.  Most of the RAR
@@ -107,6 +106,11 @@ public class CompressedRandomAccessReader extends 
RandomAccessReader
 // technically flip() is unnecessary since all the remaining work uses 
the raw array, but if that changes
 // in the future this will save a lot of hair-pulling
 compressed.flip();
+
+// If the checksum is on compressed data we want to check it before 
uncompressing the data
+if (metadata.hasPostCompressionAdlerChecksums)
+checkChecksumIfNeeded(chunk, compressed.array(), chunk.length);
+
 try
 {
 validBufferBytes = 
metadata.compressor().uncompress(compressed.array(), 0, chunk.length, buffer, 
0);
@@ -116,24 +120,9 @@ public class CompressedRandomAccessReader extends 
RandomAccessReader
 throw new CorruptBlockException(getPath(), chunk, e);
 }
 
-if (metadata.parameters.getCrcCheckChance() > 
ThreadLocalRandom.current().nextDouble())
-{
-
-if (metadata.hasPostCompressionAdlerChecksums)
-{
-checksum.update(compressed.array(), 0, chunk.length);
-}
-else
-{
-checksum.update(buffer, 0, validBufferBytes);
-}
+if (!metadata.hasPostCompressionAdlerChecksums)
+checkChecksumIfNeeded(chunk, buffer, validBufferBytes);
 
-if (checksum(chunk) != (int) checksum.getValue())
-throw new CorruptBlockException(getPath(), chunk);
-
-// reset checksum object back to the original (blank) state
-checksum.reset();
-}
 
 // buffer offset is always aligned
 bufferOffset = current & ~(buffer.length - 1);
@@ -143,6 +132,18 @@ public class CompressedRandomAccessReader extends 
RandomAccessReader
 validBufferBytes = (int)(length() - bufferOffset);
 }
 
+private void checkChecksumIfNeeded(CompressionMetadata.Chunk chunk, byte[] 
bytes, int length) throws IOException
+{
+if (metadata.parameters.getCrcCheckChance() > 
ThreadLocalRandom.current().nextDouble())
+{
+checksum.update(bytes, 0, length);
+if (checksum(chunk) != (int) checksum.getValue())
+throw new CorruptBlockException(getPath(), chunk);
+// reset checksum object back to the original (blank) state
+checksum.reset();
+}
+}
+
 private int checksum(CompressionMetadata.Chunk chunk) throws IOException
 {
 assert channel.position() == chunk.offset + chunk.length;
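The heart of the patch is verifying the checksum of the *compressed* bytes before handing them to the decompressor, so corrupt input is caught up front. The same pattern can be sketched with stdlib pieces only; {{Deflater}}/{{Inflater}} stand in for Cassandra's compressor and Adler-32 for its checksum, so everything here is illustrative:

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.zip.Adler32;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

// Illustrative sketch of "check checksum before decompressing": stdlib
// Deflater/Inflater stand in for Cassandra's compressor.
public class CheckBeforeDecompress {
    static int adler32(byte[] data, int len) {
        Adler32 c = new Adler32();
        c.update(data, 0, len);
        return (int) c.getValue();
    }

    // verify the checksum of the compressed bytes first, then inflate
    static byte[] readChunk(byte[] compressed, int storedChecksum) throws Exception {
        if (adler32(compressed, compressed.length) != storedChecksum)
            throw new IOException("corrupt chunk: checksum mismatch");
        Inflater inf = new Inflater();
        inf.setInput(compressed);
        byte[] out = new byte[1024];
        int n = inf.inflate(out);
        inf.end();
        return Arrays.copyOf(out, n);
    }

    public static void main(String[] args) throws Exception {
        byte[] plain = "hello hello hello".getBytes("UTF-8");
        Deflater def = new Deflater();
        def.setInput(plain);
        def.finish();
        byte[] buf = new byte[1024];
        byte[] compressed = Arrays.copyOf(buf, def.deflate(buf));
        int checksum = adler32(compressed, compressed.length);

        System.out.println(new String(readChunk(compressed, checksum), "UTF-8"));

        compressed[0] ^= 0x55;                      // corrupt a byte
        try { readChunk(compressed, checksum); }
        catch (IOException e) { System.out.println("caught corruption"); }
    }
}
```

Corruption is detected before any decompressor code runs, which is exactly the ordering change the diff above makes.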



[01/15] cassandra git commit: Check checksum before decompressing data

2018-04-10 Thread blerer
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 19d26bcb8 -> 34a1d5da5
  refs/heads/cassandra-2.2 2e5e11d66 -> b3ac7937e
  refs/heads/cassandra-3.0 41f3b96f8 -> 73ca0e1e1
  refs/heads/cassandra-3.11 b3e99085a -> c1020d62e
  refs/heads/trunk b65b28a9e -> 0b16546f6


Check checksum before decompressing data

patch by Benjamin Lerer; reviewed by Branimir Lambov and Gil Tene for CASSANDRA-14284


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/34a1d5da
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/34a1d5da
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/34a1d5da

Branch: refs/heads/cassandra-2.1
Commit: 34a1d5da58fb8edcad39633084541bb4162f5ede
Parents: 19d26bc
Author: Benjamin Lerer 
Authored: Tue Apr 10 09:42:52 2018 +0200
Committer: Benjamin Lerer 
Committed: Tue Apr 10 09:42:52 2018 +0200

--
 CHANGES.txt |  1 +
 .../compress/CompressedRandomAccessReader.java  | 37 ++--
 2 files changed, 20 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/34a1d5da/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 0c25388..aeb3009 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.21
+ * Check checksum before decompressing data (CASSANDRA-14284)
  * CVE-2017-5929 Security vulnerability in Logback warning in NEWS.txt 
(CASSANDRA-14183)
 
 2.1.20

http://git-wip-us.apache.org/repos/asf/cassandra/blob/34a1d5da/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
--
diff --git 
a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java 
b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
index 184db9c..fe90cc9 100644
--- 
a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
+++ 
b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
@@ -29,7 +29,6 @@ import 
org.apache.cassandra.io.sstable.CorruptSSTableException;
 import org.apache.cassandra.io.util.CompressedPoolingSegmentedFile;
 import org.apache.cassandra.io.util.PoolingSegmentedFile;
 import org.apache.cassandra.io.util.RandomAccessReader;
-import org.apache.cassandra.utils.FBUtilities;
 
 /**
  * CRAR extends RAR to transparently uncompress blocks from the file into 
RAR.buffer.  Most of the RAR
@@ -107,6 +106,11 @@ public class CompressedRandomAccessReader extends 
RandomAccessReader
 // technically flip() is unnecessary since all the remaining work uses 
the raw array, but if that changes
 // in the future this will save a lot of hair-pulling
 compressed.flip();
+
+// If the checksum is on compressed data we want to check it before 
uncompressing the data
+if (metadata.hasPostCompressionAdlerChecksums)
+checkChecksumIfNeeded(chunk, compressed.array(), chunk.length);
+
 try
 {
 validBufferBytes = 
metadata.compressor().uncompress(compressed.array(), 0, chunk.length, buffer, 
0);
@@ -116,24 +120,9 @@ public class CompressedRandomAccessReader extends 
RandomAccessReader
 throw new CorruptBlockException(getPath(), chunk, e);
 }
 
-if (metadata.parameters.getCrcCheckChance() > 
ThreadLocalRandom.current().nextDouble())
-{
-
-if (metadata.hasPostCompressionAdlerChecksums)
-{
-checksum.update(compressed.array(), 0, chunk.length);
-}
-else
-{
-checksum.update(buffer, 0, validBufferBytes);
-}
+if (!metadata.hasPostCompressionAdlerChecksums)
+checkChecksumIfNeeded(chunk, buffer, validBufferBytes);
 
-if (checksum(chunk) != (int) checksum.getValue())
-throw new CorruptBlockException(getPath(), chunk);
-
-// reset checksum object back to the original (blank) state
-checksum.reset();
-}
 
 // buffer offset is always aligned
 bufferOffset = current & ~(buffer.length - 1);
@@ -143,6 +132,18 @@ public class CompressedRandomAccessReader extends 
RandomAccessReader
 validBufferBytes = (int)(length() - bufferOffset);
 }
 
+private void checkChecksumIfNeeded(CompressionMetadata.Chunk chunk, byte[] 
bytes, int length) throws IOException
+{
+if (metadata.parameters.getCrcCheckChance() > 
ThreadLocalRandom.current().nextDouble())
+{
+checksum.update(bytes, 0, length);
+if (checksum(chunk) != (int) checksum.getValue())
+throw new CorruptBlockException(getPath(), chunk);
+// reset checksum object back to the original (blank) state
+checksum.reset();
+}
+}
+
 private int checksum(CompressionMetadata.Chunk chunk) throws IOException
 {
 assert channel.position() == chunk.offset + chunk.length;

[05/15] cassandra git commit: Check checksum before decompressing data

2018-04-10 Thread blerer
Check checksum before decompressing data

patch by Benjamin Lerer; reviewed by Branimir Lambov and Gil Tene for CASSANDRA-14284


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/34a1d5da
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/34a1d5da
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/34a1d5da

Branch: refs/heads/cassandra-3.11
Commit: 34a1d5da58fb8edcad39633084541bb4162f5ede
Parents: 19d26bc
Author: Benjamin Lerer 
Authored: Tue Apr 10 09:42:52 2018 +0200
Committer: Benjamin Lerer 
Committed: Tue Apr 10 09:42:52 2018 +0200

--
 CHANGES.txt |  1 +
 .../compress/CompressedRandomAccessReader.java  | 37 ++--
 2 files changed, 20 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/34a1d5da/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 0c25388..aeb3009 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.21
+ * Check checksum before decompressing data (CASSANDRA-14284)
  * CVE-2017-5929 Security vulnerability in Logback warning in NEWS.txt 
(CASSANDRA-14183)
 
 2.1.20

http://git-wip-us.apache.org/repos/asf/cassandra/blob/34a1d5da/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
--
diff --git 
a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java 
b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
index 184db9c..fe90cc9 100644
--- 
a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
+++ 
b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
@@ -29,7 +29,6 @@ import 
org.apache.cassandra.io.sstable.CorruptSSTableException;
 import org.apache.cassandra.io.util.CompressedPoolingSegmentedFile;
 import org.apache.cassandra.io.util.PoolingSegmentedFile;
 import org.apache.cassandra.io.util.RandomAccessReader;
-import org.apache.cassandra.utils.FBUtilities;
 
 /**
  * CRAR extends RAR to transparently uncompress blocks from the file into 
RAR.buffer.  Most of the RAR
@@ -107,6 +106,11 @@ public class CompressedRandomAccessReader extends 
RandomAccessReader
 // technically flip() is unnecessary since all the remaining work uses 
the raw array, but if that changes
 // in the future this will save a lot of hair-pulling
 compressed.flip();
+
+// If the checksum is on compressed data we want to check it before 
uncompressing the data
+if (metadata.hasPostCompressionAdlerChecksums)
+checkChecksumIfNeeded(chunk, compressed.array(), chunk.length);
+
 try
 {
 validBufferBytes = 
metadata.compressor().uncompress(compressed.array(), 0, chunk.length, buffer, 
0);
@@ -116,24 +120,9 @@ public class CompressedRandomAccessReader extends 
RandomAccessReader
 throw new CorruptBlockException(getPath(), chunk, e);
 }
 
-if (metadata.parameters.getCrcCheckChance() > 
ThreadLocalRandom.current().nextDouble())
-{
-
-if (metadata.hasPostCompressionAdlerChecksums)
-{
-checksum.update(compressed.array(), 0, chunk.length);
-}
-else
-{
-checksum.update(buffer, 0, validBufferBytes);
-}
+if (!metadata.hasPostCompressionAdlerChecksums)
+checkChecksumIfNeeded(chunk, buffer, validBufferBytes);
 
-if (checksum(chunk) != (int) checksum.getValue())
-throw new CorruptBlockException(getPath(), chunk);
-
-// reset checksum object back to the original (blank) state
-checksum.reset();
-}
 
 // buffer offset is always aligned
 bufferOffset = current & ~(buffer.length - 1);
@@ -143,6 +132,18 @@ public class CompressedRandomAccessReader extends 
RandomAccessReader
 validBufferBytes = (int)(length() - bufferOffset);
 }
 
+private void checkChecksumIfNeeded(CompressionMetadata.Chunk chunk, byte[] 
bytes, int length) throws IOException
+{
+if (metadata.parameters.getCrcCheckChance() > 
ThreadLocalRandom.current().nextDouble())
+{
+checksum.update(bytes, 0, length);
+if (checksum(chunk) != (int) checksum.getValue())
+throw new CorruptBlockException(getPath(), chunk);
+// reset checksum object back to the original (blank) state
+checksum.reset();
+}
+}
+
 private int checksum(CompressionMetadata.Chunk chunk) throws IOException
 {
 assert channel.position() == chunk.offset + chunk.length;
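The line `bufferOffset = current & ~(buffer.length - 1)` in the diff above ("buffer offset is always aligned") rounds the file position down to a chunk boundary with a bit mask, which is only valid because the chunk length is constrained to a power of two (the reader asserts `Integer.bitCount(metadata.chunkLength()) == 1`). A small sketch of the trick, with an illustrative class name of my choosing:

```java
public class AlignmentSketch {
    // Rounds position down to the nearest multiple of chunkLength.
    // Only valid when chunkLength is a power of two (exactly one set bit),
    // because then (chunkLength - 1) is a mask of the low-order bits.
    static long alignDown(long position, int chunkLength) {
        assert Integer.bitCount(chunkLength) == 1 : "chunk length must be a power of two";
        return position & ~(long) (chunkLength - 1);
    }

    public static void main(String[] args) {
        assert alignDown(65_537, 65_536) == 65_536; // one past a boundary rounds back
        assert alignDown(65_536, 65_536) == 65_536; // exact boundary is unchanged
        assert alignDown(100, 64) == 64;
        System.out.println("ok");
    }
}
```

Masking is branch-free and cheaper than a modulo, which matters on this hot read path; the power-of-two restriction is the price.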


---

[04/15] cassandra git commit: Check checksum before decompressing data

2018-04-10 Thread blerer
Check checksum before decompressing data

patch by Benjamin Lerer; reviewed by Branimir Lambov and Gil Tene for CASSANDRA-14284


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/34a1d5da
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/34a1d5da
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/34a1d5da

Branch: refs/heads/cassandra-3.0
Commit: 34a1d5da58fb8edcad39633084541bb4162f5ede
Parents: 19d26bc
Author: Benjamin Lerer 
Authored: Tue Apr 10 09:42:52 2018 +0200
Committer: Benjamin Lerer 
Committed: Tue Apr 10 09:42:52 2018 +0200

--
 CHANGES.txt |  1 +
 .../compress/CompressedRandomAccessReader.java  | 37 ++--
 2 files changed, 20 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/34a1d5da/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 0c25388..aeb3009 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.21
+ * Check checksum before decompressing data (CASSANDRA-14284)
  * CVE-2017-5929 Security vulnerability in Logback warning in NEWS.txt 
(CASSANDRA-14183)
 
 2.1.20

http://git-wip-us.apache.org/repos/asf/cassandra/blob/34a1d5da/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
--
diff --git 
a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java 
b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
index 184db9c..fe90cc9 100644
--- 
a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
+++ 
b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
@@ -29,7 +29,6 @@ import 
org.apache.cassandra.io.sstable.CorruptSSTableException;
 import org.apache.cassandra.io.util.CompressedPoolingSegmentedFile;
 import org.apache.cassandra.io.util.PoolingSegmentedFile;
 import org.apache.cassandra.io.util.RandomAccessReader;
-import org.apache.cassandra.utils.FBUtilities;
 
 /**
  * CRAR extends RAR to transparently uncompress blocks from the file into 
RAR.buffer.  Most of the RAR
@@ -107,6 +106,11 @@ public class CompressedRandomAccessReader extends 
RandomAccessReader
 // technically flip() is unnecessary since all the remaining work uses 
the raw array, but if that changes
 // in the future this will save a lot of hair-pulling
 compressed.flip();
+
+// If the checksum is on compressed data we want to check it before 
uncompressing the data
+if (metadata.hasPostCompressionAdlerChecksums)
+checkChecksumIfNeeded(chunk, compressed.array(), chunk.length);
+
 try
 {
 validBufferBytes = 
metadata.compressor().uncompress(compressed.array(), 0, chunk.length, buffer, 
0);
@@ -116,24 +120,9 @@ public class CompressedRandomAccessReader extends 
RandomAccessReader
 throw new CorruptBlockException(getPath(), chunk, e);
 }
 
-if (metadata.parameters.getCrcCheckChance() > 
ThreadLocalRandom.current().nextDouble())
-{
-
-if (metadata.hasPostCompressionAdlerChecksums)
-{
-checksum.update(compressed.array(), 0, chunk.length);
-}
-else
-{
-checksum.update(buffer, 0, validBufferBytes);
-}
+if (!metadata.hasPostCompressionAdlerChecksums)
+checkChecksumIfNeeded(chunk, buffer, validBufferBytes);
 
-if (checksum(chunk) != (int) checksum.getValue())
-throw new CorruptBlockException(getPath(), chunk);
-
-// reset checksum object back to the original (blank) state
-checksum.reset();
-}
 
 // buffer offset is always aligned
 bufferOffset = current & ~(buffer.length - 1);
@@ -143,6 +132,18 @@ public class CompressedRandomAccessReader extends 
RandomAccessReader
 validBufferBytes = (int)(length() - bufferOffset);
 }
 
+private void checkChecksumIfNeeded(CompressionMetadata.Chunk chunk, byte[] 
bytes, int length) throws IOException
+{
+if (metadata.parameters.getCrcCheckChance() > 
ThreadLocalRandom.current().nextDouble())
+{
+checksum.update(bytes, 0, length);
+if (checksum(chunk) != (int) checksum.getValue())
+throw new CorruptBlockException(getPath(), chunk);
+// reset checksum object back to the original (blank) state
+checksum.reset();
+}
+}
+
 private int checksum(CompressionMetadata.Chunk chunk) throws IOException
 {
 assert channel.position() == chunk.offset + chunk.length;



[14/15] cassandra git commit: Merge branch cassandra-3.0 into cassandra-3.11

2018-04-10 Thread blerer
Merge branch cassandra-3.0 into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c1020d62
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c1020d62
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c1020d62

Branch: refs/heads/trunk
Commit: c1020d62ed05f7fa5735af6f09915cdc6850dbeb
Parents: b3e9908 73ca0e1
Author: Benjamin Lerer 
Authored: Tue Apr 10 10:02:36 2018 +0200
Committer: Benjamin Lerer 
Committed: Tue Apr 10 10:03:32 2018 +0200

--
 CHANGES.txt |  1 +
 .../io/util/CompressedChunkReader.java  | 65 +++-
 2 files changed, 38 insertions(+), 28 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c1020d62/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c1020d62/src/java/org/apache/cassandra/io/util/CompressedChunkReader.java
--
diff --cc src/java/org/apache/cassandra/io/util/CompressedChunkReader.java
index 0919c29,000..177afb0
mode 100644,00..100644
--- a/src/java/org/apache/cassandra/io/util/CompressedChunkReader.java
+++ b/src/java/org/apache/cassandra/io/util/CompressedChunkReader.java
@@@ -1,229 -1,0 +1,238 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +
 +package org.apache.cassandra.io.util;
 +
 +import java.io.IOException;
 +import java.nio.ByteBuffer;
 +import java.util.concurrent.ThreadLocalRandom;
 +
 +import com.google.common.annotations.VisibleForTesting;
 +import com.google.common.primitives.Ints;
 +
 +import org.apache.cassandra.io.compress.BufferType;
 +import org.apache.cassandra.io.compress.CompressionMetadata;
 +import org.apache.cassandra.io.compress.CorruptBlockException;
 +import org.apache.cassandra.io.sstable.CorruptSSTableException;
 +
 +public abstract class CompressedChunkReader extends AbstractReaderFileProxy 
implements ChunkReader
 +{
 +final CompressionMetadata metadata;
 +
 +protected CompressedChunkReader(ChannelProxy channel, CompressionMetadata 
metadata)
 +{
 +super(channel, metadata.dataLength);
 +this.metadata = metadata;
 +assert Integer.bitCount(metadata.chunkLength()) == 1; //must be a 
power of two
 +}
 +
 +@VisibleForTesting
 +public double getCrcCheckChance()
 +{
 +return metadata.parameters.getCrcCheckChance();
 +}
 +
++protected final boolean shouldCheckCrc()
++{
++return getCrcCheckChance() >= 1d || getCrcCheckChance() > 
ThreadLocalRandom.current().nextDouble();
++}
++
 +@Override
 +public String toString()
 +{
 +return String.format("CompressedChunkReader.%s(%s - %s, chunk length 
%d, data length %d)",
 + getClass().getSimpleName(),
 + channel.filePath(),
 + metadata.compressor().getClass().getSimpleName(),
 + metadata.chunkLength(),
 + metadata.dataLength);
 +}
 +
 +@Override
 +public int chunkSize()
 +{
 +return metadata.chunkLength();
 +}
 +
 +@Override
 +public BufferType preferredBufferType()
 +{
 +return metadata.compressor().preferredBufferType();
 +}
 +
 +@Override
 +public Rebufferer instantiateRebufferer()
 +{
 +return new BufferManagingRebufferer.Aligned(this);
 +}
 +
 +public static class Standard extends CompressedChunkReader
 +{
 +// we read the raw compressed bytes into this buffer, then uncompressed them into the provided one.
 +private final ThreadLocal<ByteBuffer> compressedHolder;
 +
 +public Standard(ChannelProxy channel, CompressionMetadata metadata)
 +{
 +super(channel, metadata);
 +compressedHolder = ThreadLocal.withInitial(this::allocateBuffer);
 +}
 +
 +public ByteBuffer allocateBuffer()
 +{
 +return allocateBuffer(metadata.compressor().initialCompressedBufferLength(metadata.chunkLength()));
 +}

[15/15] cassandra git commit: Merge branch cassandra-3.11 into trunk

2018-04-10 Thread blerer
Merge branch cassandra-3.11 into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0b16546f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0b16546f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0b16546f

Branch: refs/heads/trunk
Commit: 0b16546f6500f7c33db2f94957d6b5a8e0c108d1
Parents: b65b28a c1020d6
Author: Benjamin Lerer 
Authored: Tue Apr 10 10:09:05 2018 +0200
Committer: Benjamin Lerer 
Committed: Tue Apr 10 10:10:47 2018 +0200

--
 CHANGES.txt |  2 +
 .../io/util/CompressedChunkReader.java  | 83 
 2 files changed, 52 insertions(+), 33 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0b16546f/CHANGES.txt
--
diff --cc CHANGES.txt
index d191810,c4f05d5..e68518d
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -236,6 -21,11 +236,8 @@@ Merged from 3.0
   * Chain commit log marker potential performance regression in batch commit 
mode (CASSANDRA-14194)
   * Fully utilise specified compaction threads (CASSANDRA-14210)
   * Pre-create deletion log records to finish compactions quicker 
(CASSANDRA-12763)
 -Merged from 2.2:
 - * Backport circleci yaml (CASSANDRA-14240)
 -Merged from 2.1:
++ Merged from 2.1:
+  * Check checksum before decompressing data (CASSANDRA-14284)
 - * CVE-2017-5929 Security vulnerability in Logback warning in NEWS.txt 
(CASSANDRA-14183)
  
  
  3.11.2

http://git-wip-us.apache.org/repos/asf/cassandra/blob/0b16546f/src/java/org/apache/cassandra/io/util/CompressedChunkReader.java
--
diff --cc src/java/org/apache/cassandra/io/util/CompressedChunkReader.java
index 5ae083b,177afb0..daec6c4
--- a/src/java/org/apache/cassandra/io/util/CompressedChunkReader.java
+++ b/src/java/org/apache/cassandra/io/util/CompressedChunkReader.java
@@@ -96,8 -94,7 +96,12 @@@ public abstract class CompressedChunkRe
  
  public ByteBuffer allocateBuffer()
  {
- return allocateBuffer(Math.min(maxCompressedLength,
-
metadata.compressor().initialCompressedBufferLength(metadata.chunkLength(;
 -return 
allocateBuffer(metadata.compressor().initialCompressedBufferLength(metadata.chunkLength()));
++int compressedLength = Math.min(maxCompressedLength,
++
metadata.compressor().initialCompressedBufferLength(metadata.chunkLength()));
++
++int checksumLength = Integer.BYTES;
++
++return allocateBuffer(compressedLength + checksumLength);
  }
  
  public ByteBuffer allocateBuffer(int size)
@@@ -115,35 -112,54 +119,63 @@@
  assert position <= fileLength;
  
  CompressionMetadata.Chunk chunk = metadata.chunkFor(position);
 -ByteBuffer compressed = compressedHolder.get();
 -
+ boolean shouldCheckCrc = shouldCheckCrc();
++int length = shouldCheckCrc ? chunk.length + Integer.BYTES // 
compressed length + checksum length
++: chunk.length;
+ 
 -int length = shouldCheckCrc ? chunk.length + Integer.BYTES : 
chunk.length;
 -
 -if (compressed.capacity() < length)
 +if (chunk.length < maxCompressedLength)
  {
 -compressed = allocateBuffer(length);
 -compressedHolder.set(compressed);
 -}
 -else
 -{
 -compressed.clear();
 -}
 +ByteBuffer compressed = compressedHolder.get();
- assert compressed.capacity() >= chunk.length;
- compressed.clear().limit(chunk.length);
- if (channel.read(compressed, chunk.offset) != 
chunk.length)
+ 
 -compressed.limit(length);
 -if (channel.read(compressed, chunk.offset) != length)
 -throw new CorruptBlockException(channel.filePath(), 
chunk);
 -
 -compressed.flip();
 -uncompressed.clear();
 -
 -compressed.position(0).limit(chunk.length);
++assert compressed.capacity() >= length;
++compressed.clear().limit(length);
++if (channel.read(compressed, chunk.offset) != length)
 +throw new CorruptBlockException(channel.filePath(), 
chunk);
  
 -if (shouldCheckCrc)
 +compressed.flip();
++compressed.limit(chunk.length);
 +uncompressed.clear();
 +
++  

[09/15] cassandra git commit: Merge branch cassandra-2.1 into cassandra-2.2

2018-04-10 Thread blerer
Merge branch cassandra-2.1 into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b3ac7937
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b3ac7937
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b3ac7937

Branch: refs/heads/cassandra-3.11
Commit: b3ac7937edce41a341d1d01c7f3201592e1caa8f
Parents: 2e5e11d 34a1d5d
Author: Benjamin Lerer 
Authored: Tue Apr 10 09:51:02 2018 +0200
Committer: Benjamin Lerer 
Committed: Tue Apr 10 09:52:18 2018 +0200

--
 CHANGES.txt |  1 +
 .../compress/CompressedRandomAccessReader.java  | 52 ++--
 2 files changed, 27 insertions(+), 26 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b3ac7937/CHANGES.txt
--
diff --cc CHANGES.txt
index 527975c,aeb3009..5221b1e
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,16 -1,8 +1,17 @@@
 -2.1.21
 +2.2.13
 + * CQL fromJson(null) throws NullPointerException (CASSANDRA-13891)
 + * Fix query pager DEBUG log leak causing hit in paged reads throughput 
(CASSANDRA-14318)
 + * Backport circleci yaml (CASSANDRA-14240)
 +Merged from 2.1:
+  * Check checksum before decompressing data (CASSANDRA-14284)
   * CVE-2017-5929 Security vulnerability in Logback warning in NEWS.txt 
(CASSANDRA-14183)
  
 -2.1.20
 +2.2.12
 + * Fix the inspectJvmOptions startup check (CASSANDRA-14112)
 + * Fix race that prevents submitting compaction for a table when executor is 
full (CASSANDRA-13801)
 + * Rely on the JVM to handle OutOfMemoryErrors (CASSANDRA-13006)
 + * Grab refs during scrub/index redistribution/cleanup (CASSANDRA-13873)
 +Merged from 2.1:
   * Protect against overflow of local expiration time (CASSANDRA-14092)
   * More PEP8 compliance for cqlsh (CASSANDRA-14021)
   * RPM package spec: fix permissions for installed jars and config files 
(CASSANDRA-14181)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b3ac7937/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
--
diff --cc 
src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
index ccfa5e7,fe90cc9..0fc96ed
--- 
a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
+++ 
b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
@@@ -99,54 -77,7 +99,54 @@@ public class CompressedRandomAccessRead
  {
  try
  {
 -decompressChunk(metadata.chunkFor(current));
 +long position = current();
 +assert position < metadata.dataLength;
 +
 +CompressionMetadata.Chunk chunk = metadata.chunkFor(position);
 +
 +if (compressed.capacity() < chunk.length)
 +compressed = allocateBuffer(chunk.length, 
metadata.compressor().preferredBufferType());
 +else
 +compressed.clear();
 +compressed.limit(chunk.length);
 +
 +if (channel.read(compressed, chunk.offset) != chunk.length)
 +throw new CorruptBlockException(getPath(), chunk);
 +compressed.flip();
 +buffer.clear();
 +
++if (metadata.parameters.getCrcCheckChance() > 
ThreadLocalRandom.current().nextDouble())
++{
++FBUtilities.directCheckSum(checksum, compressed);
++
++if (checksum(chunk) != (int) checksum.getValue())
++throw new CorruptBlockException(getPath(), chunk);
++
++// reset checksum object back to the original (blank) state
++checksum.reset();
++compressed.rewind();
++}
++
 +try
 +{
 +metadata.compressor().uncompress(compressed, buffer);
 +}
 +catch (IOException e)
 +{
 +throw new CorruptBlockException(getPath(), chunk);
 +}
 +finally
 +{
 +buffer.flip();
 +}
 +
- if (metadata.parameters.getCrcCheckChance() > 
ThreadLocalRandom.current().nextDouble())
- {
- compressed.rewind();
- FBUtilities.directCheckSum(checksum, compressed);
- 
- if (checksum(chunk) != (int) checksum.getValue())
- throw new CorruptBlockException(getPath(), chunk);
- 
- // reset checksum object back to the original (blank) state
- checksum.reset();
- }
- 
 +// buffer offset is always aligned
 +bufferOffset = position & ~(buffer.capacity() - 1);
 +buffer.position((int) (position - bufferOffset));
 +// the len

[03/15] cassandra git commit: Check checksum before decompressing data

2018-04-10 Thread blerer
Check checksum before decompressing data

patch by Benjamin Lerer; reviewed by Branimir Lambov and Gil Tene for CASSANDRA-14284


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/34a1d5da
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/34a1d5da
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/34a1d5da

Branch: refs/heads/trunk
Commit: 34a1d5da58fb8edcad39633084541bb4162f5ede
Parents: 19d26bc
Author: Benjamin Lerer 
Authored: Tue Apr 10 09:42:52 2018 +0200
Committer: Benjamin Lerer 
Committed: Tue Apr 10 09:42:52 2018 +0200

--
 CHANGES.txt |  1 +
 .../compress/CompressedRandomAccessReader.java  | 37 ++--
 2 files changed, 20 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/34a1d5da/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 0c25388..aeb3009 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.21
+ * Check checksum before decompressing data (CASSANDRA-14284)
  * CVE-2017-5929 Security vulnerability in Logback warning in NEWS.txt 
(CASSANDRA-14183)
 
 2.1.20

http://git-wip-us.apache.org/repos/asf/cassandra/blob/34a1d5da/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
--
diff --git 
a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java 
b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
index 184db9c..fe90cc9 100644
--- 
a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
+++ 
b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
@@ -29,7 +29,6 @@ import 
org.apache.cassandra.io.sstable.CorruptSSTableException;
 import org.apache.cassandra.io.util.CompressedPoolingSegmentedFile;
 import org.apache.cassandra.io.util.PoolingSegmentedFile;
 import org.apache.cassandra.io.util.RandomAccessReader;
-import org.apache.cassandra.utils.FBUtilities;
 
 /**
  * CRAR extends RAR to transparently uncompress blocks from the file into 
RAR.buffer.  Most of the RAR
@@ -107,6 +106,11 @@ public class CompressedRandomAccessReader extends 
RandomAccessReader
 // technically flip() is unnecessary since all the remaining work uses 
the raw array, but if that changes
 // in the future this will save a lot of hair-pulling
 compressed.flip();
+
+// If the checksum is on compressed data we want to check it before 
uncompressing the data
+if (metadata.hasPostCompressionAdlerChecksums)
+checkChecksumIfNeeded(chunk, compressed.array(), chunk.length);
+
 try
 {
 validBufferBytes = 
metadata.compressor().uncompress(compressed.array(), 0, chunk.length, buffer, 
0);
@@ -116,24 +120,9 @@ public class CompressedRandomAccessReader extends 
RandomAccessReader
 throw new CorruptBlockException(getPath(), chunk, e);
 }
 
-if (metadata.parameters.getCrcCheckChance() > 
ThreadLocalRandom.current().nextDouble())
-{
-
-if (metadata.hasPostCompressionAdlerChecksums)
-{
-checksum.update(compressed.array(), 0, chunk.length);
-}
-else
-{
-checksum.update(buffer, 0, validBufferBytes);
-}
+if (!metadata.hasPostCompressionAdlerChecksums)
+checkChecksumIfNeeded(chunk, buffer, validBufferBytes);
 
-if (checksum(chunk) != (int) checksum.getValue())
-throw new CorruptBlockException(getPath(), chunk);
-
-// reset checksum object back to the original (blank) state
-checksum.reset();
-}
 
 // buffer offset is always aligned
 bufferOffset = current & ~(buffer.length - 1);
@@ -143,6 +132,18 @@ public class CompressedRandomAccessReader extends 
RandomAccessReader
 validBufferBytes = (int)(length() - bufferOffset);
 }
 
+private void checkChecksumIfNeeded(CompressionMetadata.Chunk chunk, byte[] 
bytes, int length) throws IOException
+{
+if (metadata.parameters.getCrcCheckChance() > 
ThreadLocalRandom.current().nextDouble())
+{
+checksum.update(bytes, 0, length);
+if (checksum(chunk) != (int) checksum.getValue())
+throw new CorruptBlockException(getPath(), chunk);
+// reset checksum object back to the original (blank) state
+checksum.reset();
+}
+}
+
 private int checksum(CompressionMetadata.Chunk chunk) throws IOException
 {
 assert channel.position() == chunk.offset + chunk.length;
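The point of the commit above is ordering: the checksum of the *compressed* bytes is now validated before they are handed to the decompressor, so corrupt input can no longer crash or confuse the codec. A minimal sketch of that ordering using only `java.util.zip`; the names (`CheckThenDecompress`, `readChunk`) and the use of Deflate/CRC32 in place of Cassandra's pluggable compressor and Adler32 are my illustrative choices.

```java
import java.util.zip.CRC32;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class CheckThenDecompress {
    // Returns the decompressed bytes, but only after the CRC of the *compressed*
    // payload has been validated -- corrupt input is rejected before the
    // decompressor ever sees it.
    static byte[] readChunk(byte[] compressed, int storedCrc, int uncompressedLength) throws Exception {
        CRC32 checksum = new CRC32();
        checksum.update(compressed, 0, compressed.length);
        if ((int) checksum.getValue() != storedCrc)
            throw new IllegalStateException("corrupt block: checksum mismatch");

        Inflater inflater = new Inflater();
        inflater.setInput(compressed);
        byte[] out = new byte[uncompressedLength];
        int n = inflater.inflate(out);
        inflater.end();
        if (n != uncompressedLength)
            throw new IllegalStateException("unexpected decompressed length");
        return out;
    }

    public static void main(String[] args) throws Exception {
        byte[] data = "some sstable chunk data".getBytes();

        // Compress a sample chunk and record its checksum, as a writer would.
        Deflater deflater = new Deflater();
        deflater.setInput(data);
        deflater.finish();
        byte[] buf = new byte[256];
        int clen = deflater.deflate(buf);
        deflater.end();
        byte[] compressed = java.util.Arrays.copyOf(buf, clen);

        CRC32 c = new CRC32();
        c.update(compressed, 0, compressed.length);
        byte[] roundTrip = readChunk(compressed, (int) c.getValue(), data.length);
        if (!new String(roundTrip).equals("some sstable chunk data"))
            throw new AssertionError("round trip failed");
        System.out.println("ok");
    }
}
```

With the old ordering, a flipped bit in the compressed stream surfaced as a decompressor error (or worse); with check-first, it surfaces deterministically as a `CorruptBlockException`-style failure.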



[06/15] cassandra git commit: Merge branch cassandra-2.1 into cassandra-2.2

2018-04-10 Thread blerer
Merge branch cassandra-2.1 into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b3ac7937
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b3ac7937
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b3ac7937

Branch: refs/heads/trunk
Commit: b3ac7937edce41a341d1d01c7f3201592e1caa8f
Parents: 2e5e11d 34a1d5d
Author: Benjamin Lerer 
Authored: Tue Apr 10 09:51:02 2018 +0200
Committer: Benjamin Lerer 
Committed: Tue Apr 10 09:52:18 2018 +0200

--
 CHANGES.txt |  1 +
 .../compress/CompressedRandomAccessReader.java  | 52 ++--
 2 files changed, 27 insertions(+), 26 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b3ac7937/CHANGES.txt
--
diff --cc CHANGES.txt
index 527975c,aeb3009..5221b1e
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,16 -1,8 +1,17 @@@
 -2.1.21
 +2.2.13
 + * CQL fromJson(null) throws NullPointerException (CASSANDRA-13891)
 + * Fix query pager DEBUG log leak causing hit in paged reads throughput 
(CASSANDRA-14318)
 + * Backport circleci yaml (CASSANDRA-14240)
 +Merged from 2.1:
+  * Check checksum before decompressing data (CASSANDRA-14284)
   * CVE-2017-5929 Security vulnerability in Logback warning in NEWS.txt 
(CASSANDRA-14183)
  
 -2.1.20
 +2.2.12
 + * Fix the inspectJvmOptions startup check (CASSANDRA-14112)
 + * Fix race that prevents submitting compaction for a table when executor is 
full (CASSANDRA-13801)
 + * Rely on the JVM to handle OutOfMemoryErrors (CASSANDRA-13006)
 + * Grab refs during scrub/index redistribution/cleanup (CASSANDRA-13873)
 +Merged from 2.1:
   * Protect against overflow of local expiration time (CASSANDRA-14092)
   * More PEP8 compliance for cqlsh (CASSANDRA-14021)
   * RPM package spec: fix permissions for installed jars and config files 
(CASSANDRA-14181)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b3ac7937/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
--
diff --cc 
src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
index ccfa5e7,fe90cc9..0fc96ed
--- 
a/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
+++ 
b/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java
@@@ -99,54 -77,7 +99,54 @@@ public class CompressedRandomAccessRead
  {
  try
  {
 -decompressChunk(metadata.chunkFor(current));
 +long position = current();
 +assert position < metadata.dataLength;
 +
 +CompressionMetadata.Chunk chunk = metadata.chunkFor(position);
 +
 +if (compressed.capacity() < chunk.length)
 +compressed = allocateBuffer(chunk.length, 
metadata.compressor().preferredBufferType());
 +else
 +compressed.clear();
 +compressed.limit(chunk.length);
 +
 +if (channel.read(compressed, chunk.offset) != chunk.length)
 +throw new CorruptBlockException(getPath(), chunk);
 +compressed.flip();
 +buffer.clear();
 +
++if (metadata.parameters.getCrcCheckChance() > ThreadLocalRandom.current().nextDouble())
++{
++FBUtilities.directCheckSum(checksum, compressed);
++
++if (checksum(chunk) != (int) checksum.getValue())
++throw new CorruptBlockException(getPath(), chunk);
++
++// reset checksum object back to the original (blank) state
++checksum.reset();
++compressed.rewind();
++}
++
 +try
 +{
 +metadata.compressor().uncompress(compressed, buffer);
 +}
 +catch (IOException e)
 +{
 +throw new CorruptBlockException(getPath(), chunk);
 +}
 +finally
 +{
 +buffer.flip();
 +}
 +
- if (metadata.parameters.getCrcCheckChance() > ThreadLocalRandom.current().nextDouble())
- {
- compressed.rewind();
- FBUtilities.directCheckSum(checksum, compressed);
- 
- if (checksum(chunk) != (int) checksum.getValue())
- throw new CorruptBlockException(getPath(), chunk);
- 
- // reset checksum object back to the original (blank) state
- checksum.reset();
- }
- 
 +// buffer offset is always aligned
 +bufferOffset = position & ~(buffer.capacity() - 1);
 +buffer.position((int) (position - bufferOffset));
 +// the length() can

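The reordering in the patch above (verify the chunk checksum, then uncompress) is what prevents the JVM crash tracked in CASSANDRA-14284. A minimal sketch of the same idea outside Cassandra, using zlib and Adler-32 as stand-ins for Cassandra's compressor and checksum plumbing (the names `read_chunk` and `CorruptBlockException` here are illustrative):

```python
import random
import zlib

class CorruptBlockException(Exception):
    pass

def read_chunk(raw: bytes, stored_checksum: int, crc_check_chance: float = 1.0) -> bytes:
    """Verify the compressed chunk's checksum BEFORE decompressing.

    Feeding corrupt input to a native decompressor can crash the whole
    process (the JVM crash in CASSANDRA-14284), so the integrity check
    has to run first. crc_check_chance mirrors Cassandra's probabilistic
    sampling knob: the checksum is only verified on a fraction of reads.
    """
    if crc_check_chance > random.random():
        if zlib.adler32(raw) & 0xFFFFFFFF != stored_checksum:
            raise CorruptBlockException("chunk checksum mismatch")
    # Only decompress once the bytes are known to be intact.
    return zlib.decompress(raw)

payload = zlib.compress(b"hello world" * 10)
assert read_chunk(payload, zlib.adler32(payload) & 0xFFFFFFFF) == b"hello world" * 10
```

With `crc_check_chance=1.0` every read is verified; lowering it trades integrity checking for CPU, which is exactly the tradeoff the `getCrcCheckChance()` call in the diff exposes.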
[jira] [Commented] (CASSANDRA-12151) Audit logging for database activity

2018-04-10 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16431899#comment-16431899
 ] 

Stefan Podkowinski commented on CASSANDRA-12151:


The native transport integration for diagnostic events basically just pushes 
events one by one over the control connection to the subscribed client. It's 
not designed as a generic, fully scalable server-side data-push solution. 
There are also no delivery guarantees, as it's really only intended for 
debugging and analysis, not for implementing any control logic on top of it. 
The use case I have in mind is 1-2 clients subscribing to some kind of event, 
either ad hoc or constantly running in the background. I don't really see any 
use case for a large fanout of e.g. compaction events; for that, the solution 
proposed in CASSANDRA-13459 should be sufficient. But we should probably 
discuss further details there, as it's slightly off-topic for this ticket.
{quote}I think specifying a shell script is probably OK although if someone 
specifies the script we should run it immediately once Chronicle rolls the 
file. Also if the script is specified we probably shouldn't delete artifacts.
{quote}
I've created a new ticket (CASSANDRA-14373) for this, as it's not strictly an 
auditing feature.

> Audit logging for database activity
> ---
>
> Key: CASSANDRA-12151
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12151
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: stefan setyadi
>Assignee: Vinay Chella
>Priority: Major
> Fix For: 4.x
>
> Attachments: 12151.txt, CASSANDRA_12151-benchmark.html, 
> DesignProposal_AuditingFeature_ApacheCassandra_v1.docx
>
>
> We would like a way to enable Cassandra to log database activity being done 
> on our server.
> It should show username, remote address, timestamp, action type, keyspace, 
> column family, and the query statement.
> It should also be able to log connection attempts and changes to 
> users/roles.
> I was thinking of making a new keyspace and inserting an entry for every 
> activity that occurs.
> Then it would be possible to query for specific activity, or for queries 
> targeting a specific keyspace and column family.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13460) Diag. Events: Add local persistency

2018-04-10 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16431911#comment-16431911
 ] 

Stefan Podkowinski commented on CASSANDRA-13460:


The proposed solution should be reconsidered using the Chronicle-queue-based 
BinLog instead of writing to a local keyspace. That should be a better fit for 
storing temporary, time-based, sequentially retrieved events. We also get 
better portability, since already-rolled-over log files can simply be copied 
and read on external systems. E.g. you could ask a user to enable diag event 
logging for compactions and have them send you an archive with all bin logs 
the next day, just by working with files.

> Diag. Events: Add local persistency
> ---
>
> Key: CASSANDRA-13460
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13460
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Observability
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>Priority: Major
>
> Some generated events will be rather infrequent but very useful for 
> retroactive troubleshooting. E.g. all events related to bootstrapping and 
> gossip would probably be worth saving, as they might provide valuable 
> insights and will consume very few resources in low quantities. Imagine if 
> we could e.g. in case of CASSANDRA-13348 just ask the user to -run a tool 
> like {{./bin/diagdump BootstrapEvent}} on each host, to get us a detailed log 
> of all relevant events- provide a dump of all events as described in the 
> [documentation|https://github.com/spodkowinski/cassandra/blob/WIP-13460/doc/source/operating/diag_events.rst].
>  
> This could be done by saving events white-listed in cassandra.yaml to a 
> local table, maybe using a TTL.






[jira] [Updated] (CASSANDRA-13460) Diag. Events: Add local persistency

2018-04-10 Thread Stefan Podkowinski (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Podkowinski updated CASSANDRA-13460:
---
Status: In Progress  (was: Patch Available)

> Diag. Events: Add local persistency
> ---
>
> Key: CASSANDRA-13460
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13460
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Observability
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>Priority: Major
>
> Some generated events will be rather infrequent but very useful for 
> retroactive troubleshooting. E.g. all events related to bootstrapping and 
> gossip would probably be worth saving, as they might provide valuable 
> insights and will consume very few resources in low quantities. Imagine if 
> we could e.g. in case of CASSANDRA-13348 just ask the user to -run a tool 
> like {{./bin/diagdump BootstrapEvent}} on each host, to get us a detailed log 
> of all relevant events- provide a dump of all events as described in the 
> [documentation|https://github.com/spodkowinski/cassandra/blob/WIP-13460/doc/source/operating/diag_events.rst].
>  
> This could be done by saving events white-listed in cassandra.yaml to a 
> local table, maybe using a TTL.






[jira] [Commented] (CASSANDRA-14303) NetworkTopologyStrategy could have a "default replication" option

2018-04-10 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16431931#comment-16431931
 ] 

Robert Stupp commented on CASSANDRA-14303:
--

While I really like the idea, I want to note a few "edge cases" that would 
become quite problematic.
 * Requests with CL {{EACH_QUORUM}} would fail as long as any new DC has fewer 
than 2 live nodes (considering a default RF of 3).
 * Requests with CL {{ALL}} will fail if not all DCs have at least 'default-RF' 
nodes.
 * Requests with CL {{QUORUM&SERIAL}} will fail when adding the 2nd DC while 
that one has fewer than 2 nodes (considering a default RF of 3) and one 
existing node fails. It's problematic, because the intention of {{QUORUM}} is 
to tolerate node failures. Note that we use CL {{QUORUM}} for the default 
{{cassandra}} user - i.e. auth for {{cassandra}} would fail in that case.
 * {{LOCAL_QUORUM/SERIAL}} are obviously problematic against a new DC - but 
that's probably OK.

 
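The EACH_QUORUM failure mode in the first bullet follows directly from the quorum arithmetic; a small illustrative sketch (hypothetical helper functions, not Cassandra code):

```python
def quorum(rf: int) -> int:
    # A quorum is a strict majority of the replicas: floor(rf/2) + 1.
    return rf // 2 + 1

def each_quorum_ok(live_per_dc: dict, rf_per_dc: dict) -> bool:
    """EACH_QUORUM needs a quorum of live replicas in *every* DC."""
    return all(live_per_dc.get(dc, 0) >= quorum(rf)
               for dc, rf in rf_per_dc.items())

# A freshly added DC that defaulted to RF 3 but has only 1 live node
# fails EACH_QUORUM cluster-wide, even though dc1 is perfectly healthy.
assert each_quorum_ok({"dc1": 3, "dc2": 1}, {"dc1": 3, "dc2": 3}) is False
assert each_quorum_ok({"dc1": 3, "dc2": 2}, {"dc1": 3, "dc2": 3}) is True
```

The same arithmetic explains the QUORUM bullet: the global replica count jumps as soon as the default RF is applied to the new DC, so the quorum size grows before the new DC has enough live nodes to contribute.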

> NetworkTopologyStrategy could have a "default replication" option
> -
>
> Key: CASSANDRA-14303
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14303
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Joseph Lynch
>Assignee: Joseph Lynch
>Priority: Minor
> Fix For: 4.0
>
>
> Right now when creating a keyspace with {{NetworkTopologyStrategy}} the user 
> has to manually specify the datacenters they want their data replicated to 
> with parameters, e.g.:
> {noformat}
>  CREATE KEYSPACE test WITH replication = {'class': 'NetworkTopologyStrategy', 
> 'dc1': 3, 'dc2': 3}{noformat}
> This is a poor user interface because it requires the creator of the keyspace 
> (typically a developer) to know the layout of the Cassandra cluster (which 
> may or may not be controlled by them). Also, at least in my experience, folks 
> typo the datacenters _all_ the time. To work around this I see a number of 
> users creating automation around this where the automation describes the 
> Cassandra cluster and automatically expands out to all the dcs that Cassandra 
> knows about. Why can't Cassandra just do this for us, re-using the previously 
> forbidden {{replication_factor}} option (for backwards compatibility):
> {noformat}
>  CREATE KEYSPACE test WITH replication = {'class': 'NetworkTopologyStrategy', 
> 'replication_factor': 3}{noformat}
> This would automatically replicate this Keyspace to all datacenters that are 
> present in the cluster. If you need to _override_ the default you could 
> supply a datacenter name, e.g.:
> {noformat}
> > CREATE KEYSPACE test WITH replication = {'class': 
> > 'NetworkTopologyStrategy', 'replication_factor': 3, 'dc1': 2}
> > DESCRIBE KEYSPACE test
> CREATE KEYSPACE test WITH replication = {'class': 'NetworkTopologyStrategy', 
> 'dc1': '2', 'dc2': 3} AND durable_writes = true;
> {noformat}
> On the implementation side I think it may be reasonably straightforward to 
> do an auto-expansion at the time of keyspace creation (or alter), where the 
> above would automatically expand to list out the datacenters. We could allow 
> this to be recomputed whenever an AlterKeyspaceStatement runs, so that to add 
> datacenters you would just run:
> {noformat}
> ALTER KEYSPACE test WITH replication = {'class': 'NetworkTopologyStrategy', 
> 'replication_factor': 3}{noformat}
> and this would check whether the DCs in the current schema differ and add in 
> the new ones (_for safety reasons we'd never remove a DC during 
> auto-expansion unless it was explicitly set to zero_). Removing a datacenter 
> becomes an alter that includes an override for the DC you want to remove (or 
> of course you can always skip the auto-expansion and just use the old way):
> {noformat}
> // Tell it explicitly not to replicate to dc2
> > ALTER KEYSPACE test WITH replication = {'class': 'NetworkTopologyStrategy', 
> > 'replication_factor': 3, 'dc2': 0}
> > DESCRIBE KEYSPACE test
> CREATE KEYSPACE test WITH replication = {'class': 'NetworkTopologyStrategy', 
> 'dc1': '3'} AND durable_writes = true;{noformat}
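The proposed expansion step can be sketched as a small function (hypothetical names, illustrating the proposal rather than any eventual Cassandra implementation):

```python
def expand_replication(opts: dict, cluster_dcs: list) -> dict:
    """Expand a 'replication_factor' default across all known DCs.

    Explicit per-DC entries override the default, and a DC explicitly
    set to 0 is dropped from the result (the opt-out in the proposal).
    """
    opts = dict(opts)
    opts.pop("class", None)
    default_rf = opts.pop("replication_factor", None)
    expanded = {}
    for dc in cluster_dcs:
        rf = opts.get(dc, default_rf)
        if rf:  # skip DCs overridden to 0 (or unknown with no default)
            expanded[dc] = rf
    return expanded

# Default fans out to every DC; explicit entries win:
assert expand_replication(
    {"class": "NetworkTopologyStrategy", "replication_factor": 3, "dc1": 2},
    ["dc1", "dc2"]) == {"dc1": 2, "dc2": 3}

# A zero override excludes a DC from the expanded result:
assert expand_replication(
    {"class": "NetworkTopologyStrategy", "replication_factor": 3, "dc2": 0},
    ["dc1", "dc2"]) == {"dc1": 3}
```

This matches the DESCRIBE output shown above: the stored schema is the fully expanded per-DC map, so clients that read it back see plain NetworkTopologyStrategy options.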






[2/2] cassandra-dtest git commit: increase ttl to make sure self.update_view does not take longer than the ttl

2018-04-10 Thread marcuse
increase ttl to make sure self.update_view does not take longer than the ttl

Patch by marcuse; reviewed by Paulo Motta for CASSANDRA-14148


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/3a4b5d98
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/3a4b5d98
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/3a4b5d98

Branch: refs/heads/master
Commit: 3a4b5d98e60f0087508df26dd75ab24c032c7760
Parents: af2e55e
Author: Marcus Eriksson 
Authored: Thu Jan 18 16:27:08 2018 +0100
Committer: Marcus Eriksson 
Committed: Tue Apr 10 14:10:11 2018 +0200

--
 materialized_views_test.py | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/3a4b5d98/materialized_views_test.py
--
diff --git a/materialized_views_test.py b/materialized_views_test.py
index 3836ef7..eaae8dd 100644
--- a/materialized_views_test.py
+++ b/materialized_views_test.py
@@ -1414,19 +1414,19 @@ class TestMaterializedViews(Tester):
 assert_one(session, "SELECT * FROM t", [1, 1, 1, None, None, None])
 assert_one(session, "SELECT * FROM mv", [1, 1, 1, None])
 
-# add selected with ttl=5
-self.update_view(session, "UPDATE t USING TTL 10 SET a=1 WHERE k=1 AND c=1;", flush)
+# add selected with ttl=20 (we apparently need a long ttl because the flushing etc that self.update_view does can take a long time)
+self.update_view(session, "UPDATE t USING TTL 20 SET a=1 WHERE k=1 AND c=1;", flush)
 assert_one(session, "SELECT * FROM t", [1, 1, 1, None, None, None])
 assert_one(session, "SELECT * FROM mv", [1, 1, 1, None])
 
-time.sleep(10)
+time.sleep(20)
 
 # update unselected with ttl=10, view row should be alive
-self.update_view(session, "UPDATE t USING TTL 10 SET f=1 WHERE k=1 AND c=1;", flush)
+self.update_view(session, "UPDATE t USING TTL 20 SET f=1 WHERE k=1 AND c=1;", flush)
 assert_one(session, "SELECT * FROM t", [1, 1, None, None, None, 1])
 assert_one(session, "SELECT * FROM mv", [1, 1, None, None])
 
-time.sleep(10)
+time.sleep(20)
 
 # view row still alive due to base livenessInfo
 assert_none(session, "SELECT * FROM t")





[1/2] cassandra-dtest git commit: cant add DC before it has any nodes, also need to run queries at LOCAL_ONE to make sure we dont read from dc1. And to get the data to dc2 we need to run rebuild

2018-04-10 Thread marcuse
Repository: cassandra-dtest
Updated Branches:
  refs/heads/master dac3d7535 -> 3a4b5d98e


cant add DC before it has any nodes, also need to run queries at LOCAL_ONE to 
make sure we dont read from dc1. And to get the data to dc2 we need to run 
rebuild

Patch by marcuse; reviewed by Paulo Motta for CASSANDRA-14023


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/af2e55ea
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/af2e55ea
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/af2e55ea

Branch: refs/heads/master
Commit: af2e55eae12a26acc07ce52d8f8c617b77bb4156
Parents: dac3d75
Author: Marcus Eriksson 
Authored: Tue Jan 16 10:45:24 2018 +0100
Committer: Marcus Eriksson 
Committed: Tue Apr 10 14:08:29 2018 +0200

--
 materialized_views_test.py | 17 -
 1 file changed, 12 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/af2e55ea/materialized_views_test.py
--
diff --git a/materialized_views_test.py b/materialized_views_test.py
index 7771f9d..3836ef7 100644
--- a/materialized_views_test.py
+++ b/materialized_views_test.py
@@ -424,7 +424,7 @@ class TestMaterializedViews(Tester):
result = list(session.execute("SELECT * FROM ks.users_by_state_birth_year WHERE state='TX' AND birth_year=1968"))
assert len(result) == 1, "Expecting {} users, got {}".format(1, len(result))
 
-def _add_dc_after_mv_test(self, rf):
+def _add_dc_after_mv_test(self, rf, nts):
 """
 @jira_ticket CASSANDRA-10978
 
@@ -456,9 +456,16 @@ class TestMaterializedViews(Tester):
 
 logger.debug("Bootstrapping new node in another dc")
node5 = new_node(self.cluster, remote_debug_port='1414', data_center='dc2')
-node5.start(jvm_args=["-Dcassandra.migration_task_wait_in_seconds={}".format(MIGRATION_WAIT)])
+node5.start(jvm_args=["-Dcassandra.migration_task_wait_in_seconds={}".format(MIGRATION_WAIT)], wait_other_notice=True, wait_for_binary_proto=True)
+if nts:
+session.execute("alter keyspace ks with replication = {'class':'NetworkTopologyStrategy', 'dc1':1, 'dc2':1}")
+session.execute("alter keyspace system_auth with replication = {'class':'NetworkTopologyStrategy', 'dc1':1, 'dc2':1}")
+session.execute("alter keyspace system_traces with replication = {'class':'NetworkTopologyStrategy', 'dc1':1, 'dc2':1}")
+node4.nodetool('rebuild dc1')
+node5.nodetool('rebuild dc1')
 
-session2 = self.patient_exclusive_cql_connection(node4)
+cl = ConsistencyLevel.LOCAL_ONE if nts else ConsistencyLevel.ONE
+session2 = self.patient_exclusive_cql_connection(node4, consistency_level=cl)
 
 logger.debug("Verifying data from new node in view")
 for i in range(1000):
@@ -480,7 +487,7 @@ class TestMaterializedViews(Tester):
 Test that materialized views work as expected when adding a datacenter 
with SimpleStrategy.
 """
 
-self._add_dc_after_mv_test(1)
+self._add_dc_after_mv_test(1, False)
 
 @pytest.mark.resource_intensive
 def test_add_dc_after_mv_network_replication(self):
@@ -490,7 +497,7 @@ class TestMaterializedViews(Tester):
 Test that materialized views work as expected when adding a datacenter 
with NetworkTopologyStrategy.
 """
 
-self._add_dc_after_mv_test({'dc1': 1, 'dc2': 1})
+self._add_dc_after_mv_test({'dc1': 1}, True)
 
 @pytest.mark.resource_intensive
 def test_add_node_after_mv(self):





[jira] [Commented] (CASSANDRA-14023) add_dc_after_mv_network_replication_test - materialized_views_test.TestMaterializedViews fails due to invalid datacenter

2018-04-10 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432163#comment-16432163
 ] 

Marcus Eriksson commented on CASSANDRA-14023:
-

committed, thanks!

> add_dc_after_mv_network_replication_test - 
> materialized_views_test.TestMaterializedViews fails due to invalid datacenter
> 
>
> Key: CASSANDRA-14023
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14023
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Michael Kjellman
>Assignee: Marcus Eriksson
>Priority: Major
>
> add_dc_after_mv_network_replication_test - 
> materialized_views_test.TestMaterializedViews always fails due to:
>  message="Unrecognized strategy option {dc2} passed to NetworkTopologyStrategy 
> for keyspace ks">






[jira] [Updated] (CASSANDRA-14148) test_no_base_column_in_view_pk_complex_timestamp_with_flush - materialized_views_test.TestMaterializedViews frequently fails in CI

2018-04-10 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-14148:

Resolution: Fixed
Status: Resolved  (was: Ready to Commit)

committed, thanks!

> test_no_base_column_in_view_pk_complex_timestamp_with_flush - 
> materialized_views_test.TestMaterializedViews frequently fails in CI
> --
>
> Key: CASSANDRA-14148
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14148
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Michael Kjellman
>Assignee: Marcus Eriksson
>Priority: Major
>
> test_no_base_column_in_view_pk_complex_timestamp_with_flush - 
> materialized_views_test.TestMaterializedViews frequently fails in CI
> self = <materialized_views_test.TestMaterializedViews object at 0x7f849b25cf60>
> @since('3.0')
> def test_no_base_column_in_view_pk_complex_timestamp_with_flush(self):
> >   self._test_no_base_column_in_view_pk_complex_timestamp(flush=True)
> materialized_views_test.py:970: 
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
> _ 
> materialized_views_test.py:1066: in 
> _test_no_base_column_in_view_pk_complex_timestamp
> assert_one(session, "SELECT * FROM t", [1, 1, None, None, None, 1])
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
> _ 
> session = 
> query = 'SELECT * FROM t', expected = [1, 1, None, None, None, 1], cl = None
> def assert_one(session, query, expected, cl=None):
> """
> Assert query returns one row.
> @param session Session to use
> @param query Query to run
> @param expected Expected results from query
> @param cl Optional Consistency Level setting. Default ONE
> 
> Examples:
> assert_one(session, "LIST USERS", ['cassandra', True])
> assert_one(session, query, [0, 0])
> """
> simple_query = SimpleStatement(query, consistency_level=cl)
> res = session.execute(simple_query)
> list_res = _rows_to_list(res)
> >   assert list_res == [expected], "Expected {} from {}, but got 
> > {}".format([expected], query, list_res)
> E   AssertionError: Expected [[1, 1, None, None, None, 1]] from SELECT * 
> FROM t, but got []






[jira] [Assigned] (CASSANDRA-14371) dtest failure: sstablesplit_test.TestSSTableSplit.test_single_file_split

2018-04-10 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson reassigned CASSANDRA-14371:
---

Assignee: Patrick Bannister

> dtest failure: sstablesplit_test.TestSSTableSplit.test_single_file_split
> 
>
> Key: CASSANDRA-14371
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14371
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Patrick Bannister
>Priority: Major
>
> https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-trunk-dtest/489/testReport/sstablesplit_test/TestSSTableSplit/test_single_file_split/
> {code}
> for (stdout, stderr, rc) in result:
> logger.debug(stderr)
> >   failure = stderr.find("java.lang.AssertionError: Data component 
> > is missing")
> E   TypeError: a bytes-like object is required, not 'str'
> {code}
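For context, this TypeError is the standard Python 3 symptom of searching a bytes stream with a str pattern; an illustrative sketch (not the actual dtest fix):

```python
stderr = b"java.lang.AssertionError: Data component is missing"

# bytes.find() rejects a str pattern under Python 3:
try:
    stderr.find("Data component is missing")
    raise AssertionError("expected a TypeError")
except TypeError:
    pass

# Either search with a bytes literal, or decode the stream once up front:
assert stderr.find(b"Data component is missing") >= 0
assert stderr.decode("utf-8", errors="replace").find("Data component is missing") >= 0
```

Under Python 2 both spellings worked, which is why this only surfaced after the dtest suite moved to Python 3.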






[jira] [Resolved] (CASSANDRA-14284) Chunk checksum test needs to occur before uncompress to avoid JVM crash

2018-04-10 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer resolved CASSANDRA-14284.

   Resolution: Fixed
Fix Version/s: 3.11.3
   3.0.17
   2.2.13
   2.1.21
   4.0

Committed into 2.1 at 34a1d5da58fb8edcad39633084541bb4162f5ede and merged into 
2.2, 3.0, 3.11 and trunk.

> Chunk checksum test needs to occur before uncompress to avoid JVM crash
> ---
>
> Key: CASSANDRA-14284
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14284
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: The check-only-after-doing-the-decompress logic appears 
> to be in all current releases.
> Here are some samples at different evolution points :
> 3.11.2:
> [https://github.com/apache/cassandra/blob/cassandra-3.11.2/src/java/org/apache/cassandra/io/util/CompressedChunkReader.java#L146]
> https://github.com/apache/cassandra/blob/cassandra-3.11.2/src/java/org/apache/cassandra/io/util/CompressedChunkReader.java#L207
>  
> 3.5:
>  
> [https://github.com/apache/cassandra/blob/cassandra-3.5/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java#L135]
> [https://github.com/apache/cassandra/blob/cassandra-3.5/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java#L196]
> 2.1.17:
>  
> [https://github.com/apache/cassandra/blob/cassandra-2.1.17/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java#L122]
>  
>Reporter: Gil Tene
>Assignee: Benjamin Lerer
>Priority: Major
> Fix For: 4.0, 2.1.21, 2.2.13, 3.0.17, 3.11.3
>
>
> While checksums are (generally) performed on compressed data, the checksum 
> test when reading is currently (in all variants of C* 2.x, 3.x I've looked 
> at) done [on the compressed data] only after the uncompress operation has 
> completed. 
> The issue here is that LZ4_decompress_fast (as documented in e.g. 
> [https://github.com/lz4/lz4/blob/dev/lib/lz4.h#L214)] can result in memory 
> overruns when provided with malformed source data. This in turn can (and 
> does, e.g. in CASSANDRA-13757) lead to JVM crashes during the uncompress of 
> corrupted chunks. The checksum operation would obviously detect the issue, 
> but we'd never get to it if the JVM crashes first.
> Moving the checksum test of the compressed data to before the uncompress 
> operation (in cases where the checksum is done on compressed data) will 
> resolve this issue.
> -
> The check-only-after-doing-the-decompress logic appears to be in all current 
> releases.
> Here are some samples at different evolution points :
> 3.11.2:
> [https://github.com/apache/cassandra/blob/cassandra-3.11.2/src/java/org/apache/cassandra/io/util/CompressedChunkReader.java#L146]
> https://github.com/apache/cassandra/blob/cassandra-3.11.2/src/java/org/apache/cassandra/io/util/CompressedChunkReader.java#L207
>  
> 3.5:
>  
> [https://github.com/apache/cassandra/blob/cassandra-3.5/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java#L135]
> [https://github.com/apache/cassandra/blob/cassandra-3.5/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java#L196]
> 2.1.17:
>  
> [https://github.com/apache/cassandra/blob/cassandra-2.1.17/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java#L122]






[jira] [Commented] (CASSANDRA-14371) dtest failure: sstablesplit_test.TestSSTableSplit.test_single_file_split

2018-04-10 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432181#comment-16432181
 ] 

Marcus Eriksson commented on CASSANDRA-14371:
-

lets wait for the ccm fix

ping [~philipthompson]

> dtest failure: sstablesplit_test.TestSSTableSplit.test_single_file_split
> 
>
> Key: CASSANDRA-14371
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14371
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Patrick Bannister
>Priority: Major
>
> https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-trunk-dtest/489/testReport/sstablesplit_test/TestSSTableSplit/test_single_file_split/
> {code}
> for (stdout, stderr, rc) in result:
> logger.debug(stderr)
> >   failure = stderr.find("java.lang.AssertionError: Data component 
> > is missing")
> E   TypeError: a bytes-like object is required, not 'str'
> {code}






[jira] [Commented] (CASSANDRA-14369) infinite loop when decommission a node

2018-04-10 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432238#comment-16432238
 ] 

Paulo Motta commented on CASSANDRA-14369:
-

Since you are using multiple disks (JBOD) this looks similar to 
CASSANDRA-13948. Would you mind upgrading to 3.11.2 and seeing if the issue 
still happens there?

> infinite loop when decommission a node
> --
>
> Key: CASSANDRA-14369
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14369
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Daniel Woo
>Priority: Major
> Fix For: 3.11.1
>
>
> I have 6 nodes (N1 to N6). N2 to N6 are new hardware with two SSDs each; 
> N1 is an old box with spinning disks, and I am trying to decommission N1. 
> Then I see two nodes trying to receive streaming from N1 indefinitely. The 
> log rotates so quickly that I can only see this:
>  
> {{INFO  [CompactionExecutor:19401] 2018-04-07 13:07:56,560 
> LeveledManifest.java:474 - Adding high-level (L3) 
> BigTableReader(path='/opt/platform/data1/cassandra/data/data/contract_center_cloud/contract-2f2f9f70cd9911e7bfe87fec03576322/mc-31-big-Data.db')
>  to candidates}}
> (the same log line repeats continuously, several times per millisecond)
> nodetool tpstats shows some of the compactions are pending:
>  
> {noformat}
> Pool Name                    Active  Pending  Completed  Blocked  All time blocked
> ReadStage                         0        0    1366419        0                 0
> MiscStage                         0        0          0        0                 0
> CompactionExecutor                9        9      77739        0                 0
> MutationStage                     0        0    7504702        0                 0
> MemtableReclaimMemory             0        0        327        0                 0
> PendingRangeCalculator            0        0         20        0                 0
> GossipStage                       0        0     486365        0                 0
> SecondaryIndexManagement          0        0          0        0                 0
> {noformat}
>  
> This is from the jstack output:
> {{"CompactionExecutor:1" #26533 daemon prio=1 os_prio=4 tid=0x7f971812f170 nid=0x6581 waiting for monitor entry [0x

[1/6] cassandra git commit: Handle all exceptions when opening sstables

2018-04-10 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 73ca0e1e1 -> edcb90f08
  refs/heads/cassandra-3.11 c1020d62e -> 19e329eb5
  refs/heads/trunk 0b16546f6 -> b5dbc04bd


Handle all exceptions when opening sstables

Patch by marcuse; reviewed by Blake Eggleston for CASSANDRA-14202


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/edcb90f0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/edcb90f0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/edcb90f0

Branch: refs/heads/cassandra-3.0
Commit: edcb90f0813b88bbd42e9ebc55507b0f03ccb7bc
Parents: 73ca0e1
Author: Marcus Eriksson 
Authored: Mon Jan 29 15:30:17 2018 +0100
Committer: Marcus Eriksson 
Committed: Tue Apr 10 15:24:04 2018 +0200

--
 CHANGES.txt  |  1 +
 .../cassandra/io/sstable/CorruptSSTableException.java|  4 ++--
 .../cassandra/io/sstable/format/SSTableReader.java   | 11 +++
 3 files changed, 6 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/edcb90f0/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 1564fa3..94b2276 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.17
+ * Handle all exceptions when opening sstables (CASSANDRA-14202)
  * Handle incompletely written hint descriptors during startup 
(CASSANDRA-14080)
  * Handle repeat open bound from SRP in read repair (CASSANDRA-14330)
  * Use zero as default score in DynamicEndpointSnitch (CASSANDRA-14252)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/edcb90f0/src/java/org/apache/cassandra/io/sstable/CorruptSSTableException.java
--
diff --git 
a/src/java/org/apache/cassandra/io/sstable/CorruptSSTableException.java 
b/src/java/org/apache/cassandra/io/sstable/CorruptSSTableException.java
index 0fe316d..93be2ee 100644
--- a/src/java/org/apache/cassandra/io/sstable/CorruptSSTableException.java
+++ b/src/java/org/apache/cassandra/io/sstable/CorruptSSTableException.java
@@ -23,13 +23,13 @@ public class CorruptSSTableException extends 
RuntimeException
 {
 public final File path;
 
-public CorruptSSTableException(Exception cause, File path)
+public CorruptSSTableException(Throwable cause, File path)
 {
 super("Corrupted: " + path, cause);
 this.path = path;
 }
 
-public CorruptSSTableException(Exception cause, String path)
+public CorruptSSTableException(Throwable cause, String path)
 {
 this(cause, new File(path));
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/edcb90f0/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java 
b/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
index c66fd8c..dc6940d 100644
--- a/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
+++ b/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
@@ -466,9 +466,9 @@ public abstract class SSTableReader extends SSTable 
implements SelfRefCounted

[5/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2018-04-10 Thread marcuse
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/19e329eb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/19e329eb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/19e329eb

Branch: refs/heads/cassandra-3.11
Commit: 19e329eb5c124d2e37b52052e8622f0515f058b7
Parents: c1020d6 edcb90f
Author: Marcus Eriksson 
Authored: Tue Apr 10 15:26:30 2018 +0200
Committer: Marcus Eriksson 
Committed: Tue Apr 10 15:26:30 2018 +0200

--
 CHANGES.txt  |  1 +
 .../cassandra/io/sstable/CorruptSSTableException.java|  4 ++--
 .../cassandra/io/sstable/format/SSTableReader.java   | 11 +++
 3 files changed, 6 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/19e329eb/CHANGES.txt
--
diff --cc CHANGES.txt
index c4f05d5,94b2276..e0145d4
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,13 -1,5 +1,14 @@@
 -3.0.17
 +3.11.3
 + * Downgrade log level to trace for CommitLogSegmentManager (CASSANDRA-14370)
 + * CQL fromJson(null) throws NullPointerException (CASSANDRA-13891)
 + * Serialize empty buffer as empty string for json output format 
(CASSANDRA-14245)
 + * Allow logging implementation to be interchanged for embedded testing 
(CASSANDRA-13396)
 + * SASI tokenizer for simple delimiter based entries (CASSANDRA-14247)
 + * Fix Loss of digits when doing CAST from varint/bigint to decimal 
(CASSANDRA-14170)
 + * RateBasedBackPressure unnecessarily invokes a lock on the Guava 
RateLimiter (CASSANDRA-14163)
 + * Fix wildcard GROUP BY queries (CASSANDRA-14209)
 +Merged from 3.0:
+  * Handle all exceptions when opening sstables (CASSANDRA-14202)
   * Handle incompletely written hint descriptors during startup 
(CASSANDRA-14080)
   * Handle repeat open bound from SRP in read repair (CASSANDRA-14330)
   * Use zero as default score in DynamicEndpointSnitch (CASSANDRA-14252)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/19e329eb/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
--


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[4/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2018-04-10 Thread marcuse
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/19e329eb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/19e329eb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/19e329eb

Branch: refs/heads/trunk
Commit: 19e329eb5c124d2e37b52052e8622f0515f058b7
Parents: c1020d6 edcb90f
Author: Marcus Eriksson 
Authored: Tue Apr 10 15:26:30 2018 +0200
Committer: Marcus Eriksson 
Committed: Tue Apr 10 15:26:30 2018 +0200

--
 CHANGES.txt  |  1 +
 .../cassandra/io/sstable/CorruptSSTableException.java|  4 ++--
 .../cassandra/io/sstable/format/SSTableReader.java   | 11 +++
 3 files changed, 6 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/19e329eb/CHANGES.txt
--
diff --cc CHANGES.txt
index c4f05d5,94b2276..e0145d4
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,13 -1,5 +1,14 @@@
 -3.0.17
 +3.11.3
 + * Downgrade log level to trace for CommitLogSegmentManager (CASSANDRA-14370)
 + * CQL fromJson(null) throws NullPointerException (CASSANDRA-13891)
 + * Serialize empty buffer as empty string for json output format 
(CASSANDRA-14245)
 + * Allow logging implementation to be interchanged for embedded testing 
(CASSANDRA-13396)
 + * SASI tokenizer for simple delimiter based entries (CASSANDRA-14247)
 + * Fix Loss of digits when doing CAST from varint/bigint to decimal 
(CASSANDRA-14170)
 + * RateBasedBackPressure unnecessarily invokes a lock on the Guava 
RateLimiter (CASSANDRA-14163)
 + * Fix wildcard GROUP BY queries (CASSANDRA-14209)
 +Merged from 3.0:
+  * Handle all exceptions when opening sstables (CASSANDRA-14202)
   * Handle incompletely written hint descriptors during startup 
(CASSANDRA-14080)
   * Handle repeat open bound from SRP in read repair (CASSANDRA-14330)
   * Use zero as default score in DynamicEndpointSnitch (CASSANDRA-14252)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/19e329eb/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
--





[6/6] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2018-04-10 Thread marcuse
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b5dbc04b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b5dbc04b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b5dbc04b

Branch: refs/heads/trunk
Commit: b5dbc04bda0479367d89d1e406b09fa187bf7aad
Parents: 0b16546 19e329e
Author: Marcus Eriksson 
Authored: Tue Apr 10 15:28:29 2018 +0200
Committer: Marcus Eriksson 
Committed: Tue Apr 10 15:28:29 2018 +0200

--
 CHANGES.txt  |  1 +
 .../cassandra/io/sstable/CorruptSSTableException.java|  4 ++--
 .../cassandra/io/sstable/format/SSTableReader.java   | 11 +++
 3 files changed, 6 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b5dbc04b/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b5dbc04b/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
--





[3/6] cassandra git commit: Handle all exceptions when opening sstables

2018-04-10 Thread marcuse
Handle all exceptions when opening sstables

Patch by marcuse; reviewed by Blake Eggleston for CASSANDRA-14202


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/edcb90f0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/edcb90f0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/edcb90f0

Branch: refs/heads/trunk
Commit: edcb90f0813b88bbd42e9ebc55507b0f03ccb7bc
Parents: 73ca0e1
Author: Marcus Eriksson 
Authored: Mon Jan 29 15:30:17 2018 +0100
Committer: Marcus Eriksson 
Committed: Tue Apr 10 15:24:04 2018 +0200

--
 CHANGES.txt  |  1 +
 .../cassandra/io/sstable/CorruptSSTableException.java|  4 ++--
 .../cassandra/io/sstable/format/SSTableReader.java   | 11 +++
 3 files changed, 6 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/edcb90f0/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 1564fa3..94b2276 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.17
+ * Handle all exceptions when opening sstables (CASSANDRA-14202)
  * Handle incompletely written hint descriptors during startup 
(CASSANDRA-14080)
  * Handle repeat open bound from SRP in read repair (CASSANDRA-14330)
  * Use zero as default score in DynamicEndpointSnitch (CASSANDRA-14252)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/edcb90f0/src/java/org/apache/cassandra/io/sstable/CorruptSSTableException.java
--
diff --git 
a/src/java/org/apache/cassandra/io/sstable/CorruptSSTableException.java 
b/src/java/org/apache/cassandra/io/sstable/CorruptSSTableException.java
index 0fe316d..93be2ee 100644
--- a/src/java/org/apache/cassandra/io/sstable/CorruptSSTableException.java
+++ b/src/java/org/apache/cassandra/io/sstable/CorruptSSTableException.java
@@ -23,13 +23,13 @@ public class CorruptSSTableException extends 
RuntimeException
 {
 public final File path;
 
-public CorruptSSTableException(Exception cause, File path)
+public CorruptSSTableException(Throwable cause, File path)
 {
 super("Corrupted: " + path, cause);
 this.path = path;
 }
 
-public CorruptSSTableException(Exception cause, String path)
+public CorruptSSTableException(Throwable cause, String path)
 {
 this(cause, new File(path));
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/edcb90f0/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java 
b/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
index c66fd8c..dc6940d 100644
--- a/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
+++ b/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
@@ -466,9 +466,9 @@ public abstract class SSTableReader extends SSTable 
implements SelfRefCounted

[2/6] cassandra git commit: Handle all exceptions when opening sstables

2018-04-10 Thread marcuse
Handle all exceptions when opening sstables

Patch by marcuse; reviewed by Blake Eggleston for CASSANDRA-14202


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/edcb90f0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/edcb90f0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/edcb90f0

Branch: refs/heads/cassandra-3.11
Commit: edcb90f0813b88bbd42e9ebc55507b0f03ccb7bc
Parents: 73ca0e1
Author: Marcus Eriksson 
Authored: Mon Jan 29 15:30:17 2018 +0100
Committer: Marcus Eriksson 
Committed: Tue Apr 10 15:24:04 2018 +0200

--
 CHANGES.txt  |  1 +
 .../cassandra/io/sstable/CorruptSSTableException.java|  4 ++--
 .../cassandra/io/sstable/format/SSTableReader.java   | 11 +++
 3 files changed, 6 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/edcb90f0/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 1564fa3..94b2276 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.17
+ * Handle all exceptions when opening sstables (CASSANDRA-14202)
  * Handle incompletely written hint descriptors during startup 
(CASSANDRA-14080)
  * Handle repeat open bound from SRP in read repair (CASSANDRA-14330)
  * Use zero as default score in DynamicEndpointSnitch (CASSANDRA-14252)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/edcb90f0/src/java/org/apache/cassandra/io/sstable/CorruptSSTableException.java
--
diff --git 
a/src/java/org/apache/cassandra/io/sstable/CorruptSSTableException.java 
b/src/java/org/apache/cassandra/io/sstable/CorruptSSTableException.java
index 0fe316d..93be2ee 100644
--- a/src/java/org/apache/cassandra/io/sstable/CorruptSSTableException.java
+++ b/src/java/org/apache/cassandra/io/sstable/CorruptSSTableException.java
@@ -23,13 +23,13 @@ public class CorruptSSTableException extends 
RuntimeException
 {
 public final File path;
 
-public CorruptSSTableException(Exception cause, File path)
+public CorruptSSTableException(Throwable cause, File path)
 {
 super("Corrupted: " + path, cause);
 this.path = path;
 }
 
-public CorruptSSTableException(Exception cause, String path)
+public CorruptSSTableException(Throwable cause, String path)
 {
 this(cause, new File(path));
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/edcb90f0/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java 
b/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
index c66fd8c..dc6940d 100644
--- a/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
+++ b/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
@@ -466,9 +466,9 @@ public abstract class SSTableReader extends SSTable 
implements SelfRefCounted

[jira] [Updated] (CASSANDRA-14202) Assertion error on sstable open during startup should invoke disk failure policy

2018-04-10 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-14202:

   Resolution: Fixed
Fix Version/s: (was: 3.11.x)
   (was: 4.x)
   (was: 3.0.x)
   3.11.3
   3.0.17
   4.0
   Status: Resolved  (was: Ready to Commit)

committed as {{edcb90f0813b88bbd42e9ebc55507b0f03ccb7bc}} and merged up with a 
small change:
[this|https://github.com/krummas/cassandra/blob/marcuse/handle_throwable/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java#L501-L510]
 got folded up like 
[this|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java#L514-L518]

> Assertion error on sstable open during startup should invoke disk failure 
> policy
> 
>
> Key: CASSANDRA-14202
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14202
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Major
> Fix For: 4.0, 3.0.17, 3.11.3
>
>
> We should catch all exceptions when opening sstables on startup and invoke 
> the disk failure policy
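The committed diff above widens CorruptSSTableException's cause parameter from Exception to Throwable. The point is that an AssertionError raised while opening an sstable is an Error, not an Exception, so the old signature could never wrap it and the disk failure policy was never reached. A minimal self-contained sketch of that distinction, with simplified hypothetical class names (not the actual Cassandra code):

```java
// Sketch: why the cause type matters. An AssertionError is a Throwable but
// not an Exception, so an Exception-typed constructor or catch clause
// cannot capture it.
class CorruptSSTableSketchException extends RuntimeException {
    final String path;

    CorruptSSTableSketchException(Throwable cause, String path) { // Throwable, not Exception
        super("Corrupted: " + path, cause);
        this.path = path;
    }
}

public class OpenSketch {
    // Simulates an sstable open that trips an assertion during deserialization.
    static void open(String path) {
        try {
            throw new AssertionError("length out of bounds"); // simulated corruption
        } catch (Throwable t) { // catch (Exception e) would let the Error escape
            throw new CorruptSSTableSketchException(t, path);
        }
    }

    public static void main(String[] args) {
        try {
            open("/data/ks/tbl/mc-31-big-Data.db");
        } catch (CorruptSSTableSketchException e) {
            // At this point the disk failure policy would be invoked.
            System.out.println("wrapped cause: " + e.getCause().getClass().getSimpleName());
        }
    }
}
```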



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Commented] (CASSANDRA-14303) NetworkTopologyStrategy could have a "default replication" option

2018-04-10 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432331#comment-16432331
 ] 

Jeremiah Jordan commented on CASSANDRA-14303:
-

[~snazy] see conversation above: 
https://issues.apache.org/jira/browse/CASSANDRA-14303?focusedCommentId=16393438&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16393438
bq. Yes, that edge case as well as others (gossip inconsistency mostly) is why 
I propose only evaluating the DCs at the time of a CREATE or ALTER statement 
execution.

> NetworkTopologyStrategy could have a "default replication" option
> -
>
> Key: CASSANDRA-14303
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14303
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Joseph Lynch
>Assignee: Joseph Lynch
>Priority: Minor
> Fix For: 4.0
>
>
> Right now when creating a keyspace with {{NetworkTopologyStrategy}} the user 
> has to manually specify the datacenters they want their data replicated to 
> with parameters, e.g.:
> {noformat}
>  CREATE KEYSPACE test WITH replication = {'class': 'NetworkTopologyStrategy', 
> 'dc1': 3, 'dc2': 3}{noformat}
> This is a poor user interface because it requires the creator of the keyspace 
> (typically a developer) to know the layout of the Cassandra cluster (which 
> may or may not be controlled by them). Also, at least in my experience, folks 
> typo the datacenters _all_ the time. To work around this I see a number of 
> users creating automation around this where the automation describes the 
> Cassandra cluster and automatically expands out to all the dcs that Cassandra 
> knows about. Why can't Cassandra just do this for us, re-using the previously 
> forbidden {{replication_factor}} option (for backwards compatibility):
> {noformat}
>  CREATE KEYSPACE test WITH replication = {'class': 'NetworkTopologyStrategy', 
> 'replication_factor': 3}{noformat}
> This would automatically replicate this Keyspace to all datacenters that are 
> present in the cluster. If you need to _override_ the default you could 
> supply a datacenter name, e.g.:
> {noformat}
> > CREATE KEYSPACE test WITH replication = {'class': 
> > 'NetworkTopologyStrategy', 'replication_factor': 3, 'dc1': 2}
> > DESCRIBE KEYSPACE test
> CREATE KEYSPACE test WITH replication = {'class': 'NetworkTopologyStrategy', 
> 'dc1': '2', 'dc2': 3} AND durable_writes = true;
> {noformat}
> On the implementation side I think this may be reasonably straightforward to 
> do an auto-expansion at the time of keyspace creation (or alter), where the 
> above would automatically expand to list out the datacenters. We could allow 
> this to be recomputed whenever an AlterKeyspaceStatement runs so that to add 
> datacenters you would just run:
> {noformat}
> ALTER KEYSPACE test WITH replication = {'class': 'NetworkTopologyStrategy', 
> 'replication_factor': 3}{noformat}
> and this would check that if the dc's in the current schema are different you 
> add in the new ones (_for safety reasons we'd never remove non explicitly 
> supplied zero dcs when auto-generating dcs_). Removing a datacenter becomes 
> an alter that includes an override for the dc you want to remove (or of 
> course you can always not use the auto-expansion and just use the old way):
> {noformat}
> // Tell it explicitly not to replicate to dc2
> > ALTER KEYSPACE test WITH replication = {'class': 'NetworkTopologyStrategy', 
> > 'replication_factor': 3, 'dc2': 0}
> > DESCRIBE KEYSPACE test
> CREATE KEYSPACE test WITH replication = {'class': 'NetworkTopologyStrategy', 
> 'dc1': '3'} AND durable_writes = true;{noformat}
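The expansion rules described above (a default {{replication_factor}} spread over all known datacenters, per-DC overrides winning, and an explicit 0 removing a DC) can be sketched as follows. This is an illustrative model with hypothetical names, not the committed implementation:

```java
import java.util.*;

public class ReplicationExpansion {
    // Expand {"replication_factor": N, ...per-DC overrides} over known DCs.
    static Map<String, Integer> expand(Map<String, Integer> options, List<String> knownDcs) {
        Integer defaultRf = options.get("replication_factor");
        Map<String, Integer> expanded = new LinkedHashMap<>();
        for (String dc : knownDcs) {
            // An explicit per-DC entry beats the default.
            Integer rf = options.containsKey(dc) ? options.get(dc) : defaultRf;
            if (rf != null && rf > 0)   // an explicit 0 (or no default) drops the DC
                expanded.put(dc, rf);
        }
        return expanded;
    }

    public static void main(String[] args) {
        Map<String, Integer> opts = new LinkedHashMap<>();
        opts.put("replication_factor", 3);
        opts.put("dc1", 2);             // override dc1 down to 2
        System.out.println(expand(opts, Arrays.asList("dc1", "dc2"))); // {dc1=2, dc2=3}
    }
}
```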






[jira] [Commented] (CASSANDRA-14310) Don't allow nodetool refresh before cfs is opened

2018-04-10 Thread Jordan West (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432529#comment-16432529
 ] 

Jordan West commented on CASSANDRA-14310:
-

+1. Agreed on keeping the initialized check as well. None of the dtest failures 
look related and the new dtest looks good.

> Don't allow nodetool refresh before cfs is opened
> -
>
> Key: CASSANDRA-14310
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14310
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Major
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> There is a potential deadlock during startup if nodetool refresh is called 
> while sstables are being opened. We should not allow refresh to be called 
> before everything is initialized.
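A minimal sketch of the guard described above, using hypothetical names (the actual patch gates the real refresh path inside Cassandra):

```java
// Sketch: reject nodetool refresh until startup has finished opening sstables.
public class RefreshGuard {
    private volatile boolean initialized = false;

    void markInitialized() {
        initialized = true; // called once startup has opened all sstables
    }

    void loadNewSSTables(String keyspace, String table) {
        if (!initialized)
            throw new IllegalStateException(
                "Node is not yet initialized; refresh of " + keyspace + "." + table + " rejected");
        // ... safe to scan the data directory and load new sstables here ...
    }

    public static void main(String[] args) {
        RefreshGuard guard = new RefreshGuard();
        try {
            guard.loadNewSSTables("ks", "tbl");
        } catch (IllegalStateException e) {
            System.out.println("rejected before init");
        }
        guard.markInitialized();
        guard.loadNewSSTables("ks", "tbl"); // now allowed
    }
}
```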






[jira] [Updated] (CASSANDRA-14310) Don't allow nodetool refresh before cfs is opened

2018-04-10 Thread Jordan West (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jordan West updated CASSANDRA-14310:

Reviewer: Jordan West  (was: Sam Tunnicliffe)

> Don't allow nodetool refresh before cfs is opened
> -
>
> Key: CASSANDRA-14310
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14310
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Major
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> There is a potential deadlock during startup if nodetool refresh is called 
> while sstables are being opened. We should not allow refresh to be called 
> before everything is initialized.






[jira] [Commented] (CASSANDRA-13853) nodetool describecluster should be more informative

2018-04-10 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432532#comment-16432532
 ] 

Ariel Weisberg commented on CASSANDRA-13853:


We shouldn't change the output of schema versions because that already existed 
and people might be parsing it.

Also I think it might make sense to put the new output after the existing 
output and put a line break in between it. We might already break parsing that 
people are doing since they might just read to the end to get the schema 
versions, but at least that will be easier to fix.

> nodetool describecluster should be more informative
> ---
>
> Key: CASSANDRA-13853
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13853
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability, Tools
>Reporter: Jon Haddad
>Assignee: Preetika Tyagi
>Priority: Minor
>  Labels: lhf
> Fix For: 4.x
>
> Attachments: cassandra-13853-v5.patch, 
> nodetool_describecluster_test.py
>
>
> Additional information we should be displaying:
> * Total node count
> * List of datacenters, RF, with number of nodes per dc, how many are down, 
> * Version(s)






[jira] [Updated] (CASSANDRA-6719) redesign loadnewsstables

2018-04-10 Thread Jordan West (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jordan West updated CASSANDRA-6719:
---
Reviewer: Jordan West

> redesign loadnewsstables
> 
>
> Key: CASSANDRA-6719
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6719
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Tools
>Reporter: Jonathan Ellis
>Assignee: Marcus Eriksson
>Priority: Minor
>  Labels: lhf
> Fix For: 4.x
>
> Attachments: 6719.patch
>
>
> CFSMBean.loadNewSSTables scans data directories for new sstables dropped 
> there by an external agent.  This is dangerous because of possible filename 
> conflicts with existing or newly generated sstables.
> Instead, we should support leaving the new sstables in a separate directory 
> (specified by a parameter, or configured as a new location in yaml) and take 
> care of renaming as necessary automagically.






[jira] [Commented] (CASSANDRA-13853) nodetool describecluster should be more informative

2018-04-10 Thread Preetika Tyagi (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432541#comment-16432541
 ] 

Preetika Tyagi commented on CASSANDRA-13853:


[~aweisberg] Does the below output look okay? If so, I will push the patch.
{code:java}
Cluster Information:
  Name: Test Cluster
  Snitch: org.apache.cassandra.locator.SimpleSnitch
  DynamicEndPointSnitch: enabled
  Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
  Schema versions: 
b18cff54-5b52-3afd-b6cf-bb923b695e73: [127.0.0.1]

Stats for all nodes:
  Live: 1
  Joining: 0
  Moving: 0
  Leaving: 0
  Unreachable: 0
Data Centers: 
  datacenter1 #Nodes: 1 #Down: 0
Keyspaces:
  system_schema -> Replication class: LocalStrategy {}
  system -> Replication class: LocalStrategy {}
  system_auth -> Replication class: SimpleStrategy {replication_factor=1}
  system_distributed -> Replication class: SimpleStrategy 
{replication_factor=3}
  system_traces -> Replication class: SimpleStrategy 
{replication_factor=2}{code}

> nodetool describecluster should be more informative
> ---
>
> Key: CASSANDRA-13853
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13853
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability, Tools
>Reporter: Jon Haddad
>Assignee: Preetika Tyagi
>Priority: Minor
>  Labels: lhf
> Fix For: 4.x
>
> Attachments: cassandra-13853-v5.patch, 
> nodetool_describecluster_test.py
>
>
> Additional information we should be displaying:
> * Total node count
> * List of datacenters, RF, with number of nodes per dc, how many are down, 
> * Version(s)






[jira] [Commented] (CASSANDRA-13853) nodetool describecluster should be more informative

2018-04-10 Thread Jon Haddad (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432547#comment-16432547
 ] 

Jon Haddad commented on CASSANDRA-13853:


{quote}
We shouldn't change the output of schema versions because that already existed 
and people might be parsing it.
{quote}

Since this is going into 4.0, is meant to be human readable, and we already 
have a programmatic means of getting this info (JMX), I'm OK with breaking 
changes if they are an improvement to its readability.

Yes, people are parsing nodetool.  It's a bummer.  On the upside, I think that 
virtual tables will be able to take over most of the duty of serving as a 
programmatic interface to the DB. 

> nodetool describecluster should be more informative
> ---
>
> Key: CASSANDRA-13853
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13853
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability, Tools
>Reporter: Jon Haddad
>Assignee: Preetika Tyagi
>Priority: Minor
>  Labels: lhf
> Fix For: 4.x
>
> Attachments: cassandra-13853-v5.patch, 
> nodetool_describecluster_test.py
>
>
> Additional information we should be displaying:
> * Total node count
> * List of datacenters, RF, with number of nodes per dc, how many are down, 
> * Version(s)






[jira] [Commented] (CASSANDRA-13853) nodetool describecluster should be more informative

2018-04-10 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432555#comment-16432555
 ] 

Ariel Weisberg commented on CASSANDRA-13853:


Output looks good to me. I guess you can lose the newline since people aren't 
supposed to be parsing this really. I can fix that when I commit it.

It looks like this still doesn't have the Cassandra binary versions Jon was 
asking for?

> nodetool describecluster should be more informative
> ---
>
> Key: CASSANDRA-13853
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13853
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability, Tools
>Reporter: Jon Haddad
>Assignee: Preetika Tyagi
>Priority: Minor
>  Labels: lhf
> Fix For: 4.x
>
> Attachments: cassandra-13853-v5.patch, 
> nodetool_describecluster_test.py
>
>
> Additional information we should be displaying:
> * Total node count
> * List of datacenters, RF, with number of nodes per dc, how many are down, 
> * Version(s)






[jira] [Commented] (CASSANDRA-13459) Diag. Events: Native transport integration

2018-04-10 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432544#comment-16432544
 ] 

Ariel Weisberg commented on CASSANDRA-13459:


So I was just thinking that, looking forward, restricting this mechanism to 
diagnostic events might not make sense. A more generic subscription mechanism, 
where diagnostic events are a subset of what clients can conditionally 
subscribe to, means we don't end up with naming issues in the future.

For V1 of this functionality my only sticking point is that even with 1-2 
clients consuming diagnostic events we have to handle backpressure somehow. 
AFAIK we hold onto messages pending to a client for a while (indefinitely?). I 
am not actually sure what kind of timeouts or health checks we do for clients.

All the other stuff I mentioned in CASSANDRA-12151 is not really necessary for 
V1 if it does what you need today.

> Diag. Events: Native transport integration
> --
>
> Key: CASSANDRA-13459
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13459
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: CQL
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>Priority: Major
>  Labels: client-impacting
>
> Events should be consumable by clients that would receive subscribed events 
> from the connected node. This functionality is designed to work on top of 
> native transport with minor modifications to the protocol standard (see 
> [original 
> proposal|https://docs.google.com/document/d/1uEk7KYgxjNA0ybC9fOuegHTcK3Yi0hCQN5nTp5cNFyQ/edit?usp=sharing]
>  for further considered options). First we have to add another value for 
> existing event types. Also, we have to extend the protocol a bit to be able 
> to specify a sub-class and sub-type value. E.g. 
> {{DIAGNOSTIC_EVENT(GossiperEvent, MAJOR_STATE_CHANGE_HANDLED)}}. This still 
> has to be worked out and I'd appreciate any feedback.
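A rough sketch of the sub-class/sub-type identifier shape floated above (the enum values and class names here are illustrative placeholders, not actual protocol constants, which are still being worked out in this ticket):

```java
// Illustrative sketch only: real protocol constants are still under discussion.
enum EventType { TOPOLOGY_CHANGE, STATUS_CHANGE, SCHEMA_CHANGE, DIAGNOSTIC_EVENT }

final class EventId {
    final EventType type;
    final String subClass; // e.g. "GossiperEvent"
    final String subType;  // e.g. "MAJOR_STATE_CHANGE_HANDLED"

    EventId(EventType type, String subClass, String subType) {
        this.type = type;
        this.subClass = subClass;
        this.subType = subType;
    }

    @Override
    public String toString() {
        // Matches the DIAGNOSTIC_EVENT(GossiperEvent, MAJOR_STATE_CHANGE_HANDLED)
        // notation used in the description above.
        return type + "(" + subClass + ", " + subType + ")";
    }
}
```

This only illustrates the naming scheme; the actual wire encoding would need new protocol framing for the sub-class and sub-type fields.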






[jira] [Commented] (CASSANDRA-13853) nodetool describecluster should be more informative

2018-04-10 Thread Preetika Tyagi (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432564#comment-16432564
 ] 

Preetika Tyagi commented on CASSANDRA-13853:


[~rustyrazorblade] [~aweisberg] So do we want to retain the old output of 
schema versions as shown in my last result above?

Also, what Cassandra binary versions are you referring to?

> nodetool describecluster should be more informative
> ---
>
> Key: CASSANDRA-13853
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13853
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability, Tools
>Reporter: Jon Haddad
>Assignee: Preetika Tyagi
>Priority: Minor
>  Labels: lhf
> Fix For: 4.x
>
> Attachments: cassandra-13853-v5.patch, 
> nodetool_describecluster_test.py
>
>
> Additional information we should be displaying:
> * Total node count
> * List of datacenters, RF, with number of nodes per dc, how many are down, 
> * Version(s)






[jira] [Commented] (CASSANDRA-13853) nodetool describecluster should be more informative

2018-04-10 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432577#comment-16432577
 ] 

Ariel Weisberg commented on CASSANDRA-13853:


I think we want to keep the schema versions output the way it 
[was|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/tools/nodetool/DescribeCluster.java#L52].

I think he meant the major, minor, and patch version of Cassandra that each 
server is running. See 
https://issues.apache.org/jira/browse/CASSANDRA-13853?focusedCommentId=16216154&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16216154

> nodetool describecluster should be more informative
> ---
>
> Key: CASSANDRA-13853
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13853
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability, Tools
>Reporter: Jon Haddad
>Assignee: Preetika Tyagi
>Priority: Minor
>  Labels: lhf
> Fix For: 4.x
>
> Attachments: cassandra-13853-v5.patch, 
> nodetool_describecluster_test.py
>
>
> Additional information we should be displaying:
> * Total node count
> * List of datacenters, RF, with number of nodes per dc, how many are down, 
> * Version(s)






[jira] [Comment Edited] (CASSANDRA-13853) nodetool describecluster should be more informative

2018-04-10 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432577#comment-16432577
 ] 

Ariel Weisberg edited comment on CASSANDRA-13853 at 4/10/18 4:55 PM:
-

I think we want to keep the schema versions output the way it 
[was|https://github.com/apache/cassandra/blob/59b5b6bef0fa76bf5740b688fcd4d9cf525760d0/src/java/org/apache/cassandra/tools/nodetool/DescribeCluster.java#L52].

I think he meant the major, minor, and patch version of Cassandra that each 
server is running. See 
https://issues.apache.org/jira/browse/CASSANDRA-13853?focusedCommentId=16216154&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16216154


was (Author: aweisberg):
I think we want to keep schema versions the way it 
[was|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/tools/nodetool/DescribeCluster.java#L52].

I think he meant major, minor, and patch version of Cassandra each server is 
running. See 
https://issues.apache.org/jira/browse/CASSANDRA-13853?focusedCommentId=16216154&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16216154

> nodetool describecluster should be more informative
> ---
>
> Key: CASSANDRA-13853
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13853
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability, Tools
>Reporter: Jon Haddad
>Assignee: Preetika Tyagi
>Priority: Minor
>  Labels: lhf
> Fix For: 4.x
>
> Attachments: cassandra-13853-v5.patch, 
> nodetool_describecluster_test.py
>
>
> Additional information we should be displaying:
> * Total node count
> * List of datacenters, RF, with number of nodes per dc, how many are down, 
> * Version(s)






[jira] [Commented] (CASSANDRA-13853) nodetool describecluster should be more informative

2018-04-10 Thread Preetika Tyagi (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432595#comment-16432595
 ] 

Preetika Tyagi commented on CASSANDRA-13853:


Ah. I missed that one. I will work on adding that and give an update. Thanks!

> nodetool describecluster should be more informative
> ---
>
> Key: CASSANDRA-13853
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13853
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability, Tools
>Reporter: Jon Haddad
>Assignee: Preetika Tyagi
>Priority: Minor
>  Labels: lhf
> Fix For: 4.x
>
> Attachments: cassandra-13853-v5.patch, 
> nodetool_describecluster_test.py
>
>
> Additional information we should be displaying:
> * Total node count
> * List of datacenters, RF, with number of nodes per dc, how many are down, 
> * Version(s)






[jira] [Commented] (CASSANDRA-13853) nodetool describecluster should be more informative

2018-04-10 Thread Preetika Tyagi (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432749#comment-16432749
 ] 

Preetika Tyagi commented on CASSANDRA-13853:


Here is the new output. I will upload the patch if it looks okay.
{code:java}
Cluster Information:
  Name: Test Cluster
  Snitch: org.apache.cassandra.locator.SimpleSnitch
  DynamicEndPointSnitch: enabled
  Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
  Schema versions:
b18cff54-5b52-3afd-b6cf-bb923b695e73: [127.0.0.1]

Stats for all nodes:
  Live: 1
  Joining: 0
  Moving: 0
  Leaving: 0
  Unreachable: 0

Data Centers: 
  datacenter1 #Nodes: 1 #Down: 0

Database versions:
  4.0.0: [127.0.0.1:7000]

Keyspaces:
  system_schema -> Replication class: LocalStrategy {}
  system -> Replication class: LocalStrategy {}
  system_auth -> Replication class: SimpleStrategy {replication_factor=1}
  system_distributed -> Replication class: SimpleStrategy 
{replication_factor=3}
  system_traces -> Replication class: SimpleStrategy 
{replication_factor=2}{code}

> nodetool describecluster should be more informative
> ---
>
> Key: CASSANDRA-13853
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13853
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability, Tools
>Reporter: Jon Haddad
>Assignee: Preetika Tyagi
>Priority: Minor
>  Labels: lhf
> Fix For: 4.x
>
> Attachments: cassandra-13853-v5.patch, 
> nodetool_describecluster_test.py
>
>
> Additional information we should be displaying:
> * Total node count
> * List of datacenters, RF, with number of nodes per dc, how many are down, 
> * Version(s)






[jira] [Commented] (CASSANDRA-13426) Make all DDL statements idempotent and not dependent on global state

2018-04-10 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432771#comment-16432771
 ] 

Aleksey Yeschenko commented on CASSANDRA-13426:
---

Rebased on top of the most recent trunk. There are some test failures that need 
to be fixed and review feedback that still needs to be addressed, and I guess 
some extra tests to write (although most of it is covered by various unit tests 
and dtests).

[~ifesdjeen] You worked on the {{SUPER}} and {{DENSE}} flags removal. When you 
have time, can you please look at a small commit 
[here|https://github.com/iamaleksey/cassandra/commits/13426] titled 'Get rid of 
COMPACT STORAGE logic in DDL statements'? Not referencing the sha as I'm still 
force-pushing here occasionally. Thanks.

> Make all DDL statements idempotent and not dependent on global state
> 
>
> Key: CASSANDRA-13426
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13426
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Distributed Metadata
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
>Priority: Major
> Fix For: 4.0
>
>
> A follow-up to CASSANDRA-9425 and a pre-requisite for CASSANDRA-10699.
> It's necessary for the latter to be able to apply any DDL statement several 
> times without side-effects. As part of the ticket I think we should also 
> clean up validation logic and our error texts. One example is varying 
> treatment of missing keyspace for DROP TABLE/INDEX/etc. statements with IF 
> EXISTS.






[jira] [Created] (CASSANDRA-14374) Speculative retry parsing breaks on non-english locale

2018-04-10 Thread Paulo Motta (JIRA)
Paulo Motta created CASSANDRA-14374:
---

 Summary: Speculative retry parsing breaks on non-english locale
 Key: CASSANDRA-14374
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14374
 Project: Cassandra
  Issue Type: Bug
Reporter: Paulo Motta
Assignee: Paulo Motta


I was getting the following error when running unit tests on my machine:
{code:none}
Error setting schema for test (query was: CREATE TABLE 
cql_test_keyspace.table_32 (a int, b int, c text, primary key (a, b)))
java.lang.RuntimeException: Error setting schema for test (query was: CREATE 
TABLE cql_test_keyspace.table_32 (a int, b int, c text, primary key (a, b)))
at org.apache.cassandra.cql3.CQLTester.schemaChange(CQLTester.java:819)
at org.apache.cassandra.cql3.CQLTester.createTable(CQLTester.java:632)
at org.apache.cassandra.cql3.CQLTester.createTable(CQLTester.java:624)
at 
org.apache.cassandra.cql3.validation.operations.DeleteTest.testDeleteWithNonoverlappingRange(DeleteTest.java:663)
Caused by: org.apache.cassandra.exceptions.ConfigurationException: Specified 
Speculative Retry Policy [99,00p] is not supported
at 
org.apache.cassandra.service.reads.SpeculativeRetryPolicy.fromString(SpeculativeRetryPolicy.java:135)
at 
org.apache.cassandra.schema.SchemaKeyspace.createTableParamsFromRow(SchemaKeyspace.java:1006)
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchTable(SchemaKeyspace.java:981)
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchTables(SchemaKeyspace.java:941)
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:900)
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspaces(SchemaKeyspace.java:1301)
at org.apache.cassandra.schema.Schema.merge(Schema.java:608)
at 
org.apache.cassandra.schema.MigrationManager.announce(MigrationManager.java:425)
at 
org.apache.cassandra.schema.MigrationManager.announceNewTable(MigrationManager.java:239)
at 
org.apache.cassandra.schema.MigrationManager.announceNewTable(MigrationManager.java:224)
at 
org.apache.cassandra.schema.MigrationManager.announceNewTable(MigrationManager.java:204)
at 
org.apache.cassandra.cql3.statements.CreateTableStatement.announceMigration(CreateTableStatement.java:88)
at 
org.apache.cassandra.cql3.statements.SchemaAlteringStatement.executeInternal(SchemaAlteringStatement.java:120)
at org.apache.cassandra.cql3.CQLTester.schemaChange(CQLTester.java:814)
{code}
It turns out that my machine is configured with the {{pt_BR}} locale, which 
uses a comma instead of a dot as the decimal separator, so the speculative 
retry option parsing introduced by CASSANDRA-14293, which assumed the 
{{en_US}} locale, was not working.

To reproduce on Linux:
{code:none}
export LC_CTYPE=pt_BR.UTF-8
ant test -Dtest.name="DeleteTest"
ant test -Dtest.name="SpeculativeRetryParseTest"
{code}






[jira] [Updated] (CASSANDRA-14374) Speculative retry parsing breaks on non-english locale

2018-04-10 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-14374:

Attachment: 0001-Use-Locale.US-on-PercentileSpeculativeRetryPolicy.to.patch

> Speculative retry parsing breaks on non-english locale
> --
>
> Key: CASSANDRA-14374
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14374
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Minor
> Attachments: 
> 0001-Use-Locale.US-on-PercentileSpeculativeRetryPolicy.to.patch
>
>
> I was getting the following error when running unit tests on my machine:
> {code:none}
> Error setting schema for test (query was: CREATE TABLE 
> cql_test_keyspace.table_32 (a int, b int, c text, primary key (a, b)))
> java.lang.RuntimeException: Error setting schema for test (query was: CREATE 
> TABLE cql_test_keyspace.table_32 (a int, b int, c text, primary key (a, b)))
>   at org.apache.cassandra.cql3.CQLTester.schemaChange(CQLTester.java:819)
>   at org.apache.cassandra.cql3.CQLTester.createTable(CQLTester.java:632)
>   at org.apache.cassandra.cql3.CQLTester.createTable(CQLTester.java:624)
>   at 
> org.apache.cassandra.cql3.validation.operations.DeleteTest.testDeleteWithNonoverlappingRange(DeleteTest.java:663)
> Caused by: org.apache.cassandra.exceptions.ConfigurationException: Specified 
> Speculative Retry Policy [99,00p] is not supported
>   at 
> org.apache.cassandra.service.reads.SpeculativeRetryPolicy.fromString(SpeculativeRetryPolicy.java:135)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.createTableParamsFromRow(SchemaKeyspace.java:1006)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTable(SchemaKeyspace.java:981)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTables(SchemaKeyspace.java:941)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:900)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspaces(SchemaKeyspace.java:1301)
>   at org.apache.cassandra.schema.Schema.merge(Schema.java:608)
>   at 
> org.apache.cassandra.schema.MigrationManager.announce(MigrationManager.java:425)
>   at 
> org.apache.cassandra.schema.MigrationManager.announceNewTable(MigrationManager.java:239)
>   at 
> org.apache.cassandra.schema.MigrationManager.announceNewTable(MigrationManager.java:224)
>   at 
> org.apache.cassandra.schema.MigrationManager.announceNewTable(MigrationManager.java:204)
>   at 
> org.apache.cassandra.cql3.statements.CreateTableStatement.announceMigration(CreateTableStatement.java:88)
>   at 
> org.apache.cassandra.cql3.statements.SchemaAlteringStatement.executeInternal(SchemaAlteringStatement.java:120)
>   at org.apache.cassandra.cql3.CQLTester.schemaChange(CQLTester.java:814)
> {code}
> It turns out that my machine is configured with {{pt_BR}} locale, which uses 
> comma instead of dot for decimal separator, so the speculative retry option 
> parsing introduced by CASSANDRA-14293, which assumed {{en_US}} locale was not 
> working.
> To reproduce on Linux:
> {code:none}
> export LC_CTYPE=pt_BR.UTF-8
> ant test -Dtest.name="DeleteTest"
> ant test -Dtest.name="SpeculativeRetryParseTest"
> {code}






[jira] [Commented] (CASSANDRA-13853) nodetool describecluster should be more informative

2018-04-10 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432814#comment-16432814
 ] 

Ariel Weisberg commented on CASSANDRA-13853:


That looks good. Upload the patch and I will try out the dtest. 3 dtests is a 
bit much to add for this. They are very, very slow and I don't want to add that 
many if I can avoid it. I think it should also go into the existing 
nodetool_test.py? It's not that big yet, so I don't think we need to break 
nodetool tests up into multiple files.


Maybe just add the 3-datacenter case?

> nodetool describecluster should be more informative
> ---
>
> Key: CASSANDRA-13853
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13853
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability, Tools
>Reporter: Jon Haddad
>Assignee: Preetika Tyagi
>Priority: Minor
>  Labels: lhf
> Fix For: 4.x
>
> Attachments: cassandra-13853-v5.patch, 
> nodetool_describecluster_test.py
>
>
> Additional information we should be displaying:
> * Total node count
> * List of datacenters, RF, with number of nodes per dc, how many are down, 
> * Version(s)






[jira] [Updated] (CASSANDRA-14374) Speculative retry parsing breaks on non-english locale

2018-04-10 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-14374:

Reproduced In: 4.0
   Status: Patch Available  (was: Open)

Attached a patch that forces the {{US}} locale when generating the 
{{PercentileSpeculativeRetryPolicy}} representation. Mind reviewing, 
[~iamaleksey] or [~mkjellman]?

> Speculative retry parsing breaks on non-english locale
> --
>
> Key: CASSANDRA-14374
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14374
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Minor
> Attachments: 
> 0001-Use-Locale.US-on-PercentileSpeculativeRetryPolicy.to.patch
>
>
> I was getting the following error when running unit tests on my machine:
> {code:none}
> Error setting schema for test (query was: CREATE TABLE 
> cql_test_keyspace.table_32 (a int, b int, c text, primary key (a, b)))
> java.lang.RuntimeException: Error setting schema for test (query was: CREATE 
> TABLE cql_test_keyspace.table_32 (a int, b int, c text, primary key (a, b)))
>   at org.apache.cassandra.cql3.CQLTester.schemaChange(CQLTester.java:819)
>   at org.apache.cassandra.cql3.CQLTester.createTable(CQLTester.java:632)
>   at org.apache.cassandra.cql3.CQLTester.createTable(CQLTester.java:624)
>   at 
> org.apache.cassandra.cql3.validation.operations.DeleteTest.testDeleteWithNonoverlappingRange(DeleteTest.java:663)
> Caused by: org.apache.cassandra.exceptions.ConfigurationException: Specified 
> Speculative Retry Policy [99,00p] is not supported
>   at 
> org.apache.cassandra.service.reads.SpeculativeRetryPolicy.fromString(SpeculativeRetryPolicy.java:135)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.createTableParamsFromRow(SchemaKeyspace.java:1006)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTable(SchemaKeyspace.java:981)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTables(SchemaKeyspace.java:941)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:900)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspaces(SchemaKeyspace.java:1301)
>   at org.apache.cassandra.schema.Schema.merge(Schema.java:608)
>   at 
> org.apache.cassandra.schema.MigrationManager.announce(MigrationManager.java:425)
>   at 
> org.apache.cassandra.schema.MigrationManager.announceNewTable(MigrationManager.java:239)
>   at 
> org.apache.cassandra.schema.MigrationManager.announceNewTable(MigrationManager.java:224)
>   at 
> org.apache.cassandra.schema.MigrationManager.announceNewTable(MigrationManager.java:204)
>   at 
> org.apache.cassandra.cql3.statements.CreateTableStatement.announceMigration(CreateTableStatement.java:88)
>   at 
> org.apache.cassandra.cql3.statements.SchemaAlteringStatement.executeInternal(SchemaAlteringStatement.java:120)
>   at org.apache.cassandra.cql3.CQLTester.schemaChange(CQLTester.java:814)
> {code}
> It turns out that my machine is configured with {{pt_BR}} locale, which uses 
> comma instead of dot for decimal separator, so the speculative retry option 
> parsing introduced by CASSANDRA-14293, which assumed {{en_US}} locale was not 
> working.
> To reproduce on Linux:
> {code:none}
> export LC_CTYPE=pt_BR.UTF-8
> ant test -Dtest.name="DeleteTest"
> ant test -Dtest.name="SpeculativeRetryParseTest"
> {code}






[jira] [Updated] (CASSANDRA-14374) Speculative retry parsing breaks on non-english locale

2018-04-10 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-14374:
--
Reviewer: Aleksey Yeschenko

> Speculative retry parsing breaks on non-english locale
> --
>
> Key: CASSANDRA-14374
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14374
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Minor
> Attachments: 
> 0001-Use-Locale.US-on-PercentileSpeculativeRetryPolicy.to.patch
>
>
> I was getting the following error when running unit tests on my machine:
> {code:none}
> Error setting schema for test (query was: CREATE TABLE 
> cql_test_keyspace.table_32 (a int, b int, c text, primary key (a, b)))
> java.lang.RuntimeException: Error setting schema for test (query was: CREATE 
> TABLE cql_test_keyspace.table_32 (a int, b int, c text, primary key (a, b)))
>   at org.apache.cassandra.cql3.CQLTester.schemaChange(CQLTester.java:819)
>   at org.apache.cassandra.cql3.CQLTester.createTable(CQLTester.java:632)
>   at org.apache.cassandra.cql3.CQLTester.createTable(CQLTester.java:624)
>   at 
> org.apache.cassandra.cql3.validation.operations.DeleteTest.testDeleteWithNonoverlappingRange(DeleteTest.java:663)
> Caused by: org.apache.cassandra.exceptions.ConfigurationException: Specified 
> Speculative Retry Policy [99,00p] is not supported
>   at 
> org.apache.cassandra.service.reads.SpeculativeRetryPolicy.fromString(SpeculativeRetryPolicy.java:135)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.createTableParamsFromRow(SchemaKeyspace.java:1006)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTable(SchemaKeyspace.java:981)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTables(SchemaKeyspace.java:941)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:900)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspaces(SchemaKeyspace.java:1301)
>   at org.apache.cassandra.schema.Schema.merge(Schema.java:608)
>   at 
> org.apache.cassandra.schema.MigrationManager.announce(MigrationManager.java:425)
>   at 
> org.apache.cassandra.schema.MigrationManager.announceNewTable(MigrationManager.java:239)
>   at 
> org.apache.cassandra.schema.MigrationManager.announceNewTable(MigrationManager.java:224)
>   at 
> org.apache.cassandra.schema.MigrationManager.announceNewTable(MigrationManager.java:204)
>   at 
> org.apache.cassandra.cql3.statements.CreateTableStatement.announceMigration(CreateTableStatement.java:88)
>   at 
> org.apache.cassandra.cql3.statements.SchemaAlteringStatement.executeInternal(SchemaAlteringStatement.java:120)
>   at org.apache.cassandra.cql3.CQLTester.schemaChange(CQLTester.java:814)
> {code}
> It turns out that my machine is configured with {{pt_BR}} locale, which uses 
> comma instead of dot for decimal separator, so the speculative retry option 
> parsing introduced by CASSANDRA-14293, which assumed {{en_US}} locale was not 
> working.
> To reproduce on Linux:
> {code:none}
> export LC_CTYPE=pt_BR.UTF-8
> ant test -Dtest.name="DeleteTest"
> ant test -Dtest.name="SpeculativeRetryParseTest"
> {code}






[jira] [Commented] (CASSANDRA-14374) Speculative retry parsing breaks on non-english locale

2018-04-10 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432844#comment-16432844
 ] 

Aleksey Yeschenko commented on CASSANDRA-14374:
---

[~pauloricardomg] Sure. Do you mind going one step further and changing that 
{{toString()}} to
{code}
return String.format("%sp", new DecimalFormat("#.").format(percentile));
{code}
?

Because the previous patch introduced a minor annoying regression, in that 99p, 
for example, is being serialized as {{99.00p}}. And please check it in the 
pt_BR locale as well as en_US?
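For reference, a minimal sketch of the locale sensitivity in play and the two fixes discussed here (this is an illustration of standard {{String.format}}/{{DecimalFormat}} behavior, not the actual Cassandra patch; the {{#.##}} pattern is an assumed stand-in for the trailing-zero trimming):

```java
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Locale;

public class PercentileFormatDemo {
    // Buggy shape: default-locale formatting, so a pt_BR JVM emits "99,00p",
    // which fromString() later rejects.
    static String formatDefaultLocale(double percentile) {
        return String.format("%.2fp", percentile);
    }

    // First fix: pin Locale.US so the serialized form always uses a dot.
    static String formatUsLocale(double percentile) {
        return String.format(Locale.US, "%.2fp", percentile);
    }

    // Further tweak suggested above: also trim trailing zeros so 99.0
    // round-trips as "99p" instead of "99.00p".
    static String formatTrimmed(double percentile) {
        DecimalFormat df = new DecimalFormat("#.##",
                DecimalFormatSymbols.getInstance(Locale.US));
        return df.format(percentile) + "p";
    }

    public static void main(String[] args) {
        // Simulate the pt_BR machine explicitly rather than relying on the JVM default.
        System.out.println(String.format(new Locale("pt", "BR"), "%.2fp", 99.0)); // 99,00p
        System.out.println(formatUsLocale(99.0));  // 99.00p
        System.out.println(formatTrimmed(99.0));   // 99p
    }
}
```

Pinning the locale fixes the parse failure; the pattern-based formatting additionally removes the cosmetic {{99.00p}} regression.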

> Speculative retry parsing breaks on non-english locale
> --
>
> Key: CASSANDRA-14374
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14374
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Minor
> Fix For: 4.0
>
> Attachments: 
> 0001-Use-Locale.US-on-PercentileSpeculativeRetryPolicy.to.patch
>
>
> I was getting the following error when running unit tests on my machine:
> {code:none}
> Error setting schema for test (query was: CREATE TABLE 
> cql_test_keyspace.table_32 (a int, b int, c text, primary key (a, b)))
> java.lang.RuntimeException: Error setting schema for test (query was: CREATE 
> TABLE cql_test_keyspace.table_32 (a int, b int, c text, primary key (a, b)))
>   at org.apache.cassandra.cql3.CQLTester.schemaChange(CQLTester.java:819)
>   at org.apache.cassandra.cql3.CQLTester.createTable(CQLTester.java:632)
>   at org.apache.cassandra.cql3.CQLTester.createTable(CQLTester.java:624)
>   at 
> org.apache.cassandra.cql3.validation.operations.DeleteTest.testDeleteWithNonoverlappingRange(DeleteTest.java:663)
> Caused by: org.apache.cassandra.exceptions.ConfigurationException: Specified 
> Speculative Retry Policy [99,00p] is not supported
>   at 
> org.apache.cassandra.service.reads.SpeculativeRetryPolicy.fromString(SpeculativeRetryPolicy.java:135)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.createTableParamsFromRow(SchemaKeyspace.java:1006)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTable(SchemaKeyspace.java:981)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTables(SchemaKeyspace.java:941)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:900)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspaces(SchemaKeyspace.java:1301)
>   at org.apache.cassandra.schema.Schema.merge(Schema.java:608)
>   at 
> org.apache.cassandra.schema.MigrationManager.announce(MigrationManager.java:425)
>   at 
> org.apache.cassandra.schema.MigrationManager.announceNewTable(MigrationManager.java:239)
>   at 
> org.apache.cassandra.schema.MigrationManager.announceNewTable(MigrationManager.java:224)
>   at 
> org.apache.cassandra.schema.MigrationManager.announceNewTable(MigrationManager.java:204)
>   at 
> org.apache.cassandra.cql3.statements.CreateTableStatement.announceMigration(CreateTableStatement.java:88)
>   at 
> org.apache.cassandra.cql3.statements.SchemaAlteringStatement.executeInternal(SchemaAlteringStatement.java:120)
>   at org.apache.cassandra.cql3.CQLTester.schemaChange(CQLTester.java:814)
> {code}
> It turns out that my machine is configured with {{pt_BR}} locale, which uses 
> comma instead of dot for decimal separator, so the speculative retry option 
> parsing introduced by CASSANDRA-14293, which assumed {{en_US}} locale was not 
> working.
> To reproduce on Linux:
> {code:none}
> export LC_CTYPE=pt_BR.UTF-8
> ant test -Dtest.name="DeleteTest"
> ant test -Dtest.name="SpeculativeRetryParseTest"
> {code}






[jira] [Updated] (CASSANDRA-14374) Speculative retry parsing breaks on non-english locale

2018-04-10 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-14374:
--
Fix Version/s: 4.0

> Speculative retry parsing breaks on non-english locale
> --
>
> Key: CASSANDRA-14374
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14374
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Minor
> Fix For: 4.0
>
> Attachments: 
> 0001-Use-Locale.US-on-PercentileSpeculativeRetryPolicy.to.patch
>
>
> I was getting the following error when running unit tests on my machine:
> {code:none}
> Error setting schema for test (query was: CREATE TABLE 
> cql_test_keyspace.table_32 (a int, b int, c text, primary key (a, b)))
> java.lang.RuntimeException: Error setting schema for test (query was: CREATE 
> TABLE cql_test_keyspace.table_32 (a int, b int, c text, primary key (a, b)))
>   at org.apache.cassandra.cql3.CQLTester.schemaChange(CQLTester.java:819)
>   at org.apache.cassandra.cql3.CQLTester.createTable(CQLTester.java:632)
>   at org.apache.cassandra.cql3.CQLTester.createTable(CQLTester.java:624)
>   at 
> org.apache.cassandra.cql3.validation.operations.DeleteTest.testDeleteWithNonoverlappingRange(DeleteTest.java:663)
> Caused by: org.apache.cassandra.exceptions.ConfigurationException: Specified 
> Speculative Retry Policy [99,00p] is not supported
>   at 
> org.apache.cassandra.service.reads.SpeculativeRetryPolicy.fromString(SpeculativeRetryPolicy.java:135)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.createTableParamsFromRow(SchemaKeyspace.java:1006)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTable(SchemaKeyspace.java:981)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchTables(SchemaKeyspace.java:941)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:900)
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspaces(SchemaKeyspace.java:1301)
>   at org.apache.cassandra.schema.Schema.merge(Schema.java:608)
>   at 
> org.apache.cassandra.schema.MigrationManager.announce(MigrationManager.java:425)
>   at 
> org.apache.cassandra.schema.MigrationManager.announceNewTable(MigrationManager.java:239)
>   at 
> org.apache.cassandra.schema.MigrationManager.announceNewTable(MigrationManager.java:224)
>   at 
> org.apache.cassandra.schema.MigrationManager.announceNewTable(MigrationManager.java:204)
>   at 
> org.apache.cassandra.cql3.statements.CreateTableStatement.announceMigration(CreateTableStatement.java:88)
>   at 
> org.apache.cassandra.cql3.statements.SchemaAlteringStatement.executeInternal(SchemaAlteringStatement.java:120)
>   at org.apache.cassandra.cql3.CQLTester.schemaChange(CQLTester.java:814)
> {code}
> It turns out that my machine is configured with the {{pt_BR}} locale, which 
> uses a comma instead of a dot as the decimal separator, so the speculative 
> retry option parsing introduced by CASSANDRA-14293, which assumed the 
> {{en_US}} locale, was not working.
> To reproduce on Linux:
> {code:none}
> export LC_CTYPE=pt_BR.UTF-8
> ant test -Dtest.name="DeleteTest"
> ant test -Dtest.name="SpeculativeRetryParseTest"
> {code}
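A minimal, standalone Java sketch of the failure mode described above (class and method names here are illustrative, not Cassandra code): locale-sensitive formatting under {{pt_BR}} emits a comma as the decimal separator, and the resulting string then fails locale-insensitive numeric parsing.

{code:java}
import java.util.Locale;

public class LocaleFormatDemo {
    public static void main(String[] args) {
        // Under a locale that uses ',' as the decimal separator (e.g. pt_BR),
        // locale-sensitive formatting produces "99,00" rather than "99.00".
        String ptBr = String.format(new Locale("pt", "BR"), "%.2f", 99.0);
        String us   = String.format(Locale.US, "%.2f", 99.0);
        System.out.println(ptBr); // 99,00 -> yields "99,00p", which fromString() rejects
        System.out.println(us);   // 99.00 -> parses fine

        // Double.parseDouble() always expects '.', so the round-trip breaks:
        try {
            Double.parseDouble(ptBr);
        } catch (NumberFormatException e) {
            System.out.println("parse failed: " + ptBr);
        }
    }
}
{code}

This is why the attached patch pins the formatting to {{Locale.US}} rather than relying on the JVM's default locale.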






[jira] [Comment Edited] (CASSANDRA-14374) Speculative retry parsing breaks on non-english locale

2018-04-10 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432844#comment-16432844
 ] 

Aleksey Yeschenko edited comment on CASSANDRA-14374 at 4/10/18 7:42 PM:


[~pauloricardomg] Sure. Do you mind going one step further and changing that 
{{toString()}} to
{code}
return String.format("%sp", new DecimalFormat("#.").format(percentile));
{code}
?

Because the previous patch introduced a minor but annoying regression: {{99p}}, 
for example, is being serialized as {{99.00p}} (instead of {{99p}}). And could 
you check it in the pt_BR locale as well as en_US?


was (Author: iamaleksey):
[~pauloricardomg] Sure. Do you mind going one step further and changing that 
{{toString()}} to
{code}
return String.format("%sp", new DecimalFormat("#.").format(percentile));
{code}
?

Because the previous patch introduced a minor annoying regression, in that 99p 
for example is being serialized as {{99.00p}}. And check it in pt_BR locale as 
well as en_US?
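A sketch of the suggested {{toString()}} change (the exact {{DecimalFormat}} pattern in the comment above appears truncated in this archive, so the pattern {{"#.##"}} and the class name below are assumptions for illustration): the pattern suppresses trailing zeros so {{99.0}} renders as {{99p}}, and pinning the symbols to {{Locale.US}} keeps the dot separator regardless of the default locale.

{code:java}
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Locale;

public class PercentileToStringSketch {
    // "#.##" suppresses trailing zeros, so 99.0 renders as "99"
    // rather than "99.00". Explicit Locale.US symbols keep '.' as
    // the decimal separator on any machine.
    private static final DecimalFormat FORMAT =
        new DecimalFormat("#.##", DecimalFormatSymbols.getInstance(Locale.US));

    static String toStringPercentile(double percentile) {
        return String.format("%sp", FORMAT.format(percentile));
    }

    public static void main(String[] args) {
        Locale.setDefault(new Locale("pt", "BR")); // simulate the reporter's machine
        System.out.println(toStringPercentile(99.0));  // 99p (not 99.00p or 99,00p)
        System.out.println(toStringPercentile(98.5));  // 98.5p
    }
}
{code}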







[jira] [Updated] (CASSANDRA-14352) Clean up parsing speculative retry params from string

2018-04-10 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-14352:

Status: Ready to Commit  (was: Patch Available)

> Clean up parsing speculative retry params from string
> -
>
> Key: CASSANDRA-14352
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14352
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
>Priority: Minor
> Fix For: 4.x
>
>
> Follow-up to CASSANDRA-14293, to put parsing logic ({{fromString()}}) next to 
> formatting logic ({{toString()}}).






[jira] [Commented] (CASSANDRA-14352) Clean up parsing speculative retry params from string

2018-04-10 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432897#comment-16432897
 ] 

Blake Eggleston commented on CASSANDRA-14352:
-

+1







[jira] [Updated] (CASSANDRA-14374) Speculative retry parsing breaks on non-english locale

2018-04-10 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-14374:

Attachment: (was: 
0001-Use-Locale.US-on-PercentileSpeculativeRetryPolicy.to.patch)







[jira] [Updated] (CASSANDRA-14374) Speculative retry parsing breaks on non-english locale

2018-04-10 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-14374:

Attachment: 0001-Use-Locale.US-on-PercentileSpeculativeRetryPolicy.to.patch







cassandra git commit: Clean up parsing speculative retry params from string

2018-04-10 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/trunk b5dbc04bd -> 4991ca26a


Clean up parsing speculative retry params from string

patch by Aleksey Yeschenko; reviewed by Blake Eggleston for
CASSANDRA-14352


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4991ca26
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4991ca26
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4991ca26

Branch: refs/heads/trunk
Commit: 4991ca26aa424286ebdee89742d35e813f9e9259
Parents: b5dbc04
Author: Aleksey Yeshchenko 
Authored: Thu Mar 29 15:37:01 2018 +0100
Committer: Aleksey Yeshchenko 
Committed: Tue Apr 10 21:25:16 2018 +0100

--
 CHANGES.txt |   4 +-
 .../apache/cassandra/schema/TableParams.java|   4 +-
 .../reads/AlwaysSpeculativeRetryPolicy.java |   6 +-
 .../reads/FixedSpeculativeRetryPolicy.java  |  33 -
 .../reads/HybridSpeculativeRetryPolicy.java |  70 +--
 .../reads/NeverSpeculativeRetryPolicy.java  |   6 +-
 .../reads/PercentileSpeculativeRetryPolicy.java |  45 ++-
 .../service/reads/SpeculativeRetryPolicy.java   | 122 +++
 8 files changed, 162 insertions(+), 128 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4991ca26/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index c123e6f..650f740 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,12 +1,12 @@
 4.0
+ * Add support for hybrid MIN(), MAX() speculative retry policies
+   (CASSANDRA-14293, CASSANDRA-14338, CASSANDRA-14352)
  * Fix some regressions caused by 14058 (CASSANDRA-14353)
  * Abstract repair for pluggable storage (CASSANDRA-14116)
  * Add meaningful toString() impls (CASSANDRA-13653)
  * Add sstableloader option to accept target keyspace name (CASSANDRA-13884)
  * Move processing of EchoMessage response to gossip stage (CASSANDRA-13713)
  * Add coordinator write metric per CF (CASSANDRA-14232)
- * Fix scheduling of speculative retry threshold recalculation 
(CASSANDRA-14338)
- * Add support for hybrid MIN(), MAX() speculative retry policies 
(CASSANDRA-14293)
  * Correct and clarify SSLFactory.getSslContext method and call sites 
(CASSANDRA-14314)
  * Handle static and partition deletion properly on 
ThrottledUnfilteredIterator (CASSANDRA-14315)
  * NodeTool clientstats should show SSL Cipher (CASSANDRA-14322)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4991ca26/src/java/org/apache/cassandra/schema/TableParams.java
--
diff --git a/src/java/org/apache/cassandra/schema/TableParams.java 
b/src/java/org/apache/cassandra/schema/TableParams.java
index ffa310e..895e3a7 100644
--- a/src/java/org/apache/cassandra/schema/TableParams.java
+++ b/src/java/org/apache/cassandra/schema/TableParams.java
@@ -26,6 +26,7 @@ import com.google.common.collect.ImmutableMap;
 
 import org.apache.cassandra.cql3.Attributes;
 import org.apache.cassandra.exceptions.ConfigurationException;
+import org.apache.cassandra.service.reads.PercentileSpeculativeRetryPolicy;
 import org.apache.cassandra.service.reads.SpeculativeRetryPolicy;
 import org.apache.cassandra.utils.BloomCalculations;
 
@@ -70,6 +71,7 @@ public final class TableParams
 public static final int DEFAULT_MIN_INDEX_INTERVAL = 128;
 public static final int DEFAULT_MAX_INDEX_INTERVAL = 2048;
 public static final double DEFAULT_CRC_CHECK_CHANCE = 1.0;
+public static final SpeculativeRetryPolicy DEFAULT_SPECULATIVE_RETRY = new 
PercentileSpeculativeRetryPolicy(99.0);
 
 public final String comment;
 public final double readRepairChance;
@@ -290,7 +292,7 @@ public final class TableParams
 private int memtableFlushPeriodInMs = 
DEFAULT_MEMTABLE_FLUSH_PERIOD_IN_MS;
 private int minIndexInterval = DEFAULT_MIN_INDEX_INTERVAL;
 private int maxIndexInterval = DEFAULT_MAX_INDEX_INTERVAL;
-private SpeculativeRetryPolicy speculativeRetry = 
SpeculativeRetryPolicy.DEFAULT;
+private SpeculativeRetryPolicy speculativeRetry = 
DEFAULT_SPECULATIVE_RETRY;
 private CachingParams caching = CachingParams.DEFAULT;
 private CompactionParams compaction = CompactionParams.DEFAULT;
 private CompressionParams compression = CompressionParams.DEFAULT;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4991ca26/src/java/org/apache/cassandra/service/reads/AlwaysSpeculativeRetryPolicy.java
--
diff --git 
a/src/java/org/apache/cassandra/service/reads/AlwaysSpeculativeRetryPolicy.java 
b/src/java/org/apache/cassandra/service/reads/AlwaysSpeculativeRetryPolicy.java
index 4623cb1..daf1ec5 100644
--- 
a/s

[jira] [Updated] (CASSANDRA-14374) Speculative retry parsing breaks on non-english locale

2018-04-10 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-14374:

Attachment: 0002-Use-Locale.US-on-PercentileSpeculativeRetryPolicy.to.patch







cassandra-dtest git commit: Fix schema_metadata_test on trunk

2018-04-10 Thread aleksey
Repository: cassandra-dtest
Updated Branches:
  refs/heads/master 3a4b5d98e -> 1df74a6af


Fix schema_metadata_test on trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/1df74a6a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/1df74a6a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/1df74a6a

Branch: refs/heads/master
Commit: 1df74a6afe80d26192a9310e349885114bde181d
Parents: 3a4b5d9
Author: Aleksey Yeschenko 
Authored: Mon Apr 9 13:56:37 2018 +0100
Committer: Aleksey Yeschenko 
Committed: Tue Apr 10 21:28:43 2018 +0100

--
 schema_metadata_test.py | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/1df74a6a/schema_metadata_test.py
--
diff --git a/schema_metadata_test.py b/schema_metadata_test.py
index fdfcf56..d8d727b 100644
--- a/schema_metadata_test.py
+++ b/schema_metadata_test.py
@@ -227,9 +227,14 @@ def verify_nondefault_table_settings(created_on_version, 
current_version, keyspa
 assert 20 == meta.options['max_index_interval']
 
 if created_on_version >= '3.0':
-assert '55PERCENTILE' == meta.options['speculative_retry']
 assert 2121 == meta.options['memtable_flush_period_in_ms']
 
+if created_on_version >= '3.0':
+if created_on_version >= '4.0':
+assert '55p' == meta.options['speculative_retry']
+else:
+assert '55PERCENTILE' == meta.options['speculative_retry']
+
 if current_version >= '3.0':
 assert 'org.apache.cassandra.io.compress.DeflateCompressor' == 
meta.options['compression']['class']
 assert '128' == meta.options['compression']['chunk_length_in_kb']





[jira] [Updated] (CASSANDRA-14374) Speculative retry parsing breaks on non-english locale

2018-04-10 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-14374:

Attachment: 0002-Use-Locale.US-on-PercentileSpeculativeRetryPolicy.to.patch







[jira] [Updated] (CASSANDRA-14374) Speculative retry parsing breaks on non-english locale

2018-04-10 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-14374:

Attachment: (was: 
0002-Use-Locale.US-on-PercentileSpeculativeRetryPolicy.to.patch)






