cassandra git commit: fix hints serialized size calculation patch by dbrosius reviewed by thobbs for cassandra-8587

2015-01-09 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 df1f5ead0 -> e4fc39524


fix hints serialized size calculation
patch by dbrosius reviewed by thobbs for cassandra-8587


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e4fc3952
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e4fc3952
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e4fc3952

Branch: refs/heads/cassandra-2.0
Commit: e4fc395242ee81a85141eda616ba97e937d1c604
Parents: df1f5ea
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Fri Jan 9 19:50:54 2015 -0500
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Fri Jan 9 19:50:54 2015 -0500

--
 CHANGES.txt   | 1 +
 src/java/org/apache/cassandra/net/MessageOut.java | 4 ++--
 2 files changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e4fc3952/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 0c7e9a2..fc43dfa 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -59,6 +59,7 @@
  * Add DC-aware sequential repair (CASSANDRA-8193)
  * Improve JBOD disk utilization (CASSANDRA-7386)
  * Use live sstables in snapshot repair if possible (CASSANDRA-8312)
+ * Fix hints serialized size calculation (CASSANDRA-8587)
 
 
 2.0.11:

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e4fc3952/src/java/org/apache/cassandra/net/MessageOut.java
--
diff --git a/src/java/org/apache/cassandra/net/MessageOut.java 
b/src/java/org/apache/cassandra/net/MessageOut.java
index dd6cae8..f49e3f7 100644
--- a/src/java/org/apache/cassandra/net/MessageOut.java
+++ b/src/java/org/apache/cassandra/net/MessageOut.java
@@ -128,8 +128,8 @@ public class MessageOut<T>
 size += TypeSizes.NATIVE.sizeof(parameters.size());
 for (Map.Entry<String, byte[]> entry : parameters.entrySet())
 {
-TypeSizes.NATIVE.sizeof(entry.getKey());
-TypeSizes.NATIVE.sizeof(entry.getValue().length);
+size += TypeSizes.NATIVE.sizeof(entry.getKey());
+size += TypeSizes.NATIVE.sizeof(entry.getValue().length);
 size += entry.getValue().length;
 }
 

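The bug fixed above is that the two sizeof() results inside the loop were computed and
then discarded, so the reported size omitted the parameter names and the value length
prefixes. Below is a minimal, self-contained sketch of the corrected accumulation; the
sizeofString/sizeofInt helpers are simplified assumptions, not the real TypeSizes.NATIVE API.

{code}
// Illustrative sketch only, not MessageOut itself.
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

public class SerializedSizeSketch
{
    // assumption: a length-prefixed UTF-8 string (2-byte length + bytes)
    static int sizeofString(String s)
    {
        return 2 + s.getBytes(StandardCharsets.UTF_8).length;
    }

    // assumption: a plain 4-byte int
    static int sizeofInt(int v)
    {
        return 4;
    }

    static int serializedSize(Map<String, byte[]> parameters)
    {
        int size = 0;
        size += sizeofInt(parameters.size());
        for (Map.Entry<String, byte[]> entry : parameters.entrySet())
        {
            // The bug fixed by CASSANDRA-8587: these two results were computed
            // and then thrown away instead of being added to 'size'.
            size += sizeofString(entry.getKey());         // parameter name
            size += sizeofInt(entry.getValue().length);   // value length prefix
            size += entry.getValue().length;              // value bytes
        }
        return size;
    }

    public static void main(String[] args)
    {
        Map<String, byte[]> params = new LinkedHashMap<>();
        params.put("HINT", new byte[0]);
        params.put("FWD_TO", new byte[] { 1, 2, 3, 4 });
        System.out.println("serialized size = " + serializedSize(params));
    }
}
{code}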


[jira] [Updated] (CASSANDRA-8502) Static columns returning null for pages after first

2015-01-09 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8502:
---
Fix Version/s: 2.0.13
   2.1.3

 Static columns returning null for pages after first
 ---

 Key: CASSANDRA-8502
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8502
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Flavien Charlon
Assignee: Tyler Hobbs
 Fix For: 2.1.3, 2.0.13

 Attachments: null-static-column.txt


 When paging is used for a query containing a static column, the first page 
 contains the right value for the static column, but subsequent pages have 
 null for the static column instead of the expected value.
 Repro steps:
 - Create a table with a static column
 - Create a partition with 500 cells
 - Using cqlsh, query that partition
 Actual result:
 - You will see that first, the static column appears as expected, but if you 
 press a key after ---MORE---, the static columns will appear as null.
 See the attached file for a repro of the output.
 I am using a single node cluster.

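A hedged repro sketch of those steps using the DataStax Java driver 2.x (the report used
cqlsh; the keyspace, table, and column names below are illustrative assumptions, not taken
from the attached file):

{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;

public class StaticColumnPagingRepro
{
    public static void main(String[] args)
    {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        try
        {
            Session session = cluster.connect();
            session.execute("CREATE KEYSPACE IF NOT EXISTS ks WITH replication = "
                          + "{'class': 'SimpleStrategy', 'replication_factor': 1}");
            session.execute("CREATE TABLE IF NOT EXISTS ks.t (pk int, ck int, s text STATIC, v int, "
                          + "PRIMARY KEY (pk, ck))");
            session.execute("INSERT INTO ks.t (pk, s) VALUES (0, 'static-value')");
            for (int ck = 0; ck < 500; ck++)
                session.execute("INSERT INTO ks.t (pk, ck, v) VALUES (0, ?, ?)", ck, ck);

            // Page through the partition; with the bug, rows on pages after the first
            // report the static column as null instead of 'static-value'.
            Statement query = new SimpleStatement("SELECT pk, ck, s FROM ks.t WHERE pk = 0");
            query.setFetchSize(100);
            for (Row row : session.execute(query))
                if (row.isNull("s"))
                    System.out.println("null static column at ck=" + row.getInt("ck"));
        }
        finally
        {
            cluster.close();
        }
    }
}
{code}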


--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/2] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-01-09 Thread dbrosius
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c04c50c9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c04c50c9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c04c50c9

Branch: refs/heads/trunk
Commit: c04c50c95baaf3be6c7069b3aa617a0a066cd792
Parents: fa0cc90 5364083
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Fri Jan 9 20:03:06 2015 -0500
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Fri Jan 9 20:03:06 2015 -0500

--
 .../org/apache/cassandra/cql3/statements/DropTypeStatement.java| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c04c50c9/src/java/org/apache/cassandra/cql3/statements/DropTypeStatement.java
--



[1/6] cassandra git commit: prep for 2.0.12 release

2015-01-09 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/trunk c6525da86 -> fa0cc9039


prep for 2.0.12 release


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5b66997f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5b66997f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5b66997f

Branch: refs/heads/trunk
Commit: 5b66997fa8be961dd17cdc93b29f2b61491f2cbb
Parents: dd62f7b
Author: T Jake Luciani j...@apache.org
Authored: Fri Jan 9 15:21:49 2015 -0500
Committer: T Jake Luciani j...@apache.org
Committed: Fri Jan 9 15:21:49 2015 -0500

--
 NEWS.txt | 9 +
 build.xml| 2 +-
 debian/changelog | 6 ++
 3 files changed, 16 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5b66997f/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 6f6b795..2bc4fe6 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -13,6 +13,15 @@ restore snapshots created with the previous major version 
using the
 'sstableloader' tool. You can upgrade the file format of your snapshots
 using the provided 'sstableupgrade' tool.
 
+2.0.12
+==
+
+Upgrading
+-
+- Nothing specific to this release, but refer to previous entries if you
+  are upgrading from a previous version.
+
+
 2.0.11
 ==
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5b66997f/build.xml
--
diff --git a/build.xml b/build.xml
index 8c23407..9bbb54f 100644
--- a/build.xml
+++ b/build.xml
@@ -25,7 +25,7 @@
 <property name="debuglevel" value="source,lines,vars"/>
 
 <!-- default version and SCM information -->
-<property name="base.version" value="2.0.11"/>
+<property name="base.version" value="2.0.12"/>
 <property name="scm.connection" value="scm:git://git.apache.org/cassandra.git"/>
 <property name="scm.developerConnection" value="scm:git://git.apache.org/cassandra.git"/>
 <property name="scm.url" value="http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=tree"/>

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5b66997f/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index 39d9520..9853818 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+cassandra (2.0.12); urgency=medium
+
+  * New release 
+
+ -- Jake Luciani j...@apache.org  Fri, 09 Jan 2015 15:20:30 -0500
+
 cassandra (2.0.11) unstable; urgency=medium
 
   * New release



[6/6] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-01-09 Thread dbrosius
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fa0cc903
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fa0cc903
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fa0cc903

Branch: refs/heads/trunk
Commit: fa0cc90393d079aee40e91d02846f915093efe13
Parents: c6525da e906192
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Fri Jan 9 19:59:14 2015 -0500
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Fri Jan 9 19:59:14 2015 -0500

--
 CHANGES.txt   | 2 +-
 src/java/org/apache/cassandra/net/MessageOut.java | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/fa0cc903/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/fa0cc903/src/java/org/apache/cassandra/net/MessageOut.java
--



[4/6] cassandra git commit: fix hints serialized size calculation patch by dbrosius reviewed by thobbs for cassandra-8587

2015-01-09 Thread dbrosius
fix hints serialized size calculation
patch by dbrosius reviewed by thobbs for cassandra-8587


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e4fc3952
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e4fc3952
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e4fc3952

Branch: refs/heads/trunk
Commit: e4fc395242ee81a85141eda616ba97e937d1c604
Parents: df1f5ea
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Fri Jan 9 19:50:54 2015 -0500
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Fri Jan 9 19:50:54 2015 -0500

--
 CHANGES.txt   | 1 +
 src/java/org/apache/cassandra/net/MessageOut.java | 4 ++--
 2 files changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e4fc3952/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 0c7e9a2..fc43dfa 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -59,6 +59,7 @@
  * Add DC-aware sequential repair (CASSANDRA-8193)
  * Improve JBOD disk utilization (CASSANDRA-7386)
  * Use live sstables in snapshot repair if possible (CASSANDRA-8312)
+ * Fix hints serialized size calculation (CASSANDRA-8587)
 
 
 2.0.11:

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e4fc3952/src/java/org/apache/cassandra/net/MessageOut.java
--
diff --git a/src/java/org/apache/cassandra/net/MessageOut.java 
b/src/java/org/apache/cassandra/net/MessageOut.java
index dd6cae8..f49e3f7 100644
--- a/src/java/org/apache/cassandra/net/MessageOut.java
+++ b/src/java/org/apache/cassandra/net/MessageOut.java
@@ -128,8 +128,8 @@ public class MessageOut<T>
 size += TypeSizes.NATIVE.sizeof(parameters.size());
 for (Map.Entry<String, byte[]> entry : parameters.entrySet())
 {
-TypeSizes.NATIVE.sizeof(entry.getKey());
-TypeSizes.NATIVE.sizeof(entry.getValue().length);
+size += TypeSizes.NATIVE.sizeof(entry.getKey());
+size += TypeSizes.NATIVE.sizeof(entry.getValue().length);
 size += entry.getValue().length;
 }
 



[2/6] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2015-01-09 Thread dbrosius
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/49d5c8d9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/49d5c8d9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/49d5c8d9

Branch: refs/heads/trunk
Commit: 49d5c8d979f70be3bfe70625e82efac31d4f58c4
Parents: 7f62e29 5b66997
Author: T Jake Luciani j...@apache.org
Authored: Fri Jan 9 15:23:25 2015 -0500
Committer: T Jake Luciani j...@apache.org
Committed: Fri Jan 9 15:23:25 2015 -0500

--

--




[3/3] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2015-01-09 Thread dbrosius
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e9061922
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e9061922
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e9061922

Branch: refs/heads/cassandra-2.1
Commit: e9061922da418930dd1d607da7f2499dc067bac2
Parents: 49d5c8d e4fc395
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Fri Jan 9 19:58:44 2015 -0500
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Fri Jan 9 19:58:44 2015 -0500

--
 CHANGES.txt   | 2 +-
 src/java/org/apache/cassandra/net/MessageOut.java | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e9061922/CHANGES.txt
--
diff --cc CHANGES.txt
index abe3fce,fc43dfa..55ca55d
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -96,36 -44,7 +96,36 @@@ Merged from 2.0
   * Avoid overlap in L1 when L0 contains many nonoverlapping
 sstables (CASSANDRA-8211)
   * Improve PropertyFileSnitch logging (CASSANDRA-8183)
 - * Abort liveRatio calculation if the memtable is flushed (CASSANDRA-8164)
 + * Add DC-aware sequential repair (CASSANDRA-8193)
 + * Use live sstables in snapshot repair if possible (CASSANDRA-8312)
- 
++ * Fix hints serialized size calculation (CASSANDRA-8587)
 +
 +2.1.2
 + * (cqlsh) parse_for_table_meta errors out on queries with undefined
 +   grammars (CASSANDRA-8262)
 + * (cqlsh) Fix SELECT ... TOKEN() function broken in C* 2.1.1 (CASSANDRA-8258)
 + * Fix Cassandra crash when running on JDK8 update 40 (CASSANDRA-8209)
 + * Optimize partitioner tokens (CASSANDRA-8230)
 + * Improve compaction of repaired/unrepaired sstables (CASSANDRA-8004)
 + * Make cache serializers pluggable (CASSANDRA-8096)
 + * Fix issues with CONTAINS (KEY) queries on secondary indexes
 +   (CASSANDRA-8147)
 + * Fix read-rate tracking of sstables for some queries (CASSANDRA-8239)
 + * Fix default timestamp in QueryOptions (CASSANDRA-8246)
 + * Set socket timeout when reading remote version (CASSANDRA-8188)
 + * Refactor how we track live size (CASSANDRA-7852)
 + * Make sure unfinished compaction files are removed (CASSANDRA-8124)
 + * Fix shutdown when run as Windows service (CASSANDRA-8136)
 + * Fix DESCRIBE TABLE with custom indexes (CASSANDRA-8031)
 + * Fix race in RecoveryManagerTest (CASSANDRA-8176)
 + * Avoid IllegalArgumentException while sorting sstables in
 +   IndexSummaryManager (CASSANDRA-8182)
 + * Shutdown JVM on file descriptor exhaustion (CASSANDRA-7579)
 + * Add 'die' policy for commit log and disk failure (CASSANDRA-7927)
 + * Fix installing as service on Windows (CASSANDRA-8115)
 + * Fix CREATE TABLE for CQL2 (CASSANDRA-8144)
 + * Avoid boxing in ColumnStats min/max trackers (CASSANDRA-8109)
 +Merged from 2.0:
   * Correctly handle non-text column names in cql3 (CASSANDRA-8178)
   * Fix deletion for indexes on primary key columns (CASSANDRA-8206)
   * Add 'nodetool statusgossip' (CASSANDRA-8125)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e9061922/src/java/org/apache/cassandra/net/MessageOut.java
--



[1/3] cassandra git commit: fix debian changelog

2015-01-09 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 49d5c8d97 -> e9061922d


fix debian changelog


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/df1f5ead
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/df1f5ead
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/df1f5ead

Branch: refs/heads/cassandra-2.1
Commit: df1f5ead0950d4d3058cf6fe0fcae9ef528014fa
Parents: 5b66997
Author: T Jake Luciani j...@apache.org
Authored: Fri Jan 9 15:48:48 2015 -0500
Committer: T Jake Luciani j...@apache.org
Committed: Fri Jan 9 15:48:48 2015 -0500

--
 debian/changelog | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/df1f5ead/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index 9853818..53fa20f 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,4 +1,4 @@
-cassandra (2.0.12); urgency=medium
+cassandra (2.0.12) unstable; urgency=medium
 
   * New release 
 



[5/6] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2015-01-09 Thread dbrosius
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e9061922
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e9061922
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e9061922

Branch: refs/heads/trunk
Commit: e9061922da418930dd1d607da7f2499dc067bac2
Parents: 49d5c8d e4fc395
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Fri Jan 9 19:58:44 2015 -0500
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Fri Jan 9 19:58:44 2015 -0500

--
 CHANGES.txt   | 2 +-
 src/java/org/apache/cassandra/net/MessageOut.java | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e9061922/CHANGES.txt
--
diff --cc CHANGES.txt
index abe3fce,fc43dfa..55ca55d
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -96,36 -44,7 +96,36 @@@ Merged from 2.0
   * Avoid overlap in L1 when L0 contains many nonoverlapping
 sstables (CASSANDRA-8211)
   * Improve PropertyFileSnitch logging (CASSANDRA-8183)
 - * Abort liveRatio calculation if the memtable is flushed (CASSANDRA-8164)
 + * Add DC-aware sequential repair (CASSANDRA-8193)
 + * Use live sstables in snapshot repair if possible (CASSANDRA-8312)
- 
++ * Fix hints serialized size calculation (CASSANDRA-8587)
 +
 +2.1.2
 + * (cqlsh) parse_for_table_meta errors out on queries with undefined
 +   grammars (CASSANDRA-8262)
 + * (cqlsh) Fix SELECT ... TOKEN() function broken in C* 2.1.1 (CASSANDRA-8258)
 + * Fix Cassandra crash when running on JDK8 update 40 (CASSANDRA-8209)
 + * Optimize partitioner tokens (CASSANDRA-8230)
 + * Improve compaction of repaired/unrepaired sstables (CASSANDRA-8004)
 + * Make cache serializers pluggable (CASSANDRA-8096)
 + * Fix issues with CONTAINS (KEY) queries on secondary indexes
 +   (CASSANDRA-8147)
 + * Fix read-rate tracking of sstables for some queries (CASSANDRA-8239)
 + * Fix default timestamp in QueryOptions (CASSANDRA-8246)
 + * Set socket timeout when reading remote version (CASSANDRA-8188)
 + * Refactor how we track live size (CASSANDRA-7852)
 + * Make sure unfinished compaction files are removed (CASSANDRA-8124)
 + * Fix shutdown when run as Windows service (CASSANDRA-8136)
 + * Fix DESCRIBE TABLE with custom indexes (CASSANDRA-8031)
 + * Fix race in RecoveryManagerTest (CASSANDRA-8176)
 + * Avoid IllegalArgumentException while sorting sstables in
 +   IndexSummaryManager (CASSANDRA-8182)
 + * Shutdown JVM on file descriptor exhaustion (CASSANDRA-7579)
 + * Add 'die' policy for commit log and disk failure (CASSANDRA-7927)
 + * Fix installing as service on Windows (CASSANDRA-8115)
 + * Fix CREATE TABLE for CQL2 (CASSANDRA-8144)
 + * Avoid boxing in ColumnStats min/max trackers (CASSANDRA-8109)
 +Merged from 2.0:
   * Correctly handle non-text column names in cql3 (CASSANDRA-8178)
   * Fix deletion for indexes on primary key columns (CASSANDRA-8206)
   * Add 'nodetool statusgossip' (CASSANDRA-8125)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e9061922/src/java/org/apache/cassandra/net/MessageOut.java
--



[2/3] cassandra git commit: fix hints serialized size calculation patch by dbrosius reviewed by thobbs for cassandra-8587

2015-01-09 Thread dbrosius
fix hints serialized size calculation
patch by dbrosius reviewed by thobbs for cassandra-8587


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e4fc3952
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e4fc3952
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e4fc3952

Branch: refs/heads/cassandra-2.1
Commit: e4fc395242ee81a85141eda616ba97e937d1c604
Parents: df1f5ea
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Fri Jan 9 19:50:54 2015 -0500
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Fri Jan 9 19:50:54 2015 -0500

--
 CHANGES.txt   | 1 +
 src/java/org/apache/cassandra/net/MessageOut.java | 4 ++--
 2 files changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e4fc3952/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 0c7e9a2..fc43dfa 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -59,6 +59,7 @@
  * Add DC-aware sequential repair (CASSANDRA-8193)
  * Improve JBOD disk utilization (CASSANDRA-7386)
  * Use live sstables in snapshot repair if possible (CASSANDRA-8312)
+ * Fix hints serialized size calculation (CASSANDRA-8587)
 
 
 2.0.11:

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e4fc3952/src/java/org/apache/cassandra/net/MessageOut.java
--
diff --git a/src/java/org/apache/cassandra/net/MessageOut.java 
b/src/java/org/apache/cassandra/net/MessageOut.java
index dd6cae8..f49e3f7 100644
--- a/src/java/org/apache/cassandra/net/MessageOut.java
+++ b/src/java/org/apache/cassandra/net/MessageOut.java
@@ -128,8 +128,8 @@ public class MessageOut<T>
 size += TypeSizes.NATIVE.sizeof(parameters.size());
 for (Map.Entry<String, byte[]> entry : parameters.entrySet())
 {
-TypeSizes.NATIVE.sizeof(entry.getKey());
-TypeSizes.NATIVE.sizeof(entry.getValue().length);
+size += TypeSizes.NATIVE.sizeof(entry.getKey());
+size += TypeSizes.NATIVE.sizeof(entry.getValue().length);
 size += entry.getValue().length;
 }
 



cassandra git commit: Include a Map's value type in DropTypeStatement's isUsedBy patch by dbrosius reviewed by thobbs for cassandra-8588

2015-01-09 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 e9061922d -> 536408380


Include a Map's value type in DropTypeStatement's isUsedBy
patch by dbrosius reviewed by thobbs for cassandra-8588


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/53640838
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/53640838
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/53640838

Branch: refs/heads/cassandra-2.1
Commit: 536408380aa07853bb8a4d4d96af1b0cd06bbe31
Parents: e906192
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Fri Jan 9 20:00:16 2015 -0500
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Fri Jan 9 20:00:16 2015 -0500

--
 .../org/apache/cassandra/cql3/statements/DropTypeStatement.java| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/53640838/src/java/org/apache/cassandra/cql3/statements/DropTypeStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/DropTypeStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/DropTypeStatement.java
index a3b82a4..94edd01 100644
--- a/src/java/org/apache/cassandra/cql3/statements/DropTypeStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/DropTypeStatement.java
@@ -121,7 +121,7 @@ public class DropTypeStatement extends SchemaAlteringStatement
 else if (toCheck instanceof SetType)
 return isUsedBy(((SetType)toCheck).getElementsType());
 else
-return isUsedBy(((MapType)toCheck).getKeysType()) || isUsedBy(((MapType)toCheck).getKeysType());
+return isUsedBy(((MapType)toCheck).getKeysType()) || isUsedBy(((MapType)toCheck).getValuesType());
 }
 return false;
 }

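For context, a minimal sketch of the recursive check this patch corrects, using made-up
type classes rather than Cassandra's real AbstractType hierarchy: before the fix the map
branch tested the key type twice and never the value type, so a type used only as a map
value was wrongly considered unused.

{code}
public class DropTypeSketch
{
    interface Type {}
    static class UserType implements Type { final String name; UserType(String n) { name = n; } }
    static class ListType implements Type { final Type elements; ListType(Type e) { elements = e; } }
    static class SetType  implements Type { final Type elements; SetType(Type e)  { elements = e; } }
    static class MapType  implements Type { final Type keys, values; MapType(Type k, Type v) { keys = k; values = v; } }

    static boolean isUsedBy(Type toCheck, String droppedType)
    {
        if (toCheck instanceof UserType)
            return ((UserType) toCheck).name.equals(droppedType);
        if (toCheck instanceof ListType)
            return isUsedBy(((ListType) toCheck).elements, droppedType);
        if (toCheck instanceof SetType)
            return isUsedBy(((SetType) toCheck).elements, droppedType);
        if (toCheck instanceof MapType)
            // The CASSANDRA-8588 fix: check the key type AND the value type,
            // not the key type twice.
            return isUsedBy(((MapType) toCheck).keys, droppedType)
                || isUsedBy(((MapType) toCheck).values, droppedType);
        return false;
    }

    public static void main(String[] args)
    {
        Type column = new MapType(new UserType("address"), new UserType("phone"));
        System.out.println(isUsedBy(column, "phone")); // prints true only with the fixed check
    }
}
{code}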


[jira] [Commented] (CASSANDRA-7653) Add role based access control to Cassandra

2015-01-09 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14272146#comment-14272146
 ] 

Aleksey Yeschenko commented on CASSANDRA-7653:
--

Oh, one more thing.

Please include a NEWS.txt entry for the API change.

 Add role based access control to Cassandra
 --

 Key: CASSANDRA-7653
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7653
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Mike Adamson
Assignee: Sam Tunnicliffe
 Fix For: 3.0

 Attachments: 7653.patch, CQLSmokeTest.java, cql_smoke_test.py


 The current authentication model supports granting permissions to individual 
 users. While this is OK for small or medium organizations wanting to 
 implement authorization, it does not work well in large organizations because 
 of the overhead of having to maintain the permissions for each user.
 Introducing roles into the authentication model would allow sets of 
 permissions to be controlled in one place as a role and then the role granted 
 to users. Roles should also be able to be granted to other roles to allow 
 hierarchical sets of permissions to be built up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8303) Provide strict mode for CQL Queries

2015-01-09 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14272145#comment-14272145
 ] 

Aleksey Yeschenko commented on CASSANDRA-8303:
--

After some thinking, I'm with Sylvain on this: the least bad way to implement 
this is via authz alone.

Also, any permissions we add must fit the hierarchy. That means no per-DC 
permissions - there is no place for DCs in resource hierarchy. Use different 
users with different roles if you need to - a separate role for Spark that can 
do whatever it wants, with a user that only operates on the analytics DC should 
solve the issue.

SELECT and MODIFY would have to be split into more granular permissions for 
this whole thing to make any coherent sense. For example (just an example, 
please don't debate naming, or the set itself), for SELECT:
- GRANT INDEXING ON .. TO ..
- GRANT FILTERING ON .. TO ..
- GRANT SINGLE PARTITION SELECT ON .. TO ..
- GRANT MULTI PARTITION SELECT ON .. TO ..

SELECT itself would become an alias, just like ALL is currently. GRANT SELECT 
would grant those 4 permissions under the hood.

Similar stuff with MODIFY.

If you agree in principle, then we should start debating granularity and 
naming, because converting these (SELECT and MODIFY into actual permissions) 
would have to be done on 2.1-3.0 upgrade step of CASSANDRA-7653, and 3.0 is 
coming up soon.





 Provide strict mode for CQL Queries
 -

 Key: CASSANDRA-8303
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8303
 Project: Cassandra
  Issue Type: Improvement
Reporter: Anupam Arora
 Fix For: 3.0


 Please provide a strict mode option in cassandra that will kick out any CQL 
 queries that are expensive, e.g. any query with ALLOWS FILTERING, 
 multi-partition queries, secondary index queries, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8523) Writes should be sent to a replacement node while it is streaming in data

2015-01-09 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8523:
---
Fix Version/s: 2.1.3
   2.0.12
   Issue Type: Improvement  (was: Bug)

I've checked with [~brandon.williams], and this is intended behavior, so I'm 
marking this as an Improvement, not a bug. Do note that the replacement node 
will receive, as hints, all writes made while it was streaming, so if the 
stream takes less than the hint window, you should not see too much 
discrepancy.

 Writes should be sent to a replacement node while it is streaming in data
 -

 Key: CASSANDRA-8523
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8523
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Richard Wagner
 Fix For: 2.0.12, 2.1.3


 In our operations, we make heavy use of replace_address (or 
 replace_address_first_boot) in order to replace broken nodes. We now realize 
 that writes are not sent to the replacement nodes while they are in hibernate 
 state and streaming in data. This runs counter to what our expectations were, 
 especially since we know that writes ARE sent to nodes when they are 
 bootstrapped into the ring.
 It seems like cassandra should arrange to send writes to a node that is in 
 the process of replacing another node, just like it does for nodes that are 
 bootstrapping. I hesitate to phrase this as "we should send writes to a node 
 in hibernate" because the concept of hibernate may be useful in other 
 contexts, as per CASSANDRA-8336. Maybe a new state is needed here?
 Among other things, the fact that we don't get writes during this period 
 makes subsequent repairs more expensive, proportional to the number of writes 
 that we miss (and depending on the amount of data that needs to be streamed 
 during replacement and the time it may take to rebuild secondary indexes, we 
 could miss many many hours worth of writes). It also leaves us more exposed 
 to consistency violations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8548) Nodetool Cleanup - java.lang.AssertionError

2015-01-09 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14271461#comment-14271461
 ] 

Yuki Morishita commented on CASSANDRA-8548:
---

+1

 Nodetool Cleanup - java.lang.AssertionError
 ---

 Key: CASSANDRA-8548
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8548
 Project: Cassandra
  Issue Type: Bug
Reporter: Sebastian Estevez
Assignee: Marcus Eriksson
 Fix For: 2.0.12

 Attachments: 0001-make-sure-we-unmark-compacting.patch


 Needed to free up some space on a node but getting the dump below when 
 running nodetool cleanup.
 Tried turning on debug to try to obtain additional details in the logs but 
 nothing gets added to the logs when running cleanup. Added: 
 log4j.logger.org.apache.cassandra.db=DEBUG 
 in log4j-server.properties
 See the stack trace below:
 root@cassandra-019:~# nodetool cleanup
 {code}Error occurred during cleanup
 java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException
 at java.util.concurrent.FutureTask.report(FutureTask.java:122)
 at java.util.concurrent.FutureTask.get(FutureTask.java:188)
 at 
 org.apache.cassandra.db.compaction.CompactionManager.performAllSSTableOperation(CompactionManager.java:228)
 at 
 org.apache.cassandra.db.compaction.CompactionManager.performCleanup(CompactionManager.java:266)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.forceCleanup(ColumnFamilyStore.java:1112)
 at 
 org.apache.cassandra.service.StorageService.forceKeyspaceCleanup(StorageService.java:2162)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
 at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
 at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
 at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
 at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
 at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
 at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
 at sun.reflect.GeneratedMethodAccessor64.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
 at sun.rmi.transport.Transport$1.run(Transport.java:177)
 at sun.rmi.transport.Transport$1.run(Transport.java:174)
 at java.security.AccessController.doPrivileged(Native Method)
 at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
 at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)
 Caused by: java.lang.IllegalArgumentException
 at java.nio.Buffer.limit(Buffer.java:267)
 at 
 org.apache.cassandra.io.compress.CompressedRandomAccessReader.decompressChunk(CompressedRandomAccessReader.java:108)

[jira] [Commented] (CASSANDRA-8128) Exception when executing UPSERT

2015-01-09 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14271141#comment-14271141
 ] 

Sylvain Lebresne commented on CASSANDRA-8128:
-

What the stack trace is saying is that there are more bind markers in the query 
than there are values (in other words, the client somehow didn't send enough 
values). This should have failed with a meaningful error, but that meaningful 
error is thrown by {{QueryProcess.process()}}, and it appears that DSE bypasses 
this method using its own {{DseQueryHandler.process}}, which might be missing 
the proper check. So I'd first report this through DataStax support so it gets 
fixed there (provided I'm right that the check is missing).

Now I suppose your real problem is why fewer values than needed were sent. It's 
a good question, but it's unlikely to be a server-side problem. It could be a 
problem in your code, or it could be a problem with your driver triggered by 
large batches. For instance, one thing that comes to mind is that there is a 
hard limit in the protocol of 64K values per statement, and the java driver 
used to not validate that properly 
([JAVA-515|https://datastax-oss.atlassian.net/browse/JAVA-515], which is fixed 
in more recent versions of the driver), so if your statement ends up having 
more than that, it could trigger an overflow that silently triggers this 
problem.

Anyway, it's unlikely to be a server-side problem (except for the validation 
problem, but unless proved otherwise, Apache Cassandra does properly validate 
this case), so I'm closing this. If you have further elements that seem to 
indicate that Cassandra is to blame, feel free to reopen with those elements.

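A small illustration of the overflow described above, assuming the count of bound values
is carried on the wire as an unsigned 16-bit short (so counts beyond 65535 wrap around);
the class name and numbers are illustrative, not from the driver or the report:

{code}
public class ValueCountOverflow
{
    public static void main(String[] args)
    {
        int boundValues = 70_000;            // more than the 64K protocol limit
        int encoded = boundValues & 0xFFFF;  // what an unchecked 16-bit encoding would carry
        System.out.println("values declared by the client: " + boundValues);
        System.out.println("count seen after the wrap:      " + encoded); // 4464
        // The server then expects far fewer values than the query has bind markers,
        // which surfaces as an IndexOutOfBoundsException like the one in the report.
    }
}
{code}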
 Exception when executing UPSERT
 ---

 Key: CASSANDRA-8128
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8128
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Jens Rantil
Priority: Critical
  Labels: cql3
 Fix For: 2.0.12


 I am putting a bunch of (CQL) rows into Datastax DSE 4.5.1-1. Each upsert is 
 for a single partition key with up to ~3000 clustering keys. I understand too 
 large upserts aren't recommended, but I wouldn't expect to be getting the 
 following exception anyway:
 {noformat}
 ERROR [Native-Transport-Requests:4205136] 2014-10-16 12:00:38,668 
 ErrorMessage.java (line 222) Unexpected exception during request
 java.lang.IndexOutOfBoundsException: Index: 1749, Size: 1749
 at java.util.ArrayList.rangeCheck(ArrayList.java:635)
 at java.util.ArrayList.get(ArrayList.java:411)
 at 
 org.apache.cassandra.cql3.Constants$Marker.bindAndGet(Constants.java:278)
 at 
 org.apache.cassandra.cql3.Constants$Setter.execute(Constants.java:307)
 at 
 org.apache.cassandra.cql3.statements.UpdateStatement.addUpdateForKey(UpdateStatement.java:99)
 at 
 org.apache.cassandra.cql3.statements.BatchStatement.addStatementMutations(BatchStatement.java:200)
 at 
 org.apache.cassandra.cql3.statements.BatchStatement.getMutations(BatchStatement.java:145)
 at 
 org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:251)
 at 
 org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:232)
 at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:158)
 at 
 com.datastax.bdp.cassandra.cql3.DseQueryHandler.statementExecution(DseQueryHandler.java:207)
 at 
 com.datastax.bdp.cassandra.cql3.DseQueryHandler.process(DseQueryHandler.java:86)
 at 
 org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:119)
 at 
 org.apache.cassandra.transport.Message$Dispatcher.messageReceived(Message.java:304)
 at 
 org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
 at 
 org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
 at 
 org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
 at 
 org.jboss.netty.handler.execution.ChannelUpstreamEventRunnable.doRun(ChannelUpstreamEventRunnable.java:43)
 at 
 org.jboss.netty.handler.execution.ChannelEventRunnable.run(ChannelEventRunnable.java:67)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-8128) Exception when executing UPSERT

2015-01-09 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-8128.
-
Resolution: Not a Problem

 Exception when executing UPSERT
 ---

 Key: CASSANDRA-8128
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8128
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Jens Rantil
Priority: Critical
  Labels: cql3
 Fix For: 2.0.12


 I am putting a bunch of (CQL) rows into Datastax DSE 4.5.1-1. Each upsert is 
 for a single partition key with up to ~3000 clustering keys. I understand too 
 large upserts aren't recommended, but I wouldn't expect to be getting the 
 following exception anyway:
 {noformat}
 ERROR [Native-Transport-Requests:4205136] 2014-10-16 12:00:38,668 
 ErrorMessage.java (line 222) Unexpected exception during request
 java.lang.IndexOutOfBoundsException: Index: 1749, Size: 1749
 at java.util.ArrayList.rangeCheck(ArrayList.java:635)
 at java.util.ArrayList.get(ArrayList.java:411)
 at 
 org.apache.cassandra.cql3.Constants$Marker.bindAndGet(Constants.java:278)
 at 
 org.apache.cassandra.cql3.Constants$Setter.execute(Constants.java:307)
 at 
 org.apache.cassandra.cql3.statements.UpdateStatement.addUpdateForKey(UpdateStatement.java:99)
 at 
 org.apache.cassandra.cql3.statements.BatchStatement.addStatementMutations(BatchStatement.java:200)
 at 
 org.apache.cassandra.cql3.statements.BatchStatement.getMutations(BatchStatement.java:145)
 at 
 org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:251)
 at 
 org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:232)
 at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:158)
 at 
 com.datastax.bdp.cassandra.cql3.DseQueryHandler.statementExecution(DseQueryHandler.java:207)
 at 
 com.datastax.bdp.cassandra.cql3.DseQueryHandler.process(DseQueryHandler.java:86)
 at 
 org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:119)
 at 
 org.apache.cassandra.transport.Message$Dispatcher.messageReceived(Message.java:304)
 at 
 org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
 at 
 org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
 at 
 org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
 at 
 org.jboss.netty.handler.execution.ChannelUpstreamEventRunnable.doRun(ChannelUpstreamEventRunnable.java:43)
 at 
 org.jboss.netty.handler.execution.ChannelEventRunnable.run(ChannelEventRunnable.java:67)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-8523) Writes should be sent to a replacement node while it is streaming in data

2015-01-09 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams reassigned CASSANDRA-8523:
---

Assignee: Brandon Williams

 Writes should be sent to a replacement node while it is streaming in data
 -

 Key: CASSANDRA-8523
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8523
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Richard Wagner
Assignee: Brandon Williams
 Fix For: 2.0.12, 2.1.3


 In our operations, we make heavy use of replace_address (or 
 replace_address_first_boot) in order to replace broken nodes. We now realize 
 that writes are not sent to the replacement nodes while they are in hibernate 
 state and streaming in data. This runs counter to what our expectations were, 
 especially since we know that writes ARE sent to nodes when they are 
 bootstrapped into the ring.
 It seems like cassandra should arrange to send writes to a node that is in 
 the process of replacing another node, just like it does for nodes that are 
 bootstrapping. I hesitate to phrase this as "we should send writes to a node 
 in hibernate" because the concept of hibernate may be useful in other 
 contexts, as per CASSANDRA-8336. Maybe a new state is needed here?
 Among other things, the fact that we don't get writes during this period 
 makes subsequent repairs more expensive, proportional to the number of writes 
 that we miss (and depending on the amount of data that needs to be streamed 
 during replacement and the time it may take to rebuild secondary indexes, we 
 could miss many many hours worth of writes). It also leaves us more exposed 
 to consistency violations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8523) Writes should be sent to a replacement node while it is streaming in data

2015-01-09 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14271464#comment-14271464
 ] 

Brandon Williams commented on CASSANDRA-8523:
-

I completely agree this is an improvement, but it's going to be pretty tricky, 
especially since we can't use the FD to determine if the node has died, at 
least not in its current form since that would mark the node as UP.

 Writes should be sent to a replacement node while it is streaming in data
 -

 Key: CASSANDRA-8523
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8523
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Richard Wagner
 Fix For: 2.0.12, 2.1.3


 In our operations, we make heavy use of replace_address (or 
 replace_address_first_boot) in order to replace broken nodes. We now realize 
 that writes are not sent to the replacement nodes while they are in hibernate 
 state and streaming in data. This runs counter to what our expectations were, 
 especially since we know that writes ARE sent to nodes when they are 
 bootstrapped into the ring.
 It seems like cassandra should arrange to send writes to a node that is in 
 the process of replacing another node, just like it does for nodes that are 
 bootstrapping. I hesitate to phrase this as "we should send writes to a node 
 in hibernate" because the concept of hibernate may be useful in other 
 contexts, as per CASSANDRA-8336. Maybe a new state is needed here?
 Among other things, the fact that we don't get writes during this period 
 makes subsequent repairs more expensive, proportional to the number of writes 
 that we miss (and depending on the amount of data that needs to be streamed 
 during replacement and the time it may take to rebuild secondary indexes, we 
 could miss many many hours worth of writes). It also leaves us more exposed 
 to consistency violations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8490) DISTINCT queries with LIMITs or paging are incorrect when partitions are deleted

2015-01-09 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14270824#comment-14270824
 ] 

Sylvain Lebresne commented on CASSANDRA-8490:
-

+1

 DISTINCT queries with LIMITs or paging are incorrect when partitions are 
 deleted
 

 Key: CASSANDRA-8490
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8490
 Project: Cassandra
  Issue Type: Bug
 Environment: Driver version: 2.1.3.
 Cassandra version: 2.0.11/2.1.2.
Reporter: Frank Limstrand
Assignee: Tyler Hobbs
 Fix For: 2.0.12, 2.1.3

 Attachments: 8490-2.0-v2.txt, 8490-2.0.txt, 8490-trunk-v2.txt, 
 8490-trunk.txt


 Using paging demo code from 
 https://github.com/PatrickCallaghan/datastax-paging-demo
 The code creates and populates a table with 1000 entries and pages through 
 them with setFetchSize set to 100. If we then delete one entry with 'cqlsh':
 {noformat}
 cqlsh:datastax_paging_demo> delete from datastax_paging_demo.products where 
 productId = 'P142'; (The specified productid is number 6 in the resultset.)
 {noformat}
 and run the same query (Select * from) again we get:
 {noformat}
 [com.datastax.paging.Main.main()] INFO  com.datastax.paging.Main - Paging 
 demo took 0 secs. Total Products : 999
 {noformat}
 which is what we would expect.
 If we then change the select statement in dao/ProductDao.java (line 70) 
 from "Select * from" to "Select DISTINCT productid from" we get this result:
 {noformat}
 [com.datastax.paging.Main.main()] INFO  com.datastax.paging.Main - Paging 
 demo took 0 secs. Total Products : 99
 {noformat}
 So it looks like the tombstone stops the paging behaviour. Is this a bug?
 {noformat}
 DEBUG [Native-Transport-Requests:788] 2014-12-16 10:09:13,431 Message.java 
 (line 319) Received: QUERY Select DISTINCT productid from 
 datastax_paging_demo.products, v=2
 DEBUG [Native-Transport-Requests:788] 2014-12-16 10:09:13,434 
 AbstractQueryPager.java (line 98) Fetched 99 live rows
 DEBUG [Native-Transport-Requests:788] 2014-12-16 10:09:13,434 
 AbstractQueryPager.java (line 115) Got result (99) smaller than page size 
 (100), considering pager exhausted
 {noformat}

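A hedged sketch of the failure mode visible in the DEBUG log above (not Cassandra's actual
AbstractQueryPager code): a pager that treats a short page as proof of exhaustion stops
early when a deleted partition is filtered out of the page server-side. The data set and
names are illustrative assumptions.

{code}
import java.util.ArrayList;
import java.util.List;

public class NaivePagerSketch
{
    // Pretend data set: 1000 partition keys, one of which was deleted (tombstoned).
    static List<Integer> fetchPage(int offset, int pageSize)
    {
        List<Integer> page = new ArrayList<>();
        for (int i = offset; i < Math.min(offset + pageSize, 1000); i++)
            if (i != 5)          // key 6 was deleted, so it is filtered from the page
                page.add(i);
        return page;
    }

    public static void main(String[] args)
    {
        int pageSize = 100, offset = 0, total = 0;
        while (true)
        {
            List<Integer> page = fetchPage(offset, pageSize);
            total += page.size();
            offset += pageSize;
            // The flawed exhaustion check: the first page has 99 live rows (< 100),
            // so the naive pager stops here and reports 99 instead of 999.
            if (page.size() < pageSize)
                break;
        }
        System.out.println("Total products seen: " + total);
    }
}
{code}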


--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8582) Descriptor.fromFilename seems broken for BIG format

2015-01-09 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14270766#comment-14270766
 ] 

Benjamin Lerer commented on CASSANDRA-8582:
---

[~tjake] I do not really understand the 'Not a Problem' resolution.

From the Descriptor class API, I would expect the following behavior:
{code}
Descriptor original = new Descriptor(tempDataDir, ksname, cfname, 1, 
Descriptor.Type.TEMP, SSTableFormat.Type.BIG);
String file = original.filenameFor(Component.DATA);
 Descriptor clone = Descriptor.fromFilename(file);
assertEquals(clone, original);
{code}

This behavior used to be valid as it was used within 
{{SSTableSimpleUnsortedWriter}} and the change actually broke 
{{CQLSSTableWriter}}. Based on that I am a bit afraid that the change of 
behavior broke other parts of the code too. 

 Descriptor.fromFilename seems broken for BIG format
 ---

 Key: CASSANDRA-8582
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8582
 Project: Cassandra
  Issue Type: Bug
Reporter: Benjamin Lerer
Assignee: T Jake Luciani

 The problem can be reproduced in {{DescriptorTest}} by adding the following 
 unit test:
 {code}
 @Test
 public void testFromFileNameWithBIGFormat()
 {
 checkFromFilename(new Descriptor(tempDataDir, ksname, cfname, 1, 
 Descriptor.Type.TEMP, SSTableFormat.Type.BIG), false);
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-8531) metric CommitLog.PendingTasks is always growing

2015-01-09 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler resolved CASSANDRA-8531.
---
Resolution: Incomplete

[~mabrek], please feel free to reopen this with further details on your 
problem, if you need.

 metric CommitLog.PendingTasks is always growing
 ---

 Key: CASSANDRA-8531
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8531
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: cassandra21-2.1.1-1.noarch
 Linux  2.6.32-431.el6.x86_64
 RHEL 6.5
 java version 1.7.0_67
 Java(TM) SE Runtime Environment (build 1.7.0_67-b01)
 Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)
Reporter: Anton Lebedevich

 org.apache.cassandra.db.commitlog.AbstractCommitLogService increments the pending 
 tasks counter each time it is read (see the getPendingTasks method)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8515) Hang at startup when no commitlog space

2015-01-09 Thread Richard Low (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14271668#comment-14271668
 ] 

Richard Low commented on CASSANDRA-8515:


I think #5737 was marked as invalid because it was thought to be a bug outside 
of Cassandra. But understanding the cause means we can do something about it, 
and I think logging and stopping would be the right approach, as you say.

 Hang at startup when no commitlog space
 ---

 Key: CASSANDRA-8515
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8515
 Project: Cassandra
  Issue Type: Bug
Reporter: Richard Low
 Fix For: 2.0.12


 If the commit log directory has no free space, Cassandra hangs on startup.
 The main thread is waiting:
 {code}
 main prio=9 tid=0x7fefe400f800 nid=0x1303 waiting on condition 
 [0x00010b9c1000]
java.lang.Thread.State: WAITING (parking)
   at sun.misc.Unsafe.park(Native Method)
   - parking to wait for  0x0007dc8c5fc8 (a 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
   at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
   at 
 java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
   at 
 org.apache.cassandra.db.commitlog.CommitLogAllocator.fetchSegment(CommitLogAllocator.java:137)
   at 
 org.apache.cassandra.db.commitlog.CommitLog.activateNextSegment(CommitLog.java:299)
   at org.apache.cassandra.db.commitlog.CommitLog.init(CommitLog.java:73)
   at 
 org.apache.cassandra.db.commitlog.CommitLog.clinit(CommitLog.java:53)
   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:360)
   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:339)
   at org.apache.cassandra.db.RowMutation.apply(RowMutation.java:211)
   at 
 org.apache.cassandra.cql3.statements.ModificationStatement.executeInternal(ModificationStatement.java:699)
   at 
 org.apache.cassandra.cql3.QueryProcessor.processInternal(QueryProcessor.java:208)
   at 
 org.apache.cassandra.db.SystemKeyspace.updateSchemaVersion(SystemKeyspace.java:390)
   - locked 0x0007de2f2ce0 (a java.lang.Class for 
 org.apache.cassandra.db.SystemKeyspace)
   at org.apache.cassandra.config.Schema.updateVersion(Schema.java:384)
   at 
 org.apache.cassandra.config.DatabaseDescriptor.loadSchemas(DatabaseDescriptor.java:532)
   at 
 org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:270)
   at 
 org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
   at 
 org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:585)
 {code}
 but COMMIT-LOG-ALLOCATOR is RUNNABLE:
 {code}
 COMMIT-LOG-ALLOCATOR prio=9 tid=0x7fefe5402800 nid=0x7513 in 
 Object.wait() [0x000118252000]
java.lang.Thread.State: RUNNABLE
   at 
 org.apache.cassandra.db.commitlog.CommitLogAllocator$1.runMayThrow(CommitLogAllocator.java:116)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at java.lang.Thread.run(Thread.java:745)
 {code}
 but making no progress.
 This behaviour has changed since 1.2 (see CASSANDRA-5737).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8515) Hang at startup when no commitlog space

2015-01-09 Thread Richard Low (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14271668#comment-14271668
 ] 

Richard Low edited comment on CASSANDRA-8515 at 1/9/15 6:38 PM:


I think CASSANDRA-5737 was marked as invalid because it was thought to be a bug 
outside of Cassandra. But understanding the cause means we can do something 
about it, and I think logging and stopping would be the right approach, as you 
say.


was (Author: rlow):
I think #5737 was marked as invalid because it was thought to be a bug outside 
of Cassandra. But understanding the cause means we can do something about it, 
and I think logging and stopping would be the right approach, as you say.

 Hang at startup when no commitlog space
 ---

 Key: CASSANDRA-8515
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8515
 Project: Cassandra
  Issue Type: Bug
Reporter: Richard Low
 Fix For: 2.0.12


 If the commit log directory has no free space, Cassandra hangs on startup.
 The main thread is waiting:
 {code}
 main prio=9 tid=0x7fefe400f800 nid=0x1303 waiting on condition 
 [0x00010b9c1000]
java.lang.Thread.State: WAITING (parking)
   at sun.misc.Unsafe.park(Native Method)
   - parking to wait for  0x0007dc8c5fc8 (a 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
   at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
   at 
 java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
   at 
 org.apache.cassandra.db.commitlog.CommitLogAllocator.fetchSegment(CommitLogAllocator.java:137)
   at 
 org.apache.cassandra.db.commitlog.CommitLog.activateNextSegment(CommitLog.java:299)
   at org.apache.cassandra.db.commitlog.CommitLog.init(CommitLog.java:73)
   at 
 org.apache.cassandra.db.commitlog.CommitLog.clinit(CommitLog.java:53)
   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:360)
   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:339)
   at org.apache.cassandra.db.RowMutation.apply(RowMutation.java:211)
   at 
 org.apache.cassandra.cql3.statements.ModificationStatement.executeInternal(ModificationStatement.java:699)
   at 
 org.apache.cassandra.cql3.QueryProcessor.processInternal(QueryProcessor.java:208)
   at 
 org.apache.cassandra.db.SystemKeyspace.updateSchemaVersion(SystemKeyspace.java:390)
   - locked 0x0007de2f2ce0 (a java.lang.Class for 
 org.apache.cassandra.db.SystemKeyspace)
   at org.apache.cassandra.config.Schema.updateVersion(Schema.java:384)
   at 
 org.apache.cassandra.config.DatabaseDescriptor.loadSchemas(DatabaseDescriptor.java:532)
   at 
 org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:270)
   at 
 org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
   at 
 org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:585)
 {code}
 but COMMIT-LOG-ALLOCATOR is RUNNABLE:
 {code}
 COMMIT-LOG-ALLOCATOR prio=9 tid=0x7fefe5402800 nid=0x7513 in 
 Object.wait() [0x000118252000]
java.lang.Thread.State: RUNNABLE
   at 
 org.apache.cassandra.db.commitlog.CommitLogAllocator$1.runMayThrow(CommitLogAllocator.java:116)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at java.lang.Thread.run(Thread.java:745)
 {code}
 but making no progress.
 This behaviour has changed since 1.2 (see CASSANDRA-5737).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8457) nio MessagingService

2015-01-09 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14271701#comment-14271701
 ] 

Ariel Weisberg commented on CASSANDRA-8457:
---

Took a stab at writing an adaptive approach to coalescing based on a moving 
average. Numbers look good for the workloads tested.
Code 
https://github.com/aweisberg/cassandra/compare/6be33289f34782e12229a7621022bb5ce66b2f1b...e48133c4d5acbaa6563ea48a0ca118c278b2f6f7
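
As a rough illustration of the idea only (this is not the code on the branch linked above; the class name, the smoothing factor and the 200 microsecond cap are assumptions made for the sketch), the decision can be driven by an exponential moving average of message inter-arrival times:

{code}
final class AdaptiveCoalescingSketch
{
    private static final long MAX_WINDOW_NANOS = 200_000L; // illustrative 200 microsecond cap
    private static final double ALPHA = 0.1;               // illustrative EMA smoothing factor

    private double avgGapNanos = -1;
    private long lastMessageNanos = System.nanoTime();

    // Called once per outgoing message: returns how long the sender may wait for
    // more messages before flushing, or 0 to send immediately.
    long coalesceWindowNanos()
    {
        long now = System.nanoTime();
        long gap = now - lastMessageNanos;
        lastMessageNanos = now;

        // Exponential moving average of the inter-message arrival time.
        avgGapNanos = avgGapNanos < 0 ? gap : ALPHA * gap + (1 - ALPHA) * avgGapNanos;

        // Coalesce only when messages arrive close together; otherwise the added
        // latency is not worth it (see the low-concurrency regression below).
        return avgGapNanos < MAX_WINDOW_NANOS ? (long) avgGapNanos : 0L;
    }
}
{code}

The point is simply that the window adapts to the observed message rate instead of being fixed.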

Testing in AWS, 14 servers 6 clients.

Using a fixed coalescing window, at low concurrency there is a performance drop 
from 6746 to 3929. With adaptive coalescing I got 6758.

At medium concurrency (5 threads per client, 6 clients) I got 31097 with 
coalescing disabled and 31120 with coalescing enabled.

At high concurrency (500 threads per client, 6 clients) I got 479532 with 
coalescing and 166010 without. This is with a maximum coalescing window of 200 
milliseconds.

I added debug output to log when coalescing starts and stops and it's 
interesting. At the beginning of the benchmark things flap, but they don't flap 
madly. After a few minutes it settles. I also notice a strange thing where CPU 
utilization at the start of a benchmark is 500% or so and then after a while it 
climbs. Like something somewhere is warming up or balancing. I recall seeing 
this in GCE as well.

I had one of the OutboundTcpConnections (first to get the permit) log a trace 
of all outgoing message times. I threw that into a histogram for informational 
purposes. 50% of messages are sent within 100 microseconds of each other and 
92% are sent within one millisecond. This is without any coalescing.

{noformat}
   Value Percentile TotalCount 1/(1-Percentile)

   0.000 0.   5554   1.00
   5.703 0.1000 124565   1.11
  13.263 0.2000 249128   1.25
  24.143 0.3000 373630   1.43
  40.607 0.4000 498108   1.67
  94.015 0.5000 622664   2.00
 158.463 0.5500 684867   2.22
 244.351 0.6000 747137   2.50
 305.407 0.6500 809631   2.86
 362.239 0.7000 871641   3.33
 428.031 0.7500 933978   4.00
 467.711 0.7750 965085   4.44
 520.703 0.8000 996254   5.00
  595.967  0.8250  1027359    5.71
  672.767  0.8500  1058457    6.67
  743.935  0.8750  1089573    8.00
  780.799  0.8875  1105290    8.89
  821.247  0.9000  1120774   10.00
  868.351  0.9125  1136261   11.43
  928.767  0.9250  1151889   13.33
 1006.079  0.9375  1167421   16.00
 1049.599  0.94375  1175260   17.78
 1095.679  0.9500  1183041   20.00
 1143.807  0.95625  1190779   22.86
 1198.079  0.9625  1198542   26.67
 1264.639  0.96875  1206301   32.00
 1305.599  0.971875  1210228   35.56
 1354.751  0.9750  1214090   40.00
 1407.999  0.978125  1217975   45.71
 1470.463  0.98125  1221854   53.33
 1542.143  0.984375  1225759   64.00
 1586.175  0.9859375  1227720   71.11
 1634.303  0.9875  1229643   80.00
 1688.575  0.9890625  1231596   91.43
 1756.159  0.990625  1233523  106.67
 1839.103  0.9921875  1235464  128.00
 1887.231  0.99296875  1236430  142.22
 1944.575  0.99375  1237409  160.00
 2007.039  0.99453125  1238384  182.86
 2084.863  0.9953125  1239358  213.33
 2174.975  0.99609375  1240326  256.00
 2230.271  0.996484375  1240818  284.44
 2293.759  0.996875  1241292  320.00
 2369.535  0.997265625  1241785  365.71
 2455.551  0.99765625  1242271  426.67
 2578.431  0.998046875  1242752  512.00
 2656.255  0.9982421875  1242999  568.89
 2740.223  0.9984375  1243244  640.00
 2834.431  0.9986328125  1243482  731.43
 2957.311  0.998828125  1243725  853.33
 3131.391  0.9990234375  1243969  1024.00
 3235.839  0.99912109375  1244091  1137.78
 3336.191  0.99921875  1244212  1280.00
 3471.359  0.99931640625  1244332  1462.86
 3641.343  0.9994140625  1244455  1706.67
 3837.951  0.99951171875  1244576  2048.00
 4001.791  0.999560546875  1244636  2275.56
 4136.959  0.999609375  1244697  2560.00
 4399.103  0.999658203125  1244758  2925.71
4628.479 

[jira] [Comment Edited] (CASSANDRA-8457) nio MessagingService

2015-01-09 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14271701#comment-14271701
 ] 

Ariel Weisberg edited comment on CASSANDRA-8457 at 1/9/15 7:08 PM:
---

Took a stab at writing an adaptive approach to coalescing based on a moving 
average. Numbers look good for the workloads tested.
Code 
https://github.com/aweisberg/cassandra/compare/6be33289f34782e12229a7621022bb5ce66b2f1b...e48133c4d5acbaa6563ea48a0ca118c278b2f6f7

Testing in AWS, 14 servers 6 clients.

Using a fixed coalescing window, at low concurrency there is a performance drop 
from 6746 to 3929. With adaptive coalescing I got 6758.

At medium concurrency (5 threads per client, 6 clients) I got 31097 with 
coalescing disabled and 31120 with coalescing enabled.

At high concurrency (500 threads per client, 6 clients) I got 479532 with 
coalescing and 166010 without. This is with a maximum coalescing window of 200 
milliseconds.

I added debug output to log when coalescing starts and stops and it's 
interesting. At the beginning of the benchmark things flap, but they don't flap 
madly. After a few minutes it settles. I also notice a strange thing where CPU 
utilization at the start of a benchmark is 500% or so and then after a while it 
climbs. Like something somewhere is warming up or balancing. I recall seeing 
this in GCE as well.

I had one of the OutboundTcpConnections (first to get the permit) log a trace 
of all outgoing message times. I threw that into a histogram for informational 
purposes. 50% of messages are sent within 100 microseconds of each other and 
92% are sent within one millisecond. This is without any coalescing.

{noformat}
   Value Percentile TotalCount 1/(1-Percentile)

   0.000 0.   5554   1.00
   5.703 0.1000 124565   1.11
  13.263 0.2000 249128   1.25
  24.143 0.3000 373630   1.43
  40.607 0.4000 498108   1.67
  94.015 0.5000 622664   2.00
 158.463 0.5500 684867   2.22
 244.351 0.6000 747137   2.50
 305.407 0.6500 809631   2.86
 362.239 0.7000 871641   3.33
 428.031 0.7500 933978   4.00
 467.711 0.7750 965085   4.44
 520.703 0.8000 996254   5.00
  595.967  0.8250  1027359    5.71
  672.767  0.8500  1058457    6.67
  743.935  0.8750  1089573    8.00
  780.799  0.8875  1105290    8.89
  821.247  0.9000  1120774   10.00
  868.351  0.9125  1136261   11.43
  928.767  0.9250  1151889   13.33
 1006.079  0.9375  1167421   16.00
 1049.599  0.94375  1175260   17.78
 1095.679  0.9500  1183041   20.00
 1143.807  0.95625  1190779   22.86
 1198.079  0.9625  1198542   26.67
 1264.639  0.96875  1206301   32.00
 1305.599  0.971875  1210228   35.56
 1354.751  0.9750  1214090   40.00
 1407.999  0.978125  1217975   45.71
 1470.463  0.98125  1221854   53.33
 1542.143  0.984375  1225759   64.00
 1586.175  0.9859375  1227720   71.11
 1634.303  0.9875  1229643   80.00
 1688.575  0.9890625  1231596   91.43
 1756.159  0.990625  1233523  106.67
 1839.103  0.9921875  1235464  128.00
 1887.231  0.99296875  1236430  142.22
 1944.575  0.99375  1237409  160.00
 2007.039  0.99453125  1238384  182.86
 2084.863  0.9953125  1239358  213.33
 2174.975  0.99609375  1240326  256.00
 2230.271  0.996484375  1240818  284.44
 2293.759  0.996875  1241292  320.00
 2369.535  0.997265625  1241785  365.71
 2455.551  0.99765625  1242271  426.67
 2578.431  0.998046875  1242752  512.00
 2656.255  0.9982421875  1242999  568.89
 2740.223  0.9984375  1243244  640.00
 2834.431  0.9986328125  1243482  731.43
 2957.311  0.998828125  1243725  853.33
 3131.391  0.9990234375  1243969  1024.00
 3235.839  0.99912109375  1244091  1137.78
 3336.191  0.99921875  1244212  1280.00
 3471.359  0.99931640625  1244332  1462.86
 3641.343  0.9994140625  1244455  1706.67
 3837.951  0.99951171875  1244576  2048.00
 4001.791  0.999560546875  1244636  2275.56
 4136.959  0.999609375  1244697  2560.00
4399.103 

[jira] [Commented] (CASSANDRA-7438) Serializing Row cache alternative (Fully off heap)

2015-01-09 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14271745#comment-14271745
 ] 

Robert Stupp commented on CASSANDRA-7438:
-

Note: OHC now has cache-loader support (https://github.com/snazy/ohc/issues/3). 
Could be an alternative for RowCacheSentinel.

 Serializing Row cache alternative (Fully off heap)
 --

 Key: CASSANDRA-7438
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7438
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
 Environment: Linux
Reporter: Vijay
Assignee: Robert Stupp
  Labels: performance
 Fix For: 3.0

 Attachments: 0001-CASSANDRA-7438.patch, tests.zip


 Currently SerializingCache is partially off heap; keys are still stored in 
 the JVM heap as BB (ByteBuffers):
 * There are higher GC costs for a reasonably big cache.
 * Some users have used the row cache efficiently in production for better 
 results, but this requires careful tuning.
 * Overhead in memory for the cache entries is relatively high.
 So the proposal for this ticket is to move the LRU cache logic completely off 
 heap and use JNI to interact with the cache. We might want to ensure that the 
 new implementation matches the existing APIs (ICache), and the implementation 
 needs to have safe memory access, low overhead in memory and as few memcpys 
 as possible.
 We might also want to make this cache configurable.
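
As a toy illustration of what "fully off heap" means here (this is neither OHC nor the attached patch, just an assumed layout for the example), both key and value can be copied into a direct ByteBuffer so that neither is retained on the JVM heap once cached:

{code}
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

final class OffHeapEntrySketch
{
    // Lay out [key length][value length][key bytes][value bytes] in native memory.
    static ByteBuffer writeEntry(byte[] key, byte[] value)
    {
        ByteBuffer entry = ByteBuffer.allocateDirect(8 + key.length + value.length);
        entry.putInt(key.length).putInt(value.length).put(key).put(value);
        entry.flip();
        return entry;
    }

    public static void main(String[] args)
    {
        ByteBuffer e = writeEntry("row-key".getBytes(StandardCharsets.UTF_8),
                                  "serialized row".getBytes(StandardCharsets.UTF_8));
        System.out.println("off-heap entry bytes: " + e.remaining());
    }
}
{code}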



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-7438) Serializing Row cache alternative (Fully off heap)

2015-01-09 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14271745#comment-14271745
 ] 

Robert Stupp edited comment on CASSANDRA-7438 at 1/9/15 7:21 PM:
-

Note: OHC now has cache-loader support (https://github.com/snazy/ohc/issues/3). 
Could be an alternative for RowCacheSentinel.
EDIT: in a C* follow-up ticket


was (Author: snazy):
Note: OHC now has cache-loader support (https://github.com/snazy/ohc/issues/3). 
Could be an alternative for RowCacheSentinel.

 Serializing Row cache alternative (Fully off heap)
 --

 Key: CASSANDRA-7438
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7438
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
 Environment: Linux
Reporter: Vijay
Assignee: Robert Stupp
  Labels: performance
 Fix For: 3.0

 Attachments: 0001-CASSANDRA-7438.patch, tests.zip


 Currently SerializingCache is partially off heap; keys are still stored in 
 the JVM heap as BB (ByteBuffers):
 * There are higher GC costs for a reasonably big cache.
 * Some users have used the row cache efficiently in production for better 
 results, but this requires careful tuning.
 * Overhead in memory for the cache entries is relatively high.
 So the proposal for this ticket is to move the LRU cache logic completely off 
 heap and use JNI to interact with the cache. We might want to ensure that the 
 new implementation matches the existing APIs (ICache), and the implementation 
 needs to have safe memory access, low overhead in memory and as few memcpys 
 as possible.
 We might also want to make this cache configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7032) Improve vnode allocation

2015-01-09 Thread Branimir Lambov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Branimir Lambov updated CASSANDRA-7032:
---
Attachment: TestVNodeAllocation.java

A work-in-progress algorithm for selecting vnodes in the replicated case is 
attached. The main idea of the algorithm is to select token positions for each 
new vnode in such a way as to get best improvement in replicated load variance 
(i.e. standard deviation) across nodes and vnodes *1. More specifically, it 
prepares a selection of token positions to try (by picking the middle positions 
between existing vnodes *2), evaluates the expected improvement in variance for 
each selection and chooses the best *3, continuing until all the vnodes of the 
new node have assigned tokens. To improve average performance, the expected 
improvement for all choices is calculated once; for the second and later vnode 
we only recalculate it for the best candidate until we find one that does not 
deteriorate to worse than the next option in the list *4.
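
For readers who want the gist without opening the attachment, a stripped-down sketch of the selection loop follows; the names and the load evaluator are placeholders, not the attached TestVNodeAllocation.java:

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.function.ToDoubleFunction;

final class MidpointCandidateSketch
{
    // sortedTokens: existing tokens in ring order; stddevIfAdded evaluates the
    // replicated-load standard deviation that would result from adding a token.
    static long pickNextToken(List<Long> sortedTokens, ToDoubleFunction<Long> stddevIfAdded)
    {
        // Candidate positions: the middle of each gap between existing vnodes
        // (the wrap-around gap is handled naively here).
        List<Long> candidates = new ArrayList<>();
        for (int i = 0; i < sortedTokens.size(); i++)
        {
            long a = sortedTokens.get(i);
            long b = sortedTokens.get((i + 1) % sortedTokens.size());
            candidates.add(a + (b - a) / 2);
        }

        // Choose the candidate giving the lowest expected variance.
        long best = candidates.get(0);
        double bestStddev = Double.MAX_VALUE;
        for (long candidate : candidates)
        {
            double stddev = stddevIfAdded.applyAsDouble(candidate);
            if (stddev < bestStddev)
            {
                bestStddev = stddev;
                best = candidate;
            }
        }
        return best;
    }
}
{code}

The actual algorithm additionally caches the evaluations and only re-checks the leading candidate, as described in *4 above.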

Tested with simple factor-3 replication, it maintains the following utilization 
ranges: 
- 1 vnode: 70% - 135%
- 4 vnodes: 80% - 115%
- 16 vnodes: 83% - 106%
- 64 vnodes: 86% - 103%
- 256 vnodes: 87% - 102%

Unlike random allocation, the overutilization does not grow with the number of 
nodes, and a much smaller number of vnodes suffices (4 or 8 vnodes would 
probably be enough for most use cases).

The underutilization for this algorithm is affected less by the number of 
vnodes; this is due to the effect of replication on newly added vnodes: they 
necessarily have to take the share of one fewer vnode replica than the vnode 
they split (regardless of the algorithm we use, if we add a new node to a large 
enough perfectly balanced cluster where all vnodes are responsible for the same 
share of tokens, the new node will necessarily have at most 2/3 (for RF=3) of 
the average load.). This could possibly be improved if we manage to keep enough 
individual tokens with load closer to RF / (RF - 1), which I've yet to try.

The algorithm is implemented in the {{ReplicationAwareTokenDistributor}} in the 
attached file. Running the file simulates the effect of adding nodes using this 
algorithm on a randomly-generated cluster and prints out the minimum and 
maximum per-node and per-token replicated load after each step, as well as the 
standard deviation of the load. Sample results:
{code}
Random generation of 500 nodes with 8 tokens each
Size 500   node max 1.88 min 0.51 stddev 0.22193
Adding 1 node(s) using ReplicationAwareTokenDistributor
Size 501   node max 1.90 min 0.51 stddev 0.21922   Simple 3 replicas
Adding 4 node(s) using ReplicationAwareTokenDistributor
Size 505   node max 1.63 min 0.51 stddev 0.20580   token max 3.72 min 0.01 
stddev 0.58768   Simple 3 replicas
Adding 15 node(s) using ReplicationAwareTokenDistributor
Size 520   node max 1.51 min 0.53 stddev 0.17369   token max 3.83 min 0.01 
stddev 0.54526   Simple 3 replicas
Adding 105 node(s) using ReplicationAwareTokenDistributor
Size 625   node max 1.15 min 0.63 stddev 0.08069   token max 2.73 min 0.00 
stddev 0.40190   Simple 3 replicas
Adding 375 node(s) using ReplicationAwareTokenDistributor
Size 1000   node max 1.08 min 0.84 stddev 0.03041   token max 1.99 min 0.00 
stddev 0.22341   Simple 3 replicas
Losing 1 nodes
Size 999   node max 1.09 min 0.84 stddev 0.03081   token max 1.98 min 0.00 
stddev 0.22429   Simple 3 replicas
Adding 1 node(s) using ReplicationAwareTokenDistributor
Size 1000   node max 1.08 min 0.84 stddev 0.03019   token max 1.99 min 0.00 
stddev 0.22335   Simple 3 replicas
Losing 5 nodes
Size 995   node max 1.17 min 0.83 stddev 0.03380   token max 2.01 min 0.00 
stddev 0.22565   Simple 3 replicas
Adding 5 node(s) using ReplicationAwareTokenDistributor
Size 1000   node max 1.08 min 0.84 stddev 0.03000   token max 1.99 min 0.00 
stddev 0.22181   Simple 3 replicas
Losing 20 nodes
Size 980   node max 1.19 min 0.88 stddev 0.04362   token max 2.44 min 0.00 
stddev 0.23370   Simple 3 replicas
Adding 20 node(s) using ReplicationAwareTokenDistributor
Size 1000   node max 1.08 min 0.89 stddev 0.02962   token max 1.99 min 0.00 
stddev 0.21681   Simple 3 replicas
Losing 125 nodes
Size 875   node max 1.31 min 0.79 stddev 0.08499   token max 2.81 min 0.00 
stddev 0.28763   Simple 3 replicas
Adding 125 node(s) using ReplicationAwareTokenDistributor
Size 1000   node max 1.08 min 0.90 stddev 0.02805   token max 1.85 min 0.00 
stddev 0.19258   Simple 3 replicas
{code}

This is far from finished as it is much slower than I'd like it to be.



Notes / other things I've tried:
 - *1 Only controlling individual vnode load: Because of the replication effect 
mentioned above, the ratio between largest and smallest node has to necessarily 
be at best 3:2 (for RF=3). If we don't control overall node size, about 30% 
over/underutilization is the best we can 

[jira] [Comment Edited] (CASSANDRA-7032) Improve vnode allocation

2015-01-09 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14271501#comment-14271501
 ] 

Branimir Lambov edited comment on CASSANDRA-7032 at 1/9/15 4:41 PM:


A work-in-progress algorithm for selecting vnodes in the replicated case is 
attached. The main idea of the algorithm is to select token positions for each 
new vnode in such a way as to get best improvement in replicated load variance 
(i.e. standard deviation) across nodes and vnodes *1. More specifically, it 
prepares a selection of token positions to try (by picking the middle positions 
between existing vnodes *2), evaluates the expected improvement in variance for 
each selection and chooses the best *3, continuing until all the vnodes of the 
new node have assigned tokens. To improve average performance, the expected 
improvement for all choices is calculated once; for the second and later vnode 
we only recalculate it for the best candidate until we find one that does not 
deteriorate to worse than the next option in the list *4.

Tested with simple factor-3 replication, it maintains the following utilization 
ranges: 
- 1 vnode: 70% - 135%
- 4 vnodes: 80% - 115%
- 16 vnodes: 83% - 106%
- 64 vnodes: 86% - 103%
- 256 vnodes: 87% - 102%

Unlike random allocation, the overutilization does not grow with the number of 
nodes, and a much smaller number of vnodes suffices (4 or 8 vnodes would 
probably be enough for most use cases).

The underutilization for this algorithm is affected less by the number of 
vnodes; this is due to the effect of replication on newly added vnodes: they 
necessarily have to take the share of one fewer vnode replica than the vnode 
they split (regardless of the algorithm we use, if we add a new node to a large 
enough perfectly balanced cluster where all vnodes are responsible for the same 
share of tokens, the new node will necessarily have at most 2/3 (for RF=3) of 
the average load.). This could possibly be improved if we manage to keep enough 
individual tokens with load closer to RF / (RF - 1), which I've yet to try.

The algorithm is implemented in the {{ReplicationAwareTokenDistributor}} in the 
attached file. Running the file simulates the effect of adding nodes using this 
algorithm on a randomly-generated cluster and prints out the minimum and 
maximum per-node and per-token replicated load after each step, as well as the 
standard deviation of the load. Sample results:
{code}
Random generation of 500 nodes with 8 tokens each
Size 500   node max 1.88 min 0.51 stddev 0.22193
Adding 1 node(s) using ReplicationAwareTokenDistributor
Size 501   node max 1.90 min 0.51 stddev 0.21922   Simple 3 replicas
Adding 4 node(s) using ReplicationAwareTokenDistributor
Size 505   node max 1.63 min 0.51 stddev 0.20580   token max 3.72 min 0.01 
stddev 0.58768   Simple 3 replicas
Adding 15 node(s) using ReplicationAwareTokenDistributor
Size 520   node max 1.51 min 0.53 stddev 0.17369   token max 3.83 min 0.01 
stddev 0.54526   Simple 3 replicas
Adding 105 node(s) using ReplicationAwareTokenDistributor
Size 625   node max 1.15 min 0.63 stddev 0.08069   token max 2.73 min 0.00 
stddev 0.40190   Simple 3 replicas
Adding 375 node(s) using ReplicationAwareTokenDistributor
Size 1000   node max 1.08 min 0.84 stddev 0.03041   token max 1.99 min 0.00 
stddev 0.22341   Simple 3 replicas
Losing 1 nodes
Size 999   node max 1.09 min 0.84 stddev 0.03081   token max 1.98 min 0.00 
stddev 0.22429   Simple 3 replicas
Adding 1 node(s) using ReplicationAwareTokenDistributor
Size 1000   node max 1.08 min 0.84 stddev 0.03019   token max 1.99 min 0.00 
stddev 0.22335   Simple 3 replicas
Losing 5 nodes
Size 995   node max 1.17 min 0.83 stddev 0.03380   token max 2.01 min 0.00 
stddev 0.22565   Simple 3 replicas
Adding 5 node(s) using ReplicationAwareTokenDistributor
Size 1000   node max 1.08 min 0.84 stddev 0.03000   token max 1.99 min 0.00 
stddev 0.22181   Simple 3 replicas
Losing 20 nodes
Size 980   node max 1.19 min 0.88 stddev 0.04362   token max 2.44 min 0.00 
stddev 0.23370   Simple 3 replicas
Adding 20 node(s) using ReplicationAwareTokenDistributor
Size 1000   node max 1.08 min 0.89 stddev 0.02962   token max 1.99 min 0.00 
stddev 0.21681   Simple 3 replicas
Losing 125 nodes
Size 875   node max 1.31 min 0.79 stddev 0.08499   token max 2.81 min 0.00 
stddev 0.28763   Simple 3 replicas
Adding 125 node(s) using ReplicationAwareTokenDistributor
Size 1000   node max 1.08 min 0.90 stddev 0.02805   token max 1.85 min 0.00 
stddev 0.19258   Simple 3 replicas
{code}

This is far from finished as it is much slower than I'd like it to be.



Notes / other things I've tried:
 - *1 Only controlling individual vnode load: Because of the replication effect 
mentioned above, the ratio between largest and smallest node has to necessarily 
be at best 3:2 (for RF=3). If we don't control overall node size, about 

[2/3] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2015-01-09 Thread tylerhobbs
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/service/StorageProxy.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7f62e292
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7f62e292
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7f62e292

Branch: refs/heads/trunk
Commit: 7f62e292867bb6159592bfc8b0423f89f518a2b5
Parents: 14b2d7a dd62f7b
Author: Tyler Hobbs ty...@datastax.com
Authored: Fri Jan 9 11:19:37 2015 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Fri Jan 9 11:19:37 2015 -0600

--
 CHANGES.txt  |  2 ++
 .../cassandra/cql3/statements/SelectStatement.java   |  6 +-
 .../apache/cassandra/db/AbstractRangeCommand.java| 13 +
 .../org/apache/cassandra/db/ColumnFamilyStore.java   |  4 +++-
 src/java/org/apache/cassandra/db/DataRange.java  | 12 
 .../apache/cassandra/db/filter/ExtendedFilter.java   |  6 ++
 .../apache/cassandra/db/filter/SliceQueryFilter.java |  6 ++
 .../org/apache/cassandra/service/StorageProxy.java   | 15 ---
 8 files changed, 55 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7f62e292/CHANGES.txt
--
diff --cc CHANGES.txt
index 2028633,0c7e9a2..abe3fce
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,56 -1,6 +1,58 @@@
 -2.0.12:
 +2.1.3
 + * Don't reuse the same cleanup strategy for all sstables (CASSANDRA-8537)
 + * Fix case-sensitivity of index name on CREATE and DROP INDEX
 +   statements (CASSANDRA-8365)
 + * Better detection/logging for corruption in compressed sstables 
(CASSANDRA-8192)
 + * Use the correct repairedAt value when closing writer (CASSANDRA-8570)
 + * (cqlsh) Handle a schema mismatch being detected on startup (CASSANDRA-8512)
 + * Properly calculate expected write size during compaction (CASSANDRA-8532)
 + * Invalidate affected prepared statements when a table's columns
 +   are altered (CASSANDRA-7910)
 + * Stress - user defined writes should populate sequentally (CASSANDRA-8524)
 + * Fix regression in SSTableRewriter causing some rows to become unreadable 
 +   during compaction (CASSANDRA-8429)
 + * Run major compactions for repaired/unrepaired in parallel (CASSANDRA-8510)
 + * (cqlsh) Fix compression options in DESCRIBE TABLE output when compression
 +   is disabled (CASSANDRA-8288)
 + * (cqlsh) Fix DESCRIBE output after keyspaces are altered (CASSANDRA-7623)
 + * Make sure we set lastCompactedKey correctly (CASSANDRA-8463)
 + * (cqlsh) Fix output of CONSISTENCY command (CASSANDRA-8507)
 + * (cqlsh) Fixed the handling of LIST statements (CASSANDRA-8370)
 + * Make sstablescrub check leveled manifest again (CASSANDRA-8432)
 + * Check first/last keys in sstable when giving out positions (CASSANDRA-8458)
 + * Disable mmap on Windows (CASSANDRA-6993)
 + * Add missing ConsistencyLevels to cassandra-stress (CASSANDRA-8253)
 + * Add auth support to cassandra-stress (CASSANDRA-7985)
 + * Fix ArrayIndexOutOfBoundsException when generating error message
 +   for some CQL syntax errors (CASSANDRA-8455)
 + * Scale memtable slab allocation logarithmically (CASSANDRA-7882)
 + * cassandra-stress simultaneous inserts over same seed (CASSANDRA-7964)
 + * Reduce cassandra-stress sampling memory requirements (CASSANDRA-7926)
 + * Ensure memtable flush cannot expire commit log entries from its future 
(CASSANDRA-8383)
 + * Make read defrag async to reclaim memtables (CASSANDRA-8459)
 + * Remove tmplink files for offline compactions (CASSANDRA-8321)
 + * Reduce maxHintsInProgress (CASSANDRA-8415)
 + * BTree updates may call provided update function twice (CASSANDRA-8018)
 + * Release sstable references after anticompaction (CASSANDRA-8386)
 + * Handle abort() in SSTableRewriter properly (CASSANDRA-8320)
 + * Fix high size calculations for prepared statements (CASSANDRA-8231)
 + * Centralize shared executors (CASSANDRA-8055)
 + * Fix filtering for CONTAINS (KEY) relations on frozen collection
 +   clustering columns when the query is restricted to a single
 +   partition (CASSANDRA-8203)
 + * Do more aggressive entire-sstable TTL expiry checks (CASSANDRA-8243)
 + * Add more log info if readMeter is null (CASSANDRA-8238)
 + * add check of the system wall clock time at startup (CASSANDRA-8305)
 + * Support for frozen collections (CASSANDRA-7859)
 + * Fix overflow on histogram computation (CASSANDRA-8028)
 + * Have paxos reuse the timestamp generation of normal queries 
(CASSANDRA-7801)
 + * Fix incremental repair not remove parent session on remote (CASSANDRA-8291)
 + * Improve JBOD disk utilization (CASSANDRA-7386)
 + * Log failed host when preparing incremental repair 

[jira] [Updated] (CASSANDRA-7281) SELECT on tuple relations are broken for mixed ASC/DESC clustering order

2015-01-09 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-7281:
-
Fix Version/s: (was: 2.0.12)
   2.0.13

 SELECT on tuple relations are broken for mixed ASC/DESC clustering order
 

 Key: CASSANDRA-7281
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7281
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Marcin Szymaniuk
 Fix For: 2.1.3, 2.0.13

 Attachments: 
 0001-CASSANDRA-7281-SELECT-on-tuple-relations-are-broken-.patch, 
 0001-CASSANDRA-7281-SELECT-on-tuple-relations-are-broken-v2.patch, 
 0001-CASSANDRA-7281-SELECT-on-tuple-relations-are-broken-v3.patch


 As noted on 
 [CASSANDRA-6875|https://issues.apache.org/jira/browse/CASSANDRA-6875?focusedCommentId=13992153page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13992153],
  the tuple notation is broken when the clustering order mixes ASC and DESC 
 directives because the range of data they describe doesn't correspond to a 
 single continuous slice internally. To copy the example from CASSANDRA-6875:
 {noformat}
 cqlsh:ks> create table foo (a int, b int, c int, PRIMARY KEY (a, b, c)) WITH 
 CLUSTERING ORDER BY (b DESC, c ASC);
 cqlsh:ks> INSERT INTO foo (a, b, c) VALUES (0, 2, 0);
 cqlsh:ks> INSERT INTO foo (a, b, c) VALUES (0, 1, 0);
 cqlsh:ks> INSERT INTO foo (a, b, c) VALUES (0, 1, 1);
 cqlsh:ks> INSERT INTO foo (a, b, c) VALUES (0, 0, 0);
 cqlsh:ks> SELECT * FROM foo WHERE a=0;
  a | b | c
 ---+---+---
  0 | 2 | 0
  0 | 1 | 0
  0 | 1 | 1
  0 | 0 | 0
 (4 rows)
 cqlsh:ks> SELECT * FROM foo WHERE a=0 AND (b, c) > (1, 0);
  a | b | c
 ---+---+---
  0 | 2 | 0
 (1 rows)
 {noformat}
 The last query should really return {{(0, 2, 0)}} and {{(0, 1, 1)}}.
 For that specific example we should generate 2 internal slices, but I believe 
 that with more clustering columns we may have more slices.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8522) Getting partial set of columns in a 'select *' query

2015-01-09 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14271623#comment-14271623
 ] 

Aleksey Yeschenko commented on CASSANDRA-8522:
--

Not yet.

On the node that has issues with the column, is that column present in 
system.schema_columns?

 Getting partial set of columns in a 'select *' query
 

 Key: CASSANDRA-8522
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8522
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Fabiano C. Botelho
Assignee: Aleksey Yeschenko
 Fix For: 2.0.12

 Attachments: systemlogs.zip


 Configuration:
3 node cluster, where two nodes are fine and just one sees the issue 
 reported here. It is an in-memory state  on the server that gets cleared with 
 a cassandra restart on the problematic  node.
 Problem:
 Scenario (this is a run-through on the problematic node at least 6 hours 
 after the problem had surfaced):
 1. After schema had been installed, one can do a  'describe table events' and 
 that shows all the columns in the table, see below:
 {code}
 Use HELP for help.
 cqlsh:sd> DESCRIBE TABLE events
 CREATE TABLE events (
   dayhour text,
   id text,
   event_info text,
   event_series_id text,
   event_type text,
   internal_timestamp bigint,
   is_read boolean,
   is_user_visible boolean,
   link text,
   node_id text,
   time timestamp,
   PRIMARY KEY ((dayhour), id)
 ) WITH
   bloom_filter_fp_chance=0.10 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.10 AND
   gc_grace_seconds=864000 AND
   index_interval=128 AND
   read_repair_chance=0.00 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   default_time_to_live=0 AND
   speculative_retry='99.0PERCENTILE' AND
   memtable_flush_period_in_ms=0 AND
   compaction={'class': 'LeveledCompactionStrategy'} AND
   compression={'sstable_compression': 'LZ4Compressor'};
 CREATE INDEX events_id_idx ON events (id);
 CREATE INDEX events_event_series_id_idx ON events (event_series_id);
 {code}
 2. run a query selecting all columns on the same table above:
 {code}
 cqlsh:sd> select * from events limit 10;
  dayhour   | id   | event_series_id   
| is_user_visible
 ---+--+--+-
  2014-12-19:12 | 3a70e8f8-0b04-4485-bf8f-c3d4031687ed | 
 7c129287-2b3d-4342-8f2b-f1eba61267f6 |   False
  2014-12-19:12 | 49a854fb-0e6c-43e9-830e-6f833689df0b | 
 1a130faf-d755-4e52-9f93-82a380d86f31 |   False
  2014-12-19:12 | 6df0b844-d810-423e-8e43-5b3d44213699 | 
 7c129287-2b3d-4342-8f2b-f1eba61267f6 |   False
  2014-12-19:12 | 92d55ff9-724a-4bc4-a57f-dfeee09e46a4 | 
 1a130faf-d755-4e52-9f93-82a380d86f31 |   False
  2014-12-19:17 | 2e0ea98c-4d5a-4ad2-b386-bc181e2e7cec | 
 a9cf80e9-b8de-4154-9a37-13ed95459a91 |   False
  2014-12-19:17 | 8837dc3f-abae-45e6-80cb-c3dffd3f08aa | 
 cb0e4867-0f27-47e3-acde-26b105e0fdc9 |   False
  2014-12-19:17 | b36baa5b-b084-4596-a8a5-d85671952313 | 
 cb0e4867-0f27-47e3-acde-26b105e0fdc9 |   False
  2014-12-19:17 | f73f9438-cba7-4961-880e-77e134175390 | 
 a9cf80e9-b8de-4154-9a37-13ed95459a91 |   False
  2014-12-19:16 | 47b47745-c4f6-496b-a976-381a545f7326 | 
 4bc7979f-2c68-4d65-91a1-e1999a3bbc7a |   False
  2014-12-19:16 | 5708098f-0c0a-4372-be03-ea7057a3bd44 | 
 10ac9312-9487-4de9-b706-0d0af18bf9fd |   False
 {code}
 Note that not all columns show up in the result.
 3. Try a query that refers to at least one of the missing columns in the 
 result above, but of course one that is in the schema.
 {code}
 cqlsh:sd> select dayhour, id, event_info from events
   ... ;
 Bad Request: Undefined name event_info in selection clause
 {code}
 Note that it failed saying that 'event_info' was not defined.
 This problem goes away with a restart of cassandra in the problematic node. 
 This does not seem to be the java-320 bug, whose fix is supposed to be in 
 driver 2.0.2. We are using driver version 2.0.1. Note that this 
 issue surfaces both with the driver as well as with cqlsh, which points to a 
 problem in the cassandra server. Would appreciate some help with a fix or a 
 quick workaround that is not simply restarting the server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8591) Tunable bootstrapping

2015-01-09 Thread Donald Smith (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Donald Smith updated CASSANDRA-8591:

Description: 
Often bootstrapping fails due to errors like unable to find sufficient sources 
for streaming range. But cassandra is supposed to be fault tolerant, and it's 
supposed to have tunable consistency.

If it can't find some sources, it should allow bootstrapping to continue, under 
control by parameters (up to 100 failures, for example), and should print out a 
report about what ranges were missing.  For many apps, it's far better to 
bootstrap what's available than to fail flat.

Same with rebuilds.

We were doing maintenance on some disks and when we started cassandra back up, 
some nodes ran out of disk space, due to operator miscalculation. Thereafter, 
we've been unable to bootstrap new nodes, due to unable to find sufficient 
sources for streaming range.  But bootstrapping with partial success would be 
far better than being unable to bootstrap at all, and cheaper than a repair. 
Our consistency requirements are low but not zero.

  was:
Often bootstrapping fails due to errors like unable to find sufficient sources 
for streaming range. But cassandra is supposed to be fault tolerant, and it's 
supposed to have tunable consistency.

If it can't find some sources, it should allow bootstrapping to continue, under 
control by parameters (up to 100 failures, for example), and should print out a 
report about what ranges were missing.  For many apps, it's far better to 
bootstrap what's available than to fail flat.

Same with rebuilds.

We were doing maintenance on some disks and when we started back up, some nodes 
ran out of disk space, due to operator miscalculation. Thereafter, we've been 
unable to bootstrap new nodes, due to unable to find sufficient sources for 
streaming range.  But bootstrapping with partial success would be far better 
than being unable to bootstrap at all, and cheaper than a repair. Our 
consistency requirements are low but not zero.


 Tunable bootstrapping
 -

 Key: CASSANDRA-8591
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8591
 Project: Cassandra
  Issue Type: Improvement
Reporter: Donald Smith
Priority: Minor

 Often bootstrapping fails due to errors like unable to find sufficient 
 sources for streaming range. But cassandra is supposed to be fault tolerant, 
 and it's supposed to have tunable consistency.
 If it can't find some sources, it should allow bootstrapping to continue, 
 under control by parameters (up to 100 failures, for example), and should 
 print out a report about what ranges were missing.  For many apps, it's far 
 better to bootstrap what's available than to fail flat.
 Same with rebuilds.
 We were doing maintenance on some disks and when we started cassandra back 
 up, some nodes ran out of disk space, due to operator miscalculation. 
 Thereafter, we've been unable to bootstrap new nodes, due to unable to find 
 sufficient sources for streaming range.  But bootstrapping with partial 
 success would be far better than being unable to bootstrap at all, and 
 cheaper than a repair. Our consistency requirements are low but not zero.
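
To make the shape of this request concrete, here is a purely hypothetical sketch of such a knob; none of these names exist in Cassandra, and the failure budget and reporting are illustrative assumptions:

{code}
import java.util.ArrayList;
import java.util.List;

final class TunableBootstrapSketch
{
    // Stream every range we can; collect the ones with no available source and
    // only abort once a configured failure budget is exceeded.
    static List<String> bootstrap(List<String> ranges, int maxMissingRanges)
    {
        List<String> missing = new ArrayList<>();
        for (String range : ranges)
        {
            if (!tryStreamFromSomeSource(range))
            {
                missing.add(range);
                if (missing.size() > maxMissingRanges)
                    throw new IllegalStateException("Unable to find sources for " + missing.size() + " ranges; aborting bootstrap");
            }
        }
        return missing;   // report these so the operator can repair them later
    }

    private static boolean tryStreamFromSomeSource(String range)
    {
        return true;      // stub: stands in for source selection and streaming
    }
}
{code}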



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8587) Fix MessageOut's serializeSize calculation

2015-01-09 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8587:
---
 Reviewer: Tyler Hobbs
Since Version: 1.2.0 beta 1

+1

(Note: this only affects hint replay throughput)

 Fix MessageOut's serializeSize calculation
 --

 Key: CASSANDRA-8587
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8587
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Dave Brosius
Assignee: Dave Brosius
Priority: Trivial
 Fix For: 2.0.12

 Attachments: ss.txt


 Simple typos keep the size calculation too small.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8591) Tunable bootstrapping

2015-01-09 Thread Donald Smith (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Donald Smith updated CASSANDRA-8591:

Description: 
Often bootstrapping fails due to errors like unable to find sufficient sources 
for streaming range. But cassandra is supposed to be fault tolerant, and it's 
supposed to have tunable consistency.

If it can't find some sources, it should allow bootstrapping to continue, under 
control by parameters (up to 100 failures, for example), and should print out a 
report about what ranges were missing.  For many apps, it's far better to 
bootstrap what's available than to fail flat.

Same with rebuilds.

We were doing maintenance on some disks, and when we started cassandra back up, 
some nodes ran out of disk space, due to operator miscalculation. Thereafter, 
we've been unable to bootstrap new nodes, due to unable to find sufficient 
sources for streaming range.  But bootstrapping with partial success would be 
far better than being unable to bootstrap at all, and cheaper than a repair. 
Our consistency requirements aren't high but we prefer as much consistency as 
cassandra can give us.

  was:
Often bootstrapping fails due to errors like unable to find sufficient sources 
for streaming range. But cassandra is supposed to be fault tolerant, and it's 
supposed to have tunable consistency.

If it can't find some sources, it should allow bootstrapping to continue, under 
control by parameters (up to 100 failures, for example), and should print out a 
report about what ranges were missing.  For many apps, it's far better to 
bootstrap what's available than to fail flat.

Same with rebuilds.

We were doing maintenance on some disks, and when we started cassandra back up, 
some nodes ran out of disk space, due to operator miscalculation. Thereafter, 
we've been unable to bootstrap new nodes, due to unable to find sufficient 
sources for streaming range.  But bootstrapping with partial success would be 
far better than being unable to bootstrap at all, and cheaper than a repair. 
Our consistency requirements are low but not zero.


 Tunable bootstrapping
 -

 Key: CASSANDRA-8591
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8591
 Project: Cassandra
  Issue Type: Improvement
Reporter: Donald Smith
Priority: Minor

 Often bootstrapping fails due to errors like unable to find sufficient 
 sources for streaming range. But cassandra is supposed to be fault tolerant, 
 and it's supposed to have tunable consistency.
 If it can't find some sources, it should allow bootstrapping to continue, 
 under control by parameters (up to 100 failures, for example), and should 
 print out a report about what ranges were missing.  For many apps, it's far 
 better to bootstrap what's available than to fail flat.
 Same with rebuilds.
 We were doing maintenance on some disks, and when we started cassandra back 
 up, some nodes ran out of disk space, due to operator miscalculation. 
 Thereafter, we've been unable to bootstrap new nodes, due to unable to find 
 sufficient sources for streaming range.  But bootstrapping with partial 
 success would be far better than being unable to bootstrap at all, and 
 cheaper than a repair. Our consistency requirements aren't high but we prefer 
 as much consistency as cassandra can give us.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8591) Tunable bootstrapping

2015-01-09 Thread Donald Smith (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Donald Smith updated CASSANDRA-8591:

Description: 
Often bootstrapping fails due to errors like unable to find sufficient sources 
for streaming range. But cassandra is supposed to be fault tolerant, and it's 
supposed to have tunable consistency.

If it can't find some sources, it should allow bootstrapping to continue, under 
control by parameters (up to 100 failures, for example), and should print out a 
report about what ranges were missing.  For many apps, it's far better to 
bootstrap what's available than to fail flat.

Same with rebuilds.

We were doing maintenance on some disks, and when we started cassandra back up, 
some nodes ran out of disk space, due to operator miscalculation. Thereafter, 
we've been unable to bootstrap new nodes, due to unable to find sufficient 
sources for streaming range.  But bootstrapping with partial success would be 
far better than being unable to bootstrap at all, and cheaper than a repair. 
Our consistency requirements are low but not zero.

  was:
Often bootstrapping fails due to errors like unable to find sufficient sources 
for streaming range. But cassandra is supposed to be fault tolerant, and it's 
supposed to have tunable consistency.

If it can't find some sources, it should allow bootstrapping to continue, under 
control by parameters (up to 100 failures, for example), and should print out a 
report about what ranges were missing.  For many apps, it's far better to 
bootstrap what's available than to fail flat.

Same with rebuilds.

We were doing maintenance on some disks and when we started cassandra back up, 
some nodes ran out of disk space, due to operator miscaluculation. Thereafter, 
we've been unable to bootstrap new nodes, due to unable to find sufficient 
sources for streaming range.  But bootstrapping with partial success would be 
far better than being unable to bootstrap at all, and cheaper than a repair. 
Our consistency requirements are low but not zero.


 Tunable bootstrapping
 -

 Key: CASSANDRA-8591
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8591
 Project: Cassandra
  Issue Type: Improvement
Reporter: Donald Smith
Priority: Minor

 Often bootstrapping fails due to errors like unable to find sufficient 
 sources for streaming range. But cassandra is supposed to be fault tolerant, 
 and it's supposed to have tunable consistency.
 If it can't find some sources, it should allow bootstrapping to continue, 
 under control by parameters (up to 100 failures, for example), and should 
 print out a report about what ranges were missing.  For many apps, it's far 
 better to bootstrap what's available than to fail flat.
 Same with rebuilds.
 We were doing maintenance on some disks, and when we started cassandra back 
 up, some nodes ran out of disk space, due to operator miscalculation. 
 Thereafter, we've been unable to bootstrap new nodes, due to unable to find 
 sufficient sources for streaming range.  But bootstrapping with partial 
 success would be far better than being unable to bootstrap at all, and 
 cheaper than a repair. Our consistency requirements are low but not zero.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8591) Tunable bootstrapping

2015-01-09 Thread Donald Smith (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Donald Smith updated CASSANDRA-8591:

Description: 
Often bootstrapping fails due to errors like unable to find sufficient sources 
for streaming range. But cassandra is supposed to be fault tolerant, and it's 
supposed to have tunable consistency.  Faults happen.

If it can't find some sources, it should allow bootstrapping to continue, under 
control by parameters (up to 100 failures, for example), and should print out a 
report about what ranges were missing.  For many apps, it's far better to 
bootstrap what's available than to fail flat.

Same with rebuilds.

We were doing maintenance on some disks, and when we started cassandra back up, 
some nodes ran out of disk space, due to operator miscalculation. Thereafter, 
we've been unable to bootstrap new nodes, due to unable to find sufficient 
sources for streaming range.  But bootstrapping with partial success would be 
far better than being unable to bootstrap at all, and cheaper than a repair. 
Our consistency requirements aren't high but we prefer as much consistency as 
cassandra can give us.

  was:
Often bootstrapping fails due to errors like unable to find sufficient sources 
for streaming range. But cassandra is supposed to be fault tolerant, and it's 
supposed to have tunable consistency.

If it can't find some sources, it should allow bootstrapping to continue, under 
control by parameters (up to 100 failures, for example), and should print out a 
report about what ranges were missing.  For many apps, it's far better to 
bootstrap what's available than to fail flat.

Same with rebuilds.

We were doing maintenance on some disks, and when we started cassandra back up, 
some nodes ran out of disk space, due to operator miscalculation. Thereafter, 
we've been unable to bootstrap new nodes, due to unable to find sufficient 
sources for streaming range.  But bootstrapping with partial success would be 
far better than being unable to bootstrap at all, and cheaper than a repair. 
Our consistency requirements aren't high but we prefer as much consistency as 
cassandra can give us.


 Tunable bootstrapping
 -

 Key: CASSANDRA-8591
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8591
 Project: Cassandra
  Issue Type: Improvement
Reporter: Donald Smith
Priority: Minor

 Often bootstrapping fails due to errors like unable to find sufficient 
 sources for streaming range. But cassandra is supposed to be fault tolerant, 
 and it's supposed to have tunable consistency.  Faults happen.
 If it can't find some sources, it should allow bootstrapping to continue, 
 under control by parameters (up to 100 failures, for example), and should 
 print out a report about what ranges were missing.  For many apps, it's far 
 better to bootstrap what's available than to fail flat.
 Same with rebuilds.
 We were doing maintenance on some disks, and when we started cassandra back 
 up, some nodes ran out of disk space, due to operator miscalculation. 
 Thereafter, we've been unable to bootstrap new nodes, due to unable to find 
 sufficient sources for streaming range.  But bootstrapping with partial 
 success would be far better than being unable to bootstrap at all, and 
 cheaper than a repair. Our consistency requirements aren't high but we prefer 
 as much consistency as cassandra can give us.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8457) nio MessagingService

2015-01-09 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14271701#comment-14271701
 ] 

Ariel Weisberg edited comment on CASSANDRA-8457 at 1/9/15 7:24 PM:
---

Took a stab at writing an adaptive approach to coalescing based on a moving 
average. Numbers look good for the workloads tested.
Code 
https://github.com/aweisberg/cassandra/compare/6be33289f34782e12229a7621022bb5ce66b2f1b...e48133c4d5acbaa6563ea48a0ca118c278b2f6f7

The impact of coalescing on individual messages appears to introduce quite a 
bit of latency. Without coalescing I see an average latency of 25 microseconds 
between when the message is submitted and when it is written to the socket, but 
with coalescing that delay is 350-400 microseconds even though I am only 
requesting a wait of 200 microseconds at most.

Testing in AWS, 14 servers 6 clients.

Using a fixed coalescing window, at low concurrency there is a performance drop 
from 6746 to 3929. With adaptive coalescing I got 6758.

At medium concurrency (5 threads per client, 6 clients) I got 31097 with 
coalescing disabled and 31120 with coalescing enabled.

At high concurrency (500 threads per client, 6 clients) I got 479532 with 
coalescing and 166010 without. This is with a maximum coalescing window of 200 
milliseconds.

I added debug output to log when coalescing starts and stops and it's 
interesting. At the beginning of the benchmark things flap, but they don't flap 
madly. After a few minutes it settles. I also notice a strange thing where CPU 
utilization at the start of a benchmark is 500% or so and then after a while it 
climbs. Like something somewhere is warming up or balancing. I recall seeing 
this in GCE as well.

I had one of the OutboundTcpConnections (first to get the permit) log a trace 
of all outgoing message times. I threw that into a histogram for informational 
purposes. 50% of messages are sent within 100 microseconds of each other and 
92% are sent within one millisecond. This is without any coalescing.

{noformat}
   Value Percentile TotalCount 1/(1-Percentile)

   0.000 0.   5554   1.00
   5.703 0.1000 124565   1.11
  13.263 0.2000 249128   1.25
  24.143 0.3000 373630   1.43
  40.607 0.4000 498108   1.67
  94.015 0.5000 622664   2.00
 158.463 0.5500 684867   2.22
 244.351 0.6000 747137   2.50
 305.407 0.6500 809631   2.86
 362.239 0.7000 871641   3.33
 428.031 0.7500 933978   4.00
 467.711 0.7750 965085   4.44
 520.703 0.8000 996254   5.00
  595.967  0.8250  1027359    5.71
  672.767  0.8500  1058457    6.67
  743.935  0.8750  1089573    8.00
  780.799  0.8875  1105290    8.89
  821.247  0.9000  1120774   10.00
  868.351  0.9125  1136261   11.43
  928.767  0.9250  1151889   13.33
 1006.079  0.9375  1167421   16.00
 1049.599  0.94375  1175260   17.78
 1095.679  0.9500  1183041   20.00
 1143.807  0.95625  1190779   22.86
 1198.079  0.9625  1198542   26.67
 1264.639  0.96875  1206301   32.00
 1305.599  0.971875  1210228   35.56
 1354.751  0.9750  1214090   40.00
 1407.999  0.978125  1217975   45.71
 1470.463  0.98125  1221854   53.33
 1542.143  0.984375  1225759   64.00
 1586.175  0.9859375  1227720   71.11
 1634.303  0.9875  1229643   80.00
 1688.575  0.9890625  1231596   91.43
 1756.159  0.990625  1233523  106.67
 1839.103  0.9921875  1235464  128.00
 1887.231  0.99296875  1236430  142.22
 1944.575  0.99375  1237409  160.00
 2007.039  0.99453125  1238384  182.86
 2084.863  0.9953125  1239358  213.33
 2174.975  0.99609375  1240326  256.00
 2230.271  0.996484375  1240818  284.44
 2293.759  0.996875  1241292  320.00
 2369.535  0.997265625  1241785  365.71
 2455.551  0.99765625  1242271  426.67
 2578.431  0.998046875  1242752  512.00
 2656.255  0.9982421875  1242999  568.89
 2740.223  0.9984375  1243244  640.00
 2834.431  0.9986328125  1243482  731.43
 2957.311  0.998828125  1243725  853.33
 3131.391  0.9990234375  1243969  1024.00
 3235.839  0.99912109375  1244091  1137.78
  

[jira] [Updated] (CASSANDRA-8534) The default configuration URL does not have the required file:// prefix and throws an exception if cassandra.config is not set.

2015-01-09 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-8534:
--
   Priority: Minor  (was: Major)
Environment: 
Ubuntu 14.04 64-bit
C* 2.1.2

  was:Any


 The default configuration URL does not have the required file:// prefix and 
 throws an exception if cassandra.config is not set.
 ---

 Key: CASSANDRA-8534
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8534
 Project: Cassandra
  Issue Type: Bug
  Components: Config, Core
 Environment: Ubuntu 14.04 64-bit
 C* 2.1.2
Reporter: Andrew Trimble
Priority: Minor
 Fix For: 2.1.3

 Attachments: error.txt

   Original Estimate: 1h
  Remaining Estimate: 1h

 In the class org.apache.cassandra.config.YamlConfigurationLoader, the 
 DEFAULT_CONFIGURATION is set to cassandra.yaml. This is improperly 
 formatted as it does not contain the prefix file://. If this value is used, 
 a ConfigurationException is thrown (see line 73 of the same class).
 A solution is to set the cassandra.config system property, but this does not 
 solve the underlying problem. A vanilla Cassandra installation will throw 
 this error.
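
As a concrete illustration of the workaround mentioned above (the path is an example, not a required location), the property can be given an explicit file:// URL before startup, for instance -Dcassandra.config=file:///etc/cassandra/cassandra.yaml on the command line, or programmatically:

{code}
public class ConfigLocationWorkaround
{
    public static void main(String[] args)
    {
        // Point cassandra.config at the yaml with an explicit file:// URL so the
        // default, prefix-less value is never used. Example path only.
        System.setProperty("cassandra.config", "file:///etc/cassandra/cassandra.yaml");
    }
}
{code}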



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8494) incremental bootstrap

2015-01-09 Thread Donald Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14271721#comment-14271721
 ] 

Donald Smith commented on CASSANDRA-8494:
-

Tunable consistency is related:  don't fail if a range is missing. Be fault 
tolerant and bootstrap as much as it can.

 incremental bootstrap
 -

 Key: CASSANDRA-8494
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8494
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Jon Haddad
Assignee: Yuki Morishita
Priority: Minor
  Labels: density
 Fix For: 3.0


 Current bootstrapping involves (to my knowledge) picking tokens and streaming 
 data before the node is available for requests.  This can be problematic with 
 fat nodes, since it may require 20TB of data to be streamed over before the 
 machine can be useful.  This can result in a massive window of time before 
 the machine can do anything useful.
 As a potential approach to mitigate the huge window of time before a node is 
 available, I suggest modifying the bootstrap process to only acquire a single 
 initial token before being marked UP.  This would likely be a configuration 
 parameter incremental_bootstrap or something similar.
 After the node is bootstrapped with this one token, it could go into UP 
 state, and could then acquire additional tokens (one or a handful at a time), 
 which would be streamed over while the node is active and serving requests.  
 The benefit here is that with the default 256 tokens a node could become an 
 active part of the cluster with less than 1% of its final data streamed over.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8591) Tunable bootstrapping

2015-01-09 Thread Donald Smith (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Donald Smith updated CASSANDRA-8591:

Description: 
Often bootstrapping fails due to errors like unable to find sufficient sources 
for streaming range. But cassandra is supposed to be fault tolerant, and it's 
supposed to have tunable consistency.  

If it can't find sources for some ranges, it should allow bootstrapping to 
continue and should print out a report about what ranges were missing.   Allow 
the bootstrap to be tunable, under control of parameters (allow up to 100 
failures, for example).

For many apps, it's far better to bootstrap what's available than to fail flat.

Same with rebuilds.

We were doing maintenance on some disks, and when we started cassandra back up, 
some nodes ran out of disk space, due to operator miscalculation. Thereafter, 
we've been unable to bootstrap new nodes, due to unable to find sufficient 
sources for streaming range.  But bootstrapping with partial success would be 
far better than being unable to bootstrap at all, and cheaper than a repair. 
Our consistency requirements aren't high but we prefer as much consistency as 
cassandra can give us.

  was:
Often bootstrapping fails due to errors like unable to find sufficient sources 
for streaming range. But cassandra is supposed to be fault tolerant, and it's 
supposed to have tunable consistency.  Faults happen.

If it can't find some sources, it should allow bootstrapping to continue, under 
control by parameters (up to 100 failures, for example), and should print out a 
report about what ranges were missing.  For many apps, it's far better to 
bootstrap what's available than to fail flat.

Same with rebuilds.

We were doing maintenance on some disks, and when we started cassandra back up, 
some nodes ran out of disk space, due to operator miscalculation. Thereafter, 
we've been unable to bootstrap new nodes, due to unable to find sufficient 
sources for streaming range.  But bootstrapping with partial success would be 
far better than being unable to bootstrap at all, and cheaper than a repair. 
Our consistency requirements aren't high but we prefer as much consistency as 
cassandra can give us.


 Tunable bootstrapping
 -

 Key: CASSANDRA-8591
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8591
 Project: Cassandra
  Issue Type: Improvement
Reporter: Donald Smith
Priority: Minor

 Often bootstrapping fails due to errors like unable to find sufficient 
 sources for streaming range. But cassandra is supposed to be fault tolerant, 
 and it's supposed to have tunable consistency.  
 If it can't find sources for some ranges, it should allow bootstrapping to 
 continue and should print out a report about what ranges were missing.   
 Allow the bootstrap to be tunable, under control of parameters (allow up to 
 100 failures, for example).
 For many apps, it's far better to bootstrap what's available than to fail 
 flat.
 Same with rebuilds.
 We were doing maintenance on some disks, and when we started cassandra back 
 up, some nodes ran out of disk space, due to operator miscalculation. 
 Thereafter, we've been unable to bootstrap new nodes, due to unable to find 
 sufficient sources for streaming range.  But bootstrapping with partial 
 success would be far better than being unable to bootstrap at all, and 
 cheaper than a repair. Our consistency requirements aren't high but we prefer 
 as much consistency as cassandra can give us.
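
A minimal Java sketch of the tunable behaviour proposed above. All names, the range 
strings and the 100-failure limit are hypothetical stand-ins for illustration, not 
the actual Cassandra bootstrap code: ranges without sources are recorded and 
reported instead of aborting the whole bootstrap, up to an operator-supplied limit.

{code}
// Hypothetical illustration of "tunable" bootstrap failure handling.
import java.util.ArrayList;
import java.util.List;

public class TunableBootstrapSketch
{
    static final int MAX_FAILED_RANGES = 100; // operator-supplied limit (assumed parameter)

    public static void main(String[] args)
    {
        List<String> pendingRanges = List.of("(0,100]", "(100,200]", "(200,300]");
        List<String> missingRanges = new ArrayList<>();

        for (String range : pendingRanges)
        {
            if (findStreamingSource(range))
                streamRange(range);
            else
                missingRanges.add(range); // record instead of failing the whole bootstrap
        }

        if (missingRanges.size() > MAX_FAILED_RANGES)
            throw new IllegalStateException("Too many ranges without sources: " + missingRanges.size());

        if (!missingRanges.isEmpty())
            System.out.println("Bootstrap completed with missing ranges (run repair later): " + missingRanges);
    }

    static boolean findStreamingSource(String range) { return !range.equals("(100,200]"); } // stand-in
    static void streamRange(String range) { System.out.println("streamed " + range); }
}
{code}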



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7886) Coordinator should not wait for read timeouts when replicas hit Exceptions

2015-01-09 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-7886:
---
Attachment: 7886-final.txt

 Coordinator should not wait for read timeouts when replicas hit Exceptions
 --

 Key: CASSANDRA-7886
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7886
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
 Environment: Tested with Cassandra 2.0.8
Reporter: Christian Spriegel
Assignee: Christian Spriegel
Priority: Minor
  Labels: client-impacting, protocolv4
 Fix For: 3.0

 Attachments: 7886-final.txt, 7886_v1.txt, 7886_v2_trunk.txt, 
 7886_v3_trunk.txt, 7886_v4_trunk.txt, 7886_v5_trunk.txt, 7886_v6_trunk.txt


 *Issue*
 When you have TombstoneOverwhelmingExceptions occurring in queries, this will 
 cause the query to be simply dropped on every data-node, but no response is 
 sent back to the coordinator. Instead the coordinator waits for the specified 
 read_request_timeout_in_ms.
 On the application side this can cause memory issues, since the application 
 is waiting for the timeout interval for every request. Therefore, if our 
 application runs into TombstoneOverwhelmingExceptions, then (sooner or later) 
 our entire application cluster goes down :-(
 *Proposed solution*
 I think the data nodes should send an error message to the coordinator when 
 they run into a TombstoneOverwhelmingException. Then the coordinator does not 
 have to wait for the timeout-interval.
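
A minimal, self-contained Java sketch of the proposed flow. The classes below are 
hypothetical stand-ins, not the real Cassandra messaging or verb-handler code: the 
replica converts the exception into an explicit failure response, so the coordinator 
completes the request immediately instead of waiting out read_request_timeout_in_ms.

{code}
// Hypothetical sketch only; not the real Cassandra messaging classes.
import java.util.concurrent.*;

public class ReadFailureSketch
{
    // What a replica sends back instead of silently dropping the read.
    static final class ReadResult
    {
        final String payload;      // row data on success, null on failure
        final String errorReason;  // e.g. "TombstoneOverwhelmingException", null on success
        ReadResult(String payload, String errorReason) { this.payload = payload; this.errorReason = errorReason; }
        boolean failed() { return errorReason != null; }
    }

    // Replica side: convert the exception into an explicit failure response.
    static ReadResult executeLocalRead(boolean tooManyTombstones)
    {
        try
        {
            if (tooManyTombstones)
                throw new RuntimeException("TombstoneOverwhelmingException");
            return new ReadResult("row-data", null);
        }
        catch (RuntimeException e)
        {
            return new ReadResult(null, e.getMessage()); // instead of dropping the request
        }
    }

    // Coordinator side: a failure response unblocks the wait immediately,
    // so the client sees an error right away rather than a timeout.
    public static void main(String[] args) throws Exception
    {
        ExecutorService replica = Executors.newSingleThreadExecutor();
        Future<ReadResult> response = replica.submit(() -> executeLocalRead(true));

        ReadResult result = response.get(10, TimeUnit.SECONDS); // returns at once
        if (result.failed())
            System.out.println("ReadFailure from replica: " + result.errorReason);
        else
            System.out.println("Rows: " + result.payload);
        replica.shutdown();
    }
}
{code}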



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8591) Tunable bootstrapping

2015-01-09 Thread Donald Smith (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Donald Smith updated CASSANDRA-8591:

Description: 
Often bootstrapping fails due to errors like unable to find sufficient sources 
for streaming range. But cassandra is supposed to be fault tolerant, and it's 
supposed to have tunable consistency.

If it can't find some sources, it should allow bootstrapping to continue, under 
control by parameters (up to 100 failures, for example), and should print out a 
report about what ranges were missing.  For many apps, it's far better to 
bootstrap what's available than to fail flat.

Same with rebuilds.

We were doing maintenance on some disks and when we started back up, some nodes 
ran out of disk space, due to operator miscalculation. Thereafter, we've been 
unable to bootstrap new nodes.  But bootstrapping with partial success would be 
far better than being unable to bootstrap at all, and cheaper than a repair.

  was:
Often bootstrapping fails due to errors like unable to find sufficient sources 
for streaming range. But cassandra is supposed to be fault tolerant, and it's 
supposed to have tunable consistency.

If it can't find some sources, it should allow bootstrapping to continue, under 
control by parameters (up to 100 failures, for example), and should print out a 
report about what ranges were missing.  For many apps, it's far better to 
bootstrap what's available than to fail flat.


 Tunable bootstrapping
 -

 Key: CASSANDRA-8591
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8591
 Project: Cassandra
  Issue Type: Improvement
Reporter: Donald Smith

 Often bootstrapping fails due to errors like unable to find sufficient 
 sources for streaming range. But cassandra is supposed to be fault tolerant, 
 and it's supposed to have tunable consistency.
 If it can't find some sources, it should allow bootstrapping to continue, 
 under control by parameters (up to 100 failures, for example), and should 
 print out a report about what ranges were missing.  For many apps, it's far 
  better to bootstrap what's available than to fail flat.
 Same with rebuilds.
 We were doing maintenance on some disks and when we started back up, some 
  nodes ran out of disk space, due to operator miscalculation. Thereafter, 
 we've been unable to bootstrap new nodes.  But bootstrapping with partial 
 success would be far better than being unable to bootstrap at all, and 
 cheaper than a repair.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/2] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2015-01-09 Thread tylerhobbs
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/service/StorageProxy.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7f62e292
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7f62e292
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7f62e292

Branch: refs/heads/cassandra-2.1
Commit: 7f62e292867bb6159592bfc8b0423f89f518a2b5
Parents: 14b2d7a dd62f7b
Author: Tyler Hobbs ty...@datastax.com
Authored: Fri Jan 9 11:19:37 2015 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Fri Jan 9 11:19:37 2015 -0600

--
 CHANGES.txt  |  2 ++
 .../cassandra/cql3/statements/SelectStatement.java   |  6 +-
 .../apache/cassandra/db/AbstractRangeCommand.java| 13 +
 .../org/apache/cassandra/db/ColumnFamilyStore.java   |  4 +++-
 src/java/org/apache/cassandra/db/DataRange.java  | 12 
 .../apache/cassandra/db/filter/ExtendedFilter.java   |  6 ++
 .../apache/cassandra/db/filter/SliceQueryFilter.java |  6 ++
 .../org/apache/cassandra/service/StorageProxy.java   | 15 ---
 8 files changed, 55 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7f62e292/CHANGES.txt
--
diff --cc CHANGES.txt
index 2028633,0c7e9a2..abe3fce
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,56 -1,6 +1,58 @@@
 -2.0.12:
 +2.1.3
 + * Don't reuse the same cleanup strategy for all sstables (CASSANDRA-8537)
 + * Fix case-sensitivity of index name on CREATE and DROP INDEX
 +   statements (CASSANDRA-8365)
 + * Better detection/logging for corruption in compressed sstables 
(CASSANDRA-8192)
 + * Use the correct repairedAt value when closing writer (CASSANDRA-8570)
 + * (cqlsh) Handle a schema mismatch being detected on startup (CASSANDRA-8512)
 + * Properly calculate expected write size during compaction (CASSANDRA-8532)
 + * Invalidate affected prepared statements when a table's columns
 +   are altered (CASSANDRA-7910)
 + * Stress - user defined writes should populate sequentally (CASSANDRA-8524)
 + * Fix regression in SSTableRewriter causing some rows to become unreadable 
 +   during compaction (CASSANDRA-8429)
 + * Run major compactions for repaired/unrepaired in parallel (CASSANDRA-8510)
 + * (cqlsh) Fix compression options in DESCRIBE TABLE output when compression
 +   is disabled (CASSANDRA-8288)
 + * (cqlsh) Fix DESCRIBE output after keyspaces are altered (CASSANDRA-7623)
 + * Make sure we set lastCompactedKey correctly (CASSANDRA-8463)
 + * (cqlsh) Fix output of CONSISTENCY command (CASSANDRA-8507)
 + * (cqlsh) Fixed the handling of LIST statements (CASSANDRA-8370)
 + * Make sstablescrub check leveled manifest again (CASSANDRA-8432)
 + * Check first/last keys in sstable when giving out positions (CASSANDRA-8458)
 + * Disable mmap on Windows (CASSANDRA-6993)
 + * Add missing ConsistencyLevels to cassandra-stress (CASSANDRA-8253)
 + * Add auth support to cassandra-stress (CASSANDRA-7985)
 + * Fix ArrayIndexOutOfBoundsException when generating error message
 +   for some CQL syntax errors (CASSANDRA-8455)
 + * Scale memtable slab allocation logarithmically (CASSANDRA-7882)
 + * cassandra-stress simultaneous inserts over same seed (CASSANDRA-7964)
 + * Reduce cassandra-stress sampling memory requirements (CASSANDRA-7926)
 + * Ensure memtable flush cannot expire commit log entries from its future 
(CASSANDRA-8383)
 + * Make read defrag async to reclaim memtables (CASSANDRA-8459)
 + * Remove tmplink files for offline compactions (CASSANDRA-8321)
 + * Reduce maxHintsInProgress (CASSANDRA-8415)
 + * BTree updates may call provided update function twice (CASSANDRA-8018)
 + * Release sstable references after anticompaction (CASSANDRA-8386)
 + * Handle abort() in SSTableRewriter properly (CASSANDRA-8320)
 + * Fix high size calculations for prepared statements (CASSANDRA-8231)
 + * Centralize shared executors (CASSANDRA-8055)
 + * Fix filtering for CONTAINS (KEY) relations on frozen collection
 +   clustering columns when the query is restricted to a single
 +   partition (CASSANDRA-8203)
 + * Do more aggressive entire-sstable TTL expiry checks (CASSANDRA-8243)
 + * Add more log info if readMeter is null (CASSANDRA-8238)
 + * add check of the system wall clock time at startup (CASSANDRA-8305)
 + * Support for frozen collections (CASSANDRA-7859)
 + * Fix overflow on histogram computation (CASSANDRA-8028)
 + * Have paxos reuse the timestamp generation of normal queries 
(CASSANDRA-7801)
 + * Fix incremental repair not remove parent session on remote (CASSANDRA-8291)
 + * Improve JBOD disk utilization (CASSANDRA-7386)
 + * Log failed host when preparing incremental 

[1/2] cassandra git commit: Fix DISTINCT queries w/ limits/paging and tombstoned partitions

2015-01-09 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 14b2d7a16 -> 7f62e2928


Fix DISTINCT queries w/ limits/paging and tombstoned partitions

Patch by Tyler Hobbs; reviewed by Sylvain Lebresne for CASSANDRA-8490


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dd62f7bf
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dd62f7bf
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dd62f7bf

Branch: refs/heads/cassandra-2.1
Commit: dd62f7bf7977dd40eedb1c81ab7900b778f84540
Parents: ed54e80
Author: Tyler Hobbs ty...@datastax.com
Authored: Fri Jan 9 11:14:54 2015 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Fri Jan 9 11:14:54 2015 -0600

--
 CHANGES.txt|  2 ++
 .../cassandra/cql3/statements/SelectStatement.java |  6 +-
 .../org/apache/cassandra/db/AbstractRangeCommand.java  | 13 +
 .../org/apache/cassandra/db/ColumnFamilyStore.java |  4 +++-
 src/java/org/apache/cassandra/db/DataRange.java| 12 
 .../org/apache/cassandra/db/filter/ExtendedFilter.java |  6 ++
 .../apache/cassandra/db/filter/SliceQueryFilter.java   |  6 ++
 .../org/apache/cassandra/service/StorageProxy.java | 13 +++--
 8 files changed, 54 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/dd62f7bf/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index adb374a..0c7e9a2 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 2.0.12:
+ * Fix DISTINCT queries with LIMITs or paging when some partitions
+   contain only tombstones (CASSANDRA-8490)
  * Introduce background cache refreshing to permissions cache
(CASSANDRA-8194)
  * Fix race condition in StreamTransferTask that could lead to

http://git-wip-us.apache.org/repos/asf/cassandra/blob/dd62f7bf/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
index f08f6b8..19615b6 100644
--- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
@@ -450,7 +450,11 @@ public class SelectStatement implements CQLStatement, 
MeasurableForPreparedCache
 // For distinct, we only care about fetching the beginning of each 
partition. If we don't have
 // static columns, we in fact only care about the first cell, so 
we query only that (we don't group).
 // If we do have static columns, we do need to fetch the first 
full group (to have the static columns values).
-return new SliceQueryFilter(ColumnSlice.ALL_COLUMNS_ARRAY, false, 
1, selectsStaticColumns ? toGroup : -1);
+
+// See the comments on IGNORE_TOMBSTONED_PARTITIONS and 
CASSANDRA-8490 for why we use a special value for
+// DISTINCT queries on the partition key only.
+toGroup = selectsStaticColumns ? toGroup : 
SliceQueryFilter.IGNORE_TOMBSTONED_PARTITIONS;
+return new SliceQueryFilter(ColumnSlice.ALL_COLUMNS_ARRAY, false, 
1, toGroup);
 }
 else if (isColumnRange())
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/dd62f7bf/src/java/org/apache/cassandra/db/AbstractRangeCommand.java
--
diff --git a/src/java/org/apache/cassandra/db/AbstractRangeCommand.java 
b/src/java/org/apache/cassandra/db/AbstractRangeCommand.java
index 45302e2..4ddcb8d 100644
--- a/src/java/org/apache/cassandra/db/AbstractRangeCommand.java
+++ b/src/java/org/apache/cassandra/db/AbstractRangeCommand.java
@@ -57,6 +57,19 @@ public abstract class AbstractRangeCommand implements 
IReadCommand
 
 public abstract int limit();
 public abstract boolean countCQL3Rows();
+
+/**
+ * Returns true if tombstoned partitions should not be included in results 
or count towards the limit.
+ * See CASSANDRA-8490 for more details on why this is needed (and done 
this way).
+ * */
+public boolean ignoredTombstonedPartitions()
+{
+if (!(predicate instanceof SliceQueryFilter))
+return false;
+
+return ((SliceQueryFilter) predicate).compositesToGroup == 
SliceQueryFilter.IGNORE_TOMBSTONED_PARTITIONS;
+}
+
 public abstract List<Row> executeLocally();
 
 public long getTimeout()

http://git-wip-us.apache.org/repos/asf/cassandra/blob/dd62f7bf/src/java/org/apache/cassandra/db/ColumnFamilyStore.java

[jira] [Created] (CASSANDRA-8590) Test repairing large dataset after upgrade

2015-01-09 Thread Ryan McGuire (JIRA)
Ryan McGuire created CASSANDRA-8590:
---

 Summary: Test repairing large dataset after upgrade
 Key: CASSANDRA-8590
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8590
 Project: Cassandra
  Issue Type: Test
Reporter: Ryan McGuire
Assignee: Ryan McGuire


* Write large dataset in multiple tables
* upgrade
* replace a few nodes
* repair in round-robin fashion
* ensure exit codes of cmd line tools are expected
* verify data.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8591) Tunable bootstrapping

2015-01-09 Thread Donald Smith (JIRA)
Donald Smith created CASSANDRA-8591:
---

 Summary: Tunable bootstrapping
 Key: CASSANDRA-8591
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8591
 Project: Cassandra
  Issue Type: Improvement
Reporter: Donald Smith


Often bootstrapping fails due to errors like unable to find sufficient sources 
for streaming range. But cassandra is supposed to be fault tolerant, and it's 
supposed to have tunable consistency.

If it can't find some sources, it should allow bootstrapping to continue, under 
control by parameters, and should print out a report about what ranges were 
missing.  For many apps, it's far better to bootstrap what's available than to 
fail flat.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8591) Tunable bootstrapping

2015-01-09 Thread Donald Smith (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Donald Smith updated CASSANDRA-8591:

Description: 
Often bootstrapping fails due to errors like unable to find sufficient sources 
for streaming range. But cassandra is supposed to be fault tolerant, and it's 
supposed to have tunable consistency.

If it can't find some sources, it should allow bootstrapping to continue, under 
control by parameters (up to 100 failures, for example), and should print out a 
report about what ranges were missing.  For many apps, it's far better to 
bootstrap what's available than to fail flat.

Same with rebuilds.

We were doing maintenance on some disks and when we started back up, some nodes 
ran out of disk space, due to operator miscalculation. Thereafter, we've been 
unable to bootstrap new nodes, due to unable to find sufficient sources for 
streaming range.  But bootstrapping with partial success would be far better 
than being unable to bootstrap at all, and cheaper than a repair. Our 
consistency requirements are low but not zero.

  was:
Often bootstrapping fails due to errors like unable to find sufficient sources 
for streaming range. But cassandra is supposed to be fault tolerant, and it's 
supposed to have tunable consistency.

If it can't find some sources, it should allow bootstrapping to continue, under 
control by parameters (up to 100 failures, for example), and should print out a 
report about what ranges were missing.  For many apps, it's far better to 
bootstrap what's available than to fail flat.

Same with rebuilds.

We were doing maintenance on some disks and when we started back up, some nodes 
ran out of disk space, due to operator miscalculation. Thereafter, we've been 
unable to bootstrap new nodes, due to unable to find sufficient sources for 
streaming range.  But bootstrapping with partial success would be far better 
than being unable to bootstrap at all, and cheaper than a repair.


 Tunable bootstrapping
 -

 Key: CASSANDRA-8591
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8591
 Project: Cassandra
  Issue Type: Improvement
Reporter: Donald Smith

 Often bootstrapping fails due to errors like unable to find sufficient 
 sources for streaming range. But cassandra is supposed to be fault tolerant, 
 and it's supposed to have tunable consistency.
 If it can't find some sources, it should allow bootstrapping to continue, 
 under control by parameters (up to 100 failures, for example), and should 
 print out a report about what ranges were missing.  For many apps, it's far 
  better to bootstrap what's available than to fail flat.
 Same with rebuilds.
 We were doing maintenance on some disks and when we started back up, some 
  nodes ran out of disk space, due to operator miscalculation. Thereafter, 
 we've been unable to bootstrap new nodes, due to unable to find sufficient 
 sources for streaming range.  But bootstrapping with partial success would 
 be far better than being unable to bootstrap at all, and cheaper than a 
 repair. Our consistency requirements are low but not zero.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8591) Tunable bootstrapping

2015-01-09 Thread Donald Smith (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Donald Smith updated CASSANDRA-8591:

Description: 
Often bootstrapping fails due to errors like unable to find sufficient sources 
for streaming range. But cassandra is supposed to be fault tolerant, and it's 
supposed to have tunable consistency.

If it can't find some sources, it should allow bootstrapping to continue, under 
control by parameters (up to 100 failures, for example), and should print out a 
report about what ranges were missing.  For many apps, it's far better to 
bootstrap what's available than to fail flat.

Same with rebuilds.

We were doing maintenance on some disks and when we started back up, some nodes 
ran out of disk space, due to operator miscalculation. Thereafter, we've been 
unable to bootstrap new nodes, due to unable to find sufficient sources for 
streaming range.  But bootstrapping with partial success would be far better 
than being unable to bootstrap at all, and cheaper than a repair.

  was:
Often bootstrapping fails due to errors like unable to find sufficient sources 
for streaming range. But cassandra is supposed to be fault tolerant, and it's 
supposed to have tunable consistency.

If it can't find some sources, it should allow bootstrapping to continue, under 
control by parameters (up to 100 failures, for example), and should print out a 
report about what ranges were missing.  For many apps, it's far better to 
bootstrap what's available than to fail flat.

Same with rebuilds.

We were doing maintenance on some disks and when we started back up, some nodes 
ran out of disk space, due to operator miscalculation. Thereafter, we've been 
unable to bootstrap new nodes.  But bootstrapping with partial success would be 
far better than being unable to bootstrap at all, and cheaper than a repair.


 Tunable bootstrapping
 -

 Key: CASSANDRA-8591
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8591
 Project: Cassandra
  Issue Type: Improvement
Reporter: Donald Smith

 Often bootstrapping fails due to errors like unable to find sufficient 
 sources for streaming range. But cassandra is supposed to be fault tolerant, 
 and it's supposed to have tunable consistency.
 If it can't find some sources, it should allow bootstrapping to continue, 
 under control by parameters (up to 100 failures, for example), and should 
 print out a report about what ranges were missing.  For many apps, it's far 
  better to bootstrap what's available than to fail flat.
 Same with rebuilds.
 We were doing maintenance on some disks and when we started back up, some 
  nodes ran out of disk space, due to operator miscalculation. Thereafter, 
 we've been unable to bootstrap new nodes, due to unable to find sufficient 
 sources for streaming range.  But bootstrapping with partial success would 
 be far better than being unable to bootstrap at all, and cheaper than a 
 repair.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8414) Avoid loops over array backed iterators that call iter.remove()

2015-01-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14271575#comment-14271575
 ] 

Jimmy Mårdell commented on CASSANDRA-8414:
--

Patch for 2.1 added.


 Avoid loops over array backed iterators that call iter.remove()
 ---

 Key: CASSANDRA-8414
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8414
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Richard Low
Assignee: Jimmy Mårdell
  Labels: performance
 Fix For: 2.1.3

 Attachments: cassandra-2.0-8414-1.txt, cassandra-2.0-8414-2.txt, 
 cassandra-2.0-8414-3.txt, cassandra-2.0-8414-4.txt, cassandra-2.0-8414-5.txt, 
 cassandra-2.1-8414-5.txt


 I noticed from sampling that sometimes compaction spends almost all of its 
 time in iter.remove() in ColumnFamilyStore.removeDeletedStandard. It turns 
 out that the cf object is using ArrayBackedSortedColumns, so deletes are from 
 an ArrayList. If the majority of your columns are GCable tombstones then this 
 is O(n^2). The data structure should be changed or a copy made to avoid this.
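
For context, a small self-contained Java example of the pattern being described, 
using a plain ArrayList rather than the actual ArrayBackedSortedColumns code: 
removing elements one at a time through the iterator is O(n^2) because every 
remove shifts the array tail, while collecting the survivors into a copy is O(n).

{code}
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class RemoveDeletedSketch
{
    // O(n^2) on an ArrayList: every iter.remove() shifts the remaining elements left.
    static void removeGcableInPlace(List<Integer> cells)
    {
        for (Iterator<Integer> iter = cells.iterator(); iter.hasNext(); )
        {
            if (isGcableTombstone(iter.next()))
                iter.remove();
        }
    }

    // O(n): copy the surviving cells once and swap the contents.
    static void removeGcableByCopy(List<Integer> cells)
    {
        List<Integer> survivors = new ArrayList<>(cells.size());
        for (Integer cell : cells)
            if (!isGcableTombstone(cell))
                survivors.add(cell);
        cells.clear();
        cells.addAll(survivors);
    }

    static boolean isGcableTombstone(Integer cell) { return cell % 2 == 0; } // stand-in predicate

    public static void main(String[] args)
    {
        List<Integer> a = new ArrayList<>();
        for (int i = 0; i < 10; i++) a.add(i);
        List<Integer> b = new ArrayList<>(a);
        removeGcableInPlace(a);
        removeGcableByCopy(b);
        System.out.println(a + " " + b); // both print the odd "cells"
    }
}
{code}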



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Fix DISTINCT queries w/ limits/paging and tombstoned partitions

2015-01-09 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 ed54e8085 -> dd62f7bf7


Fix DISTINCT queries w/ limits/paging and tombstoned partitions

Patch by Tyler Hobbs; reviewed by Sylvain Lebresne for CASSANDRA-8490


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dd62f7bf
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dd62f7bf
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dd62f7bf

Branch: refs/heads/cassandra-2.0
Commit: dd62f7bf7977dd40eedb1c81ab7900b778f84540
Parents: ed54e80
Author: Tyler Hobbs ty...@datastax.com
Authored: Fri Jan 9 11:14:54 2015 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Fri Jan 9 11:14:54 2015 -0600

--
 CHANGES.txt|  2 ++
 .../cassandra/cql3/statements/SelectStatement.java |  6 +-
 .../org/apache/cassandra/db/AbstractRangeCommand.java  | 13 +
 .../org/apache/cassandra/db/ColumnFamilyStore.java |  4 +++-
 src/java/org/apache/cassandra/db/DataRange.java| 12 
 .../org/apache/cassandra/db/filter/ExtendedFilter.java |  6 ++
 .../apache/cassandra/db/filter/SliceQueryFilter.java   |  6 ++
 .../org/apache/cassandra/service/StorageProxy.java | 13 +++--
 8 files changed, 54 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/dd62f7bf/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index adb374a..0c7e9a2 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 2.0.12:
+ * Fix DISTINCT queries with LIMITs or paging when some partitions
+   contain only tombstones (CASSANDRA-8490)
  * Introduce background cache refreshing to permissions cache
(CASSANDRA-8194)
  * Fix race condition in StreamTransferTask that could lead to

http://git-wip-us.apache.org/repos/asf/cassandra/blob/dd62f7bf/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
index f08f6b8..19615b6 100644
--- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
@@ -450,7 +450,11 @@ public class SelectStatement implements CQLStatement, 
MeasurableForPreparedCache
 // For distinct, we only care about fetching the beginning of each 
partition. If we don't have
 // static columns, we in fact only care about the first cell, so 
we query only that (we don't group).
 // If we do have static columns, we do need to fetch the first 
full group (to have the static columns values).
-return new SliceQueryFilter(ColumnSlice.ALL_COLUMNS_ARRAY, false, 
1, selectsStaticColumns ? toGroup : -1);
+
+// See the comments on IGNORE_TOMBSTONED_PARTITIONS and 
CASSANDRA-8490 for why we use a special value for
+// DISTINCT queries on the partition key only.
+toGroup = selectsStaticColumns ? toGroup : 
SliceQueryFilter.IGNORE_TOMBSTONED_PARTITIONS;
+return new SliceQueryFilter(ColumnSlice.ALL_COLUMNS_ARRAY, false, 
1, toGroup);
 }
 else if (isColumnRange())
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/dd62f7bf/src/java/org/apache/cassandra/db/AbstractRangeCommand.java
--
diff --git a/src/java/org/apache/cassandra/db/AbstractRangeCommand.java 
b/src/java/org/apache/cassandra/db/AbstractRangeCommand.java
index 45302e2..4ddcb8d 100644
--- a/src/java/org/apache/cassandra/db/AbstractRangeCommand.java
+++ b/src/java/org/apache/cassandra/db/AbstractRangeCommand.java
@@ -57,6 +57,19 @@ public abstract class AbstractRangeCommand implements 
IReadCommand
 
 public abstract int limit();
 public abstract boolean countCQL3Rows();
+
+/**
+ * Returns true if tombstoned partitions should not be included in results 
or count towards the limit.
+ * See CASSANDRA-8490 for more details on why this is needed (and done 
this way).
+ * */
+public boolean ignoredTombstonedPartitions()
+{
+if (!(predicate instanceof SliceQueryFilter))
+return false;
+
+return ((SliceQueryFilter) predicate).compositesToGroup == 
SliceQueryFilter.IGNORE_TOMBSTONED_PARTITIONS;
+}
+
 public abstract List<Row> executeLocally();
 
 public long getTimeout()

http://git-wip-us.apache.org/repos/asf/cassandra/blob/dd62f7bf/src/java/org/apache/cassandra/db/ColumnFamilyStore.java

[3/3] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-01-09 Thread tylerhobbs
Merge branch 'cassandra-2.1' into trunk

Conflicts:
src/java/org/apache/cassandra/cql3/statements/SelectStatement.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1657b4fb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1657b4fb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1657b4fb

Branch: refs/heads/trunk
Commit: 1657b4fbf9d7eae1b7a1d829de882d2a86ae14c8
Parents: d1a552d 7f62e29
Author: Tyler Hobbs ty...@datastax.com
Authored: Fri Jan 9 11:22:33 2015 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Fri Jan 9 11:22:33 2015 -0600

--
 CHANGES.txt  |  2 ++
 .../cassandra/cql3/statements/SelectStatement.java   |  6 +-
 .../apache/cassandra/db/AbstractRangeCommand.java| 13 +
 .../org/apache/cassandra/db/ColumnFamilyStore.java   |  4 +++-
 src/java/org/apache/cassandra/db/DataRange.java  | 12 
 .../apache/cassandra/db/filter/ExtendedFilter.java   |  6 ++
 .../apache/cassandra/db/filter/SliceQueryFilter.java |  6 ++
 .../org/apache/cassandra/service/StorageProxy.java   | 15 ---
 8 files changed, 55 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1657b4fb/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1657b4fb/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--
diff --cc src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
index f06055a,92a9579..de8e004
--- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
@@@ -348,12 -432,16 +348,16 @@@ public class SelectStatement implement
  // For distinct, we only care about fetching the beginning of 
each partition. If we don't have
  // static columns, we in fact only care about the first cell, so 
we query only that (we don't group).
  // If we do have static columns, we do need to fetch the first 
full group (to have the static columns values).
- return new SliceQueryFilter(ColumnSlice.ALL_COLUMNS_ARRAY, false, 
1, selection.containsStaticColumns() ? toGroup : -1);
+ 
+ // See the comments on IGNORE_TOMBSTONED_PARTITIONS and 
CASSANDRA-8490 for why we use a special value for
+ // DISTINCT queries on the partition key only.
 -toGroup = selectsStaticColumns ? toGroup : 
SliceQueryFilter.IGNORE_TOMBSTONED_PARTITIONS;
++toGroup = selection.containsStaticColumns() ? toGroup : 
SliceQueryFilter.IGNORE_TOMBSTONED_PARTITIONS;
+ return new SliceQueryFilter(ColumnSlice.ALL_COLUMNS_ARRAY, false, 
1, toGroup);
  }
 -else if (isColumnRange())
 +else if (restrictions.isColumnRange())
  {
 -List<Composite> startBounds = getRequestedBound(Bound.START, 
options);
 -List<Composite> endBounds = getRequestedBound(Bound.END, options);
 +List<Composite> startBounds = 
restrictions.getClusteringColumnsBoundsAsComposites(Bound.START, options);
 +List<Composite> endBounds = 
restrictions.getClusteringColumnsBoundsAsComposites(Bound.END, options);
  assert startBounds.size() == endBounds.size();
  
  // Handles fetching static columns. Note that for 2i, the filter 
is just used to restrict

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1657b4fb/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1657b4fb/src/java/org/apache/cassandra/db/filter/ExtendedFilter.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1657b4fb/src/java/org/apache/cassandra/db/filter/SliceQueryFilter.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1657b4fb/src/java/org/apache/cassandra/service/StorageProxy.java
--



[1/3] cassandra git commit: Fix DISTINCT queries w/ limits/paging and tombstoned partitions

2015-01-09 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/trunk d1a552dd7 -> 1657b4fbf


Fix DISTINCT queries w/ limits/paging and tombstoned partitions

Patch by Tyler Hobbs; reviewed by Sylvain Lebresne for CASSANDRA-8490


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dd62f7bf
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dd62f7bf
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dd62f7bf

Branch: refs/heads/trunk
Commit: dd62f7bf7977dd40eedb1c81ab7900b778f84540
Parents: ed54e80
Author: Tyler Hobbs ty...@datastax.com
Authored: Fri Jan 9 11:14:54 2015 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Fri Jan 9 11:14:54 2015 -0600

--
 CHANGES.txt|  2 ++
 .../cassandra/cql3/statements/SelectStatement.java |  6 +-
 .../org/apache/cassandra/db/AbstractRangeCommand.java  | 13 +
 .../org/apache/cassandra/db/ColumnFamilyStore.java |  4 +++-
 src/java/org/apache/cassandra/db/DataRange.java| 12 
 .../org/apache/cassandra/db/filter/ExtendedFilter.java |  6 ++
 .../apache/cassandra/db/filter/SliceQueryFilter.java   |  6 ++
 .../org/apache/cassandra/service/StorageProxy.java | 13 +++--
 8 files changed, 54 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/dd62f7bf/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index adb374a..0c7e9a2 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 2.0.12:
+ * Fix DISTINCT queries with LIMITs or paging when some partitions
+   contain only tombstones (CASSANDRA-8490)
  * Introduce background cache refreshing to permissions cache
(CASSANDRA-8194)
  * Fix race condition in StreamTransferTask that could lead to

http://git-wip-us.apache.org/repos/asf/cassandra/blob/dd62f7bf/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
index f08f6b8..19615b6 100644
--- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
@@ -450,7 +450,11 @@ public class SelectStatement implements CQLStatement, 
MeasurableForPreparedCache
 // For distinct, we only care about fetching the beginning of each 
partition. If we don't have
 // static columns, we in fact only care about the first cell, so 
we query only that (we don't group).
 // If we do have static columns, we do need to fetch the first 
full group (to have the static columns values).
-return new SliceQueryFilter(ColumnSlice.ALL_COLUMNS_ARRAY, false, 
1, selectsStaticColumns ? toGroup : -1);
+
+// See the comments on IGNORE_TOMBSTONED_PARTITIONS and 
CASSANDRA-8490 for why we use a special value for
+// DISTINCT queries on the partition key only.
+toGroup = selectsStaticColumns ? toGroup : 
SliceQueryFilter.IGNORE_TOMBSTONED_PARTITIONS;
+return new SliceQueryFilter(ColumnSlice.ALL_COLUMNS_ARRAY, false, 
1, toGroup);
 }
 else if (isColumnRange())
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/dd62f7bf/src/java/org/apache/cassandra/db/AbstractRangeCommand.java
--
diff --git a/src/java/org/apache/cassandra/db/AbstractRangeCommand.java 
b/src/java/org/apache/cassandra/db/AbstractRangeCommand.java
index 45302e2..4ddcb8d 100644
--- a/src/java/org/apache/cassandra/db/AbstractRangeCommand.java
+++ b/src/java/org/apache/cassandra/db/AbstractRangeCommand.java
@@ -57,6 +57,19 @@ public abstract class AbstractRangeCommand implements 
IReadCommand
 
 public abstract int limit();
 public abstract boolean countCQL3Rows();
+
+/**
+ * Returns true if tombstoned partitions should not be included in results 
or count towards the limit.
+ * See CASSANDRA-8490 for more details on why this is needed (and done 
this way).
+ * */
+public boolean ignoredTombstonedPartitions()
+{
+if (!(predicate instanceof SliceQueryFilter))
+return false;
+
+return ((SliceQueryFilter) predicate).compositesToGroup == 
SliceQueryFilter.IGNORE_TOMBSTONED_PARTITIONS;
+}
+
 public abstract List<Row> executeLocally();
 
 public long getTimeout()

http://git-wip-us.apache.org/repos/asf/cassandra/blob/dd62f7bf/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff 

[jira] [Updated] (CASSANDRA-8591) Tunable bootstrapping

2015-01-09 Thread Donald Smith (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Donald Smith updated CASSANDRA-8591:

Description: 
Often bootstrapping fails due to errors like unable to find sufficient sources 
for streaming range. But cassandra is supposed to be fault tolerant, and it's 
supposed to have tunable consistency.

If it can't find some sources, it should allow bootstrapping to continue, under 
control by parameters (up to 100 failures, for example), and should print out a 
report about what ranges were missing.  For many apps, it's far better to 
bootstrap what's available than to fail flat.

  was:
Often bootstrapping fails due to errors like unable to find sufficient sources 
for streaming range. But cassandra is supposed to be fault tolerant, and it's 
supposed to have tuneable consistency.

If it can't find some sources, it should allow bootstrapping to continue, under 
control by parameters, and should print out a report about what ranges were 
missing.  For many apps, it's far better to bootstrap what's available than to 
fail flat.


 Tunable bootstrapping
 -

 Key: CASSANDRA-8591
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8591
 Project: Cassandra
  Issue Type: Improvement
Reporter: Donald Smith

 Often bootstrapping fails due to errors like unable to find sufficient 
 sources for streaming range. But cassandra is supposed to be fault tolerant, 
 and it's supposed to have tunable consistency.
 If it can't find some sources, it should allow bootstrapping to continue, 
 under control by parameters (up to 100 failures, for example), and should 
 print out a report about what ranges were missing.  For many apps, it's far 
  better to bootstrap what's available than to fail flat.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8591) Tuneable bootstrapping

2015-01-09 Thread Donald Smith (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Donald Smith updated CASSANDRA-8591:

Summary: Tuneable bootstrapping  (was: Tunable bootstrapping)

 Tuneable bootstrapping
 --

 Key: CASSANDRA-8591
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8591
 Project: Cassandra
  Issue Type: Improvement
Reporter: Donald Smith

 Often bootstrapping fails due to errors like unable to find sufficient 
 sources for streaming range. But cassandra is supposed to be fault tolerant, 
 and it's supposed to have tunable consistency.
 If it can't find some sources, it should allow bootstrapping to continue, 
 under control by parameters, and should print out a report about what ranges 
 were missing.  For many apps, it's far better to bootstrap what's available 
  than to fail flat.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8591) Tunable bootstrapping

2015-01-09 Thread Donald Smith (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Donald Smith updated CASSANDRA-8591:

Summary: Tunable bootstrapping  (was: Tuneable bootstrapping)

 Tunable bootstrapping
 -

 Key: CASSANDRA-8591
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8591
 Project: Cassandra
  Issue Type: Improvement
Reporter: Donald Smith

 Often bootstrapping fails due to errors like unable to find sufficient 
 sources for streaming range. But cassandra is supposed to be fault tolerant, 
 and it's supposed to have tuneable consistency.
 If it can't find some sources, it should allow bootstrapping to continue, 
 under control by parameters, and should print out a report about what ranges 
 were missing.  For many apps, it's far better to bootstrap what's available 
  than to fail flat.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8591) Tuneable bootstrapping

2015-01-09 Thread Donald Smith (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Donald Smith updated CASSANDRA-8591:

Description: 
Often bootstrapping fails due to errors like unable to find sufficient sources 
for streaming range. But cassandra is supposed to be fault tolerant, and it's 
supposed to have tuneable consistency.

If it can't find some sources, it should allow bootstrapping to continue, under 
control by parameters, and should print out a report about what ranges were 
missing.  For many apps, it's far better to bootstrap what's available than to 
fail flat.

  was:
Often bootstrapping fails due to errors like unable to find sufficient sources 
for streaming range. But cassandra is supposed to be fault tolerant, and it's 
supposed to have tunable consistency.

If it can't find some sources, it should allow bootstrapping to continue, under 
control by parameters, and should print out a report about what ranges were 
missing.  For many apps, it's far better to bootstrap what's available than to 
fail flat.


 Tuneable bootstrapping
 --

 Key: CASSANDRA-8591
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8591
 Project: Cassandra
  Issue Type: Improvement
Reporter: Donald Smith

 Often bootstrapping fails due to errors like unable to find sufficient 
 sources for streaming range. But cassandra is supposed to be fault tolerant, 
 and it's supposed to have tuneable consistency.
 If it can't find some sources, it should allow bootstrapping to continue, 
 under control by parameters, and should print out a report about what ranges 
 were missing.  For many apps, it's far better to bootstrap what's available 
  than to fail flat.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8592) Add WriteFailureException

2015-01-09 Thread Tyler Hobbs (JIRA)
Tyler Hobbs created CASSANDRA-8592:
--

 Summary: Add WriteFailureException
 Key: CASSANDRA-8592
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8592
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Core
Reporter: Tyler Hobbs
Assignee: Tyler Hobbs
 Fix For: 3.0


Similar to what CASSANDRA-7886 did for reads, we should add a 
WriteFailureException and have replicas signal a failure while handling a write 
to the coordinator.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8593) Test for leap second related bugs

2015-01-09 Thread Ryan McGuire (JIRA)
Ryan McGuire created CASSANDRA-8593:
---

 Summary: Test for leap second related bugs
 Key: CASSANDRA-8593
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8593
 Project: Cassandra
  Issue Type: Test
Reporter: Ryan McGuire


http://www.datastax.com/dev/blog/linux-cassandra-and-saturdays-leap-second-problem

Another leap second is being added in June; we need to find an old 
system/platform that still has this issue and create a test that exercises 
it. Then we can use the test to create a list of any still-affected platforms. 
Ideally, we can include a list of the affected platforms/configs inside C* and 
have it issue a warning in the logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: prep for 2.0.12 release

2015-01-09 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 dd62f7bf7 -> 5b66997fa


prep for 2.0.12 release


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5b66997f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5b66997f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5b66997f

Branch: refs/heads/cassandra-2.0
Commit: 5b66997fa8be961dd17cdc93b29f2b61491f2cbb
Parents: dd62f7b
Author: T Jake Luciani j...@apache.org
Authored: Fri Jan 9 15:21:49 2015 -0500
Committer: T Jake Luciani j...@apache.org
Committed: Fri Jan 9 15:21:49 2015 -0500

--
 NEWS.txt | 9 +
 build.xml| 2 +-
 debian/changelog | 6 ++
 3 files changed, 16 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5b66997f/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 6f6b795..2bc4fe6 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -13,6 +13,15 @@ restore snapshots created with the previous major version 
using the
 'sstableloader' tool. You can upgrade the file format of your snapshots
 using the provided 'sstableupgrade' tool.
 
+2.0.12
+==
+
+Upgrading
+-
+- Nothing specific to this release, but refer to previous entries if you
+  are upgrading from a previous version.
+
+
 2.0.11
 ==
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5b66997f/build.xml
--
diff --git a/build.xml b/build.xml
index 8c23407..9bbb54f 100644
--- a/build.xml
+++ b/build.xml
@@ -25,7 +25,7 @@
 <property name="debuglevel" value="source,lines,vars"/>
 
 <!-- default version and SCM information -->
-<property name="base.version" value="2.0.11"/>
+<property name="base.version" value="2.0.12"/>
 <property name="scm.connection" 
value="scm:git://git.apache.org/cassandra.git"/>
 <property name="scm.developerConnection" 
value="scm:git://git.apache.org/cassandra.git"/>
 <property name="scm.url" 
value="http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=tree"/>

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5b66997f/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index 39d9520..9853818 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+cassandra (2.0.12); urgency=medium
+
+  * New release 
+
+ -- Jake Luciani j...@apache.org  Fri, 09 Jan 2015 15:20:30 -0500
+
 cassandra (2.0.11) unstable; urgency=medium
 
   * New release



[2/2] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2015-01-09 Thread jake
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/49d5c8d9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/49d5c8d9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/49d5c8d9

Branch: refs/heads/cassandra-2.1
Commit: 49d5c8d979f70be3bfe70625e82efac31d4f58c4
Parents: 7f62e29 5b66997
Author: T Jake Luciani j...@apache.org
Authored: Fri Jan 9 15:23:25 2015 -0500
Committer: T Jake Luciani j...@apache.org
Committed: Fri Jan 9 15:23:25 2015 -0500

--

--




[jira] [Commented] (CASSANDRA-8593) Test for leap second related bugs

2015-01-09 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14271833#comment-14271833
 ] 

Michael Shuler commented on CASSANDRA-8593:
---

rhel6 2.6.32-220.el6.x86_64 kernel was vulnerable, fixed in 
2.6.32-358.el6.x86_64 - ref: https://access.redhat.com/articles/199563

 Test for leap second related bugs
 -

 Key: CASSANDRA-8593
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8593
 Project: Cassandra
  Issue Type: Test
Reporter: Ryan McGuire
Assignee: Ryan McGuire

 http://www.datastax.com/dev/blog/linux-cassandra-and-saturdays-leap-second-problem
 Another leap second is being added in June; we need to find an old 
 system/platform that still has this issue and create a test that exercises 
 it. Then we can use the test to create a list of any still-affected 
 platforms. Ideally, we can include a list of the affected platforms/configs 
 inside C* and have it issue a warning in the logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-3025) PHP/PDO driver for Cassandra CQL

2015-01-09 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14271932#comment-14271932
 ] 

Michael Shuler commented on CASSANDRA-3025:
---

Another useful resource might be to chat with the DataStax driver users/devs on 
the #datastax-drivers channel on irc.freenode.net

 PHP/PDO driver for Cassandra CQL
 

 Key: CASSANDRA-3025
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3025
 Project: Cassandra
  Issue Type: New Feature
  Components: API
Reporter: Mikko Koppanen
Assignee: Mikko Koppanen
  Labels: php
 Attachments: pdo_cassandra-0.1.0.tgz, pdo_cassandra-0.1.1.tgz, 
 pdo_cassandra-0.1.2.tgz, pdo_cassandra-0.1.3.tgz, pdo_cassandra-0.2.0.tgz, 
 pdo_cassandra-0.2.1.tgz, php_test_results_20110818_2317.txt


 Hello,
 attached is the initial version of the PDO driver for Cassandra CQL language. 
 This is a native PHP extension written in what I would call a combination of 
 C and C++, due to PHP being C. The Thrift API used is the C++ one.
 The API looks roughly following:
 {code}
 <?php
 $db = new PDO('cassandra:host=127.0.0.1;port=9160');
 $db->exec ("CREATE KEYSPACE mytest with strategy_class = 'SimpleStrategy' and 
 strategy_options:replication_factor=1;");
 $db->exec ("USE mytest");
 $db->exec ("CREATE COLUMNFAMILY users (
   my_key varchar PRIMARY KEY,
   full_name varchar );");
   
 $stmt = $db->prepare ("INSERT INTO users (my_key, full_name) VALUES (:key, 
 :full_name);");
 $stmt->execute (array (':key' => 'mikko', ':full_name' => 'Mikko K' ));
 {code}
 Currently prepared statements are emulated on the client side but I 
 understand that there is a plan to add prepared statements to Cassandra CQL 
 API as well. I will add this feature into the extension as soon as they are 
 implemented.
 Additional documentation can be found in github 
 https://github.com/mkoppanen/php-pdo_cassandra, in the form of rendered 
 MarkDown file. Tests are currently not included in the package file and they 
 can be found in the github for now as well.
 I have created documentation in docbook format as well, but have not yet 
 rendered it.
 Comments and feedback are welcome.
 Thanks,
 Mikko



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8579) sstablemetadata can't load org.apache.cassandra.tools.SSTableMetadataViewer

2015-01-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14271934#comment-14271934
 ] 

Jimmy Mårdell commented on CASSANDRA-8579:
--

We install Cassandra using the Debian package in our production environment. 
Tools such as sstable2json and sstablemetadata get installed in /usr/bin. 
sstable2json works, but sstablemetadata doesn't (Error: Could not find or load 
main class org.apache.cassandra.tools.SSTableMetadataViewer) because the 
CLASSPATH gets incorrectly set.

The patch above simply resolves the CLASSPATH in sstablemetadata the same way 
it's resolved in sstable2json (and a few other tools). 

In the source code, these tools are located in different directories in 2.0 
(bin vs tools/bin), although this was fixed in 2.1. But the scripts still 
resolve the CLASSPATH differently in 2.1. So the patch imho is needed in both.

 sstablemetadata can't load org.apache.cassandra.tools.SSTableMetadataViewer
 ---

 Key: CASSANDRA-8579
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8579
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Jimmy Mårdell
Assignee: Jimmy Mårdell
Priority: Minor
 Fix For: 2.0.12, 2.1.3

 Attachments: cassandra-2.0-8579-1.txt, cassandra-2.1-8579-1.txt


 The sstablemetadata tool only works when running from the source tree. The 
  classpath doesn't get set correctly when running in a deployed environment.
 This bug looks to exist in 2.1 as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8591) Tunable bootstrapping

2015-01-09 Thread Donald Smith (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Donald Smith updated CASSANDRA-8591:

Priority: Minor  (was: Major)

 Tunable bootstrapping
 -

 Key: CASSANDRA-8591
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8591
 Project: Cassandra
  Issue Type: Improvement
Reporter: Donald Smith
Priority: Minor

 Often bootstrapping fails due to errors like unable to find sufficient 
 sources for streaming range. But cassandra is supposed to be fault tolerant, 
 and it's supposed to have tunable consistency.
 If it can't find some sources, it should allow bootstrapping to continue, 
 under control by parameters (up to 100 failures, for example), and should 
 print out a report about what ranges were missing.  For many apps, it's far 
  better to bootstrap what's available than to fail flat.
 Same with rebuilds.
 We were doing maintenance on some disks and when we started back up, some 
  nodes ran out of disk space, due to operator miscalculation. Thereafter, 
 we've been unable to bootstrap new nodes, due to unable to find sufficient 
 sources for streaming range.  But bootstrapping with partial success would 
 be far better than being unable to bootstrap at all, and cheaper than a 
 repair. Our consistency requirements are low but not zero.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8534) The default configuration URL does not have the required file:// prefix and throws an exception if cassandra.config is not set.

2015-01-09 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-8534:
--
Remaining Estimate: (was: 1h)
 Original Estimate: (was: 1h)

 The default configuration URL does not have the required file:// prefix and 
 throws an exception if cassandra.config is not set.
 ---

 Key: CASSANDRA-8534
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8534
 Project: Cassandra
  Issue Type: Bug
  Components: Config, Core
 Environment: Ubuntu 14.04 64-bit
 C* 2.1.2
Reporter: Andrew Trimble
Priority: Minor
 Fix For: 2.1.3

 Attachments: error.txt


 In the class org.apache.cassandra.config.YamlConfigurationLoader, the 
 DEFAULT_CONFIGURATION is set to cassandra.yaml. This is improperly 
 formatted as it does not contain the prefix file://. If this value is used, 
 a ConfigurationException is thrown (see line 73 of the same class).
 A solution is to set the cassandra.config system property, but this does not 
 solve the underlying problem. A vanilla Cassandra installation will throw 
 this error.
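
An illustrative Java sketch of the kind of normalization being asked for; the helper 
below is hypothetical and is not the actual YamlConfigurationLoader code. A bare 
default such as cassandra.yaml is turned into a well-formed file: URL before being 
opened, so the case where cassandra.config is unset no longer fails URL parsing.

{code}
import java.io.File;
import java.net.MalformedURLException;
import java.net.URL;

public class ConfigUrlSketch
{
    // Hypothetical helper: accept either a full URL or a bare file name.
    static URL configLocation(String raw) throws MalformedURLException
    {
        try
        {
            return new URL(raw);                  // already "file://..." or similar
        }
        catch (MalformedURLException e)
        {
            return new File(raw).toURI().toURL(); // bare "cassandra.yaml" becomes a file: URL
        }
    }

    public static void main(String[] args) throws Exception
    {
        String raw = System.getProperty("cassandra.config", "cassandra.yaml");
        System.out.println("loading configuration from " + configLocation(raw));
    }
}
{code}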



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8515) Hang at startup when no commitlog space

2015-01-09 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14271657#comment-14271657
 ] 

Michael Shuler commented on CASSANDRA-8515:
---

[~rlow] the bug you linked, 5737, was marked as Invalid. What would be the 
desired behavior? I would think an error logged and the service stopped, as 
opposed to hanging. If you have a full disk, you may have additional problems - 
maybe the system logs are on the same spindle.

 Hang at startup when no commitlog space
 ---

 Key: CASSANDRA-8515
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8515
 Project: Cassandra
  Issue Type: Bug
Reporter: Richard Low
 Fix For: 2.0.12


 If the commit log directory has no free space, Cassandra hangs on startup.
 The main thread is waiting:
 {code}
 main prio=9 tid=0x7fefe400f800 nid=0x1303 waiting on condition 
 [0x00010b9c1000]
java.lang.Thread.State: WAITING (parking)
   at sun.misc.Unsafe.park(Native Method)
   - parking to wait for  0x0007dc8c5fc8 (a 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
   at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
   at 
 java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
   at 
 org.apache.cassandra.db.commitlog.CommitLogAllocator.fetchSegment(CommitLogAllocator.java:137)
   at 
 org.apache.cassandra.db.commitlog.CommitLog.activateNextSegment(CommitLog.java:299)
   at org.apache.cassandra.db.commitlog.CommitLog.<init>(CommitLog.java:73)
   at 
  org.apache.cassandra.db.commitlog.CommitLog.<clinit>(CommitLog.java:53)
   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:360)
   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:339)
   at org.apache.cassandra.db.RowMutation.apply(RowMutation.java:211)
   at 
 org.apache.cassandra.cql3.statements.ModificationStatement.executeInternal(ModificationStatement.java:699)
   at 
 org.apache.cassandra.cql3.QueryProcessor.processInternal(QueryProcessor.java:208)
   at 
 org.apache.cassandra.db.SystemKeyspace.updateSchemaVersion(SystemKeyspace.java:390)
   - locked 0x0007de2f2ce0 (a java.lang.Class for 
 org.apache.cassandra.db.SystemKeyspace)
   at org.apache.cassandra.config.Schema.updateVersion(Schema.java:384)
   at 
 org.apache.cassandra.config.DatabaseDescriptor.loadSchemas(DatabaseDescriptor.java:532)
   at 
 org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:270)
   at 
 org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
   at 
 org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:585)
 {code}
 but COMMIT-LOG-ALLOCATOR is RUNNABLE:
 {code}
 COMMIT-LOG-ALLOCATOR prio=9 tid=0x7fefe5402800 nid=0x7513 in 
 Object.wait() [0x000118252000]
java.lang.Thread.State: RUNNABLE
   at 
 org.apache.cassandra.db.commitlog.CommitLogAllocator$1.runMayThrow(CommitLogAllocator.java:116)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at java.lang.Thread.run(Thread.java:745)
 {code}
 but making no progress.
 This behaviour has changed since 1.2 (see CASSANDRA-5737).
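
 A minimal, self-contained sketch of the hang pattern (illustrative only, not Cassandra code): the consumer parks forever on {{LinkedBlockingQueue.take()}} because the producer can never supply a segment when the disk is full, which matches the two thread dumps above.
{code}
// Hedged illustration of the described hang; names are hypothetical.
import java.util.concurrent.LinkedBlockingQueue;

public class AllocatorHangSketch
{
    public static void main(String[] args) throws InterruptedException
    {
        LinkedBlockingQueue<String> availableSegments = new LinkedBlockingQueue<>();

        // Stand-in for COMMIT-LOG-ALLOCATOR: with a full disk, segment creation
        // keeps failing, so nothing is ever offered to the queue.
        Thread allocator = new Thread(() -> {
            while (true)
            {
                try { Thread.sleep(1000); } catch (InterruptedException e) { return; }
            }
        }, "COMMIT-LOG-ALLOCATOR");
        allocator.setDaemon(true);
        allocator.start();

        // Stand-in for the main thread in the stack trace above: parks forever.
        String segment = availableSegments.take();
        System.out.println("got " + segment); // never reached
    }
}
{code}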



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8588) Fix DropTypeStatements isusedBy for maps (typo ignored values)

2015-01-09 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14271662#comment-14271662
 ] 

Tyler Hobbs commented on CASSANDRA-8588:


+1

 Fix DropTypeStatements isusedBy for maps (typo ignored values)
 --

 Key: CASSANDRA-8588
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8588
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Dave Brosius
Assignee: Dave Brosius
Priority: Trivial
 Fix For: 2.1.3

 Attachments: is_used_by_maps.txt


 A simple typo caused the values of maps not to be checked; instead, the key was 
 checked twice.
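
 A hedged, self-contained illustration of the typo pattern (hypothetical names, not the actual DropTypeStatement code): for a map column both the key type and the value type must be checked for a reference to the dropped type, but the buggy version checks the key type twice, so a type used only in map values looks unused.
{code}
import java.util.AbstractMap;
import java.util.Map;

public class IsUsedBySketch
{
    static boolean references(String type, String droppedType)
    {
        return droppedType.equals(type);
    }

    // typo: the key type is tested twice, the value type never
    static boolean isUsedByBuggy(Map.Entry<String, String> mapTypes, String dropped)
    {
        return references(mapTypes.getKey(), dropped) || references(mapTypes.getKey(), dropped);
    }

    static boolean isUsedByFixed(Map.Entry<String, String> mapTypes, String dropped)
    {
        return references(mapTypes.getKey(), dropped) || references(mapTypes.getValue(), dropped);
    }

    public static void main(String[] args)
    {
        // a column of type map<text, mytype>
        Map.Entry<String, String> mapColumn = new AbstractMap.SimpleEntry<>("text", "mytype");
        System.out.println(isUsedByBuggy(mapColumn, "mytype")); // false: DROP TYPE wrongly allowed
        System.out.println(isUsedByFixed(mapColumn, "mytype")); // true
    }
}
{code}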



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8515) Hang at startup when no commitlog space

2015-01-09 Thread Richard Low (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Low updated CASSANDRA-8515:
---
Description: 
If the commit log directory has no free space, Cassandra hangs on startup.

The main thread is waiting:

{code}
main prio=9 tid=0x7fefe400f800 nid=0x1303 waiting on condition 
[0x00010b9c1000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  0x0007dc8c5fc8 (a 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
org.apache.cassandra.db.commitlog.CommitLogAllocator.fetchSegment(CommitLogAllocator.java:137)
at 
org.apache.cassandra.db.commitlog.CommitLog.activateNextSegment(CommitLog.java:299)
at org.apache.cassandra.db.commitlog.CommitLog.init(CommitLog.java:73)
at 
org.apache.cassandra.db.commitlog.CommitLog.clinit(CommitLog.java:53)
at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:360)
at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:339)
at org.apache.cassandra.db.RowMutation.apply(RowMutation.java:211)
at 
org.apache.cassandra.cql3.statements.ModificationStatement.executeInternal(ModificationStatement.java:699)
at 
org.apache.cassandra.cql3.QueryProcessor.processInternal(QueryProcessor.java:208)
at 
org.apache.cassandra.db.SystemKeyspace.updateSchemaVersion(SystemKeyspace.java:390)
- locked 0x0007de2f2ce0 (a java.lang.Class for 
org.apache.cassandra.db.SystemKeyspace)
at org.apache.cassandra.config.Schema.updateVersion(Schema.java:384)
at 
org.apache.cassandra.config.DatabaseDescriptor.loadSchemas(DatabaseDescriptor.java:532)
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:270)
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:585)
{code}

but COMMIT-LOG-ALLOCATOR is RUNNABLE:

{code}
COMMIT-LOG-ALLOCATOR prio=9 tid=0x7fefe5402800 nid=0x7513 in 
Object.wait() [0x000118252000]
   java.lang.Thread.State: RUNNABLE
at 
org.apache.cassandra.db.commitlog.CommitLogAllocator$1.runMayThrow(CommitLogAllocator.java:116)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at java.lang.Thread.run(Thread.java:745)
{code}

but making no progress.

This behaviour has changed since 1.2 (see CASSANDRA-5737).

  was:
If the commit log directory has no free space, Cassandra hangs on startup.

The main thread is waiting:

{code}
main prio=9 tid=0x7fefe400f800 nid=0x1303 waiting on condition 
[0x00010b9c1000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  0x0007dc8c5fc8 (a 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
org.apache.cassandra.db.commitlog.CommitLogAllocator.fetchSegment(CommitLogAllocator.java:137)
at 
org.apache.cassandra.db.commitlog.CommitLog.activateNextSegment(CommitLog.java:299)
at org.apache.cassandra.db.commitlog.CommitLog.init(CommitLog.java:73)
at 
org.apache.cassandra.db.commitlog.CommitLog.clinit(CommitLog.java:53)
at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:360)
at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:339)
at org.apache.cassandra.db.RowMutation.apply(RowMutation.java:211)
at 
org.apache.cassandra.cql3.statements.ModificationStatement.executeInternal(ModificationStatement.java:699)
at 
org.apache.cassandra.cql3.QueryProcessor.processInternal(QueryProcessor.java:208)
at 
org.apache.cassandra.db.SystemKeyspace.updateSchemaVersion(SystemKeyspace.java:390)
- locked 0x0007de2f2ce0 (a java.lang.Class for 
org.apache.cassandra.db.SystemKeyspace)
at org.apache.cassandra.config.Schema.updateVersion(Schema.java:384)
at 
org.apache.cassandra.config.DatabaseDescriptor.loadSchemas(DatabaseDescriptor.java:532)
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:270)
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
at 

[jira] [Updated] (CASSANDRA-7886) Coordinator should not wait for read timeouts when replicas hit Exceptions

2015-01-09 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-7886:
---
Labels: client-impacting protocolv4  (was: protocolv4)

 Coordinator should not wait for read timeouts when replicas hit Exceptions
 --

 Key: CASSANDRA-7886
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7886
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
 Environment: Tested with Cassandra 2.0.8
Reporter: Christian Spriegel
Assignee: Christian Spriegel
Priority: Minor
  Labels: client-impacting, protocolv4
 Fix For: 3.0

 Attachments: 7886_v1.txt, 7886_v2_trunk.txt, 7886_v3_trunk.txt, 
 7886_v4_trunk.txt, 7886_v5_trunk.txt, 7886_v6_trunk.txt


 *Issue*
 When you have TombstoneOverwhelmingExceptions occurring in queries, this will 
 cause the query to be simply dropped on every data-node, but no response is 
 sent back to the coordinator. Instead the coordinator waits for the specified 
 read_request_timeout_in_ms.
 On the application side this can cause memory issues, since the application 
 is waiting for the timeout interval for every request. Therefore, if our 
 application runs into TombstoneOverwhelmingExceptions, then (sooner or later) 
 our entire application cluster goes down :-(
 *Proposed solution*
 I think the data nodes should send an error message to the coordinator when 
 they run into a TombstoneOverwhelmingException. Then the coordinator does not 
 have to wait for the timeout-interval.
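
 A rough sketch of the proposed coordinator behaviour (hypothetical, not the actual ReadCallback): track failure responses alongside data responses and release the waiting coordinator as soon as the consistency level can no longer be met, instead of letting it sit until read_request_timeout_in_ms.
{code}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class FailFastReadCallbackSketch
{
    private final int blockFor;          // replicas required by the consistency level
    private final int totalReplicas;     // replicas the read was sent to
    private final AtomicInteger received = new AtomicInteger();
    private final AtomicInteger failures = new AtomicInteger();
    private final CountDownLatch condition = new CountDownLatch(1);

    FailFastReadCallbackSketch(int blockFor, int totalReplicas)
    {
        this.blockFor = blockFor;
        this.totalReplicas = totalReplicas;
    }

    void onResponse()
    {
        if (received.incrementAndGet() >= blockFor)
            condition.countDown();                      // enough data to answer
    }

    void onFailure()
    {
        // once (totalReplicas - failures) < blockFor, success is impossible:
        // wake the coordinator immediately instead of letting it hit the read timeout
        if (failures.incrementAndGet() > totalReplicas - blockFor)
            condition.countDown();
    }

    boolean await(long timeoutMillis) throws InterruptedException
    {
        return condition.await(timeoutMillis, TimeUnit.MILLISECONDS)
               && received.get() >= blockFor;           // false -> read failure / timeout
    }
}
{code}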



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Add ReadFailureException, better TombstoneOE logging

2015-01-09 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/trunk 1657b4fbf - c6525da86


Add ReadFailureException, better TombstoneOE logging

Patch by Christian Spriegel; reviewed by Tyler Hobbs for CASSANDRA-7886


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c6525da8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c6525da8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c6525da8

Branch: refs/heads/trunk
Commit: c6525da86eb1ac668206553336056f90e7bfcdaa
Parents: 1657b4f
Author: Christian Spriegel christian.sprie...@movilizer.com
Authored: Fri Jan 9 13:30:22 2015 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Fri Jan 9 13:30:22 2015 -0600

--
 CHANGES.txt |  3 +
 doc/native_protocol_v4.spec | 17 +++-
 .../apache/cassandra/db/ReadVerbHandler.java| 17 +---
 .../apache/cassandra/db/RowIteratorFactory.java | 21 +++--
 .../cassandra/db/filter/ExtendedFilter.java | 10 +-
 .../cassandra/db/filter/SliceQueryFilter.java   | 46 +
 .../filter/TombstoneOverwhelmingException.java  | 42 +
 .../cassandra/exceptions/ExceptionCode.java |  1 +
 .../exceptions/ReadFailureException.java| 31 +++
 .../exceptions/RequestFailureException.java | 37 
 .../cassandra/metrics/ClientRequestMetrics.java |  4 +
 .../cassandra/net/MessageDeliveryTask.java  |  6 +-
 .../cassandra/service/AbstractReadExecutor.java |  9 +-
 .../service/RangeSliceVerbHandler.java  | 24 ++---
 .../apache/cassandra/service/ReadCallback.java  | 37 +++-
 .../apache/cassandra/service/StorageProxy.java  | 98 ++--
 .../cassandra/thrift/CassandraServer.java   | 24 ++---
 .../cassandra/thrift/ThriftConversion.java  | 10 +-
 .../org/apache/cassandra/transport/Server.java  |  1 +
 .../transport/messages/ErrorMessage.java| 75 +++
 20 files changed, 374 insertions(+), 139 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c6525da8/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 0c91632..fc9ec7f 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,7 @@
 3.0
+ * Add ReadFailureException to native protocol, respond
+   immediately when replicas encounter errors while handling
+   a read request (CASSANDRA-7886)
  * Switch CommitLogSegment from RandomAccessFile to nio (CASSANDRA-8308)
  * Allow mixing token and partition key restrictions (CASSANDRA-7016)
  * Support index key/value entries on map collections (CASSANDRA-8473)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c6525da8/doc/native_protocol_v4.spec
--
diff --git a/doc/native_protocol_v4.spec b/doc/native_protocol_v4.spec
index 3764e91..0806944 100644
--- a/doc/native_protocol_v4.spec
+++ b/doc/native_protocol_v4.spec
@@ -880,7 +880,21 @@ Table of Contents
 <data_present> is a single byte. If its value is 0, it means
               the replica that was asked for data has not
               responded. Otherwise, the value is != 0.
-
+0x1300    Read_failure: A non-timeout exception during a read request. The rest
+          of the ERROR message body will be
+            <cl><received><blockfor><numfailures><data_present>
+          where:
+            <cl> is the [consistency] level of the query having triggered
+                 the exception.
+            <received> is an [int] representing the number of nodes having
+                       answered the request.
+            <blockfor> is the number of replicas whose response is
+                       required to achieve <cl>.
+            <numfailures> is an [int] representing the number of nodes that
+                          experience a failure while executing the request.
+            <data_present> is a single byte. If its value is 0, it means
+                           the replica that was asked for data had not
+                           responded. Otherwise, the value is != 0.
 0x2000    Syntax_error: The submitted query has a syntax error.
 0x2100    Unauthorized: The logged user doesn't have the right to perform
           the query.
@@ -905,4 +919,5 @@ Table of Contents
 
   * The format of SCHEMA_CHANGE events (Section 4.2.6) (and implicitly 
Schema_change results (Section 4.2.5.5))
 has been modified, and now includes changes related to user defined 
functions and user defined aggregates.
+  * Read_failure error code was added.
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c6525da8/src/java/org/apache/cassandra/db/ReadVerbHandler.java

[jira] [Commented] (CASSANDRA-8562) Fix checking available disk space before compaction starts

2015-01-09 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14271752#comment-14271752
 ] 

Joshua McKenzie commented on CASSANDRA-8562:


* Delete default no-op implementation of reduceScopeForLimitedSpace from 
DiskAwareRunnable as it's no longer used
* nit: normalize use of braces in:
{code}
  if (BlacklistedDirectories.isUnwritable(getLocationForDisk(dataDir)))
continue;
  DataDirectoryCandidate candidate = new DataDirectoryCandidate(dataDir);
  // exclude directory if its total writeSize does not fit to data directory
  if (candidate.availableSpace < writeSize)
  {
continue;
  }
{code}

A last general question - is throwing an RTE the best thing for us to do when 
we don't have sufficient disk space for a node to compact files?  Seems like 
that's a pretty serious situation for a node to be in... this gets me thinking 
of something like compaction_failure_policy, similar to disk_failure or 
commit_failure where we can instruct what to do with a node if we hit these 
types of conditions.  Certainly outside the scope of this ticket but might be 
worth following up on if we don't have something along those lines already.

Other than the above 2 minor things, lgtm.

 Fix checking available disk space before compaction starts
 --

 Key: CASSANDRA-8562
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8562
 Project: Cassandra
  Issue Type: Bug
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
 Fix For: 2.0.12, 2.1.3

 Attachments: 
 0001-Check-for-available-disk-space-before-starting-compa.patch


 When starting a compaction we check if there is enough disk space available 
 to start it, otherwise we might (for STCS) reduce the compaction so that the 
 result could fit. Now (since CASSANDRA-8329) we pick the directory to 
 write to a lot later, and this means the compaction can be reduced after we 
 have already created the scanners.
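
 A hedged sketch of the ordering the ticket asks for (hypothetical names, not the actual CompactionTask code): estimate the write size and, for STCS, drop the largest candidates until the result fits, before any scanners are opened.
{code}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class CompactionSpaceCheckSketch
{
    // One candidate sstable, represented only by its on-disk size for this sketch.
    record SSTable(String name, long bytesOnDisk) {}

    static List<SSTable> reduceForAvailableSpace(List<SSTable> candidates, long availableBytes)
    {
        List<SSTable> reduced = new ArrayList<>(candidates);
        reduced.sort(Comparator.comparingLong(SSTable::bytesOnDisk)); // smallest first
        long estimatedWriteSize = reduced.stream().mapToLong(SSTable::bytesOnDisk).sum();
        // Drop the largest candidate until the estimated result fits on disk.
        while (estimatedWriteSize > availableBytes && reduced.size() > 1)
        {
            SSTable dropped = reduced.remove(reduced.size() - 1);
            estimatedWriteSize -= dropped.bytesOnDisk();
        }
        if (estimatedWriteSize > availableBytes)
            throw new RuntimeException("Not enough space for compaction, estimated " + estimatedWriteSize);
        return reduced; // only now would scanners be created over these sstables
    }

    public static void main(String[] args)
    {
        List<SSTable> candidates = List.of(new SSTable("a", 40), new SSTable("b", 70), new SSTable("c", 200));
        System.out.println(reduceForAvailableSpace(candidates, 150)); // drops "c", keeps a + b
    }
}
{code}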



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8593) Test for leap second related bugs

2015-01-09 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14271824#comment-14271824
 ] 

Russ Hatch commented on CASSANDRA-8593:
---

https://github.com/wolfcw/libfaketime is another one that might be useful for 
testing. It allows faking time for specific processes (and possibly system-wide).

 Test for leap second related bugs
 -

 Key: CASSANDRA-8593
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8593
 Project: Cassandra
  Issue Type: Test
Reporter: Ryan McGuire
Assignee: Ryan McGuire

 http://www.datastax.com/dev/blog/linux-cassandra-and-saturdays-leap-second-problem
 Another leap second is being added in June, we need to find an old 
 system/platform that does still have this issue, create a test that exercises 
 it. Then we can use the test to create a list of any still affected 
 platforms. Ideally, we can include a list of the affected platforms/configs 
 inside C* and it will issue a warning in the logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8579) sstablemetadata can't load org.apache.cassandra.tools.SSTableMetadataViewer

2015-01-09 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14271841#comment-14271841
 ] 

Yuki Morishita commented on CASSANDRA-8579:
---

Do you mean you cannot invoke sstablemetadata from a symlink?
I prefer finding CASSANDRA_HOME from the symlink, possibly using `readlink` or other 
techniques, over simply letting the user set the correct CLASSPATH.


 sstablemetadata can't load org.apache.cassandra.tools.SSTableMetadataViewer
 ---

 Key: CASSANDRA-8579
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8579
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Jimmy Mårdell
Assignee: Jimmy Mårdell
Priority: Minor
 Fix For: 2.0.12, 2.1.3

 Attachments: cassandra-2.0-8579-1.txt, cassandra-2.1-8579-1.txt


 The sstablemetadata tool only works when running from the source tree. The 
 classpath doesn't get set correctly when running in a deployed environment.
 This bug looks to exist in 2.1 as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-8593) Test for leap second related bugs

2015-01-09 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire reassigned CASSANDRA-8593:
---

Assignee: Ryan McGuire

 Test for leap second related bugs
 -

 Key: CASSANDRA-8593
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8593
 Project: Cassandra
  Issue Type: Test
Reporter: Ryan McGuire
Assignee: Ryan McGuire

 http://www.datastax.com/dev/blog/linux-cassandra-and-saturdays-leap-second-problem
 Another leap second is being added in June, we need to find an old 
 system/platform that does still have this issue, create a test that exercises 
 it. Then we can use the test to create a list of any still affected 
 platforms. Ideally, we can include a list of the affected platforms/configs 
 inside C* and it will issue a warning in the logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Git Push Summary

2015-01-09 Thread jake
Repository: cassandra
Updated Tags:  refs/tags/2.0.12-tentative [created] 5b66997fa


[1/2] cassandra git commit: prep for 2.0.12 release

2015-01-09 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 7f62e2928 - 49d5c8d97


prep for 2.0.12 release


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5b66997f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5b66997f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5b66997f

Branch: refs/heads/cassandra-2.1
Commit: 5b66997fa8be961dd17cdc93b29f2b61491f2cbb
Parents: dd62f7b
Author: T Jake Luciani j...@apache.org
Authored: Fri Jan 9 15:21:49 2015 -0500
Committer: T Jake Luciani j...@apache.org
Committed: Fri Jan 9 15:21:49 2015 -0500

--
 NEWS.txt | 9 +
 build.xml| 2 +-
 debian/changelog | 6 ++
 3 files changed, 16 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5b66997f/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 6f6b795..2bc4fe6 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -13,6 +13,15 @@ restore snapshots created with the previous major version 
using the
 'sstableloader' tool. You can upgrade the file format of your snapshots
 using the provided 'sstableupgrade' tool.
 
+2.0.12
+==
+
+Upgrading
+-
+- Nothing specific to this release, but refer to previous entries if you
+  are upgrading from a previous version.
+
+
 2.0.11
 ==
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5b66997f/build.xml
--
diff --git a/build.xml b/build.xml
index 8c23407..9bbb54f 100644
--- a/build.xml
+++ b/build.xml
@@ -25,7 +25,7 @@
 property name=debuglevel value=source,lines,vars/
 
 !-- default version and SCM information --
-property name=base.version value=2.0.11/
+property name=base.version value=2.0.12/
 property name=scm.connection 
value=scm:git://git.apache.org/cassandra.git/
 property name=scm.developerConnection 
value=scm:git://git.apache.org/cassandra.git/
 property name=scm.url 
value=http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=tree/

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5b66997f/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index 39d9520..9853818 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+cassandra (2.0.12); urgency=medium
+
+  * New release 
+
+ -- Jake Luciani j...@apache.org  Fri, 09 Jan 2015 15:20:30 -0500
+
 cassandra (2.0.11) unstable; urgency=medium
 
   * New release



[jira] [Commented] (CASSANDRA-8593) Test for leap second related bugs

2015-01-09 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14271823#comment-14271823
 ] 

Michael Shuler commented on CASSANDRA-8593:
---

https://github.com/AmadeusITGroup/NTP-Proxy looks interesting

 Test for leap second related bugs
 -

 Key: CASSANDRA-8593
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8593
 Project: Cassandra
  Issue Type: Test
Reporter: Ryan McGuire
Assignee: Ryan McGuire

 http://www.datastax.com/dev/blog/linux-cassandra-and-saturdays-leap-second-problem
 Another leap second is being added in June, we need to find an old 
 system/platform that does still have this issue, create a test that exercises 
 it. Then we can use the test to create a list of any still affected 
 platforms. Ideally, we can include a list of the affected platforms/configs 
 inside C* and it will issue a warning in the logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8579) sstablemetadata can't load org.apache.cassandra.tools.SSTableMetadataViewer

2015-01-09 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14271961#comment-14271961
 ] 

Yuki Morishita commented on CASSANDRA-8579:
---

Oh, I see. then patch LGTM.

 sstablemetadata can't load org.apache.cassandra.tools.SSTableMetadataViewer
 ---

 Key: CASSANDRA-8579
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8579
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Jimmy Mårdell
Assignee: Jimmy Mårdell
Priority: Minor
 Fix For: 2.0.12, 2.1.3

 Attachments: cassandra-2.0-8579-1.txt, cassandra-2.1-8579-1.txt


 The sstablemetadata tool only works when running from the source tree. The 
 classpath doesn't get set correctly when running in a deployed environment.
 This bug looks to exist in 2.1 as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8522) Getting partial set of columns in a 'select *' query

2015-01-09 Thread Fabiano C. Botelho (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14272051#comment-14272051
 ] 

Fabiano C. Botelho commented on CASSANDRA-8522:
---

It should be there because when I do:
   describe table sd.events

I see the full table.

 Getting partial set of columns in a 'select *' query
 

 Key: CASSANDRA-8522
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8522
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Fabiano C. Botelho
Assignee: Aleksey Yeschenko
 Fix For: 2.0.12

 Attachments: systemlogs.zip


 Configuration:
    3 node cluster, where two nodes are fine and just one sees the issue 
 reported here. It is an in-memory state on the server that gets cleared with 
 a cassandra restart on the problematic node.
 Problem:
 Scenario (this is a run-through on the problematic node at least 6 
 hours after the problem had surfaced):
 1. After schema had been installed, one can do a  'describe table events' and 
 that shows all the columns in the table, see below:
 {code}
 Use HELP for help.
 cqlsh:sd DESCRIBE TABLE events
 CREATE TABLE events (
   dayhour text,
   id text,
   event_info text,
   event_series_id text,
   event_type text,
   internal_timestamp bigint,
   is_read boolean,
   is_user_visible boolean,
   link text,
   node_id text,
   time timestamp,
   PRIMARY KEY ((dayhour), id)
 ) WITH
   bloom_filter_fp_chance=0.10 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.10 AND
   gc_grace_seconds=864000 AND
   index_interval=128 AND
   read_repair_chance=0.00 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   default_time_to_live=0 AND
   speculative_retry='99.0PERCENTILE' AND
   memtable_flush_period_in_ms=0 AND
   compaction={'class': 'LeveledCompactionStrategy'} AND
   compression={'sstable_compression': 'LZ4Compressor'};
 CREATE INDEX events_id_idx ON events (id);
 CREATE INDEX events_event_series_id_idx ON events (event_series_id);
 {code}
 2. run a query selecting all columns on the same table above:
 {code}
 cqlsh:sd select * from events limit 10;
  dayhour   | id   | event_series_id   
| is_user_visible
 ---+--+--+-
  2014-12-19:12 | 3a70e8f8-0b04-4485-bf8f-c3d4031687ed | 
 7c129287-2b3d-4342-8f2b-f1eba61267f6 |   False
  2014-12-19:12 | 49a854fb-0e6c-43e9-830e-6f833689df0b | 
 1a130faf-d755-4e52-9f93-82a380d86f31 |   False
  2014-12-19:12 | 6df0b844-d810-423e-8e43-5b3d44213699 | 
 7c129287-2b3d-4342-8f2b-f1eba61267f6 |   False
  2014-12-19:12 | 92d55ff9-724a-4bc4-a57f-dfeee09e46a4 | 
 1a130faf-d755-4e52-9f93-82a380d86f31 |   False
  2014-12-19:17 | 2e0ea98c-4d5a-4ad2-b386-bc181e2e7cec | 
 a9cf80e9-b8de-4154-9a37-13ed95459a91 |   False
  2014-12-19:17 | 8837dc3f-abae-45e6-80cb-c3dffd3f08aa | 
 cb0e4867-0f27-47e3-acde-26b105e0fdc9 |   False
  2014-12-19:17 | b36baa5b-b084-4596-a8a5-d85671952313 | 
 cb0e4867-0f27-47e3-acde-26b105e0fdc9 |   False
  2014-12-19:17 | f73f9438-cba7-4961-880e-77e134175390 | 
 a9cf80e9-b8de-4154-9a37-13ed95459a91 |   False
  2014-12-19:16 | 47b47745-c4f6-496b-a976-381a545f7326 | 
 4bc7979f-2c68-4d65-91a1-e1999a3bbc7a |   False
  2014-12-19:16 | 5708098f-0c0a-4372-be03-ea7057a3bd44 | 
 10ac9312-9487-4de9-b706-0d0af18bf9fd |   False
 {code}
 Note that not all columns show up in the result.
 3. Try a query that refers to at least one of the missing columns in the 
 result above, but of course one that is in the schema.
 {code}
 cqlsh:sd select dayhour, id, event_info from events
   ... ;
 Bad Request: Undefined name event_info in selection clause
 {code}
 Note that it failed saying that 'event_info' was not defined.
 This problem goes away with a restart of cassandra on the problematic node. 
 This does not seem to be the java-320 bug whose fix is supposed to be 
 in driver 2.0.2. We are using driver version 2.0.1. Note that this 
 issue surfaces both with the driver as well as with cqlsh, which points to a 
 problem in the cassandra server. Would appreciate some help with a fix or a 
 quick workaround that is not simply restarting the server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8514) ArrayIndexOutOfBoundsException in nodetool cfhistograms

2015-01-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14272017#comment-14272017
 ] 

Jimmy Mårdell commented on CASSANDRA-8514:
--

Yes, I've seen this as well. Simple patch attached.

 ArrayIndexOutOfBoundsException in nodetool cfhistograms
 ---

 Key: CASSANDRA-8514
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8514
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: OSX
Reporter: Philip Thompson
 Fix For: 2.1.3

 Attachments: cassandra-2.1-8514-1.txt


 When running nodetool cfhistograms on 2.1-HEAD, I am seeing the following 
 exception:
 {code}
 04:02 PM:~/cstar/cassandra[cassandra-2.1*]$ bin/nodetool cfhistograms 
 keyspace1 standard1
 objc[58738]: Class JavaLaunchHelper is implemented in both 
 /Library/Java/JavaVirtualMachines/jdk1.7.0_67.jdk/Contents/Home/bin/java and 
 /Library/Java/JavaVirtualMachines/jdk1.7.0_67.jdk/Contents/Home/jre/lib/libinstrument.dylib.
  One of the two will be used. Which one is undefined.
 error: 0
 -- StackTrace --
 java.lang.ArrayIndexOutOfBoundsException: 0
   at 
 org.apache.cassandra.utils.EstimatedHistogram.newOffsets(EstimatedHistogram.java:75)
   at 
 org.apache.cassandra.utils.EstimatedHistogram.init(EstimatedHistogram.java:60)
   at 
 org.apache.cassandra.tools.NodeTool$CfHistograms.execute(NodeTool.java:946)
   at 
 org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:250)
   at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:164){code}
 I can reproduce this with these simple steps:
 Start a new C* 2.1-HEAD node
 Run {{cassandra-stress write n=1}}
 Run {{nodetool cfhistograms keyspace1 standard1}}
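
 A hedged reconstruction of how an {{ArrayIndexOutOfBoundsException: 0}} of this shape can arise (hypothetical code, not the real EstimatedHistogram): an offsets array sized one less than the bucket count is written at index 0, which blows up as soon as the bucket count is 1.
{code}
public class NewOffsetsSketch
{
    static long[] newOffsets(int size)
    {
        long[] result = new long[size - 1];
        long last = 1;
        result[0] = last;                       // ArrayIndexOutOfBoundsException: 0 when size == 1
        for (int i = 1; i < size - 1; i++)
        {
            long next = Math.max(last + 1, Math.round(last * 1.2));
            result[i] = next;
            last = next;
        }
        return result;
    }

    public static void main(String[] args)
    {
        System.out.println(newOffsets(90).length);  // 89 bucket boundaries, fine
        System.out.println(newOffsets(1).length);   // throws AIOOBE: 0, matching the trace above
    }
}
{code}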



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8514) ArrayIndexOutOfBoundsException in nodetool cfhistograms

2015-01-09 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Mårdell updated CASSANDRA-8514:
-
Attachment: cassandra-2.1-8514-1.txt

 ArrayIndexOutOfBoundsException in nodetool cfhistograms
 ---

 Key: CASSANDRA-8514
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8514
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: OSX
Reporter: Philip Thompson
 Fix For: 2.1.3

 Attachments: cassandra-2.1-8514-1.txt


 When running nodetool cfhistograms on 2.1-HEAD, I am seeing the following 
 exception:
 {code}
 04:02 PM:~/cstar/cassandra[cassandra-2.1*]$ bin/nodetool cfhistograms 
 keyspace1 standard1
 objc[58738]: Class JavaLaunchHelper is implemented in both 
 /Library/Java/JavaVirtualMachines/jdk1.7.0_67.jdk/Contents/Home/bin/java and 
 /Library/Java/JavaVirtualMachines/jdk1.7.0_67.jdk/Contents/Home/jre/lib/libinstrument.dylib.
  One of the two will be used. Which one is undefined.
 error: 0
 -- StackTrace --
 java.lang.ArrayIndexOutOfBoundsException: 0
   at 
 org.apache.cassandra.utils.EstimatedHistogram.newOffsets(EstimatedHistogram.java:75)
   at 
 org.apache.cassandra.utils.EstimatedHistogram.init(EstimatedHistogram.java:60)
   at 
 org.apache.cassandra.tools.NodeTool$CfHistograms.execute(NodeTool.java:946)
   at 
 org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:250)
   at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:164){code}
 I can reproduce this with these simple steps:
 Start a new C* 2.1-HEAD node
 Run {{cassandra-stress write n=1}}
 Run {{nodetool cfhistograms keyspace1 standard1}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7226) get code coverage working again (cobertura or other)

2015-01-09 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14272023#comment-14272023
 ] 

Russ Hatch commented on CASSANDRA-7226:
---

Updated wiki yesterday, all set there.

 get code coverage working again (cobertura or other)
 

 Key: CASSANDRA-7226
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7226
 Project: Cassandra
  Issue Type: Test
Reporter: Russ Hatch
Assignee: Russ Hatch
  Labels: qa-resolved
 Fix For: 3.0

 Attachments: coverage.png, trunk-7226-2.txt, trunk-7226-3.txt, 
 trunk-7226-4.txt, trunk-7226.txt


 We need to sort out code coverage again, for unit and cassandra-dtest tests. 
 Preferably the same tool for both.
 Seems like cobertura project activity has dwindled. Jacoco might be a viable 
 alternative to cobertura. Jacoco can can instrument running bytecode so I 
 think it could also work for dtests (does require an agent, not sure if 
 that's a problem yet). If using an agent is problematic looks like it can 
 also work with offline bytecode though I don't see how that could benefit 
 dtests. Project seems pretty active, with a release just last week.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: fix debian changelog

2015-01-09 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 5b66997fa - df1f5ead0


fix debian changelog


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/df1f5ead
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/df1f5ead
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/df1f5ead

Branch: refs/heads/cassandra-2.0
Commit: df1f5ead0950d4d3058cf6fe0fcae9ef528014fa
Parents: 5b66997
Author: T Jake Luciani j...@apache.org
Authored: Fri Jan 9 15:48:48 2015 -0500
Committer: T Jake Luciani j...@apache.org
Committed: Fri Jan 9 15:48:48 2015 -0500

--
 debian/changelog | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/df1f5ead/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index 9853818..53fa20f 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,4 +1,4 @@
-cassandra (2.0.12); urgency=medium
+cassandra (2.0.12) unstable; urgency=medium
 
   * New release 
 



Git Push Summary

2015-01-09 Thread jake
Repository: cassandra
Updated Tags:  refs/tags/2.0.12-tentative [created] df1f5ead0


Git Push Summary

2015-01-09 Thread jake
Repository: cassandra
Updated Tags:  refs/tags/2.0.12-tentative [deleted] 5b66997fa


[jira] [Commented] (CASSANDRA-7520) Permit sorting sstables by raw partition key, as opposed to token

2015-01-09 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14271862#comment-14271862
 ] 

Aleksey Yeschenko commented on CASSANDRA-7520:
--

bq. If we're not storing it in token order then we have to do a ton of random 
i/o on merkle tree build and streaming. I'm skeptical that this is going to 
help enough to be worth the extra complexity.

I'll mention that in addition to that, sorting sstables by raw partition key 
also pretty negatively affects range queries.

 Permit sorting sstables by raw partition key, as opposed to token
 -

 Key: CASSANDRA-7520
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7520
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict

 At the moment we have some counter-intuitive behaviour, which is that with a 
 hashed partitioner (recommended) the more compacted the data is, the more 
 randomly distributed it is amongst the file. This means that data access 
 locality is made pretty much as bad as possible, and we rely on the OS to do 
 its best to fix that for us with its page cache.
 [~jasobrown] mentioned this at the NGCC, but thinking on it some more it 
 seems that many use cases may benefit from dropping the token at the storage 
 level and sorting based on the raw key data. For workloads where nearness of 
 key = likelihood of being coreferenced, this could improve data locality and 
 cache hit rate dramatically. Timeseries workloads spring to mind, but I doubt 
 this is constrained to them. Most likely any non-random access pattern could 
 benefit. A random access pattern would most likely suffer from this scheme, 
 as we can index more efficiently into the hashed data. However there's no 
 reason we could not support both schemes. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8281) CQLSSTableWriter close does not work

2015-01-09 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14271890#comment-14271890
 ] 

Yuki Morishita commented on CASSANDRA-8281:
---

bq. Both patches make sure that Keyspace.setInitialized() is called

Is it possible to use `Keyspace.openWithoutSSTables`? It seems it bypasses the 
initialized check.
Other than that it looks good to me.

 CQLSSTableWriter close does not work
 

 Key: CASSANDRA-8281
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8281
 Project: Cassandra
  Issue Type: Bug
  Components: API
 Environment: Cassandra 2.1.1
Reporter: Xu Zhongxing
Assignee: Benjamin Lerer
 Fix For: 2.1.3

 Attachments: CASSANDRA-8281-V2-2.1.txt, CASSANDRA-8281-V2-trunk.txt, 
 CASSANDRA-8281.txt


 I called CQLSSTableWriter.close(), but the program still cannot exit. The 
 same code works fine on Cassandra 2.0.10.
 It seems that CQLSSTableWriter cannot be closed, and blocks the program from 
 exiting.
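
 For reference, a minimal usage sketch of the pattern that hangs (schema, paths and values are made up; the builder calls are the usual CQLSSTableWriter API as I understand it, so treat the exact signatures as an assumption):
{code}
// Hedged usage sketch; table name, paths and values are invented.
import org.apache.cassandra.io.sstable.CQLSSTableWriter;

public class WriterCloseSketch
{
    public static void main(String[] args) throws Exception
    {
        String schema = "CREATE TABLE ks.test (k text PRIMARY KEY, v int)";
        String insert = "INSERT INTO ks.test (k, v) VALUES (?, ?)";

        CQLSSTableWriter writer = CQLSSTableWriter.builder()
                                                  .inDirectory("/tmp/ks/test")
                                                  .forTable(schema)
                                                  .using(insert)
                                                  .build();
        writer.addRow("key1", 1);
        writer.close();
        // On 2.0.10 the JVM exits here; on 2.1.1 non-daemon background threads
        // reportedly keep it alive, which is the behaviour described above.
    }
}
{code}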



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8589) Reconciliation in presence of tombstone might yield stale data

2015-01-09 Thread Sylvain Lebresne (JIRA)
Sylvain Lebresne created CASSANDRA-8589:
---

 Summary: Reconciliation in presence of tombstone might yield stale 
data
 Key: CASSANDRA-8589
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8589
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne


Consider 3 replicas A, B, C (so RF=3) and consider that we do the following 
sequence of actions at {{QUORUM}}, where I indicate the replicas acknowledging 
each operation (and let's assume that a replica that doesn't ack is a replica 
that didn't get the update):
{noformat}
CREATE TABLE test (k text, t int, v int, PRIMARY KEY (k, t))

INSERT INTO test(k, t, v) VALUES ('k', 0, 0); // acked by A, B and C
INSERT INTO test(k, t, v) VALUES ('k', 1, 1); // acked by A, B and C
INSERT INTO test(k, t, v) VALUES ('k', 2, 2); // acked by A, B and C

DELETE FROM test WHERE k='k' AND t=1; // acked by A and C

UPDATE test SET v = 3 WHERE k='k' AND t=2;// acked by B and C

SELECT * FROM test WHERE k='k' LIMIT 2;   // answered by A and B
{noformat}
Every operation has achieved quorum, but on the last read, A will respond 
{{0-0, tombstone 1, 2-2}} and B will respond {{0-0, 1-1}}. As a consequence 
we'll answer {{0-0, 2-2}} which is incorrect (we should respond {{0-0, 
2-3}}).

Put another way, if we have a limit, every replica honors that limit but since 
tombstones can suppress results from other nodes, we may have some cells for 
which we actually don't get a quorum of responses (even though we globally have 
a quorum of replica responses).

In practice, this probably occurs rather rarely and so the simpler fix is 
probably to do something similar to the short reads protection: detect when 
this could have happened (based on how replica responses are reconciled) and do an 
additional request in that case. That detection will have potential false 
positives but I suspect we can be precise enough that those false positives 
will be very very rare (we should nonetheless track how often this code gets 
triggered and if we see that it's more often than we think, we could 
pro-actively bump user limits internally to reduce those occurrences).
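
A hedged, self-contained sketch of the reconciliation described above (hypothetical structures, not Cassandra's merge code): each replica honours LIMIT 2 locally, A's tombstone suppresses t=1 from B, and t=2 is then answered from A alone even though B holds the newer value for it.
{code}
import java.util.*;

public class ReconciliationSketch
{
    record Cell(int t, Integer v, long timestamp) {}   // v == null means tombstone

    static List<Cell> merge(List<Cell> a, List<Cell> b, int limit)
    {
        Map<Integer, Cell> merged = new TreeMap<>();
        for (Cell c : a) mergeCell(merged, c);
        for (Cell c : b) mergeCell(merged, c);
        List<Cell> result = new ArrayList<>();
        for (Cell c : merged.values())
            if (c.v() != null && result.size() < limit)
                result.add(c);                          // tombstones are dropped from the answer
        return result;
    }

    static void mergeCell(Map<Integer, Cell> merged, Cell c)
    {
        Cell existing = merged.get(c.t());
        if (existing == null || c.timestamp() > existing.timestamp())
            merged.put(c.t(), c);                       // highest timestamp wins
    }

    public static void main(String[] args)
    {
        // replica A answered: 0-0, tombstone at t=1, 2-2 (it missed the UPDATE)
        List<Cell> a = List.of(new Cell(0, 0, 1), new Cell(1, null, 4), new Cell(2, 2, 3));
        // replica B answered: 0-0, 1-1 (its LIMIT 2 cut it off before t=2, which it has as v=3)
        List<Cell> b = List.of(new Cell(0, 0, 1), new Cell(1, 1, 2));
        System.out.println(merge(a, b, 2)); // yields 0-0 and 2-2, where 2-3 was the correct answer
    }
}
{code}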




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8577) Values of set types not loading correctly into Pig

2015-01-09 Thread Artem Aliev (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Artem Aliev updated CASSANDRA-8577:
---
Attachment: cassandra-2.1-8577.txt

 Values of set types not loading correctly into Pig
 --

 Key: CASSANDRA-8577
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8577
 Project: Cassandra
  Issue Type: Bug
Reporter: Oksana Danylyshyn
Assignee: Brandon Williams
 Fix For: 2.1.3

 Attachments: cassandra-2.1-8577.txt


 Values of set types are not loading correctly from Cassandra (cql3 table, 
 Native protocol v3) into Pig using CqlNativeStorage. 
 When using Cassandra version 2.1.0 only empty values are loaded, and for 
 newer versions (2.1.1 and 2.1.2) the following error is received: 
 org.apache.cassandra.serializers.MarshalException: Unexpected extraneous 
 bytes after set value
 at 
 org.apache.cassandra.serializers.SetSerializer.deserializeForNativeProtocol(SetSerializer.java:94)
 Steps to reproduce:
 {code}cqlsh:socialdata CREATE TABLE test (
  key varchar PRIMARY KEY,
  tags set<varchar>
);
 cqlsh:socialdata insert into test (key, tags) values ('key', {'Running', 
 'onestep4red', 'running'});
 cqlsh:socialdata select * from test;
  key | tags
 -+---
  key | {'Running', 'onestep4red', 'running'}
 (1 rows){code}
 With version 2.1.0:
 {code}grunt data = load 'cql://socialdata/test' using 
 org.apache.cassandra.hadoop.pig.CqlNativeStorage();
 grunt dump data;
 (key,()){code}
 With version 2.1.2:
 {code}grunt data = load 'cql://socialdata/test' using 
 org.apache.cassandra.hadoop.pig.CqlNativeStorage();
 grunt dump data;
 org.apache.cassandra.serializers.MarshalException: Unexpected extraneous 
 bytes after set value
   at 
 org.apache.cassandra.serializers.SetSerializer.deserializeForNativeProtocol(SetSerializer.java:94)
   at 
 org.apache.cassandra.serializers.SetSerializer.deserializeForNativeProtocol(SetSerializer.java:27)
   at 
 org.apache.cassandra.hadoop.pig.AbstractCassandraStorage.cassandraToObj(AbstractCassandraStorage.java:796)
   at 
 org.apache.cassandra.hadoop.pig.CqlStorage.cqlColumnToObj(CqlStorage.java:195)
   at 
 org.apache.cassandra.hadoop.pig.CqlNativeStorage.getNext(CqlNativeStorage.java:106)
   at 
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:211)
   at 
 org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:532)
   at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
   at 
 org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:212){code}
 Expected result:
 {code}(key,(Running,onestep4red,running)){code}
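
 A hedged, self-contained sketch of why extraneous bytes are reported (illustrative only, not the real SetSerializer): protocol V3 writes collection sizes as 4-byte ints, while a reader hardwired to the old 2-byte format stops early and finds leftover bytes.
{code}
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class CollectionFormatSketch
{
    // V3-style encoding: [int elementCount] then per element [int length][bytes]
    static ByteBuffer serializeV3(List<String> set)
    {
        int size = 4;
        for (String s : set) size += 4 + s.getBytes(StandardCharsets.UTF_8).length;
        ByteBuffer out = ByteBuffer.allocate(size);
        out.putInt(set.size());
        for (String s : set)
        {
            byte[] b = s.getBytes(StandardCharsets.UTF_8);
            out.putInt(b.length);
            out.put(b);
        }
        out.flip();
        return out;
    }

    // V1/V2-style decoding: [short elementCount] then per element [short length][bytes]
    static List<String> deserializeAsV1(ByteBuffer in)
    {
        List<String> result = new ArrayList<>();
        int n = in.getShort();
        for (int i = 0; i < n; i++)
        {
            byte[] b = new byte[in.getShort()];
            in.get(b);
            result.add(new String(b, StandardCharsets.UTF_8));
        }
        if (in.hasRemaining())
            throw new RuntimeException("Unexpected extraneous bytes after set value");
        return result;
    }

    public static void main(String[] args)
    {
        ByteBuffer v3 = serializeV3(List.of("Running", "onestep4red", "running"));
        deserializeAsV1(v3); // misreads the counts and fails like the error above
    }
}
{code}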



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8577) Values of set types not loading correctly into Pig

2015-01-09 Thread Artem Aliev (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14270942#comment-14270942
 ] 

Artem Aliev edited comment on CASSANDRA-8577 at 1/9/15 12:08 PM:
-

to reproduce the bug with unit tests:
1 replace ./build/lib/jars/cassandra-driver-core-2.0.5.jar with 
cassandra-driver-core-2.1.3.jar
2 run pig unit tests 
 ant pig-test -Dtest.name=CqlTableDataTypeTest
{code}
….
   [junit] org.apache.cassandra.serializers.MarshalException: Unexpected 
extraneous bytes after list value
[junit] at 
org.apache.cassandra.serializers.ListSerializer.deserializeForNativeProtocol(ListSerializer.java:104)
[junit] at 
org.apache.cassandra.serializers.ListSerializer.deserializeForNativeProtocol(ListSerializer.java:27)
[junit] at 
org.apache.cassandra.hadoop.pig.AbstractCassandraStorage.cassandraToObj(AbstractCassandraStorage.java:796)
[junit] at 
org.apache.cassandra.hadoop.pig.CqlStorage.cqlColumnToObj(CqlStorage.java:195)
[junit] at 
org.apache.cassandra.hadoop.pig.CqlNativeStorage.getNext(CqlNativeStorage.java:106)
[junit] at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:211)
[junit] at 
org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:532)
[junit] at 
org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
[junit] at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
[junit] at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
[junit] at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
[junit] at 
org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:212)
….
{code}

Cassandra 2.1 ships with java driver 2.0, which uses the V2 native protocol. 
Java driver 2.1 is available and uses the V3 native protocol.
The collection serialisation changed in V3. The current implementation of the pig 
reader has a hardcoded protocol version 1 for deserialisation, as a result of the 
incomplete fix of CASSANDRA-7287.
Version 1 should be used only in the deprecated cql-over-thrift API. 
CqlNativeStorage uses the java driver protocol, so the patch passes the serialisation 
protocol negotiated by the java driver to the deserialiser when CqlNativeStorage 
is used. I also add an optional ‘cassandra.input.native.protocol.version’ 
parameter to force the protocol version, just in case.



was (Author: artem.aliev):
to reproduce the bug with unit tests:
1 replace ./build/lib/jars/cassandra-driver-core-2.0.5.jar with 
cassandra-driver-core-2.0.5.jar
2 run pig unit tests 
 ant pig-test -Dtest.name=CqlTableDataTypeTest
{code}
….
   [junit] org.apache.cassandra.serializers.MarshalException: Unexpected 
extraneous bytes after list value
[junit] at 
org.apache.cassandra.serializers.ListSerializer.deserializeForNativeProtocol(ListSerializer.java:104)
[junit] at 
org.apache.cassandra.serializers.ListSerializer.deserializeForNativeProtocol(ListSerializer.java:27)
[junit] at 
org.apache.cassandra.hadoop.pig.AbstractCassandraStorage.cassandraToObj(AbstractCassandraStorage.java:796)
[junit] at 
org.apache.cassandra.hadoop.pig.CqlStorage.cqlColumnToObj(CqlStorage.java:195)
[junit] at 
org.apache.cassandra.hadoop.pig.CqlNativeStorage.getNext(CqlNativeStorage.java:106)
[junit] at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:211)
[junit] at 
org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:532)
[junit] at 
org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
[junit] at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
[junit] at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
[junit] at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
[junit] at 
org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:212)
….
{code}

Cassandra 2.1 is shipped with java driver 2.0, that used V2 native protocol. 
The java driver 2.1 is available and it use V3 native protocol.
The collection serialisation is changed in V3. Current implementation of pig 
reader has harcoded version 1 for deserialisation, as result of incomplete fix 
of CASSANDRA-7287.
The version 1 should be used in cql-over-thrift deprecated API only. 
CqlNativeStorage use java driver protocol. So the patch passes the negotiated 
by java driver serialisation protocol to deserialiser in case CqlNativeStorage 
is used. I also add optional ‘cassandra.input.native.protocol.version’ 
parameter to force the protocol version, just in case.


 Values of set types not loading correctly into Pig
 --

 Key: CASSANDRA-8577
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8577
 Project: Cassandra
  Issue 

[jira] [Commented] (CASSANDRA-8582) Descriptor.fromFilename seems broken for BIG format

2015-01-09 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14271055#comment-14271055
 ] 

T Jake Luciani commented on CASSANDRA-8582:
---

The difference is that the tempDataDir needs to include the ks and cf 
subdirectories:

{code}
@Test
public void testFromFileNameWithBIGFormat()
{
File dir = new File(tempDataDir.getAbsolutePath() + File.separator + 
ksname + File.separator + cfname);
checkFromFilename(new Descriptor(dir, ksname, cfname, 1, 
Descriptor.Type.TEMP, SSTableFormat.Type.BIG), false);
}
{code}

 Descriptor.fromFilename seems broken for BIG format
 ---

 Key: CASSANDRA-8582
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8582
 Project: Cassandra
  Issue Type: Bug
Reporter: Benjamin Lerer
Assignee: T Jake Luciani

 The problem can be reproduced in {{DescriptorTest}} by adding the following 
 unit test:
 {code}
 @Test
 public void testFromFileNameWithBIGFormat()
 {
 checkFromFilename(new Descriptor(tempDataDir, ksname, cfname, 1, 
 Descriptor.Type.TEMP, SSTableFormat.Type.BIG), false);
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8479) Timeout Exception on Node Failure in Remote Data Center

2015-01-09 Thread Anuj (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14270893#comment-14270893
 ] 

Anuj commented on CASSANDRA-8479:
-

I have attached TRACE level logs. You can find multiple ReadTimeoutExceptions in 
System.log.3. Once we killed Cassandra on one of the nodes in DC2, around 7 
read requests failed for around 17 seconds on DC1 and then everything was back 
to normal. We need to understand why these reads failed when we are using 
LOCAL_QUORUM in our application. Also, in another Cassandra log file, 
System.log.2, we saw java.nio.file.NoSuchFileException. 

We got Hector's HTimeoutException in our application logs during these 17 
seconds. 
Stack Trace from application logs:
com.ericsson.rm.service.voucher.InternalServerException: Internal server error, 
me.prettyprint.hector.api.exceptions.HTimedOutException: TimedOutException()
at 
com.ericsson.rm.voucher.traffic.reservation.cassandra.CassandraReservation.getReservationSlice(CassandraReservation.java:552)
 ~[na:na]
at 
com.ericsson.rm.voucher.traffic.reservation.cassandra.CassandraReservation.lookup(CassandraReservation.java:499)
 ~[na:na]
at 
com.ericsson.rm.voucher.traffic.VoucherTraffic.getReservedOrPendingVoucher(VoucherTraffic.java:764)
 ~[na:na]
at 
com.ericsson.rm.voucher.traffic.VoucherTraffic.commit(VoucherTraffic.java:686) 
~[na:na]
... 6 common frames omitted
Caused by: com.ericsson.rm.service.cassandra.xa.ConnectionException: 
me.prettyprint.hector.api.exceptions.HTimedOutException: TimedOutException()
at 
com.ericsson.rm.cassandra.xa.keyspace.row.KeyedRowQuery.execute(KeyedRowQuery.java:93)
 ~[na:na]
at 
com.ericsson.rm.voucher.traffic.reservation.cassandra.CassandraReservation.getReservationSlice(CassandraReservation.java:548)
 ~[na:na]
... 9 common frames omitted
Caused by: me.prettyprint.hector.api.exceptions.HTimedOutException: 
TimedOutException()
at 
me.prettyprint.cassandra.service.ExceptionsTranslatorImpl.translate(ExceptionsTranslatorImpl.java:42)
 ~[na:na]
at 
me.prettyprint.cassandra.service.KeyspaceServiceImpl$7.execute(KeyspaceServiceImpl.java:286)
 ~[na:na]
at 
me.prettyprint.cassandra.service.KeyspaceServiceImpl$7.execute(KeyspaceServiceImpl.java:269)
 ~[na:na]
at 
me.prettyprint.cassandra.service.Operation.executeAndSetResult(Operation.java:104)
 ~[na:na]
at 
me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:258)
 ~[na:na]
at 
me.prettyprint.cassandra.service.KeyspaceServiceImpl.operateWithFailover(KeyspaceServiceImpl.java:132)
 ~[na:na]
at 
me.prettyprint.cassandra.service.KeyspaceServiceImpl.getSlice(KeyspaceServiceImpl.java:290)
 ~[na:na]
at 
me.prettyprint.cassandra.model.thrift.ThriftSliceQuery$1.doInKeyspace(ThriftSliceQuery.java:53)
 ~[na:na]
at 
me.prettyprint.cassandra.model.thrift.ThriftSliceQuery$1.doInKeyspace(ThriftSliceQuery.java:49)
 ~[na:na]
at 
me.prettyprint.cassandra.model.KeyspaceOperationCallback.doInKeyspaceAndMeasure(KeyspaceOperationCallback.java:20)
 ~[na:na]
at 
me.prettyprint.cassandra.model.ExecutingKeyspace.doExecute(ExecutingKeyspace.java:101)
 ~[na:na]
at 
me.prettyprint.cassandra.model.thrift.ThriftSliceQuery.execute(ThriftSliceQuery.java:48)
 ~[na:na]
at 
com.ericsson.rm.cassandra.xa.keyspace.row.KeyedRowQuery.execute(KeyedRowQuery.java:77)
 ~[na:na]
... 10 common frames omitted
Caused by: org.apache.cassandra.thrift.TimedOutException: null
at 
org.apache.cassandra.thrift.Cassandra$get_slice_result$get_slice_resultStandardScheme.read(Cassandra.java:11504)
 ~[na:na]
at 
org.apache.cassandra.thrift.Cassandra$get_slice_result$get_slice_resultStandardScheme.read(Cassandra.java:11453)
 ~[na:na]
at 
org.apache.cassandra.thrift.Cassandra$get_slice_result.read(Cassandra.java:11379)
 ~[na:na]
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78) 
~[na:na]
at 
org.apache.cassandra.thrift.Cassandra$Client.recv_get_slice(Cassandra.java:653) 
~[na:na]
at 
org.apache.cassandra.thrift.Cassandra$Client.get_slice(Cassandra.java:637) 
~[na:na]
at 
me.prettyprint.cassandra.service.KeyspaceServiceImpl$7.execute(KeyspaceServiceImpl.java:274)
 ~[na:na]
... 21 common frames omitted

Please have a look at https://issues.apache.org/jira/browse/CASSANDRA-8352 for 
more details about the issue.




 Timeout Exception on Node Failure in Remote Data Center
 ---

 Key: CASSANDRA-8479
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8479
 Project: Cassandra
  Issue Type: Bug
  Components: API, Core, Tools
 Environment: Unix, Cassandra 2.0.11
Reporter: Amit Singh 

[jira] [Updated] (CASSANDRA-8479) Timeout Exception on Node Failure in Remote Data Center

2015-01-09 Thread Anuj (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anuj updated CASSANDRA-8479:

Attachment: TRACE_LOGS.zip

Trace level logs for the issue.Please see ReadTimeoutException in System.log.3.

 Timeout Exception on Node Failure in Remote Data Center
 ---

 Key: CASSANDRA-8479
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8479
 Project: Cassandra
  Issue Type: Bug
  Components: API, Core, Tools
 Environment: Unix, Cassandra 2.0.11
Reporter: Amit Singh Chowdhery
Assignee: Ryan McGuire
Priority: Minor
 Attachments: TRACE_LOGS.zip


 Issue Faced :
 We have a Geo-red setup with 2 Data centers having 3 nodes each. When we 
 bring a single Cassandra node down in DC2 by kill -9 Cassandra-pid, 
 reads fail on DC1 with TimedOutException for a brief amount of time (~15-20 
 sec).
 Reference :
 Already a ticket has been opened/resolved and link is provided below :
 https://issues.apache.org/jira/browse/CASSANDRA-8352
 Activity Done as per Resolution Provided:
 Upgraded to Cassandra 2.0.11.
 We have two 3-node clusters in two different DCs and if one or more of the 
 nodes go down in one Data Center, ~5-10% traffic failure is observed on the 
 other.
 CL: LOCAL_QUORUM
 RF=3



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8548) Nodetool Cleanup - java.lang.AssertionError

2015-01-09 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-8548:

Reviewer: Yuki Morishita

 Nodetool Cleanup - java.lang.AssertionError
 ---

 Key: CASSANDRA-8548
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8548
 Project: Cassandra
  Issue Type: Bug
Reporter: Sebastian Estevez
Assignee: Marcus Eriksson
 Fix For: 2.0.12

 Attachments: 0001-make-sure-we-unmark-compacting.patch


 Needed to free up some space on a node but getting the dump below when 
 running nodetool cleanup.
 Tried turning on debug to try to obtain additional details in the logs but 
 nothing gets added to the logs when running cleanup. Added: 
 log4j.logger.org.apache.cassandra.db=DEBUG 
 in log4j-server.properties
 See the stack trace below:
 root@cassandra-019:~# nodetool cleanup
 {code}Error occurred during cleanup
 java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException
 at java.util.concurrent.FutureTask.report(FutureTask.java:122)
 at java.util.concurrent.FutureTask.get(FutureTask.java:188)
 at 
 org.apache.cassandra.db.compaction.CompactionManager.performAllSSTableOperation(CompactionManager.java:228)
 at 
 org.apache.cassandra.db.compaction.CompactionManager.performCleanup(CompactionManager.java:266)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.forceCleanup(ColumnFamilyStore.java:1112)
 at 
 org.apache.cassandra.service.StorageService.forceKeyspaceCleanup(StorageService.java:2162)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
 at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
 at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
 at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
 at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
 at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
 at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
 at sun.reflect.GeneratedMethodAccessor64.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
 at sun.rmi.transport.Transport$1.run(Transport.java:177)
 at sun.rmi.transport.Transport$1.run(Transport.java:174)
 at java.security.AccessController.doPrivileged(Native Method)
 at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
 at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)
 Caused by: java.lang.IllegalArgumentException
 at java.nio.Buffer.limit(Buffer.java:267)
 at 
 org.apache.cassandra.io.compress.CompressedRandomAccessReader.decompressChunk(CompressedRandomAccessReader.java:108)
 at