[jira] [Commented] (CASSANDRA-6311) Add CqlRecordReader to take advantage of native CQL pagination

2014-03-06 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13922162#comment-13922162
 ] 

Piotr Kołaczkowski commented on CASSANDRA-6311:
---

1. ok, I understand; that was a nice-to-have 
3. ok

2:
count is defined in the outer scope and is not local to the Iterator instance. 
Therefore creating two iterators for the same LB policy is going to mess it up:
{noformat}
+return new AbstractIterator<Host>()
+{
+    protected Host computeNext()
+    {
+        count++;
{noformat}
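
For illustration, a minimal sketch of the fix (hypothetical; the signature is simplified from the driver's newQueryPlan): make the counter a field of the anonymous iterator, so each query plan iterates independently:
{noformat}
public Iterator<Host> newQueryPlan(final List<Host> hosts)
{
    return new AbstractIterator<Host>()
    {
        private int count = 0; // per-iterator state, not shared between plans

        protected Host computeNext()
        {
            // Each plan advances its own counter, so two iterators created
            // from the same policy no longer interfere with each other.
            if (count >= hosts.size())
                return endOfData();
            return hosts.get(count++);
        }
    };
}
{noformat}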

A policy should assign a LOCAL distance to nodes that are likely to be 
returned first by newQueryPlan, and it is useless for newQueryPlan to return 
hosts to which it assigns an IGNORED distance. Now that you may return other 
(remote) hosts from newQueryPlan, you should not return IGNORED from distance():

{noformat}
+@Override
+public HostDistance distance(Host host)
+{
+    if (host.getAddress().getHostName().equals(stickHost))
+        return HostDistance.LOCAL;
+    else
+        return HostDistance.IGNORED;
+}
{noformat}
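
Concretely, something along these lines (a sketch only; it assumes REMOTE is the appropriate distance for the non-stick hosts the plan may now return):
{noformat}
@Override
public HostDistance distance(Host host)
{
    // newQueryPlan may fall back to other hosts, so they must not be IGNORED
    if (host.getAddress().getHostName().equals(stickHost))
        return HostDistance.LOCAL;
    else
        return HostDistance.REMOTE;
}
{noformat}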

 Add CqlRecordReader to take advantage of native CQL pagination
 --

 Key: CASSANDRA-6311
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6311
 Project: Cassandra
  Issue Type: New Feature
  Components: Hadoop
Reporter: Alex Liu
Assignee: Alex Liu
 Fix For: 2.0.6

 Attachments: 6311-v3-2.0-branch.txt, 6311-v4.txt, 
 6311-v5-2.0-branch.txt, 6311-v6-2.0-branch.txt, 6331-2.0-branch.txt, 
 6331-v2-2.0-branch.txt


 Since the latest CQL pagination is done and should be more efficient, we need 
 to update CqlPagingRecordReader to use it instead of the custom Thrift 
 paging.



--
This message was sent by Atlassian JIRA
(v6.2#6252)



[1/2] git commit: FBUtilities.singleton() should use the CF comparator

2014-03-06 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 4cf8a8a6c -> 5d67e852e


FBUtilities.singleton() should use the CF comparator

patch by slebresne; reviewed by thobbs for CASSANDRA-6778


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/773fade9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/773fade9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/773fade9

Branch: refs/heads/cassandra-2.1
Commit: 773fade9aee009170c7062d174f2b78211061fce
Parents: 2492308
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Thu Mar 6 08:54:32 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Thu Mar 6 08:56:08 2014 +0100

--
 CHANGES.txt |  1 +
 .../cql3/statements/ColumnGroupMap.java |  4 +-
 .../cql3/statements/SelectStatement.java|  7 +-
 .../org/apache/cassandra/db/SystemKeyspace.java |  2 +-
 .../cassandra/db/filter/NamesQueryFilter.java   |  4 +-
 .../apache/cassandra/db/filter/QueryFilter.java |  8 ---
 .../org/apache/cassandra/utils/FBUtilities.java |  6 +-
 .../apache/cassandra/db/LongKeyspaceTest.java   |  3 +-
 .../unit/org/apache/cassandra/SchemaLoader.java |  3 +-
 .../org/apache/cassandra/config/DefsTest.java   |  7 +-
 .../cassandra/db/CollationControllerTest.java   |  5 +-
 .../cassandra/db/ColumnFamilyStoreTest.java | 67 +---
 .../org/apache/cassandra/db/KeyspaceTest.java   |  7 +-
 .../apache/cassandra/db/ReadMessageTest.java|  4 +-
 .../db/RecoveryManagerTruncateTest.java |  3 +-
 .../apache/cassandra/db/RemoveColumnTest.java   |  3 +-
 .../cassandra/io/sstable/LegacySSTableTest.java |  4 +-
 .../cassandra/tools/SSTableExportTest.java  |  8 ++-
 18 files changed, 102 insertions(+), 44 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/773fade9/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 19cedd8..d697e3f 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -33,6 +33,7 @@
  * Fix UPDATE updating PRIMARY KEY columns implicitly (CASSANDRA-6782)
  * Fix IllegalArgumentException when updating from 1.2 with SuperColumns
(CASSANDRA-6733)
+ * FBUtilities.singleton() should use the CF comparator (CASSANDRA-6778)
 Merged from 1.2:
  * Add CMSClassUnloadingEnabled JVM option (CASSANDRA-6541)
  * Catch memtable flush exceptions during shutdown (CASSANDRA-6735)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/773fade9/src/java/org/apache/cassandra/cql3/statements/ColumnGroupMap.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/ColumnGroupMap.java 
b/src/java/org/apache/cassandra/cql3/statements/ColumnGroupMap.java
index 5c3fcb9..1c9a346 100644
--- a/src/java/org/apache/cassandra/cql3/statements/ColumnGroupMap.java
+++ b/src/java/org/apache/cassandra/cql3/statements/ColumnGroupMap.java
@@ -25,6 +25,7 @@ import java.util.List;
 import java.util.Map;
 
 import org.apache.cassandra.db.Column;
+import org.apache.cassandra.db.marshal.AbstractType;
 import org.apache.cassandra.db.marshal.CompositeType;
 import org.apache.cassandra.utils.Pair;
 
@@ -155,7 +156,8 @@ public class ColumnGroupMap
 {
 for (int i = 0; i < idx; i++)
 {
-if (!c[i].equals(previous[i]))
+AbstractType<?> comp = composite.types.get(i);
+if (comp.compare(c[i], previous[i]) != 0)
 return false;
 }
 return true;
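
Both this hunk and the SelectStatement change below route column-name comparisons through the CF comparator. For context, a hedged reconstruction of the patched FBUtilities.singleton (its body is not shown in this excerpt):
{noformat}
public static <T> SortedSet<T> singleton(T column, Comparator<? super T> comparator)
{
    // A TreeSet built on the CF comparator compares column names with
    // type-aware semantics instead of raw ByteBuffer.equals().
    SortedSet<T> s = new TreeSet<>(comparator);
    s.add(column);
    return s;
}
{noformat}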

http://git-wip-us.apache.org/repos/asf/cassandra/blob/773fade9/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
index 5a9d3d9..100383f 100644
--- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
@@ -717,7 +717,7 @@ public class SelectStatement implements CQLStatement, 
MeasurableForPreparedCache
 {
 if (cfDef.isCompact)
 {
-return FBUtilities.singleton(builder.build());
+return FBUtilities.singleton(builder.build(), cfDef.cfm.comparator);
 }
 else
 {
@@ -994,10 +994,11 @@ public class SelectStatement implements CQLStatement, 
MeasurableForPreparedCache
 }
 else if (sliceRestriction != null)
 {
+Comparator<ByteBuffer> comp = cfDef.cfm.comparator;
 // For dynamic CF, the column could be out of the 

[3/3] git commit: Merge branch 'cassandra-2.1' into trunk

2014-03-06 Thread slebresne
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/800c62f3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/800c62f3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/800c62f3

Branch: refs/heads/trunk
Commit: 800c62f3b24c84e9cdded7630b35bffa7d665a6d
Parents: b173ce2 5d67e85
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Thu Mar 6 10:05:49 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Thu Mar 6 10:05:49 2014 +0100

--
 CHANGES.txt |  1 +
 .../db/composites/AbstractCellNameType.java | 10 +++--
 .../cassandra/db/composites/CellName.java   |  2 +-
 .../db/composites/CompoundDenseCellName.java|  4 +-
 .../db/composites/CompoundSparseCellName.java   |  4 +-
 .../composites/CompoundSparseCellNameType.java  |  2 +-
 .../db/composites/SimpleDenseCellName.java  |  4 +-
 .../db/composites/SimpleSparseCellName.java |  2 +-
 .../db/composites/SimpleSparseCellNameType.java |  2 +-
 .../apache/cassandra/db/filter/ColumnSlice.java |  2 +-
 .../cassandra/db/ColumnFamilyStoreTest.java | 44 
 11 files changed, 62 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/800c62f3/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/800c62f3/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
--



[1/3] git commit: FBUtilities.singleton() should use the CF comparator

2014-03-06 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/trunk b173ce207 -> 800c62f3b


FBUtilities.singleton() should use the CF comparator

patch by slebresne; reviewed by thobbs for CASSANDRA-6778


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/773fade9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/773fade9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/773fade9

Branch: refs/heads/trunk
Commit: 773fade9aee009170c7062d174f2b78211061fce
Parents: 2492308
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Thu Mar 6 08:54:32 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Thu Mar 6 08:56:08 2014 +0100

--
 CHANGES.txt |  1 +
 .../cql3/statements/ColumnGroupMap.java |  4 +-
 .../cql3/statements/SelectStatement.java|  7 +-
 .../org/apache/cassandra/db/SystemKeyspace.java |  2 +-
 .../cassandra/db/filter/NamesQueryFilter.java   |  4 +-
 .../apache/cassandra/db/filter/QueryFilter.java |  8 ---
 .../org/apache/cassandra/utils/FBUtilities.java |  6 +-
 .../apache/cassandra/db/LongKeyspaceTest.java   |  3 +-
 .../unit/org/apache/cassandra/SchemaLoader.java |  3 +-
 .../org/apache/cassandra/config/DefsTest.java   |  7 +-
 .../cassandra/db/CollationControllerTest.java   |  5 +-
 .../cassandra/db/ColumnFamilyStoreTest.java | 67 +---
 .../org/apache/cassandra/db/KeyspaceTest.java   |  7 +-
 .../apache/cassandra/db/ReadMessageTest.java|  4 +-
 .../db/RecoveryManagerTruncateTest.java |  3 +-
 .../apache/cassandra/db/RemoveColumnTest.java   |  3 +-
 .../cassandra/io/sstable/LegacySSTableTest.java |  4 +-
 .../cassandra/tools/SSTableExportTest.java  |  8 ++-
 18 files changed, 102 insertions(+), 44 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/773fade9/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 19cedd8..d697e3f 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -33,6 +33,7 @@
  * Fix UPDATE updating PRIMARY KEY columns implicitly (CASSANDRA-6782)
  * Fix IllegalArgumentException when updating from 1.2 with SuperColumns
(CASSANDRA-6733)
+ * FBUtilities.singleton() should use the CF comparator (CASSANDRA-6778)
 Merged from 1.2:
  * Add CMSClassUnloadingEnabled JVM option (CASSANDRA-6541)
  * Catch memtable flush exceptions during shutdown (CASSANDRA-6735)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/773fade9/src/java/org/apache/cassandra/cql3/statements/ColumnGroupMap.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/ColumnGroupMap.java 
b/src/java/org/apache/cassandra/cql3/statements/ColumnGroupMap.java
index 5c3fcb9..1c9a346 100644
--- a/src/java/org/apache/cassandra/cql3/statements/ColumnGroupMap.java
+++ b/src/java/org/apache/cassandra/cql3/statements/ColumnGroupMap.java
@@ -25,6 +25,7 @@ import java.util.List;
 import java.util.Map;
 
 import org.apache.cassandra.db.Column;
+import org.apache.cassandra.db.marshal.AbstractType;
 import org.apache.cassandra.db.marshal.CompositeType;
 import org.apache.cassandra.utils.Pair;
 
@@ -155,7 +156,8 @@ public class ColumnGroupMap
 {
 for (int i = 0; i < idx; i++)
 {
-if (!c[i].equals(previous[i]))
+AbstractType<?> comp = composite.types.get(i);
+if (comp.compare(c[i], previous[i]) != 0)
 return false;
 }
 return true;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/773fade9/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
index 5a9d3d9..100383f 100644
--- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
@@ -717,7 +717,7 @@ public class SelectStatement implements CQLStatement, 
MeasurableForPreparedCache
 {
 if (cfDef.isCompact)
 {
-return FBUtilities.singleton(builder.build());
+return FBUtilities.singleton(builder.build(), cfDef.cfm.comparator);
 }
 else
 {
@@ -994,10 +994,11 @@ public class SelectStatement implements CQLStatement, 
MeasurableForPreparedCache
 }
 else if (sliceRestriction != null)
 {
+Comparator<ByteBuffer> comp = cfDef.cfm.comparator;
 // For dynamic CF, the column could be out of the 
requested bounds, 

git commit: Fix CQLSSTableWriter.addRow(Map<String, Object>)

2014-03-06 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 773fade9a -> ba95ca0db


Fix CQLSSTableWriter.addRow(Map<String, Object>)

patch by slebresne; reviewed by thobbs for CASSANDRA-6526


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ba95ca0d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ba95ca0d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ba95ca0d

Branch: refs/heads/cassandra-2.0
Commit: ba95ca0db52567b875e5be0d10f8523b706385c5
Parents: 773fade
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Thu Mar 6 10:07:51 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Thu Mar 6 10:07:51 2014 +0100

--
 CHANGES.txt   |  1 +
 .../org/apache/cassandra/io/sstable/CQLSSTableWriter.java | 10 --
 .../apache/cassandra/io/sstable/CQLSSTableWriterTest.java |  9 -
 3 files changed, 17 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ba95ca0d/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index d697e3f..6ef9025 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -34,6 +34,7 @@
  * Fix IllegalArgumentException when updating from 1.2 with SuperColumns
(CASSANDRA-6733)
  * FBUtilities.singleton() should use the CF comparator (CASSANDRA-6778)
+ * Fix CQLSStableWriter.addRow(Map<String, Object>) (CASSANDRA-6526)
 Merged from 1.2:
  * Add CMSClassUnloadingEnabled JVM option (CASSANDRA-6541)
  * Catch memtable flush exceptions during shutdown (CASSANDRA-6735)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ba95ca0d/src/java/org/apache/cassandra/io/sstable/CQLSSTableWriter.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/CQLSSTableWriter.java 
b/src/java/org/apache/cassandra/io/sstable/CQLSSTableWriter.java
index 86348aa..a7ece70 100644
--- a/src/java/org/apache/cassandra/io/sstable/CQLSSTableWriter.java
+++ b/src/java/org/apache/cassandra/io/sstable/CQLSSTableWriter.java
@@ -141,6 +141,11 @@ public class CQLSSTableWriter
  * keys are the names of the columns to add instead of taking a list of the
  * values in the order of the insert statement used during construction of
  * this write.
+ * <p>
+ * Please note that the column names in the map keys must be in lowercase unless
+ * the declared column name is a
+ * <a href="http://cassandra.apache.org/doc/cql3/CQL.html#identifiers">case-sensitive quoted identifier</a>
+ * (in which case the map key must use the exact case of the column).
  *
 * @param values a map of column name to column values representing the new
 * row to add. Note that if a column is not part of the map, its value will
@@ -152,11 +157,12 @@ public class CQLSSTableWriter
 public CQLSSTableWriter addRow(Map<String, Object> values)
 throws InvalidRequestException, IOException
 {
-int size = Math.min(values.size(), boundNames.size());
+int size = boundNames.size();
 List<ByteBuffer> rawValues = new ArrayList<>(size);
 for (int i = 0; i < size; i++) {
 ColumnSpecification spec = boundNames.get(i);
-rawValues.add(((AbstractType)spec.type).decompose(values.get(spec.name.toString())));
+Object value = values.get(spec.name.toString());
+rawValues.add(value == null ? null : ((AbstractType)spec.type).decompose(value));
 }
 return rawAddRow(rawValues);
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ba95ca0d/test/unit/org/apache/cassandra/io/sstable/CQLSSTableWriterTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/io/sstable/CQLSSTableWriterTest.java 
b/test/unit/org/apache/cassandra/io/sstable/CQLSSTableWriterTest.java
index 0e38e16..bdc4b94 100644
--- a/test/unit/org/apache/cassandra/io/sstable/CQLSSTableWriterTest.java
+++ b/test/unit/org/apache/cassandra/io/sstable/CQLSSTableWriterTest.java
@@ -20,6 +20,7 @@ package org.apache.cassandra.io.sstable;
 import java.io.File;
 import java.util.Iterator;
 
+import com.google.common.collect.ImmutableMap;
 import com.google.common.io.Files;
 import org.junit.BeforeClass;
 import org.junit.Test;
@@ -72,6 +73,7 @@ public class CQLSSTableWriterTest
 writer.addRow(0, "test1", 24);
 writer.addRow(1, "test2", null);
 writer.addRow(2, "test3", 42);
+writer.addRow(ImmutableMap.<String, Object>of("k", 3, "v2", 12));
 writer.close();
 
 SSTableLoader loader = new SSTableLoader(dataDir, new 
SSTableLoader.Client()
@@ -92,7 +94,7 @@ public class CQLSSTableWriterTest
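
To illustrate the fixed behavior, a hedged usage sketch (keyspace, table, and directory are hypothetical; builder methods as in the standard CQLSSTableWriter API):
{noformat}
CQLSSTableWriter writer = CQLSSTableWriter.builder()
                          .inDirectory("/tmp/ks/t")
                          .forTable("CREATE TABLE ks.t (k int PRIMARY KEY, v1 text, v2 int)")
                          .using("INSERT INTO ks.t (k, v1, v2) VALUES (?, ?, ?)")
                          .build();
// v1 is absent from the map: after the fix it is bound as null (per the
// javadoc above) instead of the row failing. Map keys are lowercase unless
// the column was declared as a case-sensitive quoted identifier.
writer.addRow(ImmutableMap.<String, Object>of("k", 3, "v2", 12));
writer.close();
{noformat}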
 

[2/2] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-03-06 Thread slebresne
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/872eef3d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/872eef3d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/872eef3d

Branch: refs/heads/cassandra-2.1
Commit: 872eef3dc6822ef20c137016d92d2fe962f62101
Parents: 5d67e85 ba95ca0
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Thu Mar 6 10:15:19 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Thu Mar 6 10:15:19 2014 +0100

--
 CHANGES.txt   |  1 +
 .../org/apache/cassandra/io/sstable/CQLSSTableWriter.java | 10 --
 .../apache/cassandra/io/sstable/CQLSSTableWriterTest.java |  9 -
 3 files changed, 17 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/872eef3d/CHANGES.txt
--
diff --cc CHANGES.txt
index 098ecbc,6ef9025..b933bad
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -45,44 -34,24 +45,45 @@@ Merged from 2.0
   * Fix IllegalArgumentException when updating from 1.2 with SuperColumns
 (CASSANDRA-6733)
   * FBUtilities.singleton() should use the CF comparator (CASSANDRA-6778)
+  * Fix CQLSStableWriter.addRow(Map<String, Object>) (CASSANDRA-6526)
 -Merged from 1.2:
 - * Add CMSClassUnloadingEnabled JVM option (CASSANDRA-6541)
 - * Catch memtable flush exceptions during shutdown (CASSANDRA-6735)
 - * Fix broken streams when replacing with same IP (CASSANDRA-6622)
 - * Fix upgradesstables NPE for non-CF-based indexes (CASSANDRA-6645)
 - * Fix partition and range deletes not triggering flush (CASSANDRA-6655)
 - * Fix mean cells and mean row size per sstable calculations (CASSANDRA-6667)
 - * Compact hints after partial replay to clean out tombstones (CASSANDRA-)
 - * Log USING TTL/TIMESTAMP in a counter update warning (CASSANDRA-6649)
 - * Don't exchange schema between nodes with different versions 
(CASSANDRA-6695)
 - * Use real node messaging versions for schema exchange decisions 
(CASSANDRA-6700)
 - * IN on the last clustering columns + ORDER BY DESC yield no results 
(CASSANDRA-6701)
 - * Fix SecondaryIndexManager#deleteFromIndexes() (CASSANDRA-6711)
 - * Fix snapshot repair not snapshotting coordinator itself (CASSANDRA-6713)
 - * Support negative timestamps for CQL3 dates in query string (CASSANDRA-6718)
 - * Avoid NPEs when receiving table changes for an unknown keyspace 
(CASSANDRA-5631)
 - * Fix bootstrapping when there is no schema (CASSANDRA-6685)
 +
 +
 +2.1.0-beta1
 + * Add flush directory distinct from compaction directories (CASSANDRA-6357)
 + * Require JNA by default (CASSANDRA-6575)
 + * add listsnapshots command to nodetool (CASSANDRA-5742)
 + * Introduce AtomicBTreeColumns (CASSANDRA-6271, 6692)
 + * Multithreaded commitlog (CASSANDRA-3578)
 + * allocate fixed index summary memory pool and resample cold index summaries 
 +   to use less memory (CASSANDRA-5519)
 + * Removed multithreaded compaction (CASSANDRA-6142)
 + * Parallelize fetching rows for low-cardinality indexes (CASSANDRA-1337)
 + * change logging from log4j to logback (CASSANDRA-5883)
 + * switch to LZ4 compression for internode communication (CASSANDRA-5887)
 + * Stop using Thrift-generated Index* classes internally (CASSANDRA-5971)
 + * Remove 1.2 network compatibility code (CASSANDRA-5960)
 + * Remove leveled json manifest migration code (CASSANDRA-5996)
 + * Remove CFDefinition (CASSANDRA-6253)
 + * Use AtomicIntegerFieldUpdater in RefCountedMemory (CASSANDRA-6278)
 + * User-defined types for CQL3 (CASSANDRA-5590)
 + * Use of o.a.c.metrics in nodetool (CASSANDRA-5871, 6406)
 + * Batch read from OTC's queue and cleanup (CASSANDRA-1632)
 + * Secondary index support for collections (CASSANDRA-4511, 6383)
 + * SSTable metadata(Stats.db) format change (CASSANDRA-6356)
 + * Push composites support in the storage engine
 +   (CASSANDRA-5417, CASSANDRA-6520)
 + * Add snapshot space used to cfstats (CASSANDRA-6231)
 + * Add cardinality estimator for key count estimation (CASSANDRA-5906)
 + * CF id is changed to be non-deterministic. Data dir/key cache are created
 +   uniquely for CF id (CASSANDRA-5202)
 + * New counters implementation (CASSANDRA-6504)
 + * Replace UnsortedColumns, EmptyColumns, TreeMapBackedSortedColumns with new
 +   ArrayBackedSortedColumns (CASSANDRA-6630, CASSANDRA-6662, CASSANDRA-6690)
 + * Add option to use row cache with a given amount of rows (CASSANDRA-5357)
 + * Avoid repairing already repaired data (CASSANDRA-5351)
 + * Reject counter updates with USING TTL/TIMESTAMP (CASSANDRA-6649)
 + * Replace index_interval with min/max_index_interval (CASSANDRA-6379)
 + * Lift limitation that order by columns 

[1/3] git commit: Fix CQLSSTableWriter.addRow(Map<String, Object>)

2014-03-06 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/trunk 800c62f3b -> 35e5721f8


Fix CQLSSTableWriter.addRow(Map<String, Object>)

patch by slebresne; reviewed by thobbs for CASSANDRA-6526


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ba95ca0d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ba95ca0d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ba95ca0d

Branch: refs/heads/trunk
Commit: ba95ca0db52567b875e5be0d10f8523b706385c5
Parents: 773fade
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Thu Mar 6 10:07:51 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Thu Mar 6 10:07:51 2014 +0100

--
 CHANGES.txt   |  1 +
 .../org/apache/cassandra/io/sstable/CQLSSTableWriter.java | 10 --
 .../apache/cassandra/io/sstable/CQLSSTableWriterTest.java |  9 -
 3 files changed, 17 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ba95ca0d/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index d697e3f..6ef9025 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -34,6 +34,7 @@
  * Fix IllegalArgumentException when updating from 1.2 with SuperColumns
(CASSANDRA-6733)
  * FBUtilities.singleton() should use the CF comparator (CASSANDRA-6778)
+ * Fix CQLSStableWriter.addRow(Map<String, Object>) (CASSANDRA-6526)
 Merged from 1.2:
  * Add CMSClassUnloadingEnabled JVM option (CASSANDRA-6541)
  * Catch memtable flush exceptions during shutdown (CASSANDRA-6735)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ba95ca0d/src/java/org/apache/cassandra/io/sstable/CQLSSTableWriter.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/CQLSSTableWriter.java 
b/src/java/org/apache/cassandra/io/sstable/CQLSSTableWriter.java
index 86348aa..a7ece70 100644
--- a/src/java/org/apache/cassandra/io/sstable/CQLSSTableWriter.java
+++ b/src/java/org/apache/cassandra/io/sstable/CQLSSTableWriter.java
@@ -141,6 +141,11 @@ public class CQLSSTableWriter
  * keys are the names of the columns to add instead of taking a list of the
  * values in the order of the insert statement used during construction of
  * this write.
+ * <p>
+ * Please note that the column names in the map keys must be in lowercase unless
+ * the declared column name is a
+ * <a href="http://cassandra.apache.org/doc/cql3/CQL.html#identifiers">case-sensitive quoted identifier</a>
+ * (in which case the map key must use the exact case of the column).
  *
 * @param values a map of column name to column values representing the new
 * row to add. Note that if a column is not part of the map, its value will
@@ -152,11 +157,12 @@ public class CQLSSTableWriter
 public CQLSSTableWriter addRow(Map<String, Object> values)
 throws InvalidRequestException, IOException
 {
-int size = Math.min(values.size(), boundNames.size());
+int size = boundNames.size();
 List<ByteBuffer> rawValues = new ArrayList<>(size);
 for (int i = 0; i < size; i++) {
 ColumnSpecification spec = boundNames.get(i);
-rawValues.add(((AbstractType)spec.type).decompose(values.get(spec.name.toString())));
+Object value = values.get(spec.name.toString());
+rawValues.add(value == null ? null : ((AbstractType)spec.type).decompose(value));
 }
 return rawAddRow(rawValues);
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ba95ca0d/test/unit/org/apache/cassandra/io/sstable/CQLSSTableWriterTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/io/sstable/CQLSSTableWriterTest.java 
b/test/unit/org/apache/cassandra/io/sstable/CQLSSTableWriterTest.java
index 0e38e16..bdc4b94 100644
--- a/test/unit/org/apache/cassandra/io/sstable/CQLSSTableWriterTest.java
+++ b/test/unit/org/apache/cassandra/io/sstable/CQLSSTableWriterTest.java
@@ -20,6 +20,7 @@ package org.apache.cassandra.io.sstable;
 import java.io.File;
 import java.util.Iterator;
 
+import com.google.common.collect.ImmutableMap;
 import com.google.common.io.Files;
 import org.junit.BeforeClass;
 import org.junit.Test;
@@ -72,6 +73,7 @@ public class CQLSSTableWriterTest
 writer.addRow(0, "test1", 24);
 writer.addRow(1, "test2", null);
 writer.addRow(2, "test3", 42);
+writer.addRow(ImmutableMap.<String, Object>of("k", 3, "v2", 12));
 writer.close();
 
 SSTableLoader loader = new SSTableLoader(dataDir, new 
SSTableLoader.Client()
@@ -92,7 +94,7 @@ public class CQLSSTableWriterTest
 loader.stream().get();
 
 

[3/3] git commit: Merge branch 'cassandra-2.1' into trunk

2014-03-06 Thread slebresne
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/35e5721f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/35e5721f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/35e5721f

Branch: refs/heads/trunk
Commit: 35e5721f8139a68aa5bba185c862395290f242e0
Parents: 800c62f 872eef3
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Thu Mar 6 10:15:28 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Thu Mar 6 10:15:28 2014 +0100

--
 CHANGES.txt   |  1 +
 .../org/apache/cassandra/io/sstable/CQLSSTableWriter.java | 10 --
 .../apache/cassandra/io/sstable/CQLSSTableWriterTest.java |  9 -
 3 files changed, 17 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/35e5721f/CHANGES.txt
--



[1/2] git commit: Fix CQLSSTableWriter.addRow(Map<String, Object>)

2014-03-06 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 5d67e852e -> 872eef3dc


Fix CQLSSTableWriter.addRow(Map<String, Object>)

patch by slebresne; reviewed by thobbs for CASSANDRA-6526


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ba95ca0d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ba95ca0d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ba95ca0d

Branch: refs/heads/cassandra-2.1
Commit: ba95ca0db52567b875e5be0d10f8523b706385c5
Parents: 773fade
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Thu Mar 6 10:07:51 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Thu Mar 6 10:07:51 2014 +0100

--
 CHANGES.txt   |  1 +
 .../org/apache/cassandra/io/sstable/CQLSSTableWriter.java | 10 --
 .../apache/cassandra/io/sstable/CQLSSTableWriterTest.java |  9 -
 3 files changed, 17 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ba95ca0d/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index d697e3f..6ef9025 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -34,6 +34,7 @@
  * Fix IllegalArgumentException when updating from 1.2 with SuperColumns
(CASSANDRA-6733)
  * FBUtilities.singleton() should use the CF comparator (CASSANDRA-6778)
+ * Fix CQLSStableWriter.addRow(Map<String, Object>) (CASSANDRA-6526)
 Merged from 1.2:
  * Add CMSClassUnloadingEnabled JVM option (CASSANDRA-6541)
  * Catch memtable flush exceptions during shutdown (CASSANDRA-6735)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ba95ca0d/src/java/org/apache/cassandra/io/sstable/CQLSSTableWriter.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/CQLSSTableWriter.java 
b/src/java/org/apache/cassandra/io/sstable/CQLSSTableWriter.java
index 86348aa..a7ece70 100644
--- a/src/java/org/apache/cassandra/io/sstable/CQLSSTableWriter.java
+++ b/src/java/org/apache/cassandra/io/sstable/CQLSSTableWriter.java
@@ -141,6 +141,11 @@ public class CQLSSTableWriter
  * keys are the names of the columns to add instead of taking a list of the
  * values in the order of the insert statement used during construction of
  * this write.
+ * <p>
+ * Please note that the column names in the map keys must be in lowercase unless
+ * the declared column name is a
+ * <a href="http://cassandra.apache.org/doc/cql3/CQL.html#identifiers">case-sensitive quoted identifier</a>
+ * (in which case the map key must use the exact case of the column).
  *
 * @param values a map of column name to column values representing the new
 * row to add. Note that if a column is not part of the map, its value will
@@ -152,11 +157,12 @@ public class CQLSSTableWriter
 public CQLSSTableWriter addRow(Map<String, Object> values)
 throws InvalidRequestException, IOException
 {
-int size = Math.min(values.size(), boundNames.size());
+int size = boundNames.size();
 List<ByteBuffer> rawValues = new ArrayList<>(size);
 for (int i = 0; i < size; i++) {
 ColumnSpecification spec = boundNames.get(i);
-rawValues.add(((AbstractType)spec.type).decompose(values.get(spec.name.toString())));
+Object value = values.get(spec.name.toString());
+rawValues.add(value == null ? null : ((AbstractType)spec.type).decompose(value));
 }
 return rawAddRow(rawValues);
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ba95ca0d/test/unit/org/apache/cassandra/io/sstable/CQLSSTableWriterTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/io/sstable/CQLSSTableWriterTest.java 
b/test/unit/org/apache/cassandra/io/sstable/CQLSSTableWriterTest.java
index 0e38e16..bdc4b94 100644
--- a/test/unit/org/apache/cassandra/io/sstable/CQLSSTableWriterTest.java
+++ b/test/unit/org/apache/cassandra/io/sstable/CQLSSTableWriterTest.java
@@ -20,6 +20,7 @@ package org.apache.cassandra.io.sstable;
 import java.io.File;
 import java.util.Iterator;
 
+import com.google.common.collect.ImmutableMap;
 import com.google.common.io.Files;
 import org.junit.BeforeClass;
 import org.junit.Test;
@@ -72,6 +73,7 @@ public class CQLSSTableWriterTest
 writer.addRow(0, "test1", 24);
 writer.addRow(1, "test2", null);
 writer.addRow(2, "test3", 42);
+writer.addRow(ImmutableMap.<String, Object>of("k", 3, "v2", 12));
 writer.close();
 
 SSTableLoader loader = new SSTableLoader(dataDir, new 
SSTableLoader.Client()
@@ -92,7 +94,7 @@ public class CQLSSTableWriterTest
 

[2/3] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-03-06 Thread slebresne
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/872eef3d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/872eef3d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/872eef3d

Branch: refs/heads/trunk
Commit: 872eef3dc6822ef20c137016d92d2fe962f62101
Parents: 5d67e85 ba95ca0
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Thu Mar 6 10:15:19 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Thu Mar 6 10:15:19 2014 +0100

--
 CHANGES.txt   |  1 +
 .../org/apache/cassandra/io/sstable/CQLSSTableWriter.java | 10 --
 .../apache/cassandra/io/sstable/CQLSSTableWriterTest.java |  9 -
 3 files changed, 17 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/872eef3d/CHANGES.txt
--
diff --cc CHANGES.txt
index 098ecbc,6ef9025..b933bad
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -45,44 -34,24 +45,45 @@@ Merged from 2.0
   * Fix IllegalArgumentException when updating from 1.2 with SuperColumns
 (CASSANDRA-6733)
   * FBUtilities.singleton() should use the CF comparator (CASSANDRA-6778)
+  * Fix CQLSStableWriter.addRow(Map<String, Object>) (CASSANDRA-6526)
 -Merged from 1.2:
 - * Add CMSClassUnloadingEnabled JVM option (CASSANDRA-6541)
 - * Catch memtable flush exceptions during shutdown (CASSANDRA-6735)
 - * Fix broken streams when replacing with same IP (CASSANDRA-6622)
 - * Fix upgradesstables NPE for non-CF-based indexes (CASSANDRA-6645)
 - * Fix partition and range deletes not triggering flush (CASSANDRA-6655)
 - * Fix mean cells and mean row size per sstable calculations (CASSANDRA-6667)
 - * Compact hints after partial replay to clean out tombstones (CASSANDRA-)
 - * Log USING TTL/TIMESTAMP in a counter update warning (CASSANDRA-6649)
 - * Don't exchange schema between nodes with different versions 
(CASSANDRA-6695)
 - * Use real node messaging versions for schema exchange decisions 
(CASSANDRA-6700)
 - * IN on the last clustering columns + ORDER BY DESC yield no results 
(CASSANDRA-6701)
 - * Fix SecondaryIndexManager#deleteFromIndexes() (CASSANDRA-6711)
 - * Fix snapshot repair not snapshotting coordinator itself (CASSANDRA-6713)
 - * Support negative timestamps for CQL3 dates in query string (CASSANDRA-6718)
 - * Avoid NPEs when receiving table changes for an unknown keyspace 
(CASSANDRA-5631)
 - * Fix bootstrapping when there is no schema (CASSANDRA-6685)
 +
 +
 +2.1.0-beta1
 + * Add flush directory distinct from compaction directories (CASSANDRA-6357)
 + * Require JNA by default (CASSANDRA-6575)
 + * add listsnapshots command to nodetool (CASSANDRA-5742)
 + * Introduce AtomicBTreeColumns (CASSANDRA-6271, 6692)
 + * Multithreaded commitlog (CASSANDRA-3578)
 + * allocate fixed index summary memory pool and resample cold index summaries 
 +   to use less memory (CASSANDRA-5519)
 + * Removed multithreaded compaction (CASSANDRA-6142)
 + * Parallelize fetching rows for low-cardinality indexes (CASSANDRA-1337)
 + * change logging from log4j to logback (CASSANDRA-5883)
 + * switch to LZ4 compression for internode communication (CASSANDRA-5887)
 + * Stop using Thrift-generated Index* classes internally (CASSANDRA-5971)
 + * Remove 1.2 network compatibility code (CASSANDRA-5960)
 + * Remove leveled json manifest migration code (CASSANDRA-5996)
 + * Remove CFDefinition (CASSANDRA-6253)
 + * Use AtomicIntegerFieldUpdater in RefCountedMemory (CASSANDRA-6278)
 + * User-defined types for CQL3 (CASSANDRA-5590)
 + * Use of o.a.c.metrics in nodetool (CASSANDRA-5871, 6406)
 + * Batch read from OTC's queue and cleanup (CASSANDRA-1632)
 + * Secondary index support for collections (CASSANDRA-4511, 6383)
 + * SSTable metadata(Stats.db) format change (CASSANDRA-6356)
 + * Push composites support in the storage engine
 +   (CASSANDRA-5417, CASSANDRA-6520)
 + * Add snapshot space used to cfstats (CASSANDRA-6231)
 + * Add cardinality estimator for key count estimation (CASSANDRA-5906)
 + * CF id is changed to be non-deterministic. Data dir/key cache are created
 +   uniquely for CF id (CASSANDRA-5202)
 + * New counters implementation (CASSANDRA-6504)
 + * Replace UnsortedColumns, EmptyColumns, TreeMapBackedSortedColumns with new
 +   ArrayBackedSortedColumns (CASSANDRA-6630, CASSANDRA-6662, CASSANDRA-6690)
 + * Add option to use row cache with a given amount of rows (CASSANDRA-5357)
 + * Avoid repairing already repaired data (CASSANDRA-5351)
 + * Reject counter updates with USING TTL/TIMESTAMP (CASSANDRA-6649)
 + * Replace index_interval with min/max_index_interval (CASSANDRA-6379)
 + * Lift limitation that order by columns must be 

git commit: Fix timestamp scaling issue for 6623

2014-03-06 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 ba95ca0db -> 5ef53e6f7


Fix timestamp scaling issue for 6623


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5ef53e6f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5ef53e6f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5ef53e6f

Branch: refs/heads/cassandra-2.0
Commit: 5ef53e6f7cc64585e93a84311f58fc62b781379d
Parents: ba95ca0
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Thu Mar 6 10:26:20 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Thu Mar 6 10:26:20 2014 +0100

--
 .../org/apache/cassandra/cql3/statements/CQL3CasConditions.java  | 3 ++-
 .../apache/cassandra/cql3/statements/ModificationStatement.java  | 4 +---
 2 files changed, 3 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5ef53e6f/src/java/org/apache/cassandra/cql3/statements/CQL3CasConditions.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/CQL3CasConditions.java 
b/src/java/org/apache/cassandra/cql3/statements/CQL3CasConditions.java
index 194ff0c..668f98f 100644
--- a/src/java/org/apache/cassandra/cql3/statements/CQL3CasConditions.java
+++ b/src/java/org/apache/cassandra/cql3/statements/CQL3CasConditions.java
@@ -43,7 +43,8 @@ public class CQL3CasConditions implements CASConditions
 public CQL3CasConditions(CFMetaData cfm, long now)
 {
 this.cfm = cfm;
-this.now = now;
+// We will use now for Column.isLive() which expects milliseconds but the argument is in microseconds.
+this.now = now / 1000;
 this.conditions = new TreeMap(cfm.comparator);
 }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5ef53e6f/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
index ecefcb9..154c01c 100644
--- a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
@@ -534,9 +534,7 @@ public abstract class ModificationStatement implements 
CQLStatement, MeasurableF
 
 ByteBuffer key = keys.get(0);
 
-// It's cleaner to use the query timestamp below, but it's in seconds while the conditions expects microseconds, so just
-// put it back in millis (we don't really lose precision because the ultimate consumer, Column.isLive, re-divide it).
-CQL3CasConditions conditions = new CQL3CasConditions(cfm, queryState.getTimestamp() * 1000);
+CQL3CasConditions conditions = new CQL3CasConditions(cfm, queryState.getTimestamp());
 ColumnNameBuilder prefix = createClusteringPrefixBuilder(variables);
 ColumnFamily updates = UnsortedColumns.factory.create(cfm);
 addUpdatesAndConditions(key, prefix, updates, conditions, variables, getTimestamp(queryState.getTimestamp(), variables));
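
In unit terms (illustrative values only): QueryState.getTimestamp() returns microseconds, while the ultimate consumer, Column.isLive(now), expects milliseconds, hence the division by 1000 inside CQL3CasConditions:
{noformat}
long nowMicros = queryState.getTimestamp(); // e.g. 1394097600000000 (µs)
long nowMillis = nowMicros / 1000;          // 1394097600000 (ms), what Column.isLive() expects
{noformat}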



[2/2] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-03-06 Thread slebresne
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2cb811a2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2cb811a2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2cb811a2

Branch: refs/heads/cassandra-2.1
Commit: 2cb811a2cbc310db709b68ac289541b2f424c046
Parents: 872eef3 5ef53e6
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Thu Mar 6 10:27:48 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Thu Mar 6 10:27:48 2014 +0100

--
 .../org/apache/cassandra/cql3/statements/CQL3CasConditions.java  | 3 ++-
 .../apache/cassandra/cql3/statements/ModificationStatement.java  | 4 +---
 2 files changed, 3 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2cb811a2/src/java/org/apache/cassandra/cql3/statements/CQL3CasConditions.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2cb811a2/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
--
diff --cc 
src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
index f90293b,154c01c..160eb74
--- a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
@@@ -488,11 -534,9 +488,9 @@@ public abstract class ModificationState
  
  ByteBuffer key = keys.get(0);
  
- // It's cleaner to use the query timestamp below, but it's in seconds while the conditions expects microseconds, so just
- // put it back in millis (we don't really lose precision because the ultimate consumer, Column.isLive, re-divide it).
- CQL3CasConditions conditions = new CQL3CasConditions(cfm, queryState.getTimestamp() * 1000);
+ CQL3CasConditions conditions = new CQL3CasConditions(cfm, queryState.getTimestamp());
 -ColumnNameBuilder prefix = createClusteringPrefixBuilder(variables);
 -ColumnFamily updates = UnsortedColumns.factory.create(cfm);
 +Composite prefix = createClusteringPrefix(variables);
 +ColumnFamily updates = ArrayBackedSortedColumns.factory.create(cfm);
  addUpdatesAndConditions(key, prefix, updates, conditions, variables, getTimestamp(queryState.getTimestamp(), variables));
  
  ColumnFamily result = StorageProxy.cas(keyspace(),



[1/2] git commit: Fix timestamp scaling issue for 6623

2014-03-06 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 872eef3dc -> 2cb811a2c


Fix timestamp scaling issue for 6623


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5ef53e6f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5ef53e6f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5ef53e6f

Branch: refs/heads/cassandra-2.1
Commit: 5ef53e6f7cc64585e93a84311f58fc62b781379d
Parents: ba95ca0
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Thu Mar 6 10:26:20 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Thu Mar 6 10:26:20 2014 +0100

--
 .../org/apache/cassandra/cql3/statements/CQL3CasConditions.java  | 3 ++-
 .../apache/cassandra/cql3/statements/ModificationStatement.java  | 4 +---
 2 files changed, 3 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5ef53e6f/src/java/org/apache/cassandra/cql3/statements/CQL3CasConditions.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/CQL3CasConditions.java 
b/src/java/org/apache/cassandra/cql3/statements/CQL3CasConditions.java
index 194ff0c..668f98f 100644
--- a/src/java/org/apache/cassandra/cql3/statements/CQL3CasConditions.java
+++ b/src/java/org/apache/cassandra/cql3/statements/CQL3CasConditions.java
@@ -43,7 +43,8 @@ public class CQL3CasConditions implements CASConditions
 public CQL3CasConditions(CFMetaData cfm, long now)
 {
 this.cfm = cfm;
-this.now = now;
+// We will use now for Column.isLive() which expects milliseconds but the argument is in microseconds.
+this.now = now / 1000;
 this.conditions = new TreeMap(cfm.comparator);
 }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5ef53e6f/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
index ecefcb9..154c01c 100644
--- a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
@@ -534,9 +534,7 @@ public abstract class ModificationStatement implements 
CQLStatement, MeasurableF
 
 ByteBuffer key = keys.get(0);
 
-// It's cleaner to use the query timestamp below, but it's in seconds while the conditions expects microseconds, so just
-// put it back in millis (we don't really lose precision because the ultimate consumer, Column.isLive, re-divide it).
-CQL3CasConditions conditions = new CQL3CasConditions(cfm, queryState.getTimestamp() * 1000);
+CQL3CasConditions conditions = new CQL3CasConditions(cfm, queryState.getTimestamp());
 ColumnNameBuilder prefix = createClusteringPrefixBuilder(variables);
 ColumnFamily updates = UnsortedColumns.factory.create(cfm);
 addUpdatesAndConditions(key, prefix, updates, conditions, variables, getTimestamp(queryState.getTimestamp(), variables));



[2/4] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-03-06 Thread slebresne
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2cb811a2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2cb811a2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2cb811a2

Branch: refs/heads/trunk
Commit: 2cb811a2cbc310db709b68ac289541b2f424c046
Parents: 872eef3 5ef53e6
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Thu Mar 6 10:27:48 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Thu Mar 6 10:27:48 2014 +0100

--
 .../org/apache/cassandra/cql3/statements/CQL3CasConditions.java  | 3 ++-
 .../apache/cassandra/cql3/statements/ModificationStatement.java  | 4 +---
 2 files changed, 3 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2cb811a2/src/java/org/apache/cassandra/cql3/statements/CQL3CasConditions.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2cb811a2/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
--
diff --cc 
src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
index f90293b,154c01c..160eb74
--- a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
@@@ -488,11 -534,9 +488,9 @@@ public abstract class ModificationState
  
  ByteBuffer key = keys.get(0);
  
- // It's cleaner to use the query timestamp below, but it's in seconds while the conditions expects microseconds, so just
- // put it back in millis (we don't really lose precision because the ultimate consumer, Column.isLive, re-divide it).
- CQL3CasConditions conditions = new CQL3CasConditions(cfm, queryState.getTimestamp() * 1000);
+ CQL3CasConditions conditions = new CQL3CasConditions(cfm, queryState.getTimestamp());
 -ColumnNameBuilder prefix = createClusteringPrefixBuilder(variables);
 -ColumnFamily updates = UnsortedColumns.factory.create(cfm);
 +Composite prefix = createClusteringPrefix(variables);
 +ColumnFamily updates = ArrayBackedSortedColumns.factory.create(cfm);
  addUpdatesAndConditions(key, prefix, updates, conditions, variables, getTimestamp(queryState.getTimestamp(), variables));
  
  ColumnFamily result = StorageProxy.cas(keyspace(),



git commit: Fix comment post-merge

2014-03-06 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 2cb811a2c -> a052a912e


Fix comment post-merge


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a052a912
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a052a912
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a052a912

Branch: refs/heads/cassandra-2.1
Commit: a052a912ef6feb84b1421d23d0db4df0dbbb1e58
Parents: 2cb811a
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Thu Mar 6 10:28:31 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Thu Mar 6 10:28:31 2014 +0100

--
 .../org/apache/cassandra/cql3/statements/CQL3CasConditions.java| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a052a912/src/java/org/apache/cassandra/cql3/statements/CQL3CasConditions.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/CQL3CasConditions.java 
b/src/java/org/apache/cassandra/cql3/statements/CQL3CasConditions.java
index 1749806..21b1de6 100644
--- a/src/java/org/apache/cassandra/cql3/statements/CQL3CasConditions.java
+++ b/src/java/org/apache/cassandra/cql3/statements/CQL3CasConditions.java
@@ -44,7 +44,7 @@ public class CQL3CasConditions implements CASConditions
 public CQL3CasConditions(CFMetaData cfm, long now)
 {
 this.cfm = cfm;
-// We will use now for Column.isLive() which expects milliseconds but the argument is in microseconds.
+// We will use now for Cell.isLive() which expects milliseconds but the argument is in microseconds.
 this.now = now / 1000;
 this.conditions = new TreeMap(cfm.comparator);
 }



[4/4] git commit: Merge branch 'cassandra-2.1' into trunk

2014-03-06 Thread slebresne
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e30d6dca
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e30d6dca
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e30d6dca

Branch: refs/heads/trunk
Commit: e30d6dca52cc33ab04f9296cb6961afdbf1e9c2b
Parents: 35e5721 a052a91
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Thu Mar 6 10:28:48 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Thu Mar 6 10:28:48 2014 +0100

--
 .../org/apache/cassandra/cql3/statements/CQL3CasConditions.java  | 3 ++-
 .../apache/cassandra/cql3/statements/ModificationStatement.java  | 4 +---
 2 files changed, 3 insertions(+), 4 deletions(-)
--




[3/4] git commit: Fix comment post-merge

2014-03-06 Thread slebresne
Fix comment post-merge


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a052a912
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a052a912
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a052a912

Branch: refs/heads/trunk
Commit: a052a912ef6feb84b1421d23d0db4df0dbbb1e58
Parents: 2cb811a
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Thu Mar 6 10:28:31 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Thu Mar 6 10:28:31 2014 +0100

--
 .../org/apache/cassandra/cql3/statements/CQL3CasConditions.java| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a052a912/src/java/org/apache/cassandra/cql3/statements/CQL3CasConditions.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/CQL3CasConditions.java 
b/src/java/org/apache/cassandra/cql3/statements/CQL3CasConditions.java
index 1749806..21b1de6 100644
--- a/src/java/org/apache/cassandra/cql3/statements/CQL3CasConditions.java
+++ b/src/java/org/apache/cassandra/cql3/statements/CQL3CasConditions.java
@@ -44,7 +44,7 @@ public class CQL3CasConditions implements CASConditions
 public CQL3CasConditions(CFMetaData cfm, long now)
 {
 this.cfm = cfm;
-// We will use now for Column.isLive() which expects milliseconds but the argument is in microseconds.
+// We will use now for Cell.isLive() which expects milliseconds but the argument is in microseconds.
 this.now = now / 1000;
 this.conditions = new TreeMap(cfm.comparator);
 }



[1/4] git commit: Fix timestamp scaling issue for 6623

2014-03-06 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/trunk 35e5721f8 -> e30d6dca5


Fix timestamp scaling issue for 6623


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5ef53e6f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5ef53e6f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5ef53e6f

Branch: refs/heads/trunk
Commit: 5ef53e6f7cc64585e93a84311f58fc62b781379d
Parents: ba95ca0
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Thu Mar 6 10:26:20 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Thu Mar 6 10:26:20 2014 +0100

--
 .../org/apache/cassandra/cql3/statements/CQL3CasConditions.java  | 3 ++-
 .../apache/cassandra/cql3/statements/ModificationStatement.java  | 4 +---
 2 files changed, 3 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5ef53e6f/src/java/org/apache/cassandra/cql3/statements/CQL3CasConditions.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/CQL3CasConditions.java 
b/src/java/org/apache/cassandra/cql3/statements/CQL3CasConditions.java
index 194ff0c..668f98f 100644
--- a/src/java/org/apache/cassandra/cql3/statements/CQL3CasConditions.java
+++ b/src/java/org/apache/cassandra/cql3/statements/CQL3CasConditions.java
@@ -43,7 +43,8 @@ public class CQL3CasConditions implements CASConditions
 public CQL3CasConditions(CFMetaData cfm, long now)
 {
 this.cfm = cfm;
-this.now = now;
+// We will use now for Column.isLive() which expects milliseconds but the argument is in microseconds.
+this.now = now / 1000;
 this.conditions = new TreeMap(cfm.comparator);
 }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5ef53e6f/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
index ecefcb9..154c01c 100644
--- a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
@@ -534,9 +534,7 @@ public abstract class ModificationStatement implements 
CQLStatement, MeasurableF
 
 ByteBuffer key = keys.get(0);
 
-// It's cleaner to use the query timestamp below, but it's in seconds while the conditions expects microseconds, so just
-// put it back in millis (we don't really lose precision because the ultimate consumer, Column.isLive, re-divide it).
-CQL3CasConditions conditions = new CQL3CasConditions(cfm, queryState.getTimestamp() * 1000);
+CQL3CasConditions conditions = new CQL3CasConditions(cfm, queryState.getTimestamp());
 ColumnNameBuilder prefix = createClusteringPrefixBuilder(variables);
 ColumnFamily updates = UnsortedColumns.factory.create(cfm);
addUpdatesAndConditions(key, prefix, updates, conditions, variables, getTimestamp(queryState.getTimestamp(), variables));
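
For readers following the unit juggling: after this commit the scaling happens 
once, inside CQL3CasConditions. A minimal sketch of the flow (values are 
illustrative; names follow the diff above):
{code}
public class TimestampScaling
{
    public static void main(String[] args)
    {
        // QueryState.getTimestamp() is in microseconds; Column.isLive() wants millis.
        long queryMicros = 1_394_100_000_000_000L; // an illustrative 2014 timestamp
        long nowMillis   = queryMicros / 1000;     // scaling now done once, in CQL3CasConditions

        // The removed code multiplied by 1000 on the way in and stored it as-is,
        // leaving "now" off by a factor of 10^6 from the expected milliseconds.
        System.out.println(nowMillis); // 1394100000000
    }
}
{code}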



[jira] [Commented] (CASSANDRA-6623) Null in a cell caused by expired TTL does not work with IF clause (in CQL3)

2014-03-06 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13922211#comment-13922211
 ] 

Sylvain Lebresne commented on CASSANDRA-6623:
-

That's correct, pushed that fix, thanks (the worst part is that there is a 
dtest for this, but it was skipped, so I activated it too).

 Null in a cell caused by expired TTL does not work with IF clause (in CQL3)
 ---

 Key: CASSANDRA-6623
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6623
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
 Environment: One cluster with two nodes on a Linux and a Windows 
 system. cqlsh 4.1.0 | Cassandra 2.0.4 | CQL spec 3.1.1 | Thrift protocol 
 19.39.0. CQL3 Column Family
Reporter: Csaba Seres
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 2.0.6

 Attachments: 
 0001-Fix-for-expiring-columns-used-in-cas-conditions.patch, 6623.txt


 The IF onecell=null clause does not work if onecell got its null value from an 
 expired TTL. If onecell is explicitly updated with a null value (UPDATE), then 
 IF onecell=null works fine.
 This bug is not present when the table is created with the COMPACT STORAGE 
 directive.
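
 A hypothetical minimal repro along these lines (table and column names 
 invented for illustration, not taken from the report):
 {code}
 -- assuming: CREATE TABLE t (k int PRIMARY KEY, onecell text);
 UPDATE t USING TTL 5 SET onecell = 'x' WHERE k = 1;
 -- wait more than 5 seconds for the TTL to expire, then:
 UPDATE t SET onecell = 'y' WHERE k = 1 IF onecell = null;
 -- expected to apply, but before the fix the condition failed to match
 {code}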



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6801) INSERT with IF NOT EXISTS fails when row is an expired ttl

2014-03-06 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13922246#comment-13922246
 ] 

Sylvain Lebresne commented on CASSANDRA-6801:
-

bq. Then add some data and flush it to ensure the sstables exist (didn't 
reproduce in memtables for some reason).

More detail on those steps would help, because I'm currently not able to 
reproduce this. A dtest (something along the lines of 
https://github.com/riptano/cassandra-dtest/blob/master/cql_tests.py#L3778 for 
instance) would really be perfect, but short of that, it would be nice to 
reduce the repro steps to the minimum that still reproduces. For instance, why 
2 DCs with 3 nodes each? Are you unable to reproduce with a single node (which 
would be extremely surprising, and therefore important information to have)? 
Also, have you made sure to wait long enough before testing the 2nd insert 
(typically, waiting 5 seconds would make it likely for the insert to fail, but 
for a following select to return nothing)? Ideally, testing against the 
current 2.0 branch would be nice, just to make sure CASSANDRA-6623 didn't 
already fix this (though I did try to reproduce against 2.0.5 with no luck, so 
I'm not saying that's the case).

 INSERT with IF NOT EXISTS fails when row is an expired ttl
 --

 Key: CASSANDRA-6801
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6801
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Adam Hattrell

 I ran this on a 2 DC cluster with 3 nodes each.  
 CREATE KEYSPACE test WITH replication = {
 'class': 'NetworkTopologyStrategy',
 'DC1': '3',
 'DC2': '3'
 };
 CREATE TABLE clusterlock (
 name text,
 hostname text,
 lockid text,
 PRIMARY KEY (name)
 ) ;
 Then add some data and flush it to ensure the sstables exist (didn't 
 reproduce in memtables for some reason).
 Then
  insert into clusterlock (name, lockid, hostname) values  ( 'adam', 'tt', 
 '111') IF NOT EXISTS USING TTL 5;
 Wait for ttl to be reached then try again:
  insert into clusterlock (name, lockid, hostname) values  ( 'adam', 'tt', 
 '111') IF NOT EXISTS USING TTL 5;
  
 [applied]
 ---
  False
 select * shows no rows in table.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6689) Partially Off Heap Memtables

2014-03-06 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13922259#comment-13922259
 ] 

Benedict commented on CASSANDRA-6689:
-

bq. Well, it seems like you never operated a real Cassandra cluster, did you? 

You seem to have interpreted my query as an attack on the veracity of your 
statement. It was not. I only wanted more specific facts that could be used to 
target a solution, and preferably a new ticket on which to discuss them.

This discussion has all the hallmarks of becoming unproductive, so after this 
I do not think I have anything useful to add, and will leave it to the 
committers to decide whether or not to include this work or to wait for you to 
produce your alternative:

# Any scheme that copies data will inherently incur larger GC pressure, as we 
then copy for memtable reads as well as disk reads. Object overhead is in fact 
_larger_ than the payload for many workloads, so even if we have arenas this 
effect is not eliminated or even appreciably ameliorated.
# Temporary reader space (and hence your approach) is *not* predictable: it is 
not proportional to the number of readers, but to the number and size of 
columns the readers read. In fact it is larger than this, as we probably have 
to copy anything we *might* want to use (given the way the code is 
encapsulated, this is what I do currently when copying on-heap - anything else 
would introduce notable complexity), not just columns that end up in the result 
set.
# We appear to be in agreement that your approach has higher costs associated 
with it. Further, copying potentially GB/s of (randomly located) data around 
destroys the CPU cache, reduces peak memory bandwidth by inducing strobes, 
consumes bandwidth directly, wastes CPU cycles waiting for the random lookups; 
all to no good purpose. We should be reducing these costs, not introducing more.
# It is simply not clear, despite your assertion of clarity, how you would 
reclaim any freed memory without separate GC (what else is GC but this 
reclamation?), whatever you want to call it, when it will be interspersed with 
non-freed memory, nor how you would guard the non-atomic copying (ref-counting, 
OpOrder, Lock: what?). Without this information it is not clear to me that it 
would be any simpler either.
# Your approach is currently (still poorly defined) vaporware.

Some further advantages specific to my approach:
# Pauseless operation, so improved predictability
# Absolute bound on memory utilisation, that can be rolled out to other data 
structures, further improving overall performance predictability
# Lock-freedom and low overhead, so we move closer to being able to answer 
queries directly from the messaging threads themselves, improving latency and 
throughput

An alternative approach needs, IMO, to demonstrate a clear superiority to the 
patch that is already available, especially when it will incur further work to 
produce. It is not clear to me that your solution is superior in any regard, 
nor any simpler. It also seems to be demonstrably less predictable and more 
costly, so I struggle to see how it could be considered preferable.

Also: 
bq. would that keep memtable around longer than expected

I'm not sure why you suppose this would be so. We can already happily reclaim 
any subportion of a region or memtable, so there is no reason to think this 
would be necessary, even if they resided in the same structure.

bq. there seems to be a slowdown once the off-heap feature is enabled, which 
is no surprise once you look at how much complexity it actually adds.

This is certainly addressable. I have performance tested the off-heap feature 
by itself somewhat; it competes with Java GC for throughput (beating it as the 
number of live objects increases) whilst being _pauseless_, so the complexity 
you refer to is no slouch and is highly unlikely to be the culprit. There are 
issues with the way we manage IO for direct byte buffers, but I have addressed 
these in CASSANDRA-6781.


 Partially Off Heap Memtables
 

 Key: CASSANDRA-6689
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6689
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Benedict
Assignee: Benedict
 Fix For: 2.1 beta2

 Attachments: CASSANDRA-6689-small-changes.patch


 Move the contents of ByteBuffers off-heap for records written to a memtable.
 (See comments for details)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-6804) Consolidate on-disk and NativeCell layouts so that reads from disk require less memory

2014-03-06 Thread Benedict (JIRA)
Benedict created CASSANDRA-6804:
---

 Summary: Consolidate on-disk and NativeCell layouts so that reads 
from disk require less memory
 Key: CASSANDRA-6804
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6804
 Project: Cassandra
  Issue Type: Improvement
Reporter: Benedict
Assignee: Benedict
 Fix For: 3.0


If the on-disk Cell representation were the same as we use for NativeCell, we 
could easily allocate a NativeCell instead of a BufferCell, immediately 
reducing the amount of garbage generated on reads. With further work we may 
also be able to reach a zero-copy allocation as well, reducing further the read 
costs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-4733) Last written key >= current key exception when streaming

2014-03-06 Thread Serj Veras (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13922300#comment-13922300
 ] 

Serj Veras commented on CASSANDRA-4733:
---

I have the same error using Cassandra 2.0.5.22 (DataStax package). 
I use 3 DCs with 3 nodes in each of them. The error occurred during a massive 
insert workload in one of the DCs. The target CF has replication factor 2 in 
each of the DCs.

{code}
ERROR [CompactionExecutor:26] 2014-03-06 10:18:45,760 CassandraDaemon.java 
(line 196) Exception in thread Thread[CompactionExecutor:26,1,main]
java.lang.RuntimeException: Last written key DecoratedKey(-3718191715883699976, 
36633732653439302d303730632d343139352d386461342d333736383265393965316335) >= 
current key DecoratedKey(-7629226534008815744, 
62306334323161342d663662362d346364632d383965382d306563343832376639316536) 
writing into /data/db/cassandra/data/Sync/sy/Sync-sy-tmp-jb-41-Data.db
at 
org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:142)
at 
org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:165)
at 
org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:160)
at 
org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
at 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
at 
org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:197)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
{code}
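
For context, the invariant being violated is the sstable writer's requirement 
that partition keys arrive in strictly increasing order. Roughly (a paraphrase 
of SSTableWriter.beforeAppend, not the exact project code):
{code}
// Paraphrase only: an sstable is sorted, so appended partition keys must be
// strictly increasing; lastWrittenKey tracks the previous append.
private DecoratedKey beforeAppend(DecoratedKey decoratedKey)
{
    if (lastWrittenKey != null && lastWrittenKey.compareTo(decoratedKey) >= 0)
        throw new RuntimeException("Last written key " + lastWrittenKey +
                                   " >= current key " + decoratedKey +
                                   " writing into " + getFilename());
    return decoratedKey;
}
{code}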

Here is the state of my cluster after the error occurred. DC3 is the 
destination of the workload writes.
{code}
Datacenter: DC1
==
Address     Rack  Status  State   Load      Owns    Token
                                                    3074457345618258602
10.0.0.163  RAC1  Up      Normal  14.43 GB  33.33%  -9223372036854775808
10.0.0.166  RAC0  Up      Normal  14.41 GB  33.33%  -3074457345618258603
10.0.0.167  RAC2  Up      Normal  14.33 GB  33.33%  3074457345618258602

Datacenter: DC2
==
Address     Rack  Status  State   Load      Owns    Token
                                                    3074457345618258603
10.0.1.145  RAC0  Up      Normal  14.46 GB  0.00%   -9223372036854775807
10.0.1.147  RAC1  Up      Normal  14.39 GB  0.00%   -3074457345618258602
10.0.1.149  RAC2  Up      Normal  14.43 GB  0.00%   3074457345618258603

Datacenter: DC3
==
Address     Rack  Status  State   Load      Owns    Token
                                                    3074457345618258604
10.0.2.47   RAC0  Down    Normal  12.84 GB  0.00%   -9223372036854775806
10.0.2.49   RAC1  Down    Normal  13.69 GB  0.00%   -3074457345618258601
10.0.2.51   RAC2  Down    Normal  12.34 GB  0.00%   3074457345618258604
{code} 

 Last written key >= current key exception when streaming
 

 Key: CASSANDRA-4733
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4733
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0 beta 1
Reporter: Brandon Williams
Assignee: Yuki Morishita
 Fix For: 1.2.0 beta 2


 {noformat}
 ERROR 16:52:56,260 Exception in thread Thread[Streaming to 
 /10.179.111.137:1,5,main]
 java.lang.RuntimeException: java.io.IOException: Connection reset by peer
 at 

[jira] [Comment Edited] (CASSANDRA-4733) Last written key >= current key exception when streaming

2014-03-06 Thread Serj Veras (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13922300#comment-13922300
 ] 

Serj Veras edited comment on CASSANDRA-4733 at 3/6/14 10:57 AM:


I have the same error using Cassandra 2.0.5.22 (DataStax package). 
I use 3 DCs with 3 nodes in each of them. The error occurred during a massive 
insert workload in one of the DCs. The target CF has replication factor 2 in 
each of the DCs.

{code}
ERROR [CompactionExecutor:26] 2014-03-06 10:18:45,760 CassandraDaemon.java 
(line 196) Exception in thread Thread[CompactionExecutor:26,1,main]
java.lang.RuntimeException: Last written key DecoratedKey(-3718191715883699976, 
36633732653439302d303730632d343139352d386461342d333736383265393965316335) >= 
current key DecoratedKey(-7629226534008815744, 
62306334323161342d663662362d346364632d383965382d306563343832376639316536) 
writing into /data/db/cassandra/data/Sync/sy/Sync-sy-tmp-jb-41-Data.db
at 
org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:142)
at 
org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:165)
at 
org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:160)
at 
org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
at 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
at 
org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:197)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
{code}

Here is the state of my cluster after the error occurred. DC3 is the 
destination of the workload writes.
{code}
Datacenter: DC1
==
Address     Rack  Status  State   Load      Owns    Token
                                                    3074457345618258602
10.0.0.163  RAC1  Up      Normal  14.43 GB  33.33%  -9223372036854775808
10.0.0.166  RAC0  Up      Normal  14.41 GB  33.33%  -3074457345618258603
10.0.0.167  RAC2  Up      Normal  14.33 GB  33.33%  3074457345618258602

Datacenter: DC2
==
Address     Rack  Status  State   Load      Owns    Token
                                                    3074457345618258603
10.0.1.145  RAC0  Up      Normal  14.46 GB  0.00%   -9223372036854775807
10.0.1.147  RAC1  Up      Normal  14.39 GB  0.00%   -3074457345618258602
10.0.1.149  RAC2  Up      Normal  14.43 GB  0.00%   3074457345618258603

Datacenter: DC3
==
Address     Rack  Status  State   Load      Owns    Token
                                                    3074457345618258604
10.0.2.47   RAC0  Down    Normal  12.84 GB  0.00%   -9223372036854775806
10.0.2.49   RAC1  Down    Normal  13.69 GB  0.00%   -3074457345618258601
10.0.2.51   RAC2  Down    Normal  12.34 GB  0.00%   3074457345618258604
{code} 


was (Author: sivikt):
I have the same error using Cassandra 2.0.5.22 (DataStax package). 
I use 3 DCs with 3 nodes in each of them. The error occurred during a massive 
insert workload in one of the DCs. The target CF has replication factor 2 in 
each of the DCs.

{code}
ERROR [CompactionExecutor:26] 2014-03-06 10:18:45,760 CassandraDaemon.java 
(line 196) Exception in thread Thread[CompactionExecutor:26,1,main]
java.lang.RuntimeException: Last written key DecoratedKey(-3718191715883699976, 
36633732653439302d303730632d343139352d386461342d333736383265393965316335) >= 
current key DecoratedKey(-7629226534008815744, 
62306334323161342d663662362d346364632d383965382d306563343832376639316536) 
writing into /data/db/cassandra/data/Sync/sy/Sync-sy-tmp-jb-41-Data.db
at 

[jira] [Commented] (CASSANDRA-6801) INSERT with IF NOT EXISTS fails when row is an expired ttl

2014-03-06 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13922308#comment-13922308
 ] 

Sam Tunnicliffe commented on CASSANDRA-6801:


I was able to reproduce this on a single node using the schema detailed above 
(but with SimpleStrategy). A flush is also unnecessary; the problem manifests 
without it. The additional commit added for CASSANDRA-6623 (5ef53e6f7) fixes 
this.

 INSERT with IF NOT EXISTS fails when row is an expired ttl
 --

 Key: CASSANDRA-6801
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6801
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Adam Hattrell

 I ran this on a 2 DC cluster with 3 nodes each.  
 CREATE KEYSPACE test WITH replication = {
 'class': 'NetworkTopologyStrategy',
 'DC1': '3',
 'DC2': '3'
 };
 CREATE TABLE clusterlock (
 name text,
 hostname text,
 lockid text,
 PRIMARY KEY (name)
 ) ;
 Then add some data and flush it to ensure the sstables exist (didn't 
 reproduce in memtables for some reason).
 Then
  insert into clusterlock (name, lockid, hostname) values  ( 'adam', 'tt', 
 '111') IF NOT EXISTS USING TTL 5;
 Wait for ttl to be reached then try again:
  insert into clusterlock (name, lockid, hostname) values  ( 'adam', 'tt', 
 '111') IF NOT EXISTS USING TTL 5;
  
 [applied]
 ---
  False
 select * shows no rows in table.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (CASSANDRA-4733) Last written key >= current key exception when streaming

2014-03-06 Thread Serj Veras (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13922300#comment-13922300
 ] 

Serj Veras edited comment on CASSANDRA-4733 at 3/6/14 11:10 AM:


I have the same error using Cassandra 2.0.5.22 (DataStax package). 
I use 3 DCs with 3 nodes in each of them. The error occurred during a massive 
insert workload in one of the DCs. The target CF has replication factor 2 in 
each of the DCs.
I attached my Cassandra settings as Serj_Veras_cassandra.yaml.

{code}
ERROR [CompactionExecutor:26] 2014-03-06 10:18:45,760 CassandraDaemon.java 
(line 196) Exception in thread Thread[CompactionExecutor:26,1,main]
java.lang.RuntimeException: Last written key DecoratedKey(-3718191715883699976, 
36633732653439302d303730632d343139352d386461342d333736383265393965316335) >= 
current key DecoratedKey(-7629226534008815744, 
62306334323161342d663662362d346364632d383965382d306563343832376639316536) 
writing into /data/db/cassandra/data/Sync/sy/Sync-sy-tmp-jb-41-Data.db
at 
org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:142)
at 
org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:165)
at 
org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:160)
at 
org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
at 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
at 
org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:197)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
{code}

Here is the state of my cluster after the error occurred. DC3 is the 
destination of the workload writes.
{code}
Datacenter: DC1
==
Address     Rack  Status  State   Load      Owns    Token
                                                    3074457345618258602
10.0.0.163  RAC1  Up      Normal  14.43 GB  33.33%  -9223372036854775808
10.0.0.166  RAC0  Up      Normal  14.41 GB  33.33%  -3074457345618258603
10.0.0.167  RAC2  Up      Normal  14.33 GB  33.33%  3074457345618258602

Datacenter: DC2
==
Address     Rack  Status  State   Load      Owns    Token
                                                    3074457345618258603
10.0.1.145  RAC0  Up      Normal  14.46 GB  0.00%   -9223372036854775807
10.0.1.147  RAC1  Up      Normal  14.39 GB  0.00%   -3074457345618258602
10.0.1.149  RAC2  Up      Normal  14.43 GB  0.00%   3074457345618258603

Datacenter: DC3
==
Address     Rack  Status  State   Load      Owns    Token
                                                    3074457345618258604
10.0.2.47   RAC0  Down    Normal  12.84 GB  0.00%   -9223372036854775806
10.0.2.49   RAC1  Down    Normal  13.69 GB  0.00%   -3074457345618258601
10.0.2.51   RAC2  Down    Normal  12.34 GB  0.00%   3074457345618258604
{code} 


was (Author: sivikt):
I have the same error using Cassandra 2.0.5.22 (DataStax package). 
I use 3 DCs with 3 nodes in each of them. The error occurred during a massive 
insert workload in one of the DCs. The target CF has replication factor 2 in 
each of the DCs.

{code}
ERROR [CompactionExecutor:26] 2014-03-06 10:18:45,760 CassandraDaemon.java 
(line 196) Exception in thread Thread[CompactionExecutor:26,1,main]
java.lang.RuntimeException: Last written key DecoratedKey(-3718191715883699976, 
36633732653439302d303730632d343139352d386461342d333736383265393965316335) >= 
current key DecoratedKey(-7629226534008815744, 
62306334323161342d663662362d346364632d383965382d306563343832376639316536) 
writing into /data/db/cassandra/data/Sync/sy/Sync-sy-tmp-jb-41-Data.db
at 

[jira] [Updated] (CASSANDRA-4733) Last written key >= current key exception when streaming

2014-03-06 Thread Serj Veras (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Serj Veras updated CASSANDRA-4733:
--

Attachment: Serj_Veras_cassandra.yaml

 Last written key >= current key exception when streaming
 

 Key: CASSANDRA-4733
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4733
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0 beta 1
Reporter: Brandon Williams
Assignee: Yuki Morishita
 Fix For: 1.2.0 beta 2

 Attachments: Serj_Veras_cassandra.yaml


 {noformat}
 ERROR 16:52:56,260 Exception in thread Thread[Streaming to 
 /10.179.111.137:1,5,main]
 java.lang.RuntimeException: java.io.IOException: Connection reset by peer
 at com.google.common.base.Throwables.propagate(Throwables.java:160)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: java.io.IOException: Connection reset by peer
 at sun.nio.ch.FileDispatcher.write0(Native Method)
 at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:29)
 at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:72)
 at sun.nio.ch.IOUtil.write(IOUtil.java:43)
 at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:334)
 at java.nio.channels.Channels.writeFullyImpl(Channels.java:59)
 at java.nio.channels.Channels.writeFully(Channels.java:81)
 at java.nio.channels.Channels.access$000(Channels.java:47)
 at java.nio.channels.Channels$1.write(Channels.java:155)
 at 
 com.ning.compress.lzf.ChunkEncoder.encodeAndWriteChunk(ChunkEncoder.java:133)
 at 
 com.ning.compress.lzf.LZFOutputStream.writeCompressedBlock(LZFOutputStream.java:203)
 at 
 com.ning.compress.lzf.LZFOutputStream.write(LZFOutputStream.java:97)
 at 
 org.apache.cassandra.streaming.FileStreamTask.write(FileStreamTask.java:218)
 at 
 org.apache.cassandra.streaming.FileStreamTask.stream(FileStreamTask.java:164)
 at 
 org.apache.cassandra.streaming.FileStreamTask.runMayThrow(FileStreamTask.java:91)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
 ... 3 more
 ERROR 16:53:03,951 Exception in thread Thread[Thread-11,5,main]
 java.lang.RuntimeException: Last written key 
 DecoratedKey(113424593524874987650593774422007331058, 3036303936343535) >= 
 current key DecoratedKey(59229538317742990547810678738983628664, 
 3036313133373139) writing into 
 /var/lib/cassandra/data/Keyspace1-Standard1-tmp-ia-95-Data.db
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:132)
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.appendFromStream(SSTableWriter.java:208)
 at 
 org.apache.cassandra.streaming.IncomingStreamReader.streamIn(IncomingStreamReader.java:164)
 at 
 org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:107)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.stream(IncomingTcpConnection.java:220)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.handleStream(IncomingTcpConnection.java:165)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:65)
 {noformat}
 I didn't do anything fancy here, just inserted about 6M keys at rf=2, then 
 ran repair and got this.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (CASSANDRA-6801) INSERT with IF NOT EXISTS fails when row is an expired ttl

2014-03-06 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-6801.
-

Resolution: Duplicate

Ok, well, let's close this as a duplicate of CASSANDRA-6623. If it turns out 
someone is still able to reproduce on the current 2.0 branch, feel free to 
re-open with the exact steps.

 INSERT with IF NOT EXISTS fails when row is an expired ttl
 --

 Key: CASSANDRA-6801
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6801
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Adam Hattrell

 I ran this on a 2 DC cluster with 3 nodes each.  
 CREATE KEYSPACE test WITH replication = {
 'class': 'NetworkTopologyStrategy',
 'DC1': '3',
 'DC2': '3'
 };
 CREATE TABLE clusterlock (
 name text,
 hostname text,
 lockid text,
 PRIMARY KEY (name)
 ) ;
 Then add some data and flush it to ensure the sstables exist (didn't 
 reproduce in memtables for some reason).
 Then
  insert into clusterlock (name, lockid, hostname) values  ( 'adam', 'tt', 
 '111') IF NOT EXISTS USING TTL 5;
 Wait for ttl to be reached then try again:
  insert into clusterlock (name, lockid, hostname) values  ( 'adam', 'tt', 
 '111') IF NOT EXISTS USING TTL 5;
  
 [applied]
 ---
  False
 select * shows no rows in table.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-4733) Last written key >= current key exception when streaming

2014-03-06 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13922433#comment-13922433
 ] 

Marcus Eriksson commented on CASSANDRA-4733:


[~sivikt] I guess you are seeing CASSANDRA-6285

 Last written key >= current key exception when streaming
 

 Key: CASSANDRA-4733
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4733
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0 beta 1
Reporter: Brandon Williams
Assignee: Yuki Morishita
 Fix For: 1.2.0 beta 2

 Attachments: Serj_Veras_cassandra.yaml


 {noformat}
 ERROR 16:52:56,260 Exception in thread Thread[Streaming to 
 /10.179.111.137:1,5,main]
 java.lang.RuntimeException: java.io.IOException: Connection reset by peer
 at com.google.common.base.Throwables.propagate(Throwables.java:160)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: java.io.IOException: Connection reset by peer
 at sun.nio.ch.FileDispatcher.write0(Native Method)
 at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:29)
 at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:72)
 at sun.nio.ch.IOUtil.write(IOUtil.java:43)
 at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:334)
 at java.nio.channels.Channels.writeFullyImpl(Channels.java:59)
 at java.nio.channels.Channels.writeFully(Channels.java:81)
 at java.nio.channels.Channels.access$000(Channels.java:47)
 at java.nio.channels.Channels$1.write(Channels.java:155)
 at 
 com.ning.compress.lzf.ChunkEncoder.encodeAndWriteChunk(ChunkEncoder.java:133)
 at 
 com.ning.compress.lzf.LZFOutputStream.writeCompressedBlock(LZFOutputStream.java:203)
 at 
 com.ning.compress.lzf.LZFOutputStream.write(LZFOutputStream.java:97)
 at 
 org.apache.cassandra.streaming.FileStreamTask.write(FileStreamTask.java:218)
 at 
 org.apache.cassandra.streaming.FileStreamTask.stream(FileStreamTask.java:164)
 at 
 org.apache.cassandra.streaming.FileStreamTask.runMayThrow(FileStreamTask.java:91)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
 ... 3 more
 ERROR 16:53:03,951 Exception in thread Thread[Thread-11,5,main]
 java.lang.RuntimeException: Last written key 
 DecoratedKey(113424593524874987650593774422007331058, 3036303936343535) >= 
 current key DecoratedKey(59229538317742990547810678738983628664, 
 3036313133373139) writing into 
 /var/lib/cassandra/data/Keyspace1-Standard1-tmp-ia-95-Data.db
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:132)
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.appendFromStream(SSTableWriter.java:208)
 at 
 org.apache.cassandra.streaming.IncomingStreamReader.streamIn(IncomingStreamReader.java:164)
 at 
 org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:107)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.stream(IncomingTcpConnection.java:220)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.handleStream(IncomingTcpConnection.java:165)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:65)
 {noformat}
 I didn't do anything fancy here, just inserted about 6M keys at rf=2, then 
 ran repair and got this.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-6805) When something goes wrong the `nodetool` command prints the whole Java stacktrace instead of a simple error message to stderr

2014-03-06 Thread JIRA
Ondřej Černoš created CASSANDRA-6805:


 Summary: When something goes wrong the `nodetool` command prints 
the whole Java stacktrace instead of a simple error message to stderr
 Key: CASSANDRA-6805
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6805
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Ondřej Černoš
Priority: Minor


{noformat}
$ nodetool snapshot XXX -t YYY
Requested creating snapshot for: XXX
Exception in thread "main" java.io.IOException: Table XXX does not exist
at 
org.apache.cassandra.service.StorageService.getValidTable(StorageService.java:2267)
at 
org.apache.cassandra.service.StorageService.takeSnapshot(StorageService.java:)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
at 
com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
at 
com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
at 
javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
at 
javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
at 
javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
at 
javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
at 
javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
at sun.rmi.transport.Transport$1.run(Transport.java:177)
at sun.rmi.transport.Transport$1.run(Transport.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
at 
sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
{noformat}

Please change the `nodetool` command so that it does not print the stacktrace 
by default; it makes using it from other scripts a PITA. You could add a 
`--debug` parameter to print the stacktrace if the user really wants it.
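
Something along these lines would do (a sketch only, with a hypothetical 
runCommand dispatch standing in for the existing command handling; not the 
actual NodeCmd code):
{code}
import java.util.Arrays;

public class NodeToolMain
{
    public static void main(String[] args)
    {
        boolean debug = Arrays.asList(args).contains("--debug");
        try
        {
            runCommand(args); // hypothetical dispatch into the real commands
        }
        catch (Exception e)
        {
            // one-line error for scripts; full trace only on demand
            System.err.println("nodetool: " + e.getMessage());
            if (debug)
                e.printStackTrace();
            System.exit(1);
        }
    }

    private static void runCommand(String[] args) throws Exception
    {
        // stand-in that fails like the snapshot example above
        throw new java.io.IOException("Table XXX does not exist");
    }
}
{code}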




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-4733) Last written key >= current key exception when streaming

2014-03-06 Thread Serj Veras (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13922515#comment-13922515
 ] 

Serj Veras commented on CASSANDRA-4733:
---

Marcus Eriksson, yes, thank you. 
It's worth saying that with the thread-per-client model everything is OK, so 
I changed the rpc server type to sync. 
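
For reference, that workaround is a one-line cassandra.yaml change (sketch; 
this switches off the hsha server implicated in CASSANDRA-6285):
{code}
# cassandra.yaml: use the thread-per-client Thrift server instead of hsha
rpc_server_type: sync
{code}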

 Last written key >= current key exception when streaming
 

 Key: CASSANDRA-4733
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4733
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0 beta 1
Reporter: Brandon Williams
Assignee: Yuki Morishita
 Fix For: 1.2.0 beta 2

 Attachments: Serj_Veras_cassandra.yaml


 {noformat}
 ERROR 16:52:56,260 Exception in thread Thread[Streaming to 
 /10.179.111.137:1,5,main]
 java.lang.RuntimeException: java.io.IOException: Connection reset by peer
 at com.google.common.base.Throwables.propagate(Throwables.java:160)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: java.io.IOException: Connection reset by peer
 at sun.nio.ch.FileDispatcher.write0(Native Method)
 at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:29)
 at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:72)
 at sun.nio.ch.IOUtil.write(IOUtil.java:43)
 at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:334)
 at java.nio.channels.Channels.writeFullyImpl(Channels.java:59)
 at java.nio.channels.Channels.writeFully(Channels.java:81)
 at java.nio.channels.Channels.access$000(Channels.java:47)
 at java.nio.channels.Channels$1.write(Channels.java:155)
 at 
 com.ning.compress.lzf.ChunkEncoder.encodeAndWriteChunk(ChunkEncoder.java:133)
 at 
 com.ning.compress.lzf.LZFOutputStream.writeCompressedBlock(LZFOutputStream.java:203)
 at 
 com.ning.compress.lzf.LZFOutputStream.write(LZFOutputStream.java:97)
 at 
 org.apache.cassandra.streaming.FileStreamTask.write(FileStreamTask.java:218)
 at 
 org.apache.cassandra.streaming.FileStreamTask.stream(FileStreamTask.java:164)
 at 
 org.apache.cassandra.streaming.FileStreamTask.runMayThrow(FileStreamTask.java:91)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
 ... 3 more
 ERROR 16:53:03,951 Exception in thread Thread[Thread-11,5,main]
 java.lang.RuntimeException: Last written key 
 DecoratedKey(113424593524874987650593774422007331058, 3036303936343535) >= 
 current key DecoratedKey(59229538317742990547810678738983628664, 
 3036313133373139) writing into 
 /var/lib/cassandra/data/Keyspace1-Standard1-tmp-ia-95-Data.db
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:132)
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.appendFromStream(SSTableWriter.java:208)
 at 
 org.apache.cassandra.streaming.IncomingStreamReader.streamIn(IncomingStreamReader.java:164)
 at 
 org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:107)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.stream(IncomingTcpConnection.java:220)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.handleStream(IncomingTcpConnection.java:165)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:65)
 {noformat}
 I didn't do anything fancy here, just inserted about 6M keys at rf=2, then 
 ran repair and got this.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6694) Slightly More Off-Heap Memtables

2014-03-06 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13922529#comment-13922529
 ] 

Benedict commented on CASSANDRA-6694:
-

Quick update: I realised I had accidentally included a partial image of the 
changes I was making for CASSANDRA-6781 in the offheap2c tree I uploaded. I've 
fixed the repository by rolling FastByteComparisons back, since that shouldn't 
have been included in this ticket.

I've also uploaded another 
[tree|https://github.com/belliottsmith/cassandra/tree/offheap2c+6781] which 
includes 6781 for anyone who wants to performance test. I'm in the process of 
looking at this now.

 Slightly More Off-Heap Memtables
 

 Key: CASSANDRA-6694
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6694
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
  Labels: performance
 Fix For: 2.1 beta2


 The Off Heap memtables introduced in CASSANDRA-6689 don't go far enough, as 
 the on-heap overhead is still very large. It should not be tremendously 
 difficult to extend these changes so that we allocate entire Cells off-heap, 
 instead of multiple BBs per Cell (with all their associated overhead).
 The goal (if possible) is to reach an overhead of 16-bytes per Cell (plus 4-6 
 bytes per cell on average for the btree overhead, for a total overhead of 
 around 20-22 bytes). This translates to 8-byte object overhead, 4-byte 
 address (we will do alignment tricks like the VM to allow us to address a 
 reasonably large memory space, although this trick is unlikely to last us 
 forever, at which point we will have to bite the bullet and accept a 24-byte 
 per cell overhead), and 4-byte object reference for maintaining our internal 
 list of allocations, which is unfortunately necessary since we cannot safely 
 (and cheaply) walk the object graph we allocate otherwise, which is necessary 
 for (allocation-) compaction and pointer rewriting.
 The ugliest thing here is going to be implementing the various CellName 
 instances so that they may be backed by native memory OR heap memory.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6804) Consolidate on-disk and NativeCell layouts so that reads from disk require less memory

2014-03-06 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13922598#comment-13922598
 ] 

Jonathan Ellis commented on CASSANDRA-6804:
---

bq. With further work we may also be able to reach a zero-copy allocation as 
well

Assuming we can get CASSANDRA-6045 to work?

 Consolidate on-disk and NativeCell layouts so that reads from disk require 
 less memory
 --

 Key: CASSANDRA-6804
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6804
 Project: Cassandra
  Issue Type: Improvement
Reporter: Benedict
Assignee: Benedict
 Fix For: 3.0


 If the on-disk Cell representation were the same as we use for NativeCell, we 
 could easily allocate a NativeCell instead of a BufferCell, immediately 
 reducing the amount of garbage generated on reads. With further work we may 
 also be able to reach a zero-copy allocation as well, reducing further the 
 read costs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6804) Consolidate on-disk and NativeCell layouts so that reads from disk require less memory

2014-03-06 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13922605#comment-13922605
 ] 

Benedict commented on CASSANDRA-6804:
-

Right. Well, CASSANDRA-5863 :-)

 Consolidate on-disk and NativeCell layouts so that reads from disk require 
 less memory
 --

 Key: CASSANDRA-6804
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6804
 Project: Cassandra
  Issue Type: Improvement
Reporter: Benedict
Assignee: Benedict
 Fix For: 3.0


 If the on-disk Cell representation were the same as we use for NativeCell, we 
 could easily allocate a NativeCell instead of a BufferCell, immediately 
 reducing the amount of garbage generated on reads. With further work we may 
 also be able to reach a zero-copy allocation as well, reducing further the 
 read costs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-5863) Create a Decompressed Chunk [block] Cache

2014-03-06 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13922619#comment-13922619
 ] 

Benedict commented on CASSANDRA-5863:
-

I wonder if storing decompressed chunks on SSD really makes much sense? It's 
likely the decompression is still faster than IO to even a fast flash drive.

Moving hot compressed chunks onto an SSD makes a lot of sense, but I think that 
maybe these are two different tickets? One to bring the page/buffer cache in 
process, and store it uncompressed, the other to track hot file regions and 
store them on a cache drive.

One possible neat thing to try doing as well might be to have a mixed 
compressed/uncompressed in-memory page cache. i.e. have a smaller uncompressed 
page cache, into which pages are moved from a larger compressed page cache, at 
which point it's removed from the compressed page cache, and when evicted from 
the uncompressed page cache the data is recompressed and placed in the 
compressed page cache. This is probably only helpful for non-SSD boxes, though, 
as reading may be faster than recompressing.
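
A minimal single-threaded sketch of that promote/demote idea, assuming plain 
LRU tiers and identity stand-ins for the compressor (all names hypothetical; 
a real cache would call ICompressor and need thread safety):
{code}
import java.util.LinkedHashMap;
import java.util.Map;

public class TwoTierPageCache
{
    private static final int HOT_CAPACITY = 512;    // uncompressed, hot tier
    private static final int COLD_CAPACITY = 4096;  // compressed, larger tier

    // cold tier: plain LRU of compressed pages
    private final Map<Long, byte[]> compressed =
        new LinkedHashMap<Long, byte[]>(16, 0.75f, true)
        {
            protected boolean removeEldestEntry(Map.Entry<Long, byte[]> eldest)
            {
                return size() > COLD_CAPACITY;
            }
        };

    // hot tier: on eviction, recompress and demote instead of dropping
    private final Map<Long, byte[]> uncompressed =
        new LinkedHashMap<Long, byte[]>(16, 0.75f, true)
        {
            protected boolean removeEldestEntry(Map.Entry<Long, byte[]> eldest)
            {
                if (size() <= HOT_CAPACITY)
                    return false;
                compressed.put(eldest.getKey(), compress(eldest.getValue()));
                return true;
            }
        };

    public byte[] read(long chunkOffset)
    {
        byte[] hot = uncompressed.get(chunkOffset);
        if (hot != null)
            return hot;
        // promotion: a page lives in exactly one tier at a time
        byte[] cold = compressed.remove(chunkOffset);
        if (cold == null)
            return null; // miss in both tiers: caller reads from disk
        hot = decompress(cold);
        uncompressed.put(chunkOffset, hot);
        return hot;
    }

    // identity stand-ins for the real compression calls
    private static byte[] compress(byte[] chunk)   { return chunk; }
    private static byte[] decompress(byte[] chunk) { return chunk; }
}
{code}
The demotion step is what distinguishes this from two independent caches: 
evicting a hot page costs a recompression rather than a disk read the next 
time it is touched.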

 Create a Decompressed Chunk [block] Cache
 -

 Key: CASSANDRA-5863
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5863
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: T Jake Luciani
Assignee: Pavel Yaskevich
  Labels: performance
 Fix For: 2.1 beta2


 Currently, for every read, the CRAR reads each compressed chunk into a 
 byte[], sends it to ICompressor, gets back another byte[] and verifies a 
 checksum.  
 This process is where the majority of time is spent in a read request.  
 Before compression, we would have zero-copy of data and could respond 
 directly from the page-cache.
 It would be useful to have some kind of Chunk cache that could speed up this 
 process for hot data. Initially this could be a off heap cache but it would 
 be great to put these decompressed chunks onto a SSD so the hot data lives on 
 a fast disk similar to https://github.com/facebook/flashcache.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (CASSANDRA-5863) Create a Decompressed Chunk [block] Cache

2014-03-06 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13922619#comment-13922619
 ] 

Benedict edited comment on CASSANDRA-5863 at 3/6/14 3:08 PM:
-

I wonder if storing decompressed chunks on SSD really makes much sense? It's 
likely the decompression is still faster than IO to even a fast flash drive.

Moving hot compressed chunks onto an SSD makes a lot of sense, but I think that 
maybe these are two different tickets? One to bring the page/buffer cache in 
process, and store it uncompressed, the other to track hot file regions and 
store them on a cache drive.

One possible neat thing to try doing as well might be to have a mixed 
compressed/uncompressed in-memory page cache. i.e. have a smaller uncompressed 
page cache, into which pages are moved from a larger compressed page cache 
(being removed from their at the same time), and when evicted the data is 
recompressed and moved back. This is probably only helpful for non-SSD boxes, 
though, as reading may be faster than recompressing.


was (Author: benedict):
I wonder if storing decompressed chunks on SSD really makes much sense? It's 
likely the decompression is still faster than IO to even a fast flash drive.

Moving hot compressed chunks onto an SSD makes a lot of sense, but I think that 
maybe these are two different tickets? One to bring the page/buffer cache in 
process, and store it uncompressed, the other to track hot file regions and 
store them on a cache drive.

One possible neat thing to try doing as well might be to have a mixed 
compressed/uncompressed in-memory page cache. i.e. have a smaller uncompressed 
page cache, into which pages are moved from a larger compressed page cache, at 
which point it's removed from the compressed page cache, and when evicted from 
the uncompressed page cache the data is recompressed and placed in the 
compressed page cache. This is probably only helpful for non-SSD boxes, though, 
as reading may be faster than recompressing.

 Create a Decompressed Chunk [block] Cache
 -

 Key: CASSANDRA-5863
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5863
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: T Jake Luciani
Assignee: Pavel Yaskevich
  Labels: performance
 Fix For: 2.1 beta2


 Currently, for every read, the CRAR reads each compressed chunk into a 
 byte[], sends it to ICompressor, gets back another byte[] and verifies a 
 checksum.  
 This process is where the majority of time is spent in a read request.  
 Before compression, we would have zero-copy of data and could respond 
 directly from the page-cache.
 It would be useful to have some kind of Chunk cache that could speed up this 
 process for hot data. Initially this could be a off heap cache but it would 
 be great to put these decompressed chunks onto a SSD so the hot data lives on 
 a fast disk similar to https://github.com/facebook/flashcache.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (CASSANDRA-5863) Create a Decompressed Chunk [block] Cache

2014-03-06 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13922619#comment-13922619
 ] 

Benedict edited comment on CASSANDRA-5863 at 3/6/14 3:09 PM:
-

I wonder if storing decompressed chunks on SSD really makes much sense? It's 
likely the decompression is still faster than IO to even a fast flash drive.

Moving hot compressed chunks onto an SSD makes a lot of sense, but I think that 
maybe these are two different tickets? One to bring the page/buffer cache in 
process, and store it uncompressed, the other to track hot file regions and 
store them on a cache drive.

One possible neat thing to try doing as well might be to have a mixed 
compressed/uncompressed in-memory page cache. i.e. have a smaller uncompressed 
page cache, into which pages are moved from a larger compressed page cache 
(being removed from there at the same time), and when evicted the data is 
recompressed and moved back. This is probably only helpful for non-SSD boxes, 
though, as reading may be faster than recompressing.


was (Author: benedict):
I wonder if storing decompressed chunks on SSD really makes much sense? It's 
likely the decompression is still faster than IO to even a fast flash drive.

Moving hot compressed chunks onto an SSD makes a lot of sense, but I think that 
maybe these are two different tickets? One to bring the page/buffer cache in 
process, and store it uncompressed, the other to track hot file regions and 
store them on a cache drive.

One possible neat thing to try doing as well might be to have a mixed 
compressed/uncompressed in-memory page cache. i.e. have a smaller uncompressed 
page cache, into which pages are moved from a larger compressed page cache 
(being removed from their at the same time), and when evicted the data is 
recompressed and moved back. This is probably only helpful for non-SSD boxes, 
though, as reading may be faster than recompressing.

 Create a Decompressed Chunk [block] Cache
 -

 Key: CASSANDRA-5863
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5863
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: T Jake Luciani
Assignee: Pavel Yaskevich
  Labels: performance
 Fix For: 2.1 beta2


 Currently, for every read, the CRAR reads each compressed chunk into a 
 byte[], sends it to ICompressor, gets back another byte[] and verifies a 
 checksum.  
 This process is where the majority of time is spent in a read request.  
 Before compression, we would have zero-copy of data and could respond 
 directly from the page-cache.
 It would be useful to have some kind of Chunk cache that could speed up this 
 process for hot data. Initially this could be a off heap cache but it would 
 be great to put these decompressed chunks onto a SSD so the hot data lives on 
 a fast disk similar to https://github.com/facebook/flashcache.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (CASSANDRA-6699) NPE in migration stage on trunk

2014-03-06 Thread Lyuben Todorov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lyuben Todorov resolved CASSANDRA-6699.
---

Resolution: Cannot Reproduce

Tested on OSX and CentOS with a [ccm script|https://gist.github.com/lyubent/9391939] 
to create a cluster, run stress on it and repeat. Failed to recreate the issue 
after 1k iterations.

 NPE in migration stage on trunk
 ---

 Key: CASSANDRA-6699
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6699
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Brandon Williams
Assignee: Lyuben Todorov
 Fix For: 2.1 beta2


 Simple to reproduce: start a cluster and run legacy stress against it:
 {noformat}
 ERROR 12:56:12 Error occurred during processing of message.
 java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
 java.lang.NullPointerException
  at org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:411) 
 ~[main/:na]
  at 
 org.apache.cassandra.service.MigrationManager.announce(MigrationManager.java:281)
  ~[main/:na]
  at 
 org.apache.cassandra.service.MigrationManager.announceNewColumnFamily(MigrationManager.java:211)
  ~[main/:na]
  at 
 org.apache.cassandra.cql3.statements.CreateTableStatement.announceMigration(CreateTableStatement.java:105)
  ~[main/:na]
  at 
 org.apache.cassandra.cql3.statements.SchemaAlteringStatement.execute(SchemaAlteringStatement.java:71)
  ~[main/:na]
  at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:180)
  ~[main/:na]
  at org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:214) 
 ~[main/:na]
  at org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:204) 
 ~[main/:na]
  at 
 org.apache.cassandra.thrift.CassandraServer.execute_cql3_query(CassandraServer.java:1973)
  ~[main/:na]
  at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4486)
  ~[thrift/:na]
  at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4470)
  ~[thrift/:na]
  at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) 
 ~[libthrift-0.9.1.jar:0.9.1]
  at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
 ~[libthrift-0.9.1.jar:0.9.1]
  at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:194)
  ~[main/:na]
  at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  [na:1.7.0_51]
  at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_51]
  at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
 Caused by: java.util.concurrent.ExecutionException: 
 java.lang.NullPointerException
  at java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[na:1.7.0_51]
  at java.util.concurrent.FutureTask.get(FutureTask.java:188) ~[na:1.7.0_51]
  at org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:407) 
 ~[main/:na]
  ... 16 common frames omitted
 Caused by: java.lang.NullPointerException: null
  at org.apache.cassandra.utils.ByteBufferUtil.string(ByteBufferUtil.java:167) 
 ~[main/:na]
  at 
 org.apache.cassandra.serializers.AbstractTextSerializer.deserialize(AbstractTextSerializer.java:39)
  ~[main/:na]
  at 
 org.apache.cassandra.serializers.AbstractTextSerializer.deserialize(AbstractTextSerializer.java:26)
  ~[main/:na]
  at 
 org.apache.cassandra.db.marshal.AbstractType.compose(AbstractType.java:66) 
 ~[main/:na]
  at 
 org.apache.cassandra.cql3.UntypedResultSet$Row.getString(UntypedResultSet.java:150)
  ~[main/:na]
  at 
 org.apache.cassandra.config.ColumnDefinition.fromSchema(ColumnDefinition.java:373)
  ~[main/:na]
  at 
 org.apache.cassandra.config.CFMetaData.fromSchemaNoTriggers(CFMetaData.java:1712)
  ~[main/:na]
  at org.apache.cassandra.config.CFMetaData.fromSchema(CFMetaData.java:1832) 
 ~[main/:na]
  at 
 org.apache.cassandra.config.KSMetaData.deserializeColumnFamilies(KSMetaData.java:320)
  ~[main/:na]
  at 
 org.apache.cassandra.db.DefsTables.mergeColumnFamilies(DefsTables.java:306) 
 ~[main/:na]
  at org.apache.cassandra.db.DefsTables.mergeSchema(DefsTables.java:181) 
 ~[main/:na]
  at 
 org.apache.cassandra.service.MigrationManager$2.runMayThrow(MigrationManager.java:299)
  ~[main/:na]
  at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
 ~[main/:na]
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_51]
  at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_51]
  ... 3 common frames omitted
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-5483) Repair tracing

2014-03-06 Thread Ben Chan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ben Chan updated CASSANDRA-5483:


Attachment: ccm-repair-test
5483-v06-06-Fix-interruption-in-tracestate-propagation.patch
5483-v06-05-Add-a-command-column-to-system_traces.events.patch
5483-v06-04-Allow-tracing-ttl-to-be-configured.patch

 Repair tracing
 --

 Key: CASSANDRA-5483
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5483
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Yuki Morishita
Assignee: Ben Chan
Priority: Minor
  Labels: repair
 Attachments: 5483-v06-04-Allow-tracing-ttl-to-be-configured.patch, 
 5483-v06-05-Add-a-command-column-to-system_traces.events.patch, 
 5483-v06-06-Fix-interruption-in-tracestate-propagation.patch, 
 ccm-repair-test, test-5483-system_traces-events.txt, 
 trunk@4620823-5483-v02-0001-Trace-filtering-and-tracestate-propagation.patch, 
 trunk@4620823-5483-v02-0002-Put-a-few-traces-parallel-to-the-repair-logging.patch,
 trunk@8ebeee1-5483-v01-001-trace-filtering-and-tracestate-propagation.txt, 
 trunk@8ebeee1-5483-v01-002-simple-repair-tracing.txt, 
 v02p02-5483-v03-0003-Make-repair-tracing-controllable-via-nodetool.patch, 
 v02p02-5483-v04-0003-This-time-use-an-EnumSet-to-pass-boolean-repair-options.patch,
  v02p02-5483-v05-0003-Use-long-instead-of-EnumSet-to-work-with-JMX.patch


 I think it would be nice to log repair stats and results the way query tracing 
 stores traces to the system keyspace. With it, you don't have to look up each 
 log file to see the status of the repair you invoked and how it performed. 
 Instead, you can query the repair log with a session ID to see the state and 
 stats of all nodes involved in that repair session.
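
For illustration, querying such a repair trace from a client could look roughly 
like this (a sketch against the Java driver; the column names follow the 
existing system_traces.events schema):

{noformat}
import java.util.UUID;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class RepairTraceQuery
{
    public static void main(String[] args)
    {
        // The repair session id as reported when the repair was started.
        UUID sessionId = UUID.fromString(args[0]);
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        try
        {
            Session session = cluster.connect();
            // One row per trace event, from every node involved in the session.
            for (Row row : session.execute(
                    "SELECT source, thread, activity FROM system_traces.events" +
                    " WHERE session_id = " + sessionId))
                System.out.printf("%s [%s] %s%n",
                                  row.getInet("source"), row.getString("thread"),
                                  row.getString("activity"));
        }
        finally
        {
            cluster.close();
        }
    }
}
{noformat}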



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-5863) Create a Decompressed Chunk [block] Cache

2014-03-06 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13922650#comment-13922650
 ] 

Benedict commented on CASSANDRA-5863:
-

bq. Moving hot compressed chunks onto an SSD makes a lot of sense, but think 
that maybe these are two different tickets? One to bring the page/buffer cache 
in process, and store it uncompressed, the other to track hot file regions and 
store them on a cache drive.

Just realised this is exactly what [~jbellis] already suggested.

I would quite like to have a crack at this for 3.0 using CASSANDRA-6694 as a 
basis, so that we can move the cache off-heap and retain zero copy behaviour.
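
As a purely hypothetical sketch of the shape such a cache could take, keyed by 
sstable and chunk offset and consulted before the read-decompress-verify path 
the ticket describes (names and sizes are made up):

{noformat}
import java.nio.ByteBuffer;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.Weigher;

public final class DecompressedChunkCache
{
    // A chunk is identified by its sstable and the chunk's offset in that file.
    private static final class ChunkKey
    {
        final String path;
        final long offset;

        ChunkKey(String path, long offset) { this.path = path; this.offset = offset; }

        @Override
        public boolean equals(Object o)
        {
            if (!(o instanceof ChunkKey)) return false;
            ChunkKey that = (ChunkKey) o;
            return offset == that.offset && path.equals(that.path);
        }

        @Override
        public int hashCode() { return path.hashCode() * 31 + (int) (offset ^ (offset >>> 32)); }
    }

    private final Cache<ChunkKey, ByteBuffer> chunks = CacheBuilder.newBuilder()
            .maximumWeight(256L << 20)  // bound the cache by decompressed bytes, ~256MB here
            .weigher(new Weigher<ChunkKey, ByteBuffer>()
            {
                public int weigh(ChunkKey key, ByteBuffer value) { return value.remaining(); }
            })
            .build();

    // loader runs the existing read -> decompress -> verify-checksum path on a miss.
    public ByteBuffer get(String path, long offset, Callable<ByteBuffer> loader) throws ExecutionException
    {
        return chunks.get(new ChunkKey(path, offset), loader).duplicate();
    }
}
{noformat}

An SSD-backed tier would swap the in-memory map for a file-backed store, but the 
keying and the miss path stay the same.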

 Create a Decompressed Chunk [block] Cache
 -

 Key: CASSANDRA-5863
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5863
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: T Jake Luciani
Assignee: Pavel Yaskevich
  Labels: performance
 Fix For: 2.1 beta2


 Currently, for every read, the CRAR reads each compressed chunk into a 
 byte[], sends it to ICompressor, gets back another byte[] and verifies a 
 checksum.  
 This process is where the majority of time is spent in a read request.  
 Before compression, we would have zero-copy of data and could respond 
 directly from the page-cache.
 It would be useful to have some kind of Chunk cache that could speed up this 
 process for hot data. Initially this could be an off-heap cache, but it would 
 be great to put these decompressed chunks onto an SSD so the hot data lives on 
 a fast disk, similar to https://github.com/facebook/flashcache.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-6806) In AtomicBTreeColumns, construct list of unwinds after a race lazily

2014-03-06 Thread Benedict (JIRA)
Benedict created CASSANDRA-6806:
---

 Summary: In AtomicBTreeColumns, construct list of unwinds after a 
race lazily
 Key: CASSANDRA-6806
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6806
 Project: Cassandra
  Issue Type: Improvement
Reporter: Benedict
Assignee: Benedict
Priority: Minor
 Fix For: 3.0


Currently we store these in a List, but this is wasteful. We can construct them 
lazily from a diff between the original and partially constructed replacement 
BTree. The UpdaterFunction could define a method to be passed such a collection 
in the event of an early abort.
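
A sketch of the idea, with ordinary sorted collections standing in for the 
actual BTree nodes (the changed entries are recovered by a lockstep walk instead 
of being recorded up front):

{noformat}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.Iterator;
import java.util.List;
import java.util.SortedSet;

public final class LazyDiff
{
    // Walk the original and the partially built replacement in lockstep and
    // emit whatever the replacement inserted or updated; nothing is tracked
    // during the update itself.
    public static <V> List<V> diff(SortedSet<V> original, SortedSet<V> replacement, Comparator<? super V> cmp)
    {
        List<V> changed = new ArrayList<V>();
        Iterator<V> a = original.iterator(), b = replacement.iterator();
        V x = a.hasNext() ? a.next() : null;
        V y = b.hasNext() ? b.next() : null;
        while (x != null && y != null)
        {
            int c = cmp.compare(x, y);
            if (c == 0)
            {
                if (x != y)            // same key, different object: an updated entry
                    changed.add(y);
                x = a.hasNext() ? a.next() : null;
                y = b.hasNext() ? b.next() : null;
            }
            else if (c < 0)            // only in the original: removed
            {
                x = a.hasNext() ? a.next() : null;
            }
            else                       // only in the replacement: inserted
            {
                changed.add(y);
                y = b.hasNext() ? b.next() : null;
            }
        }
        while (y != null) { changed.add(y); y = b.hasNext() ? b.next() : null; }
        return changed;
    }
}
{noformat}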



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[1/3] git commit: update cobertura version patch by Ed Capriolo for CASSANDRA-6800

2014-03-06 Thread jbellis
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 5ef53e6f7 -> c2ec94b35
  refs/heads/cassandra-2.1 a052a912e -> 728f677b0


update cobertura version
patch by Ed Capriolo for CASSANDRA-6800


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c2ec94b3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c2ec94b3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c2ec94b3

Branch: refs/heads/cassandra-2.0
Commit: c2ec94b3548772d55fd70736c9523cad3b68c438
Parents: 5ef53e6
Author: Jonathan Ellis jbel...@apache.org
Authored: Thu Mar 6 09:40:40 2014 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Thu Mar 6 09:40:46 2014 -0600

--
 build.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c2ec94b3/build.xml
--
diff --git a/build.xml b/build.xml
index 9972aa2..70f514c 100644
--- a/build.xml
+++ b/build.xml
@@ -92,7 +92,7 @@
 <property name="test.long.timeout" value="60" />
 
 <!-- http://cobertura.sourceforge.net/ -->
-<property name="cobertura.version" value="1.9.4.1"/>
+<property name="cobertura.version" value="2.0.2"/>
 <property name="cobertura.build.dir" value="${build.dir}/cobertura"/>
 <property name="cobertura.report.dir" value="${cobertura.build.dir}/report"/>
 <property name="cobertura.classes.dir" value="${cobertura.build.dir}/classes"/>



[2/3] git commit: update cobertura version patch by Ed Capriolo for CASSANDRA-6800

2014-03-06 Thread jbellis
update cobertura version
patch by Ed Capriolo for CASSANDRA-6800


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c2ec94b3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c2ec94b3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c2ec94b3

Branch: refs/heads/cassandra-2.1
Commit: c2ec94b3548772d55fd70736c9523cad3b68c438
Parents: 5ef53e6
Author: Jonathan Ellis jbel...@apache.org
Authored: Thu Mar 6 09:40:40 2014 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Thu Mar 6 09:40:46 2014 -0600

--
 build.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c2ec94b3/build.xml
--
diff --git a/build.xml b/build.xml
index 9972aa2..70f514c 100644
--- a/build.xml
+++ b/build.xml
@@ -92,7 +92,7 @@
 <property name="test.long.timeout" value="60" />
 
 <!-- http://cobertura.sourceforge.net/ -->
-<property name="cobertura.version" value="1.9.4.1"/>
+<property name="cobertura.version" value="2.0.2"/>
 <property name="cobertura.build.dir" value="${build.dir}/cobertura"/>
 <property name="cobertura.report.dir" value="${cobertura.build.dir}/report"/>
 <property name="cobertura.classes.dir" value="${cobertura.build.dir}/classes"/>



[3/3] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-03-06 Thread jbellis
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/728f677b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/728f677b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/728f677b

Branch: refs/heads/cassandra-2.1
Commit: 728f677b0fcc31e843c0b808f4201e32dcf2c216
Parents: a052a91 c2ec94b
Author: Jonathan Ellis jbel...@apache.org
Authored: Thu Mar 6 09:40:56 2014 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Thu Mar 6 09:40:56 2014 -0600

--
 build.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/728f677b/build.xml
--



[jira] [Commented] (CASSANDRA-6800) ant codecoverage no longer works due jdk 1.7

2014-03-06 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13922660#comment-13922660
 ] 

Jonathan Ellis commented on CASSANDRA-6800:
---

I've committed the version change since it's clearly an improvement, but 
leaving this open until it actually works. :)

 ant codecoverage no longer works due jdk 1.7
 

 Key: CASSANDRA-6800
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6800
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
Reporter: Edward Capriolo
Assignee: Edward Capriolo
Priority: Minor
 Fix For: 2.1 beta2


 Code coverage does not run currently due to cobertura jdk incompatibility. 
 Fix is coming. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-5863) Create a Decompressed Chunk [block] Cache

2014-03-06 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13922662#comment-13922662
 ] 

Jonathan Ellis commented on CASSANDRA-5863:
---

(I think you mean [~tjake].)

 Create a Decompressed Chunk [block] Cache
 -

 Key: CASSANDRA-5863
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5863
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: T Jake Luciani
Assignee: Pavel Yaskevich
  Labels: performance
 Fix For: 2.1 beta2


 Currently, for every read, the CRAR reads each compressed chunk into a 
 byte[], sends it to ICompressor, gets back another byte[] and verifies a 
 checksum.  
 This process is where the majority of time is spent in a read request.  
 Before compression, we would have zero-copy of data and could respond 
 directly from the page-cache.
 It would be useful to have some kind of Chunk cache that could speed up this 
 process for hot data. Initially this could be an off-heap cache, but it would 
 be great to put these decompressed chunks onto an SSD so the hot data lives on 
 a fast disk, similar to https://github.com/facebook/flashcache.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (CASSANDRA-6805) When something goes wrong the `nodetool` command prints the whole Java stacktrace instead of a simple error message to stderr

2014-03-06 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13922672#comment-13922672
 ] 

Brandon Williams edited comment on CASSANDRA-6805 at 3/6/14 3:56 PM:
-

bq. it makes using it from other scripts a PITA

How? You can just check the return code for success or failure.


was (Author: brandon.williams):
.bq it makes using it from other scripts a PITA

How? You can just check the return code for success or failure.

 When something goes wrong the `nodetool` command prints the whole Java 
 stacktrace instead of a simple error message to stderr
 -

 Key: CASSANDRA-6805
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6805
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Ondřej Černoš
Priority: Minor

 {noformat}
 $ nodetool snapshot XXX -t YYY
 Requested creating snapshot for: XXX
 Exception in thread "main" java.io.IOException: Table XXX does not exist
 at 
 org.apache.cassandra.service.StorageService.getValidTable(StorageService.java:2267)
 at 
 org.apache.cassandra.service.StorageService.takeSnapshot(StorageService.java:)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
 at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
 at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
 at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
 at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
 at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
 at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
 at sun.rmi.transport.Transport$1.run(Transport.java:177)
 at sun.rmi.transport.Transport$1.run(Transport.java:174)
 at java.security.AccessController.doPrivileged(Native Method)
 at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
 at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)
 {noformat}
 Please change the `nodetool` command so that it does not print the stacktrace 
 by default; it makes using it from other scripts a PITA. You can possibly add 
 a `--debug` parameter that can be used to print the stacktrace if the user 
 really wants it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6805) When something goes wrong the `nodetool` command prints the whole Java stacktrace instead of a simple error message to stderr

2014-03-06 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13922672#comment-13922672
 ] 

Brandon Williams commented on CASSANDRA-6805:
-

.bq it makes using it from other scripts a PITA

How? You can just check the return code for success or failure.
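
For example, a wrapper only needs the exit status (a minimal sketch; the 
snapshot arguments are placeholders):

{noformat}
import java.io.IOException;

public class NodetoolWrapper
{
    public static void main(String[] args) throws IOException, InterruptedException
    {
        // Placeholder keyspace/tag; any nodetool invocation works the same way.
        Process p = new ProcessBuilder("nodetool", "snapshot", "XXX", "-t", "YYY")
                .inheritIO()
                .start();
        int rc = p.waitFor();
        if (rc != 0)
            System.err.println("snapshot failed: nodetool exited with status " + rc);
    }
}
{noformat}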

 When something goes wrong the `nodetool` command prints the whole Java 
 stacktrace instead of a simple error message to stderr
 -

 Key: CASSANDRA-6805
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6805
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Ondřej Černoš
Priority: Minor

 {noformat}
 $ nodetool snapshot XXX -t YYY
 Requested creating snapshot for: XXX
 Exception in thread "main" java.io.IOException: Table XXX does not exist
 at 
 org.apache.cassandra.service.StorageService.getValidTable(StorageService.java:2267)
 at 
 org.apache.cassandra.service.StorageService.takeSnapshot(StorageService.java:)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
 at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
 at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
 at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
 at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
 at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
 at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
 at sun.rmi.transport.Transport$1.run(Transport.java:177)
 at sun.rmi.transport.Transport$1.run(Transport.java:174)
 at java.security.AccessController.doPrivileged(Native Method)
 at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
 at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)
 {noformat}
 Please change the `nodetool` command so that it does not print the stacktrace 
 by default; it makes using it from other scripts a PITA. You can possibly add 
 a `--debug` parameter that can be used to print the stacktrace if the user 
 really wants it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6805) When something goes wrong the `nodetool` command prints the whole Java stacktrace instead of a simple error message to stderr

2014-03-06 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13922682#comment-13922682
 ] 

Ondřej Černoš commented on CASSANDRA-6805:
--

We'd love to reuse the message, but we don't want to parse the stack trace.

 When something goes wrong the `nodetool` command prints the whole Java 
 stacktrace instead of a simple error message to stderr
 -

 Key: CASSANDRA-6805
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6805
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Ondřej Černoš
Priority: Minor

 {noformat}
 $ nodetool snapshot XXX -t YYY
 Requested creating snapshot for: XXX
 Exception in thread "main" java.io.IOException: Table XXX does not exist
 at 
 org.apache.cassandra.service.StorageService.getValidTable(StorageService.java:2267)
 at 
 org.apache.cassandra.service.StorageService.takeSnapshot(StorageService.java:)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
 at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
 at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
 at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
 at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
 at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
 at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
 at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
 at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
 at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
 at sun.rmi.transport.Transport$1.run(Transport.java:177)
 at sun.rmi.transport.Transport$1.run(Transport.java:174)
 at java.security.AccessController.doPrivileged(Native Method)
 at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
 at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
 at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)
 {noformat}
 Please change the `nodetool` command so that it does not print the stacktrace 
 by default; it makes using it from other scripts a PITA. You can possibly add 
 a `--debug` parameter that can be used to print the stacktrace if the user 
 really wants it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-6807) Thrift with CQL3 doesn't return key

2014-03-06 Thread Peter (JIRA)
Peter created CASSANDRA-6807:


 Summary: Thrift with CQL3 doesn't return key
 Key: CASSANDRA-6807
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6807
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: windows 7 64bit, jdk 1.7, cassandra 2.0.5
Reporter: Peter
 Fix For: 2.0.5
 Attachments: thrift-missing-key.png

I'm working on adding support for CQL3 to Hector and came across an odd issue. 
I explicitly include the key in the statement, but the key isn't returned. I've 
attached a screenshot. Hector's CqlQuery class is doing the following to issue 
the cql3 call. I'm hoping it's a simple configuration detail I'm missing or 
parameter I need to set.

result = cassandra.execute_cql3_query(query, useCompression ? Compression.GZIP 
: Compression.NONE, getConsistency());

Looking at org.apache.cassandra.thrift.Cassandra.Client, I don't see anything 
obvious that would tell me how to tell Cassandra to return the key in the 
CqlResult or CqlRow. The queries I tried look like this

select key from myColFamily;





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (CASSANDRA-6807) Thrift with CQL3 doesn't return key

2014-03-06 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-6807.
---

   Resolution: Not A Problem
Fix Version/s: (was: 2.0.5)

The partition key is not special-cased in CQL3 result sets. CqlRow.key will 
always be empty; it's only there as CQL2 baggage.

I would definitely recommend the native protocol over thrift for CQL3.

 Thrift with CQL3 doesn't return key
 ---

 Key: CASSANDRA-6807
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6807
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: windows 7 64bit, jdk 1.7, cassandra 2.0.5
Reporter: Peter
 Attachments: thrift-missing-key.png


 I'm working on adding support for CQL3 to Hector and came across an odd 
 issue. I explicitly include the key in the statement, but the key isn't 
 returned. I've attached a screenshot. Hector's CqlQuery class is doing the 
 following to issue the cql3 call. I'm hoping it's a simple configuration 
 detail I'm missing or parameter I need to set.
 result = cassandra.execute_cql3_query(query, useCompression ? 
 Compression.GZIP : Compression.NONE, getConsistency());
 Looking at org.apache.cassandra.thrift.Cassandra.Client, I don't see anything 
 obvious that would tell me how to tell Cassandra to return the key in the 
 CqlResult or CqlRow. The queries I tried look like this
 select key from myColFamily;



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6807) Thrift with CQL3 doesn't return key

2014-03-06 Thread Peter (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13922706#comment-13922706
 ] 

Peter commented on CASSANDRA-6807:
--

I am using native for some things, but I also want hector to support CQL3.

I am explicitly including the key in the select statement; shouldn't it return 
the key when I ask for it? If a user explicitly asks for the key, does it make 
sense for Cassandra to say "no, you can't have it"?

I'm happy to enhance thrift to return the key, if someone points me in the 
right direction.

thanks

 Thrift with CQL3 doesn't return key
 ---

 Key: CASSANDRA-6807
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6807
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: windows 7 64bit, jdk 1.7, cassandra 2.0.5
Reporter: Peter
 Attachments: thrift-missing-key.png


 I'm working on adding support for CQL3 to Hector and came across an odd 
 issue. I explicitly include the key in the statement, but the key isn't 
 returned. I've attached a screenshot. Hector's CqlQuery class is doing the 
 following to issue the cql3 call. I'm hoping it's a simple configuration 
 detail I'm missing or parameter I need to set.
 result = cassandra.execute_cql3_query(query, useCompression ? 
 Compression.GZIP : Compression.NONE, getConsistency());
 Looking at org.apache.cassandra.thrift.Cassandra.Client, I don't see anything 
 obvious that would tell me how to tell Cassandra to return the key in the 
 CqlResult or CqlRow. The queries I tried look like this
 select key from myColFamily;



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Reopened] (CASSANDRA-6807) Thrift with CQL3 doesn't return key

2014-03-06 Thread Peter (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter reopened CASSANDRA-6807:
--


I'm happy to make the enhancement and submit a patch so that CQL3 over thrift 
gets the KEY when the user explicitly asks for it in the statement.

Not returning it would be like Oracle not returning the primary key column when 
the user explicitly asks for it.

 Thrift with CQL3 doesn't return key
 ---

 Key: CASSANDRA-6807
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6807
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: windows 7 64bit, jdk 1.7, cassandra 2.0.5
Reporter: Peter
 Attachments: thrift-missing-key.png


 I'm working on adding support for CQL3 to Hector and came across an odd 
 issue. I explicitly include the key in the statement, but the key isn't 
 returned. I've attached a screenshot. Hector's CqlQuery class is doing the 
 following to issue the cql3 call. I'm hoping it's a simple configuration 
 detail I'm missing or parameter I need to set.
 result = cassandra.execute_cql3_query(query, useCompression ? 
 Compression.GZIP : Compression.NONE, getConsistency());
 Looking at org.apache.cassandra.thrift.Cassandra.Client, I don't see anything 
 obvious that would tell me how to tell Cassandra to return the key in the 
 CqlResult or CqlRow. The queries I tried look like this
 select key from myColFamily;



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6807) Thrift with CQL3 doesn't return key

2014-03-06 Thread Peter (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter updated CASSANDRA-6807:
-

Issue Type: Improvement  (was: Bug)

 Thrift with CQL3 doesn't return key
 ---

 Key: CASSANDRA-6807
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6807
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
 Environment: windows 7 64bit, jdk 1.7, cassandra 2.0.5
Reporter: Peter
 Attachments: thrift-missing-key.png


 I'm working on adding support for CQL3 to Hector and came across an odd 
 issue. I explicitly include the key in the statement, but the key isn't 
 returned. I've attached a screenshot. Hector's CqlQuery class is doing the 
 following to issue the cql3 call. I'm hoping it's a simple configuration 
 detail I'm missing or parameter I need to set.
 result = cassandra.execute_cql3_query(query, useCompression ? 
 Compression.GZIP : Compression.NONE, getConsistency());
 Looking at org.apache.cassandra.thrift.Cassandra.Client, I don't see anything 
 obvious that would tell me how to tell Cassandra to return the key in the 
 CqlResult or CqlRow. The queries I tried look like this
 select key from myColFamily;



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-6808) Possibly repairing with verbose nodes

2014-03-06 Thread Yuki Morishita (JIRA)
Yuki Morishita created CASSANDRA-6808:
-

 Summary: Possibly repairing with verbose nodes
 Key: CASSANDRA-6808
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6808
 Project: Cassandra
  Issue Type: Bug
Reporter: Yuki Morishita
Priority: Minor
 Fix For: 2.1 beta2


Incremental repair first sends a prepare message to the replicas (endpoints) of 
all ranges being repaired. Following that, each repair session starts for a 
certain range, but it is given the replicas of all ranges.




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6808) Possibly repairing with verbose nodes

2014-03-06 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-6808:
--

Attachment: 6808-2.1.txt

Attaching simple patch to fix.

 Possibly repairing with verbose nodes
 -

 Key: CASSANDRA-6808
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6808
 Project: Cassandra
  Issue Type: Bug
Reporter: Yuki Morishita
Priority: Minor
 Fix For: 2.1 beta2

 Attachments: 6808-2.1.txt


 Incremental repair first sends a prepare message to the replicas (endpoints) of 
 all ranges being repaired. Following that, each repair session starts for a 
 certain range, but it is given the replicas of all ranges.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-5863) Create a Decompressed Chunk [block] Cache

2014-03-06 Thread Chris Burroughs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13922720#comment-13922720
 ] 

Chris Burroughs commented on CASSANDRA-5863:


FWIW, for design comparison: ZFS L2ARC compression is enabled whenever 
compression is enabled for the dataset on disk, the rationale being along the 
lines of "LZ4 is wicked fast, so why not?". 
http://wiki.illumos.org/display/illumos/L2ARC+Compression 

 Create a Decompressed Chunk [block] Cache
 -

 Key: CASSANDRA-5863
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5863
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: T Jake Luciani
Assignee: Pavel Yaskevich
  Labels: performance
 Fix For: 2.1 beta2


 Currently, for every read, the CRAR reads each compressed chunk into a 
 byte[], sends it to ICompressor, gets back another byte[] and verifies a 
 checksum.  
 This process is where the majority of time is spent in a read request.  
 Before compression, we would have zero-copy of data and could respond 
 directly from the page-cache.
 It would be useful to have some kind of Chunk cache that could speed up this 
 process for hot data. Initially this could be an off-heap cache, but it would 
 be great to put these decompressed chunks onto an SSD so the hot data lives on 
 a fast disk, similar to https://github.com/facebook/flashcache.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-5483) Repair tracing

2014-03-06 Thread Ben Chan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13922721#comment-13922721
 ] 

Ben Chan commented on CASSANDRA-5483:
-

It was more involved than I thought, partly because of heisenbugs and the trace 
state mysteriously not propagating (see {{v06-05}}).

Note: changing JMX can cause mysterious errors if you don't {{ant clean && 
ant}}. I ran into the same kinds of stack traces as you did. It's not 
consistent. Sometimes I can make a JMX change and {{ant}} with no problem.

To make patches simpler, I'm posting full repro code. I also tried to simplify 
the naming. Unfortunately, all the previous patches are in jumbled order due to 
a naming convention that doesn't sort. Fortunately, JIRA seems to have an 
easter egg where you can choose the attachment name by changing the url.

{noformat}
# Uncomment to exactly reproduce state.
#git checkout -b 5483-e30d6dc e30d6dc

# Download all needed patches with consistent names, apply patches, build.
W=https://issues.apache.org/jira/secure/attachment
for url in \
  $W/12630490/5483-v02-01-Trace-filtering-and-tracestate-propagation.patch \
  $W/12630491/5483-v02-02-Put-a-few-traces-parallel-to-the-repair-logging.patch \
  $W/12631967/5483-v03-03-Make-repair-tracing-controllable-via-nodetool.patch \
  $W/12633153/5483-v06-04-Allow-tracing-ttl-to-be-configured.patch \
  $W/12633154/5483-v06-05-Add-a-command-column-to-system_traces.events.patch \
  $W/12633155/5483-v06-06-Fix-interruption-in-tracestate-propagation.patch \
  $W/12633156/ccm-repair-test
do [ -e $(basename $url) ] || curl -sO $url; done &&
git apply 5483-v0[236]-*.patch &&
ant clean && ant

# put on a separate line because you should at least minimally inspect
# arbitrary code before running.
chmod +x ./ccm-repair-test && ./ccm-repair-test
{noformat}

{{ccm-repair-test}} has some options for convenience:
{noformat}
-k keep (don't delete) the created cluster after successful exit.
-r repair only
-R don't repair
-t do traced repair only
-T don't do traced repair (if neither, then do both traced and untraced repair)
{noformat}

The output of a test run:

{noformat}
Current cluster is now: test-5483-QiR
[2014-03-06 10:46:13,617] Nothing to repair for keyspace 'system'
[2014-03-06 10:46:13,646] Starting repair command #1, repairing 2 ranges for 
keyspace s1 (seq=true, full=true)
[2014-03-06 10:46:16,999] Repair session 72648190-a546-11e3-a5f4-f94811c7b860 
for range (-3074457345618258603,3074457345618258602] finished
[2014-03-06 10:46:17,465] Repair session 73ee2ed0-a546-11e3-a5f4-f94811c7b860 
for range (3074457345618258602,-9223372036854775808] finished
[2014-03-06 10:46:17,465] Repair command #1 finished
[2014-03-06 10:46:17,485] Starting repair command #2, repairing 2 ranges for 
keyspace system_traces (seq=true, full=true)
[2014-03-06 10:46:18,782] Repair session 74aaef20-a546-11e3-a5f4-f94811c7b860 
for range (-3074457345618258603,3074457345618258602] finished
[2014-03-06 10:46:18,816] Repair session 74ff0290-a546-11e3-a5f4-f94811c7b860 
for range (3074457345618258602,-9223372036854775808] finished
[2014-03-06 10:46:18,816] Repair command #2 finished
0 rows exported in 0.015 seconds.
test-5483-QiR-system_traces-events.txt
ok
[2014-03-06 10:46:24,128] Nothing to repair for keyspace 'system'
[2014-03-06 10:46:24,166] Starting repair command #3, repairing 2 ranges for 
keyspace s1 (seq=true, full=true)
[2014-03-06 10:46:25,366] Repair session 78a6d4e0-a546-11e3-a5f4-f94811c7b860 
for range (-3074457345618258603,3074457345618258602] finished
[2014-03-06 10:46:25,415] Repair session 79263e10-a546-11e3-a5f4-f94811c7b860 
for range (3074457345618258602,-9223372036854775808] finished
[2014-03-06 10:46:25,415] Repair command #3 finished
[2014-03-06 10:46:25,485] Starting repair command #4, repairing 2 ranges for 
keyspace system_traces (seq=true, full=true)
[2014-03-06 10:46:27,077] Repair session 796f7c10-a546-11e3-a5f4-f94811c7b860 
for range (-3074457345618258603,3074457345618258602] finished
[2014-03-06 10:46:27,120] Repair session 79f240a0-a546-11e3-a5f4-f94811c7b860 
for range (3074457345618258602,-9223372036854775808] finished
[2014-03-06 10:46:27,120] Repair command #4 finished
48 rows exported in 0.104 seconds.
test-5483-QiR-system_traces-events-tr.txt
found source: 127.0.0.1
found thread: Thread-15
found thread: AntiEntropySessions:1
found thread: RepairJobTask:1
found source: 127.0.0.2
found thread: AntiEntropyStage:1
found source: 127.0.0.3
found thread: AntiEntropySessions:2
found thread: Thread-16
found thread: AntiEntropySessions:3
found thread: AntiEntropySessions:4
unique sources traced: 3
unique threads traced: 8
All thread categories accounted for
ok
{noformat}

---

Patch comments:

- {{v06-04}} I did something similar to {{v03-03}}, (almost) no refactoring. 
The implementation is a little messy architecturally.
- {{v06-05}} This is the suggestion you had to add a command 

[jira] [Commented] (CASSANDRA-6311) Add CqlRecordReader to take advantage of native CQL pagination

2014-03-06 Thread Alex Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13922728#comment-13922728
 ] 

Alex Liu commented on CASSANDRA-6311:
-

Moved count inside the Iterator to be safe. Changed IGNORED to REMOTE. v7 is 
attached.
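
That is, roughly this shape (a sketch of the two changes, not the attached v7; 
the remaining LoadBalancingPolicy callbacks and signatures are elided):

{noformat}
import java.util.ArrayList;
import java.util.Collection;
import java.util.Iterator;
import java.util.List;
import com.datastax.driver.core.Host;
import com.datastax.driver.core.HostDistance;
import com.google.common.collect.AbstractIterator;

// The count is per query plan, and non-stick hosts are REMOTE rather than
// IGNORED so the fallbacks newQueryPlan returns remain usable.
public class StickHostPolicySketch
{
    private final String stickHost;
    private final List<Host> hosts = new ArrayList<Host>();

    public StickHostPolicySketch(String stickHost, Collection<Host> initialHosts)
    {
        this.stickHost = stickHost;
        this.hosts.addAll(initialHosts);
    }

    public HostDistance distance(Host host)
    {
        return host.getAddress().getHostName().equals(stickHost)
               ? HostDistance.LOCAL
               : HostDistance.REMOTE;  // was IGNORED
    }

    public Iterator<Host> newQueryPlan()
    {
        final List<Host> plan = new ArrayList<Host>(hosts);
        // Try the stick host first, then the remote fallbacks.
        for (int i = 0; i < plan.size(); i++)
        {
            if (plan.get(i).getAddress().getHostName().equals(stickHost))
            {
                plan.add(0, plan.remove(i));
                break;
            }
        }
        return new AbstractIterator<Host>()
        {
            private int count;  // local to this plan, so concurrent plans don't interfere

            protected Host computeNext()
            {
                return count < plan.size() ? plan.get(count++) : endOfData();
            }
        };
    }
}
{noformat}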

 Add CqlRecordReader to take advantage of native CQL pagination
 --

 Key: CASSANDRA-6311
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6311
 Project: Cassandra
  Issue Type: New Feature
  Components: Hadoop
Reporter: Alex Liu
Assignee: Alex Liu
 Fix For: 2.0.6

 Attachments: 6311-v3-2.0-branch.txt, 6311-v4.txt, 
 6311-v5-2.0-branch.txt, 6311-v6-2.0-branch.txt, 6311-v7.txt, 
 6331-2.0-branch.txt, 6331-v2-2.0-branch.txt


 Since the latest CQL pagination is done and should be more efficient, we need 
 to update CqlPagingRecordReader to use it instead of the custom thrift 
 paging.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6311) Add CqlRecordReader to take advantage of native CQL pagination

2014-03-06 Thread Alex Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Liu updated CASSANDRA-6311:


Attachment: 6311-v7.txt

 Add CqlRecordReader to take advantage of native CQL pagination
 --

 Key: CASSANDRA-6311
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6311
 Project: Cassandra
  Issue Type: New Feature
  Components: Hadoop
Reporter: Alex Liu
Assignee: Alex Liu
 Fix For: 2.0.6

 Attachments: 6311-v3-2.0-branch.txt, 6311-v4.txt, 
 6311-v5-2.0-branch.txt, 6311-v6-2.0-branch.txt, 6311-v7.txt, 
 6331-2.0-branch.txt, 6331-v2-2.0-branch.txt


 Since the latest CQL pagination is done and should be more efficient, we need 
 to update CqlPagingRecordReader to use it instead of the custom thrift 
 paging.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6807) Thrift with CQL3 doesn't return key

2014-03-06 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13922729#comment-13922729
 ] 

Sylvain Lebresne commented on CASSANDRA-6807:
-

I think you misunderstood: if you explicitly ask for a column in the query, 
that column will be included in the result set, but it will just be one of the 
columns in the CqlRow of the CqlResult.rows field. It's there, you can use it, 
but CqlRow.key is always null as far as CQL3 is concerned.

 Thrift with CQL3 doesn't return key
 ---

 Key: CASSANDRA-6807
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6807
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
 Environment: windows 7 64bit, jdk 1.7, cassandra 2.0.5
Reporter: Peter
 Attachments: thrift-missing-key.png


 I'm working on adding support for CQL3 to Hector and came across an odd 
 issue. I explicitly include the key in the statement, but the key isn't 
 returned. I've attached a screenshot. Hector's CqlQuery class is doing the 
 following to issue the cql3 call. I'm hoping it's a simple configuration 
 detail I'm missing or parameter I need to set.
 result = cassandra.execute_cql3_query(query, useCompression ? 
 Compression.GZIP : Compression.NONE, getConsistency());
 Looking at org.apache.cassandra.thrift.Cassandra.Client, I don't see anything 
 obvious that would tell me how to tell Cassandra to return the key in the 
 CqlResult or CqlRow. The queries I tried look like this
 select key from myColFamily;



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6807) Thrift with CQL3 doesn't return key

2014-03-06 Thread Peter (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13922741#comment-13922741
 ] 

Peter commented on CASSANDRA-6807:
--

Just to make sure I don't misinterpret again: the impression I got from 
Jonathan's response is that CqlRow.key is always null and was a special case 
in the past. In other words, it is deprecated going forward.

The change with CQL3 is that KEY is just another column, so thrift does get the 
column. I can live with that and will give it a try. Reading the source code, 
it wasn't at all obvious, which is why I asked. I will add a comment and 
javadoc to ResultSet and submit a patch. At least that way, anyone else 
writing drivers can see the explanation without having to ask the same question 
again.

thanks for taking time to clarify.

 Thrift with CQL3 doesn't return key
 ---

 Key: CASSANDRA-6807
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6807
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
 Environment: windows 7 64bit, jdk 1.7, cassandra 2.0.5
Reporter: Peter
 Attachments: thrift-missing-key.png


 I'm working on adding support for CQL3 to Hector and came across an odd 
 issue. I explicitly include the key in the statement, but the key isn't 
 returned. I've attached a screenshot. Hector's CqlQuery class is doing the 
 following to issue the cql3 call. I'm hoping it's a simple configuration 
 detail I'm missing or parameter I need to set.
 result = cassandra.execute_cql3_query(query, useCompression ? 
 Compression.GZIP : Compression.NONE, getConsistency());
 Looking at org.apache.cassandra.thrift.Cassandra.Client, I don't see anything 
 obvious that would tell me how to tell Cassandra to return the key in the 
 CqlResult or CqlRow. The queries I tried look like this
 select key from myColFamily;



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6591) un-deprecate cache recentHitRate and expose in o.a.c.metrics

2014-03-06 Thread Ian Barfield (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13922755#comment-13922755
 ] 

Ian Barfield commented on CASSANDRA-6591:
-

The "hit mva / rate mva" and "hit rate mva" as you define them are indeed very 
different things. This is most likely intentional, and I would say desirable. 
The red-line "hit rate mva" seems to be applying the idea of recency to a 
metric that already has a recency window (in this case the window is "all 
time"). This strikes me as both unintuitive and unhelpful: after the metric 
has been running for a while it is unlikely to change very fast, and making 
it a moving average will make it change even slower. You could easily have a 100% 
miss ratio for several minutes and never see that line move. That seems to 
defeat the purpose of having a recent hitRate metric.

As for EWMAs being "comparable": I'm not certain of the exact mathematical 
implications of dividing two estimates after sampling, but I strongly suspect 
it would be more than accurate enough for this purpose.
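
A minimal sketch of that division using the Metrics library's meters (the names 
here are illustrative, not from the patch):

{noformat}
import java.util.concurrent.TimeUnit;
import com.yammer.metrics.Metrics;
import com.yammer.metrics.core.Meter;

public class CacheHitRate
{
    // Hits and requests are sampled over the same EWMA windows; dividing the
    // one-minute rates approximates a one-minute hit rate without a
    // hand-rolled recency window.
    private final Meter hits =
        Metrics.newMeter(CacheHitRate.class, "hits", "hits", TimeUnit.SECONDS);
    private final Meter requests =
        Metrics.newMeter(CacheHitRate.class, "requests", "requests", TimeUnit.SECONDS);

    public void record(boolean hit)
    {
        requests.mark();
        if (hit)
            hits.mark();
    }

    public double oneMinuteHitRate()
    {
        double reqRate = requests.oneMinuteRate();
        return reqRate == 0 ? Double.NaN : hits.oneMinuteRate() / reqRate;
    }
}
{noformat}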

 un-deprecate cache recentHitRate and expose in o.a.c.metrics
 

 Key: CASSANDRA-6591
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6591
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Chris Burroughs
Assignee: Chris Burroughs
Priority: Minor
 Attachments: j6591-1.2-v1.txt, j6591-1.2-v2.txt, j6591-1.2-v3.txt


 recentHitRate metrics were not added as part of CASSANDRA-4009 because there 
 is not an obvious way to do it with the Metrics library.  Instead hitRate was 
 added as an all time measurement since node restart.
 This does allow changes in cache rate (aka production performance problems)  
 to be detected.  Ideally there would be 1/5/15 moving averages for the hit 
 rate, but I'm not sure how to calculate that.  Instead I propose updating 
 recentHitRate on a fixed interval and exposing that as a Gauge.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6807) Thrift with CQL3 doesn't return key

2014-03-06 Thread Peter (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13922777#comment-13922777
 ] 

Peter commented on CASSANDRA-6807:
--

In the interest of helping other people who work on drivers, I will add 
javadocs and comments to the ResultSet class and submit a patch over the 
weekend. Once I have the patch, I will close the issue.

To get the KEY over thrift, driver implementors need to find the KEY column and 
extract its value. In the case of hector, I find the index of the KEY column 
once and reuse it for all subsequent rows to avoid iterating over the columns 
again. Every query finds the index of the KEY column once, since users 
might not always put the KEY column at the front of the select statement.
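
A sketch of that lookup against the thrift-generated types (assuming the 
partition key column is literally named key, as in the queries above):

{noformat}
import java.nio.charset.StandardCharsets;
import java.util.List;
import org.apache.cassandra.thrift.Column;
import org.apache.cassandra.thrift.CqlResult;
import org.apache.cassandra.thrift.CqlRow;

public class KeyColumnLookup
{
    // Locate the "key" column once, from the first row's column list.
    public static int keyColumnIndex(CqlResult result)
    {
        if (result.getRows().isEmpty())
            return -1;
        List<Column> columns = result.getRows().get(0).getColumns();
        for (int i = 0; i < columns.size(); i++)
            if ("key".equals(new String(columns.get(i).getName(), StandardCharsets.UTF_8)))
                return i;
        return -1;
    }

    // Reuse the cached index for every subsequent row.
    public static byte[] keyOf(CqlRow row, int keyIndex)
    {
        return row.getColumns().get(keyIndex).getValue();
    }
}
{noformat}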

 Thrift with CQL3 doesn't return key
 ---

 Key: CASSANDRA-6807
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6807
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
 Environment: windows 7 64bit, jdk 1.7, cassandra 2.0.5
Reporter: Peter
 Attachments: thrift-missing-key.png


 I'm working on adding support for CQL3 to Hector and came across an odd 
 issue. I explicitly include the key in the statement, but the key isn't 
 returned. I've attached a screenshot. Hector's CqlQuery class is doing the 
 following to issue the cql3 call. I'm hoping it's a simple configuration 
 detail I'm missing or parameter I need to set.
 result = cassandra.execute_cql3_query(query, useCompression ? 
 Compression.GZIP : Compression.NONE, getConsistency());
 Looking at org.apache.cassandra.thrift.Cassandra.Client, I don't see anything 
 obvious that would tell me how to tell Cassandra to return the key in the 
 CqlResult or CqlRow. The queries I tried look like this
 select key from myColFamily;



--
This message was sent by Atlassian JIRA
(v6.2#6252)


git commit: Auto reload GossipingPropertyFileSnitch config

2014-03-06 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 728f677b0 -> f67b7a477


Auto reload GossipingPropertyFileSnitch config

Patch by Danield Shelepov, reviewed by Tyler Hobbs for CASSANDRA-5897


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f67b7a47
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f67b7a47
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f67b7a47

Branch: refs/heads/cassandra-2.1
Commit: f67b7a477ae08fe7c8be2bb61eeecb0a7cc55e62
Parents: 728f677
Author: Tyler Hobbs ty...@datastax.com
Authored: Thu Mar 6 11:22:47 2014 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Thu Mar 6 11:22:47 2014 -0600

--
 CHANGES.txt |   1 +
 .../org/apache/cassandra/locator/Ec2Snitch.java |   2 +-
 .../locator/GossipingPropertyFileSnitch.java| 104 ---
 .../cassandra/locator/PropertyFileSnitch.java   |   2 +-
 .../cassandra/locator/SnitchProperties.java |  10 +-
 test/conf/cassandra-rackdc.properties.mod   |  17 +++
 .../GossipingPropertyFileSnitchTest.java|  59 +++
 .../YamlFileNetworkTopologySnitchTest.java  |   2 +-
 8 files changed, 175 insertions(+), 22 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f67b7a47/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index b933bad..af7f2fd 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.0-beta2
+ * Auto reload GossipingPropertyFileSnitch config (CASSANDRA-5897)
  * Fix overflow of memtable_total_space_in_mb (CASSANDRA-6573)
  * Fix ABTC NPE (CASSANDRA-6692)
  * Allow nodetool to use a file or prompt for password (CASSANDRA-6660)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f67b7a47/src/java/org/apache/cassandra/locator/Ec2Snitch.java
--
diff --git a/src/java/org/apache/cassandra/locator/Ec2Snitch.java 
b/src/java/org/apache/cassandra/locator/Ec2Snitch.java
index 216a224..59eb27b 100644
--- a/src/java/org/apache/cassandra/locator/Ec2Snitch.java
+++ b/src/java/org/apache/cassandra/locator/Ec2Snitch.java
@@ -62,7 +62,7 @@ public class Ec2Snitch extends AbstractNetworkTopologySnitch
 if (ec2region.endsWith("1"))
 ec2region = az.substring(0, az.length() - 3);
 
-String datacenterSuffix = SnitchProperties.get("dc_suffix", "");
+String datacenterSuffix = (new SnitchProperties()).get("dc_suffix", "");
 ec2region = ec2region.concat(datacenterSuffix);
 logger.info("EC2Snitch using region: {}, zone: {}.", ec2region, ec2zone);
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f67b7a47/src/java/org/apache/cassandra/locator/GossipingPropertyFileSnitch.java
--
diff --git 
a/src/java/org/apache/cassandra/locator/GossipingPropertyFileSnitch.java 
b/src/java/org/apache/cassandra/locator/GossipingPropertyFileSnitch.java
index 83c1efe..720e804 100644
--- a/src/java/org/apache/cassandra/locator/GossipingPropertyFileSnitch.java
+++ b/src/java/org/apache/cassandra/locator/GossipingPropertyFileSnitch.java
@@ -19,6 +19,7 @@
 package org.apache.cassandra.locator;
 
 import java.net.InetAddress;
+import java.util.concurrent.atomic.AtomicReference;
 import java.util.Map;
 
 import org.slf4j.Logger;
@@ -29,8 +30,10 @@ import 
org.apache.cassandra.exceptions.ConfigurationException;
 import org.apache.cassandra.gms.ApplicationState;
 import org.apache.cassandra.gms.EndpointState;
 import org.apache.cassandra.gms.Gossiper;
-import org.apache.cassandra.utils.FBUtilities;
 import org.apache.cassandra.service.StorageService;
+import org.apache.cassandra.utils.FBUtilities;
+import org.apache.cassandra.utils.ResourceWatcher;
+import org.apache.cassandra.utils.WrappedRunnable;
 
 
 public class GossipingPropertyFileSnitch extends 
AbstractNetworkTopologySnitch// implements IEndpointStateChangeSubscriber
@@ -38,23 +41,30 @@ public class GossipingPropertyFileSnitch extends 
AbstractNetworkTopologySnitch//
 private static final Logger logger = 
LoggerFactory.getLogger(GossipingPropertyFileSnitch.class);
 
 private PropertyFileSnitch psnitch;
-private String myDC;
-private String myRack;
+
+private volatile String myDC;
+private volatile String myRack;
+private volatile boolean preferLocal;
+private AtomicReference<ReconnectableSnitchHelper> snitchHelperReference;
+private volatile boolean gossipStarted;
+
 private Map<InetAddress, Map<String, String>> savedEndpoints;
-private String DEFAULT_DC = "UNKNOWN_DC";
-private String DEFAULT_RACK = "UNKNOWN_RACK";
-private final boolean preferLocal;
+private static final 

[jira] [Commented] (CASSANDRA-6808) Possibly repairing with verbose nodes

2014-03-06 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13922790#comment-13922790
 ] 

Marcus Eriksson commented on CASSANDRA-6808:


+1

 Possibly repairing with verbose nodes
 -

 Key: CASSANDRA-6808
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6808
 Project: Cassandra
  Issue Type: Bug
Reporter: Yuki Morishita
Assignee: Yuki Morishita
Priority: Minor
 Fix For: 2.1 beta2

 Attachments: 6808-2.1.txt


 Incremental repair first sends a prepare message to the replicas (endpoints) of 
 all ranges being repaired. Following that, each repair session starts for a 
 certain range, but it is given the replicas of all ranges.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6807) Thrift with CQL3 doesn't return key

2014-03-06 Thread Peter (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13922799#comment-13922799
 ] 

Peter commented on CASSANDRA-6807:
--

I've checked in the javadoc to my github https://github.com/woolfel/cassandra

the change is to cql3.ResultSet class
https://github.com/woolfel/cassandra/commit/e35a3ffed64654124dd7059a3e3ee55fcb5bc112

hopefully other driver developers will find it useful.

 Thrift with CQL3 doesn't return key
 ---

 Key: CASSANDRA-6807
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6807
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
 Environment: windows 7 64bit, jdk 1.7, cassandra 2.0.5
Reporter: Peter
  Labels: javadoc
 Fix For: 2.0.6

 Attachments: thrift-missing-key.png


 I'm working on adding support for CQL3 to Hector and came across an odd 
 issue. I explicitly include the key in the statement, but the key isn't 
 returned. I've attached a screenshot. Hector's CqlQuery class is doing the 
 following to issue the cql3 call. I'm hoping it's a simple configuration 
 detail I'm missing or parameter I need to set.
 result = cassandra.execute_cql3_query(query, useCompression ? 
 Compression.GZIP : Compression.NONE, getConsistency());
 Looking at org.apache.cassandra.thrift.Cassandra.Client, I don't see anything 
 obvious that would tell me how to tell Cassandra to return the key in the 
 CqlResult or CqlRow. The queries I tried look like this
 select key from myColFamily;



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[1/4] git commit: update cobertura version patch by Ed Capriolo for CASSANDRA-6800

2014-03-06 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/trunk e30d6dca5 -> 937189237


update cobertura version
patch by Ed Capriolo for CASSANDRA-6800


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c2ec94b3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c2ec94b3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c2ec94b3

Branch: refs/heads/trunk
Commit: c2ec94b3548772d55fd70736c9523cad3b68c438
Parents: 5ef53e6
Author: Jonathan Ellis jbel...@apache.org
Authored: Thu Mar 6 09:40:40 2014 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Thu Mar 6 09:40:46 2014 -0600

--
 build.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c2ec94b3/build.xml
--
diff --git a/build.xml b/build.xml
index 9972aa2..70f514c 100644
--- a/build.xml
+++ b/build.xml
@@ -92,7 +92,7 @@
     <property name="test.long.timeout" value="60" />
 
     <!-- http://cobertura.sourceforge.net/ -->
-    <property name="cobertura.version" value="1.9.4.1"/>
+    <property name="cobertura.version" value="2.0.2"/>
     <property name="cobertura.build.dir" value="${build.dir}/cobertura"/>
     <property name="cobertura.report.dir" value="${cobertura.build.dir}/report"/>
     <property name="cobertura.classes.dir" value="${cobertura.build.dir}/classes"/>



[3/4] git commit: Auto reload GossipingPropertyFileSnitch config

2014-03-06 Thread tylerhobbs
Auto reload GossipingPropertyFileSnitch config

Patch by Daniel Shelepov, reviewed by Tyler Hobbs for CASSANDRA-5897


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f67b7a47
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f67b7a47
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f67b7a47

Branch: refs/heads/trunk
Commit: f67b7a477ae08fe7c8be2bb61eeecb0a7cc55e62
Parents: 728f677
Author: Tyler Hobbs ty...@datastax.com
Authored: Thu Mar 6 11:22:47 2014 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Thu Mar 6 11:22:47 2014 -0600

--
 CHANGES.txt |   1 +
 .../org/apache/cassandra/locator/Ec2Snitch.java |   2 +-
 .../locator/GossipingPropertyFileSnitch.java| 104 ---
 .../cassandra/locator/PropertyFileSnitch.java   |   2 +-
 .../cassandra/locator/SnitchProperties.java |  10 +-
 test/conf/cassandra-rackdc.properties.mod   |  17 +++
 .../GossipingPropertyFileSnitchTest.java|  59 +++
 .../YamlFileNetworkTopologySnitchTest.java  |   2 +-
 8 files changed, 175 insertions(+), 22 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f67b7a47/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index b933bad..af7f2fd 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.0-beta2
+ * Auto reload GossipingPropertyFileSnitch config (CASSANDRA-5897)
  * Fix overflow of memtable_total_space_in_mb (CASSANDRA-6573)
  * Fix ABTC NPE (CASSANDRA-6692)
  * Allow nodetool to use a file or prompt for password (CASSANDRA-6660)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f67b7a47/src/java/org/apache/cassandra/locator/Ec2Snitch.java
--
diff --git a/src/java/org/apache/cassandra/locator/Ec2Snitch.java 
b/src/java/org/apache/cassandra/locator/Ec2Snitch.java
index 216a224..59eb27b 100644
--- a/src/java/org/apache/cassandra/locator/Ec2Snitch.java
+++ b/src/java/org/apache/cassandra/locator/Ec2Snitch.java
@@ -62,7 +62,7 @@ public class Ec2Snitch extends AbstractNetworkTopologySnitch
         if (ec2region.endsWith("1"))
             ec2region = az.substring(0, az.length() - 3);
 
-        String datacenterSuffix = SnitchProperties.get("dc_suffix", "");
+        String datacenterSuffix = (new SnitchProperties()).get("dc_suffix", "");
         ec2region = ec2region.concat(datacenterSuffix);
         logger.info("EC2Snitch using region: {}, zone: {}.", ec2region, ec2zone);
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f67b7a47/src/java/org/apache/cassandra/locator/GossipingPropertyFileSnitch.java
--
diff --git 
a/src/java/org/apache/cassandra/locator/GossipingPropertyFileSnitch.java 
b/src/java/org/apache/cassandra/locator/GossipingPropertyFileSnitch.java
index 83c1efe..720e804 100644
--- a/src/java/org/apache/cassandra/locator/GossipingPropertyFileSnitch.java
+++ b/src/java/org/apache/cassandra/locator/GossipingPropertyFileSnitch.java
@@ -19,6 +19,7 @@
 package org.apache.cassandra.locator;
 
 import java.net.InetAddress;
+import java.util.concurrent.atomic.AtomicReference;
 import java.util.Map;
 
 import org.slf4j.Logger;
@@ -29,8 +30,10 @@ import 
org.apache.cassandra.exceptions.ConfigurationException;
 import org.apache.cassandra.gms.ApplicationState;
 import org.apache.cassandra.gms.EndpointState;
 import org.apache.cassandra.gms.Gossiper;
-import org.apache.cassandra.utils.FBUtilities;
 import org.apache.cassandra.service.StorageService;
+import org.apache.cassandra.utils.FBUtilities;
+import org.apache.cassandra.utils.ResourceWatcher;
+import org.apache.cassandra.utils.WrappedRunnable;
 
 
 public class GossipingPropertyFileSnitch extends 
AbstractNetworkTopologySnitch// implements IEndpointStateChangeSubscriber
@@ -38,23 +41,30 @@ public class GossipingPropertyFileSnitch extends 
AbstractNetworkTopologySnitch//
 private static final Logger logger = 
LoggerFactory.getLogger(GossipingPropertyFileSnitch.class);
 
 private PropertyFileSnitch psnitch;
-private String myDC;
-private String myRack;
+
+private volatile String myDC;
+private volatile String myRack;
+private volatile boolean preferLocal;
+private AtomicReference<ReconnectableSnitchHelper> snitchHelperReference;
+private volatile boolean gossipStarted;
+
 private Map<InetAddress, Map<String, String>> savedEndpoints;
-private String DEFAULT_DC = "UNKNOWN_DC";
-private String DEFAULT_RACK = "UNKNOWN_RACK";
-private final boolean preferLocal;
+private static final String DEFAULT_DC = "UNKNOWN_DC";
+private static final String DEFAULT_RACK = "UNKNOWN_RACK";
 
+   
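
The reload path, roughly, given the ResourceWatcher and WrappedRunnable imports 
added above: a sketch under those assumptions, with a hypothetical 
reloadConfiguration() method, not the literal patch code:

{noformat}
// Illustrative only: register a periodic check of cassandra-rackdc.properties;
// when the file changes, re-read DC/rack/prefer_local into the volatile fields.
private void startConfigWatcher()
{
    Runnable reload = new WrappedRunnable()
    {
        protected void runMayThrow() throws ConfigurationException
        {
            reloadConfiguration(); // hypothetical: re-reads the properties file
        }
    };
    // poll the classpath resource for changes every 60 seconds
    ResourceWatcher.watch("cassandra-rackdc.properties", reload, 60 * 1000);
}
{noformat}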

[2/4] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-03-06 Thread tylerhobbs
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/728f677b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/728f677b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/728f677b

Branch: refs/heads/trunk
Commit: 728f677b0fcc31e843c0b808f4201e32dcf2c216
Parents: a052a91 c2ec94b
Author: Jonathan Ellis jbel...@apache.org
Authored: Thu Mar 6 09:40:56 2014 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Thu Mar 6 09:40:56 2014 -0600

--
 build.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/728f677b/build.xml
--



[4/4] git commit: Merge branch 'cassandra-2.1' into trunk

2014-03-06 Thread tylerhobbs
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/93718923
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/93718923
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/93718923

Branch: refs/heads/trunk
Commit: 9371892374ddb8cc1d495ec4d38554603ec78a3a
Parents: e30d6dc f67b7a4
Author: Tyler Hobbs ty...@datastax.com
Authored: Thu Mar 6 11:55:59 2014 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Thu Mar 6 11:55:59 2014 -0600

--
 CHANGES.txt |   1 +
 build.xml   |   2 +-
 .../org/apache/cassandra/locator/Ec2Snitch.java |   2 +-
 .../locator/GossipingPropertyFileSnitch.java| 104 ---
 .../cassandra/locator/PropertyFileSnitch.java   |   2 +-
 .../cassandra/locator/SnitchProperties.java |  10 +-
 test/conf/cassandra-rackdc.properties.mod   |  17 +++
 .../GossipingPropertyFileSnitchTest.java|  59 +++
 .../YamlFileNetworkTopologySnitchTest.java  |   2 +-
 9 files changed, 176 insertions(+), 23 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/93718923/CHANGES.txt
--
diff --cc CHANGES.txt
index 7cb3e97,af7f2fd..09ad14b
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,9 -1,5 +1,10 @@@
 +3.0
 + * Remove CQL2 (CASSANDRA-5918)
 + * add Thrift get_multi_slice call (CASSANDRA-6757)
 +
 +
  2.1.0-beta2
+  * Auto reload GossipingPropertyFileSnitch config (CASSANDRA-5897)
   * Fix overflow of memtable_total_space_in_mb (CASSANDRA-6573)
   * Fix ABTC NPE (CASSANDRA-6692)
   * Allow nodetool to use a file or prompt for password (CASSANDRA-6660)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/93718923/build.xml
--



[jira] [Commented] (CASSANDRA-6689) Partially Off Heap Memtables

2014-03-06 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13922830#comment-13922830
 ] 

Pavel Yaskevich commented on CASSANDRA-6689:


You probably still don't understand my point, so let me clarify: I only care 
about 3 things: maintainability, consistency, performance. This is a big chunk 
of code which somebody has to maintain, and which allows inconsistent style (it 
can be done with referrer = null or maybe some other way, maybe passing RefAction 
as null in the argument, or maybe RefAction.impossible(), but one really needs to 
look through it to make sure it checks for null everywhere, and so on), 
and it brings its own conventions (e.g. the _ prefix), while also adding poor 
performance.  Before that is addressed, I'm -1 on this. vnodes were a big chunk 
of work too, but people were able to split it into a roadmap and finish it 
successfully, so I don't see any reason why we can't do the same here.

bq. Any scheme that copies data will inherently incur larger GC pressure, as we 
then copy for memtable reads as well as disk reads. Object overhead is in fact 
larger than the payload for many workloads, so even if we have arenas this 
effect is not eliminated or even appreciably ameliorated.

For disk reads we have to copy even with mmap, so that we don't keep any 
references at deletion time and files can be safely deallocated. So why not copy 
directly to the memory allocated by the pool?... Object overhead would stay 
inside ParNew bounds (for < p999), so object allocation is relatively cheap 
compared to everything else; that's the goal of the JVM as a whole.

bq. Temporary reader space (and hence your approach) is not predictable: it is 
not proportional to the number of readers, but to the number and size of 
columns the readers read. In fact it is larger than this, as we probably have 
to copy anything we might want to use (given the way the code is encapsulated, 
this is what I do currently when copying on-heap - anything else would 
introduce notable complexity), not just columns that end up in the result set.

It doesn't matter how much emphasis you put here, it won't make this argument 
any stronger: the main idea is to have those pools of a fixed size, which would 
create back-pressure to the client in situations of heavy load, which is 
exactly what operators want - going gradually slower without extreme latency 
disturbance.
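
To make that concrete, a minimal sketch of the kind of fixed-size pool meant 
here (a simple Semaphore gate over a byte budget is my assumption; this is not 
code from either branch):

{noformat}
import java.nio.ByteBuffer;
import java.util.concurrent.Semaphore;

// A fixed byte budget guarded by a semaphore: acquire() blocks under heavy
// load instead of allocating without bound, so clients slow down gradually.
public final class FixedMemoryPool
{
    private final Semaphore budget;

    public FixedMemoryPool(int capacityBytes)
    {
        budget = new Semaphore(capacityBytes);
    }

    public ByteBuffer acquire(int size) throws InterruptedException
    {
        budget.acquire(size);              // blocks when the pool is exhausted
        return ByteBuffer.allocateDirect(size);
    }

    public void release(ByteBuffer buffer)
    {
        budget.release(buffer.capacity()); // return the bytes to the budget
    }
}
{noformat}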

bq. We appear to be in agreement that your approach has higher costs associated 
with it. Further, copying potentially GB/s of (randomly located) data around 
destroys the CPU cache, reduces peak memory bandwidth by inducing strobes, 
consumes bandwidth directly, wastes CPU cycles waiting for the random lookups; 
all to no good purpose. We should be reducing these costs, not introducing more.

Let's say we live in the modern NUMA world, so we are going to do the following: 
pin thread groups to CPU cores so we have a fixed scope of allocation for 
different things; that way there is no significant bus pressure from the copy 
relative to everything else JVM/Cassandra does with memory (not even significant 
cache coherency traffic).

bq. It is simply not clear, despite your assertion of clarity, how you would 
reclaim any freed memory without separate GC (what else is GC but this 
reclamation?), however you want to call it, when it will be interspersed with 
non-freed memory, nor how you would guard the non-atomic copying (ref-counting, 
OpOrder, Lock: what?). Without this information it is not clear to me that it 
would be any simpler either.

The same way as jemalloc or any other allocator does it; at least that is not 
reinventing the wheel.

bq. Pauseless operation, so improved predictability

What do you mean by this? We still live on the JVM, do we not? Also, what would 
it do in a low-memory situation? Allocate from the heap? Wait? That is not 
pauseless operation.

bq. Lock-freedom and low overhead, so we move closer to being able to answer 
queries directly from the messaging threads themselves, improving latency and 
throughput

We won't be able to answer queries directly from the messaging threads for a 
number of reasons not even indirectly related to your approach; at the least, 
it would break SEDA, which is also supposed to be a safeguard against 
over-utilization.

bq. An alternative approach needs, IMO, to demonstrate a clear superiority to 
the patch that is already available, especially when it will incur further work 
to produce. It is not clear to me that your solution is superior in any regard, 
nor any simpler. It also seems to be demonstrably less predictable and more 
costly, so I struggle to see how it could be considered preferable.

Overall, I'm not questioning the idea: being able to track what goes where 
would be great. I'm questioning the implementation and its trade-offs compared 
to other approaches.




 Partially Off Heap Memtables
 

  

[jira] [Commented] (CASSANDRA-6689) Partially Off Heap Memtables

2014-03-06 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13922861#comment-13922861
 ] 

Benedict commented on CASSANDRA-6689:
-

bq. Before that is addressed, I'm -1 on this

These are already addressed in CASSANDRA-6694.

bq. Object overhead would stay inside ParNew bounds (for < p999) 

The more we rely on staying within ParNew, the more often we are going to 
exceed it; and reducing the number of ParNew runs is also a good thing. You 
said you have 300ms ParNew pauses, occurring every second? So reducing the max 
latency and total latency is surely a good thing?

bq.  as the main idea is to have those pools of a fixed size

How does this work without knowing the maximum size of a result set? We can't 
have a client block forever because we didn't provide enough room in the pools. 
Potentially we could have it error, but this seems inelegant to me, when it can 
be avoided. It also seems a suboptimal way to introduce back pressure, since it 
only affects concurrent reads / large reads. We should raise a ticket 
specifically to address back pressure, IMO, and try to come up with a good all 
round solution to the problem.

bq. Let's say we live in the modern NUMA world, so we are going to do the 
following: pin thread groups to CPU cores so we have a fixed scope of 
allocation for different things; that way there is no significant bus pressure 
from the copy relative to everything else JVM/Cassandra does with memory

It would be great to be more NUMA aware, but this is not about traffic over the 
interconnect, but about the arrays/memory banks themselves, and it doesn't 
address any of the other negative consequences. You'll struggle to get more 
than a few GB/s bandwidth out of a modern CPU given that we are copying object 
trees (even shallow ones - they're still randomly distributed), and we don't 
want to waste any of that if we can avoid it.

bq. What do you mean by this, we still leave on the JVM, do we not? Also what 
would it do in the low memory situation? allocate from heap? wait? This is not 
pauseless operation.

I did not mean to imply pauseless globally, but the memory reclaim operations 
introduced here are pauseless, thus reducing pauses overall, as whenever we 
would have had a pause from ParNew/FullGC to reclaim, we would not here.

bq. We won't be able to answer queries directly from the messaging threads for 
a number of reasons not even indirectly related to your approach; at the least, 
it would break SEDA, which is also supposed to be a safeguard against 
over-utilization.

I'm not sure why you think this would be a bad thing. It would only help for 
CL=1, but we are often benchmarked using this, so it's an important thing to be 
fast on if possible, and there are definitely a number of our users who are 
okay with CL=1 for whom faster responses would be great. Faster query answering 
should reduce over-utilisation, assuming some back-pressure built in to 
MessagingService or the co-ordinator managing its outstanding proxied requests 
to ensure it isn't overwhelmed by the responses.

bq. The same way as jemalloc or any other allocator does it; at least that is 
not reinventing the wheel.

Do you mean you would use jemalloc for every allocation? In which case there 
are further costs incurred for crossing the JNA barrier so frequently, almost 
certainly outweighing any benefit to using jemalloc. Otherwise we would need to 
maintain free-lists ourselves, or perform compacting GC. Personally I think 
compacting GC is actually much simpler.
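
As a toy illustration of why compacting can be simpler than free-lists (my 
reading of the trade-off, not code from either branch): allocation is a bounds 
check plus a pointer bump, and reclaim is a single pass copying live extents 
into a fresh region:

{noformat}
final class CompactingRegion
{
    private byte[] bytes;
    private int next = 0;

    CompactingRegion(int size) { bytes = new byte[size]; }

    // Allocation is just a bounds check and a pointer bump.
    int allocate(int size)
    {
        if (next + size > bytes.length)
            return -1; // region full; caller compacts or starts a new region
        int off = next;
        next += size;
        return off;
    }

    // "GC" is one pass: copy the live extents, in order, into a fresh region.
    // liveOffsets/liveSizes would come from whatever tracks reachable cells.
    int[] compact(int[] liveOffsets, int[] liveSizes)
    {
        byte[] fresh = new byte[bytes.length];
        int[] newOffsets = new int[liveOffsets.length];
        int pos = 0;
        for (int i = 0; i < liveOffsets.length; i++)
        {
            System.arraycopy(bytes, liveOffsets[i], fresh, pos, liveSizes[i]);
            newOffsets[i] = pos;
            pos += liveSizes[i];
        }
        bytes = fresh;
        next = pos;
        return newOffsets; // callers must rewrite their references
    }
}
{noformat}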



 Partially Off Heap Memtables
 

 Key: CASSANDRA-6689
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6689
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Benedict
Assignee: Benedict
 Fix For: 2.1 beta2

 Attachments: CASSANDRA-6689-small-changes.patch


 Move the contents of ByteBuffers off-heap for records written to a memtable.
 (See comments for details)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6285) 2.0 HSHA server introduces corrupt data

2014-03-06 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13922897#comment-13922897
 ] 

Jonathan Ellis commented on CASSANDRA-6285:
---

[~mshuler] Can you test hsha with Viktor's jar above?  
(https://issues.apache.org/jira/browse/CASSANDRA-6285?focusedCommentId=13917950&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13917950)

I want to know if
# you can reproduce with a single node
# if not, if you can reproduce with multiple nodes
# assuming either 1 or 2, if you can still reproduce after applying Pavel's 
heap allocation path

 2.0 HSHA server introduces corrupt data
 ---

 Key: CASSANDRA-6285
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6285
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 4 nodes, shortly updated from 1.2.11 to 2.0.2
Reporter: David Sauer
Assignee: Pavel Yaskevich
Priority: Critical
 Fix For: 2.0.6

 Attachments: CASSANDRA-6285-disruptor-heap.patch, compaction_test.py


 After altering everything to LCS, the table OpsCenter.rollups60 and one other 
 non-OpsCenter table got stuck with everything hanging around in L0.
 The compaction started and ran until the logs showed this:
 ERROR [CompactionExecutor:111] 2013-11-01 19:14:53,865 CassandraDaemon.java 
 (line 187) Exception in thread Thread[CompactionExecutor:111,1,RMI Runtime]
 java.lang.RuntimeException: Last written key 
 DecoratedKey(1326283851463420237, 
 37382e34362e3132382e3139382d6a7576616c69735f6e6f72785f696e6465785f323031335f31305f30382d63616368655f646f63756d656e74736c6f6f6b75702d676574426c6f6f6d46696c746572537061636555736564)
  >= current key DecoratedKey(954210699457429663, 
 37382e34362e3132382e3139382d6a7576616c69735f6e6f72785f696e6465785f323031335f31305f30382d63616368655f646f63756d656e74736c6f6f6b75702d676574546f74616c4469736b5370616365557365640b0f)
  writing into 
 /var/lib/cassandra/data/OpsCenter/rollups60/OpsCenter-rollups60-tmp-jb-58656-Data.db
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:141)
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:164)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:160)
   at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
   at 
 org.apache.cassandra.db.compaction.CompactionManager$6.runMayThrow(CompactionManager.java:296)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:724)
 Moving back to STC worked to keep the compactions running.
 Especially my own table I would like to move to LCS.
 After a major compaction with STC, the move to LCS fails with the same 
 Exception.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6285) 2.0 HSHA server introduces corrupt data

2014-03-06 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-6285:
--

Tester: Michael Shuler

 2.0 HSHA server introduces corrupt data
 ---

 Key: CASSANDRA-6285
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6285
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 4 nodes, shortly updated from 1.2.11 to 2.0.2
Reporter: David Sauer
Assignee: Pavel Yaskevich
Priority: Critical
 Fix For: 2.0.6

 Attachments: CASSANDRA-6285-disruptor-heap.patch, compaction_test.py





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6285) 2.0 HSHA server introduces corrupt data

2014-03-06 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13922903#comment-13922903
 ] 

Michael Shuler commented on CASSANDRA-6285:
---

Sure - let me see what I can find out.

 2.0 HSHA server introduces corrupt data
 ---

 Key: CASSANDRA-6285
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6285
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 4 nodes, shortly updated from 1.2.11 to 2.0.2
Reporter: David Sauer
Assignee: Pavel Yaskevich
Priority: Critical
 Fix For: 2.0.6

 Attachments: CASSANDRA-6285-disruptor-heap.patch, compaction_test.py





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[Cassandra Wiki] Update of Committers by TylerHobbs

2014-03-06 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on Cassandra Wiki for 
change notification.

The Committers page has been changed by TylerHobbs:
https://wiki.apache.org/cassandra/Committers?action=diff&rev1=38&rev2=39

  ||Aleksey Yeschenko||Nov 2012||Datastax|| ||
  ||Jason Brown||Feb 2013||Netflix|| ||
  ||Marcus Eriksson||April 2013||Datastax|| ||
+ ||Tyler Hobbs||March 2014||Datastax|| ||
  
  {{https://c.statcounter.com/9397521/0/fe557aad/1/|stats}}
  


[jira] [Assigned] (CASSANDRA-6065) Use CQL3 internally in schema code and HHOM

2014-03-06 Thread Mikhail Stepura (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Stepura reassigned CASSANDRA-6065:
--

Assignee: Mikhail Stepura

 Use CQL3 internally in schema code and HHOM
 ---

 Key: CASSANDRA-6065
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6065
 Project: Cassandra
  Issue Type: Improvement
Reporter: Aleksey Yeschenko
Assignee: Mikhail Stepura
Priority: Minor
 Fix For: 2.1 beta2


 We mostly use CQL3 internally everywhere now, except HHOM and schema-related 
 code. We should switch to CQL3+the new paging for HHOM to replace the current 
 ugliness and to CQL3 for all schema-related serialization and deserialization.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[Cassandra Wiki] Update of Committers by AlekseyYeschenko

2014-03-06 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on Cassandra Wiki for 
change notification.

The Committers page has been changed by AlekseyYeschenko:
https://wiki.apache.org/cassandra/Committers?action=diff&rev1=39&rev2=40

  ||Dave Brosius||May 2012||Independent||Also a 
[[http://commons.apache.org|Commons]] committer||
  ||Yuki Morishita||May 2012||Datastax|| ||
  ||Aleksey Yeschenko||Nov 2012||Datastax|| ||
- ||Jason Brown||Feb 2013||Netflix|| ||
+ ||Jason Brown||Feb 2013||Apple|| ||
  ||Marcus Eriksson||April 2013||Datastax|| ||
+ ||Mikhail Stepura||March 2014||nScaled|| ||
  ||Tyler Hobbs||March 2014||Datastax|| ||
  
  {{https://c.statcounter.com/9397521/0/fe557aad/1/|stats}}


[Cassandra Wiki] Update of Committers by AlekseyYeschenko

2014-03-06 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on Cassandra Wiki for 
change notification.

The Committers page has been changed by AlekseyYeschenko:
https://wiki.apache.org/cassandra/Committers?action=diff&rev1=40&rev2=41

  ||Aleksey Yeschenko||Nov 2012||Datastax|| ||
  ||Jason Brown||Feb 2013||Apple|| ||
  ||Marcus Eriksson||April 2013||Datastax|| ||
- ||Mikhail Stepura||March 2014||nScaled|| ||
+ ||Mikhail Stepura||January 2014||nScaled|| ||
  ||Tyler Hobbs||March 2014||Datastax|| ||
  
  {{https://c.statcounter.com/9397521/0/fe557aad/1/|stats}}


[jira] [Created] (CASSANDRA-6809) Compressed Commit Log

2014-03-06 Thread Benedict (JIRA)
Benedict created CASSANDRA-6809:
---

 Summary: Compressed Commit Log
 Key: CASSANDRA-6809
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6809
 Project: Cassandra
  Issue Type: Improvement
Reporter: Benedict
Assignee: Benedict
Priority: Minor
 Fix For: 3.0


It seems an unnecessary oversight that we don't compress the commit log. Doing 
so should improve throughput, but some care will need to be taken to ensure we 
use as much of a segment as possible. I propose decoupling the writing of the 
records from the segments. Basically write into a (queue of) DirectByteBuffer, 
and have the sync thread compress, say, ~64K chunks every X MB written to the 
CL (where X is ordinarily CLS size), and then pack as many of the compressed 
chunks into a CLS as possible.
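
A rough sketch of the per-chunk step, assuming java.util.zip.Deflater as a 
stand-in codec and an output buffer sized for the worst case (so the loop 
terminates); this is illustrative, not a patch:

{noformat}
import java.util.zip.Deflater;

class ChunkCompressor
{
    // Compress one ~64K chunk of pending commit log bytes; the sync thread
    // would call this per chunk and pack as many compressed chunks as fit
    // into the current segment. 'out' must be sized for the worst case
    // (chunk length plus a small overhead), otherwise the loop cannot finish.
    static int compressChunk(Deflater deflater, byte[] chunk, int length, byte[] out)
    {
        deflater.reset();
        deflater.setInput(chunk, 0, length);
        deflater.finish();
        int written = 0;
        while (!deflater.finished())
            written += deflater.deflate(out, written, out.length - written);
        return written; // compressed size to record alongside the chunk
    }
}
{noformat}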



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6689) Partially Off Heap Memtables

2014-03-06 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13922983#comment-13922983
 ] 

Pavel Yaskevich commented on CASSANDRA-6689:


bq. These are already addressed in CASSANDRA-6694.

Is there a branch/patch to see all of the changes involved?

bq. The more we rely on staying within ParNew, the more often we are going to 
exceed it; and reducing the number of ParNew runs is also a good thing. You 
said you have 300ms ParNew pauses, occuring every second? So reducing the max 
latency and total latency is surely a good thing?

I'm not trying to imply that we should rely on ParNew; I'm just saying that all 
of the read/write requests are short-lived enough to stay inside the young 
generation region, and even if we slip, the effect is masked by all the other 
long-term allocations that we do which get promoted.

bq. How does this work without knowing the maximum size of a result set? We 
can't have a client block forever because we didn't provide enough room in the 
pools. Potentially we could have it error, but this seems inelegant to me, when 
it can be avoided. It also seems a suboptimal way to introduce back pressure, 
since it only affects concurrent reads / large reads. We should raise a ticket 
specifically to address back pressure, IMO, and try to come up with a good all 
round solution to the problem.

Let the users specify it directly, or if not specified just take a guess based 
on total system memory; plus we can add an option to extend it at run time. For 
any product that uses a database there is a capacity planning stage and a 
use-case spec, or at least experimentation, which would allow the pools to be 
sized correctly.

bq. I did not mean to imply pauseless globally, but the memory reclaim 
operations introduced here are pauseless, thus reducing pauses overall, as 
whenever we would have had a pause from ParNew/FullGC to reclaim, we would not 
here.

Sorry, but I still don't get it: do you mean lock-free/non-blocking, or that it 
does no syscalls, or something similar? But that doesn't matter for pauses as 
much as allocation throughput and fragmentation do for the Java GC.

bq. I'm not sure why you think this would be a bad thing. It would only help 
for CL=1, but we are often benchmarked using this, so it's an important thing 
to be fast on if possible, and there are definitely a number of our users who 
are okay with CL=1 for whom faster responses would be great. Faster query 
answering should reduce over-utilisation, assuming some back-pressure built in 
to MessagingService or the co-ordinator managing its outstanding proxied 
requests to ensure it isn't overwhelmed by the responses.

The fact is that we have SEDA at least as a first line of defense against 
over-utilization, so local reads are scheduled directly to a different stage; we 
shouldn't be trying to do anything directly in the messaging stage, as it adds 
other complications not related to this very ticket.

bq. Do you mean you would use jemalloc for every allocation? In which case 
there are further costs incurred for crossing the JNA barrier so frequently, 
almost certainly outweighing any benefit to using jemalloc. Otherwise we would 
need to maintain free-lists ourselves, or perform compacting GC. Personally I 
think compacting GC is actually much simpler.

As I mentioned, there is already a jemalloc implementation in the Netty project 
which is pure Java, so we should at least consider it before trying to 
re-invent.
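
For reference, presumably that means Netty's PooledByteBufAllocator (an 
assumption on my part); usage is roughly:

{noformat}
import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

public class PooledAllocDemo
{
    public static void main(String[] args)
    {
        // Acquire a pooled direct buffer; Netty's pooled allocator is a
        // pure-Java take on jemalloc's arena design.
        ByteBuf buf = PooledByteBufAllocator.DEFAULT.directBuffer(64 * 1024);
        try
        {
            buf.writeBytes(new byte[]{ 1, 2, 3 });
        }
        finally
        {
            buf.release(); // returns the memory to the arena, not the OS
        }
    }
}
{noformat}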

bq. It would be great to be more NUMA aware, but this is not about traffic over 
the interconnect, but about the arrays/memory banks themselves, and it doesn't 
address any of the other negative consequences. You'll struggle to get more 
than a few GB/s bandwidth out of a modern CPU given that we are copying object 
trees (even shallow ones - they're still randomly distributed), and we don't 
want to waste any of that if we can avoid it.

I'm still not sure how much worse it would make things; Java has the worst 
cache locality with its object placement anyway, but we are not going to be 
copying deep trees. Let me outline the steps that I want to see taken to make 
this incremental, which is how we usually do things in the Cassandra project: 

# code an off-heap allocator or use an existing one, like one of the ByteBufAlloc 
implementations (evaluate new vs. existing);
# change memtables to use the allocator from step #1 and copy data to heap buffers 
when it's read from a memtable, so it's easy to track the lifetime of buffers;
# do extensive testing to check how bad the copy really is for performance, 
and find ways to optimize;
# if everything is bad, switch from copy to reference tracking (in all of the 
commands, native protocol etc.);
# do extensive testing to check whether it improves the situation;
# change serialization/deserialization to use the new allocator (a pooled buffer 
instead of always allocating on heap);
# Same as 

[jira] [Created] (CASSANDRA-6810) SSTable and Index Layout Improvements/Modifications

2014-03-06 Thread Benedict (JIRA)
Benedict created CASSANDRA-6810:
---

 Summary: SSTable and Index Layout Improvements/Modifications
 Key: CASSANDRA-6810
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6810
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
 Fix For: 3.0


Right now SSTables are somewhat inefficient in their storage of composite keys. 
I propose resolving this by merging (some of) the index functionality with the 
storage of keys, through introducing a composite btree/trie structure (e.g. 
string b-tree) to represent the key, and for this structure to index into the 
cell position in the file. This structure can then serve as both an efficient 
index and the key data itself. 

If we then offer the option (possibly decided for you automatically at flush) of 
storing this either packed into the same file, directly prepending the data, or 
in a separate key file with small pages, then with an uncompressed page cache we 
can get good performance for wide rows by storing it separately and relying on 
the page cache for CQL row index lookups, whereas storing it inline will allow 
very efficient lookups of small rows where index lookups aren't particularly 
helpful. This removal of extra data from the index file, however, will allow 
CASSANDRA-6709 to massively scale up the efficiency of the key cache, whilst 
also reducing the total disk footprint of sstables and (most likely) offering 
better indexing capability in similar space.
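
As a stand-in to make the shape of the idea concrete (a real implementation 
would be a compact trie/string B-tree, not a TreeMap): the key structure maps 
each composite key directly to its cell position in the data file, so the key 
store is itself the index:

{noformat}
import java.nio.ByteBuffer;
import java.util.TreeMap;

class KeyIndexSketch
{
    // Sorted map from composite key bytes to the cell position in the file.
    private final TreeMap<ByteBuffer, Long> keyToPosition = new TreeMap<>();

    void add(ByteBuffer key, long dataFileOffset)
    {
        keyToPosition.put(key, dataFileOffset);
    }

    Long lookup(ByteBuffer key)
    {
        return keyToPosition.get(key); // null if the key is absent
    }
}
{noformat}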



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-6811) nodetool no longer shows node joining

2014-03-06 Thread Brandon Williams (JIRA)
Brandon Williams created CASSANDRA-6811:
---

 Summary: nodetool no longer shows node joining
 Key: CASSANDRA-6811
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6811
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Brandon Williams
Assignee: Vijay
Priority: Minor
 Fix For: 1.2.16


When we added effective ownership output to nodetool ring/status, we 
accidentally began excluding joining nodes, because we iterate the ownership 
maps instead of the endpoint-to-token map when printing the output, and the 
joining nodes don't have any ownership.  The simplest thing to do is probably 
to iterate the token map instead, and not output any ownership info for joining 
nodes.
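
A sketch of that simplest fix, with hypothetical map and method names rather 
than the actual nodetool code: drive the output from the token-to-endpoint map 
(which includes joining nodes) and print ownership only when a node has it:

{noformat}
import java.net.InetAddress;
import java.util.Map;

class RingPrinter
{
    // Iterate every token so joining nodes are listed; nodes without an
    // ownership entry (i.e. still joining) print "?" instead of being skipped.
    static void printRing(Map<String, InetAddress> tokensToEndpoints,
                          Map<InetAddress, Float> ownership)
    {
        for (Map.Entry<String, InetAddress> entry : tokensToEndpoints.entrySet())
        {
            Float owns = ownership.get(entry.getValue()); // null for joining nodes
            System.out.printf("%-16s %-24s %s%n",
                              entry.getValue().getHostAddress(),
                              entry.getKey(),
                              owns == null ? "?" : String.format("%.2f%%", owns * 100));
        }
    }
}
{noformat}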



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6285) 2.0 HSHA server introduces corrupt data

2014-03-06 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-6285:
--

Attachment: 6285_testnotes1.txt

6285_testnotes1.txt attached.

Neither a single node with hsha, nor a 3 node ccm cluster with hsha gave me any 
interesting errors with the attack jar.  Should I go back and try some of the 
previous repro steps and check yay/nay on the patch fixing this for those?

 2.0 HSHA server introduces corrupt data
 ---

 Key: CASSANDRA-6285
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6285
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 4 nodes, shortly updated from 1.2.11 to 2.0.2
Reporter: David Sauer
Assignee: Pavel Yaskevich
Priority: Critical
 Fix For: 2.0.6

 Attachments: 6285_testnotes1.txt, 
 CASSANDRA-6285-disruptor-heap.patch, compaction_test.py





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[Cassandra Wiki] Update of FrontPage by BrandonWilliams

2014-03-06 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on Cassandra Wiki for 
change notification.

The FrontPage page has been changed by BrandonWilliams:
https://wiki.apache.org/cassandra/FrontPage?action=diff&rev1=100&rev2=101

Comment:
stop using broken wayback link for dynamo paper

  If you would like to contribute to this wiki, please send an email to the 
mailing list dev.at.cassandra.apache-dot-org and we will be happy to add you. 
Contributions welcome!
  }}}
  
- Cassandra is a highly scalable, eventually consistent, distributed, 
structured key-value store. Cassandra brings together the distributed systems 
technologies from 
[[http://web.archive.org/web/20120221222718/http://s3.amazonaws.com/AllThingsDistributed/sosp/amazon-dynamo-sosp2007.pdf|Dynamo]]
 and the data model from Google's 
[[http://research.google.com/archive/bigtable-osdi06.pdf|BigTable]]. Like 
Dynamo, Cassandra is 
[[http://www.allthingsdistributed.com/2008/12/eventually_consistent.html|eventually
 consistent]]. Like BigTable, Cassandra provides a ColumnFamily-based data 
model richer than typical key/value systems.
+ Cassandra is a highly scalable, eventually consistent, distributed, 
structured key-value store. Cassandra brings together the distributed systems 
technologies from 
[[http://s3.amazonaws.com/AllThingsDistributed/sosp/amazon-dynamo-sosp2007.pdf|Dynamo]]
 and the data model from Google's 
[[http://research.google.com/archive/bigtable-osdi06.pdf|BigTable]]. Like 
Dynamo, Cassandra is 
[[http://www.allthingsdistributed.com/2008/12/eventually_consistent.html|eventually
 consistent]]. Like BigTable, Cassandra provides a ColumnFamily-based data 
model richer than typical key/value systems.
  
  Cassandra was open sourced by Facebook in 2008, where it was designed by 
Avinash Lakshman (one of the authors of Amazon's Dynamo) and Prashant Malik ( 
Facebook Engineer ). In a lot of ways you can think of Cassandra as Dynamo 2.0 
or a marriage of Dynamo and BigTable. Cassandra is in production use at 
Facebook but is still under heavy development.
  


[jira] [Commented] (CASSANDRA-6285) 2.0 HSHA server introduces corrupt data

2014-03-06 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13923022#comment-13923022
 ] 

Pavel Yaskevich commented on CASSANDRA-6285:


[~mshuler] Can you try the same on the machine running Linux (if you haven't 
done that yet)?

 2.0 HSHA server introduces corrupt data
 ---

 Key: CASSANDRA-6285
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6285
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 4 nodes, shortly updated from 1.2.11 to 2.0.2
Reporter: David Sauer
Assignee: Pavel Yaskevich
Priority: Critical
 Fix For: 2.0.6

 Attachments: 6285_testnotes1.txt, 
 CASSANDRA-6285-disruptor-heap.patch, compaction_test.py





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (CASSANDRA-6285) 2.0 HSHA server introduces corrupt data

2014-03-06 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13923022#comment-13923022
 ] 

Pavel Yaskevich edited comment on CASSANDRA-6285 at 3/6/14 8:39 PM:


[~mshuler] Can you try the same on the machine running Linux (if you haven't 
done that yet)? 

Edit: from the log it looks like Disruptor wasn't using the off-heap memory 
because JNA is disabled: "Off-heap allocation couldn't be used as JNA is not 
present in classpath or broken, using on-heap instead". So it would be great if 
you could test this on Linux with jna enabled.

Thanks!


was (Author: xedin):
[~mshuler] Can you try the same on the machine running Linux (if you haven't 
done that yet)?

 2.0 HSHA server introduces corrupt data
 ---

 Key: CASSANDRA-6285
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6285
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 4 nodes, shortly updated from 1.2.11 to 2.0.2
Reporter: David Sauer
Assignee: Pavel Yaskevich
Priority: Critical
 Fix For: 2.0.6

 Attachments: 6285_testnotes1.txt, 
 CASSANDRA-6285-disruptor-heap.patch, compaction_test.py





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6285) 2.0 HSHA server introduces corrupt data

2014-03-06 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13923026#comment-13923026
 ] 

Michael Shuler commented on CASSANDRA-6285:
---

I'm using a linux machine  :)

 2.0 HSHA server introduces corrupt data
 ---

 Key: CASSANDRA-6285
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6285
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 4 nodes, shortly updated from 1.2.11 to 2.0.2
Reporter: David Sauer
Assignee: Pavel Yaskevich
Priority: Critical
 Fix For: 2.0.6

 Attachments: 6285_testnotes1.txt, 
 CASSANDRA-6285-disruptor-heap.patch, compaction_test.py





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (CASSANDRA-6285) 2.0 HSHA server introduces corrupt data

2014-03-06 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13923026#comment-13923026
 ] 

Michael Shuler edited comment on CASSANDRA-6285 at 3/6/14 8:41 PM:
---

I'm using a linux machine  :)  - and will link in JNA - good suggestion.


was (Author: mshuler):
I'm using a linux machine  :)

 2.0 HSHA server introduces corrupt data
 ---

 Key: CASSANDRA-6285
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6285
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 4 nodes, shortly updated from 1.2.11 to 2.0.2
Reporter: David Sauer
Assignee: Pavel Yaskevich
Priority: Critical
 Fix For: 2.0.6

 Attachments: 6285_testnotes1.txt, 
 CASSANDRA-6285-disruptor-heap.patch, compaction_test.py


 After altering everything to LCS the table OpsCenter.rollups60 amd one other 
 none OpsCenter-Table got stuck with everything hanging around in L0.
 The compaction started and ran until the logs showed this:
 ERROR [CompactionExecutor:111] 2013-11-01 19:14:53,865 CassandraDaemon.java 
 (line 187) Exception in thread Thread[CompactionExecutor:111,1,RMI Runtime]
 java.lang.RuntimeException: Last written key 
 DecoratedKey(1326283851463420237, 
 37382e34362e3132382e3139382d6a7576616c69735f6e6f72785f696e6465785f323031335f31305f30382d63616368655f646f63756d656e74736c6f6f6b75702d676574426c6f6f6d46696c746572537061636555736564)
  = current key DecoratedKey(954210699457429663, 
 37382e34362e3132382e3139382d6a7576616c69735f6e6f72785f696e6465785f323031335f31305f30382d63616368655f646f63756d656e74736c6f6f6b75702d676574546f74616c4469736b5370616365557365640b0f)
  writing into 
 /var/lib/cassandra/data/OpsCenter/rollups60/OpsCenter-rollups60-tmp-jb-58656-Data.db
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:141)
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:164)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:160)
   at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
   at 
 org.apache.cassandra.db.compaction.CompactionManager$6.runMayThrow(CompactionManager.java:296)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:724)
 Moving back to STCS kept the compactions running.
 In particular I would like to move my own table to LCS.
 After a major compaction with STCS, the move to LCS fails with the same
 exception.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6689) Partially Off Heap Memtables

2014-03-06 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13923029#comment-13923029
 ] 

Benedict commented on CASSANDRA-6689:
-

bq. Is there a branch/patch to see all of the changes involved?

Yes, 
[offheap2c+6781|https://github.com/belliottsmith/cassandra/tree/offheap2c+6781] 
and [offheap2c|https://github.com/belliottsmith/cassandra/tree/offheap2c]

The former includes performance enhancements for DirectByteBuffer use, from a 
separate ticket.

bq.  I'm just saying that all of the read/write requests are short lived enough 
to stay inside young generation region
Well, yes and no. We have to wait until any client finishes processing the 
data, so there's no absolute guarantee they'll not survive. But either way, 
ParNew pauses are almost as bad as full GC, only they happen much more often. 
300ms pauses are not a good thing, and if we can then reduce the size of YG so 
that when these pauses do happen they're shorter (or, say, maybe even use G1GC) 
then that's even better.

bq. Sorry but I still don't get it, do you mean lock-free/non-blocking or that 
it does no syscalls or something similar? But that doesn't matter for pauses as 
much as allocation throughput and fragmentation of Java GC.

I mean no worker threads have to stop in order for memory to be reclaimed. 
There is no STW for reclaiming memory. This is unrelated to the lock freedom.

bq. Java is the worst of cache locality with it's object placement anyway 

Well, exactly. That's my point here :-)

bq.  it adds another complications not related to this very ticket.

I have no intention of doing it in this ticket, just indicating it as a very 
useful improvement.

bq. which is how we usually do things for Cassandra project

It looks like you really have two steps, if I boil it down: 1) implement 
copying approach; 2) if slow, implement this approach? 

The issue with this, really, though, is that if we simply allocate ByteBuffers 
off-heap like we have in this ticket, we get none of the benefit of increased 
memtable capacity, since the on-heap overheads are still huge. Since that's one 
of the main goals here, it seems problematic to lose out on it - we don't need 
to test to find out what the result would be. This ticket was supposed to only 
be a stepping stone. 

Possibly we could scale CASSANDRA-6694 back somewhat, removing any support for 
RefAction.refer(), and always performing a copy onto heap from the NativeCell 
implementations, and spending some time ripping out any of the GC or any of the 
code at Memtable discard for coping with RefAction.refer(). But honestly this 
seems like a waste of effort to me, as the majority of the code would remain, 
we'll just not have as good a final solution. But it could be done if that is 
the community preference. We could probably split it up into further commits, 
but each commit adds the potential for more errors in my book, when we have a 
good solution that is ready to go.

A maximally separated timeline for separate commits of 6694 would be:
# introduce concurrency primitives
# introduce .memory and .data refactor, but only for ByteBuffer allocators, and 
RefAction, but only allocateOnHeap
# introduce all .data.Native* implementations and a cut down native allocator, 
using OpOrder to guard copying like we currently do for referencing (we need to 
use something, and it is simpler than ref counting or anything else)
# introduce RefAction.refer(), GC, etc. (i.e. final patch)

I would rather not split it up since, as I say, each new patch is an 
opportunity to mess up, but it could be done. We can do performance testing to 
our hearts' content at each stage, although personally I think such testing 
would not be sufficient to demonstrate that the current approach has no 
benefit: even if a stage showed little benefit, that would not rule out future 
performance gains being capped by the slower solution. So I would push for the 
final patch anyway. That said, I would be surprised if we did not see any 
improvement by comparison.
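
To make the "no STW for reclaiming memory" claim concrete, here is a minimal, 
self-contained sketch of the guard pattern referred to in step 3 above: readers 
briefly join the current group before copying guarded (e.g. off-heap) bytes, 
and the reclaimer rotates the group and waits only for readers that joined the 
old one. All names are hypothetical simplifications; the real class, 
org.apache.cassandra.utils.concurrent.OpOrder, is considerably more 
sophisticated (and non-blocking for readers), and this sketch assumes a single 
reclaiming thread.
{noformat}
// Simplified analogue of the OpOrder guard pattern (hypothetical names).
final class MiniOpOrder
{
    static final class Group
    {
        private int running;
        private boolean closed;

        synchronized boolean tryStart()
        {
            if (closed)
                return false; // a barrier already sealed this group; caller retries
            running++;
            return true;
        }

        synchronized void finish()
        {
            if (--running == 0 && closed)
                notifyAll();
        }

        synchronized void awaitDrained() throws InterruptedException
        {
            closed = true;
            while (running > 0)
                wait();
        }
    }

    private volatile Group current = new Group();

    /** Called by a reader before copying guarded bytes; never blocks on reclamation. */
    Group start()
    {
        while (true)
        {
            Group g = current;
            if (g.tryStart())
                return g;   // raced with a barrier? loop and join the fresh group
        }
    }

    /** Called by the (single) reclaimer; on return, no reader still sees old memory. */
    void awaitBarrier() throws InterruptedException
    {
        Group old = current;
        current = new Group();  // new readers join the fresh group...
        old.awaitDrained();     // ...while we wait for the old one to drain
    }
}
{noformat}
A worker wraps each access in {{Group g = order.start(); try { ... } finally { 
g.finish(); }}}, so only the reclaimer ever waits; no worker thread is stopped 
for memory to be reclaimed.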

 Partially Off Heap Memtables
 

 Key: CASSANDRA-6689
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6689
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Benedict
Assignee: Benedict
 Fix For: 2.1 beta2

 Attachments: CASSANDRA-6689-small-changes.patch


 Move the contents of ByteBuffers off-heap for records written to a memtable.
 (See comments for details)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-6812) Iterative Memtable-SSTable Replacement

2014-03-06 Thread Benedict (JIRA)
Benedict created CASSANDRA-6812:
---

 Summary: Iterative Memtable-SSTable Replacement
 Key: CASSANDRA-6812
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6812
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
 Fix For: 3.0


In an ideal world we wouldn't flush any memtable until we were almost 
completely out of room. The problem with this approach (and in fact whenever we 
currently *do* run out of room) is that flushing an entire memtable is a slow 
process, and so write latencies spike dramatically during this interval.

The solution to this is, in principle, quite straightforward: as we write 
chunks of the new sstable and its index, open them up immediately for reading, 
and free the memory associated with the portion of the file that has been 
written so that it can be reused immediately for writing. This way, whilst 
latency will increase for the duration of the flush, the max latency 
experienced during this time should be no greater than the time taken to flush 
a few chunks, which should still be on the order of milliseconds, not seconds.

This depends on CASSANDRA-6689 and CASSANDRA-6694, so that we can reclaim 
arbitrary portions of the allocated memory prior to a complete flush.
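
As a rough, runnable model of the idea (toy types and constants, not the 
actual flush path): a sorted in-memory map standing in for the memtable is 
streamed to disk, and after each chunk-sized slice is written, the 
corresponding prefix of the map is dropped so that memory is immediately 
reusable for new writes rather than being held until the whole flush completes.
{noformat}
import java.io.BufferedOutputStream;
import java.io.DataOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.concurrent.ConcurrentSkipListMap;

// Toy model of iterative flush-and-reclaim (hypothetical names throughout).
public class IterativeFlush
{
    static final long CHUNK_BYTES = 4 * 1024 * 1024; // flush granularity (assumption)

    public static void flush(ConcurrentSkipListMap<String, byte[]> memtable, String path) throws IOException
    {
        long unsynced = 0;
        try (DataOutputStream out = new DataOutputStream(new BufferedOutputStream(new FileOutputStream(path))))
        {
            for (Map.Entry<String, byte[]> e : memtable.entrySet())
            {
                byte[] key = e.getKey().getBytes(StandardCharsets.UTF_8);
                out.writeInt(key.length);
                out.write(key);
                out.writeInt(e.getValue().length);
                out.write(e.getValue());
                unsynced += 8 + key.length + e.getValue().length;

                if (unsynced >= CHUNK_BYTES)
                {
                    out.flush(); // in the real design: sync, then open this prefix for reads
                    // the flushed prefix is now served from disk, so its memory
                    // can be reused for new writes right away
                    memtable.headMap(e.getKey(), true).clear();
                    unsynced = 0;
                }
            }
        }
    }
}
{noformat}
The worst-case stall for a writer waiting on memory is then roughly the time 
to flush one chunk, not the time to flush the whole memtable.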



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (CASSANDRA-6285) 2.0 HSHA server introduces corrupt data

2014-03-06 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13923043#comment-13923043
 ] 

Michael Shuler edited comment on CASSANDRA-6285 at 3/6/14 8:54 PM:
---

With jna enabled, yes, on a single node, after running the attack jar and 
restarting c*, I get:
{noformat}
 INFO [main] 2014-03-06 14:46:51,272 ColumnFamilyStore.java (line 254) 
Initializing tmp.CF
 INFO [main] 2014-03-06 14:46:51,277 ColumnFamilyStore.java (line 254) 
Initializing system_traces.sessions
 INFO [main] 2014-03-06 14:46:51,280 ColumnFamilyStore.java (line 254) 
Initializing system_traces.events
 INFO [main] 2014-03-06 14:46:51,281 CassandraDaemon.java (line 291) completed 
pre-loading (5 keys) key cache.
 INFO [main] 2014-03-06 14:46:51,288 CommitLog.java (line 130) Replaying 
/var/lib/cassandra/commitlog/CommitLog-3-1394138577628.log, /var/lib/
cassandra/commitlog/CommitLog-3-1394138577629.log
 INFO [main] 2014-03-06 14:46:51,311 CommitLogReplayer.java (line 184) 
Replaying /var/lib/cassandra/commitlog/CommitLog-3-1394138577628.log (C
L version 3, messaging version 7)
ERROR [main] 2014-03-06 14:46:51,432 CommitLogReplayer.java (line 306) 
Unexpected error deserializing mutation; saved to /tmp/mutation77387084
28696995512dat and ignored.  This may be caused by replaying a mutation against 
a table with the same name but incompatible schema.  Exception
 follows: 
org.apache.cassandra.serializers.MarshalException: Invalid version for TimeUUID 
type.
at 
org.apache.cassandra.serializers.TimeUUIDSerializer.validate(TimeUUIDSerializer.java:39)
at 
org.apache.cassandra.db.marshal.AbstractType.validate(AbstractType.java:172)
at 
org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:276)
at 
org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:97)
at 
org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:151)
at 
org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:131)
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:312)
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:471)
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:560)
{noformat}

I'll double-check a 3 node cluster, then patch and see where I get.

(edit) this looks quite different from the previously posted errors - not sure 
I'm on the right track here...


was (Author: mshuler):
With jna enabled, yes, on a single node, after running the attack jar and 
restarting c*, I get:
{noformat}
 INFO [main] 2014-03-06 14:46:51,272 ColumnFamilyStore.java (line 254) 
Initializing tmp.CF
 INFO [main] 2014-03-06 14:46:51,277 ColumnFamilyStore.java (line 254) 
Initializing system_traces.sessions
 INFO [main] 2014-03-06 14:46:51,280 ColumnFamilyStore.java (line 254) 
Initializing system_traces.events
 INFO [main] 2014-03-06 14:46:51,281 CassandraDaemon.java (line 291) completed 
pre-loading (5 keys) key cache.
 INFO [main] 2014-03-06 14:46:51,288 CommitLog.java (line 130) Replaying 
/var/lib/cassandra/commitlog/CommitLog-3-1394138577628.log, /var/lib/
cassandra/commitlog/CommitLog-3-1394138577629.log
 INFO [main] 2014-03-06 14:46:51,311 CommitLogReplayer.java (line 184) 
Replaying /var/lib/cassandra/commitlog/CommitLog-3-1394138577628.log (C
L version 3, messaging version 7)
ERROR [main] 2014-03-06 14:46:51,432 CommitLogReplayer.java (line 306) 
Unexpected error deserializing mutation; saved to /tmp/mutation77387084
28696995512dat and ignored.  This may be caused by replaying a mutation against 
a table with the same name but incompatible schema.  Exception
 follows: 
org.apache.cassandra.serializers.MarshalException: Invalid version for TimeUUID 
type.
at 
org.apache.cassandra.serializers.TimeUUIDSerializer.validate(TimeUUIDSerializer.java:39)
at 
org.apache.cassandra.db.marshal.AbstractType.validate(AbstractType.java:172)
at 
org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:276)
at 
org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:97)
at 
org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:151)
at 
org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:131)
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:312)
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:471)
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:560)
{noformat}

I'll double-check a 3 node cluster, then patch and see where I get.

 2.0 HSHA server introduces corrupt data
 ---

 Key: CASSANDRA-6285
 URL: 

[jira] [Comment Edited] (CASSANDRA-6689) Partially Off Heap Memtables

2014-03-06 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13923029#comment-13923029
 ] 

Benedict edited comment on CASSANDRA-6689 at 3/6/14 9:07 PM:
-

bq. Is there a branch/patch to see all of the changes involved?

Yes, 
[offheap2c+6781|https://github.com/belliottsmith/cassandra/tree/offheap2c+6781] 
and [offheap2c|https://github.com/belliottsmith/cassandra/tree/offheap2c]

The former includes performance enhancements for DirectByteBuffer use, from a 
separate ticket.

bq.  I'm just saying that all of the read/write requests are short lived enough 
to stay inside young generation region
Well, yes and no. We have to wait until any client finishes processing the 
data, so there's no absolute guarantee they'll not survive. But either way, 
ParNew pauses are almost as bad as full GC, only they happen much more often. 
300ms pauses are not a good thing, and if we can then reduce the size of YG so 
that when these pauses do happen they're shorter (or, say, maybe even use G1GC) 
then that's even better.

bq. Sorry but I still don't get it, do you mean lock-free/non-blocking or that 
it does no syscalls or something similar? But that doesn't matter for pauses as 
much as allocation throughput and fragmentation of Java GC.

I mean no worker threads have to stop in order for memory to be reclaimed. 
There is no STW for reclaiming memory. This is unrelated to the lock freedom.

bq. Java is the worst of cache locality with it's object placement anyway 

Well, exactly. That's my point here :-)

bq.  it adds another complications not related to this very ticket.

I have no intention of doing it in this ticket, just indicating it as a very 
useful improvement.

bq. which is how we usually do things for Cassandra project

It looks like you really have two steps, if I boil it down: 1) implement 
copying approach; 2) if slow, implement this approach? 

The issue with this, really, though, is that if we simply allocate ByteBuffers 
off-heap like we have in this ticket, we get none of the benefit of increased 
memtable capacity, since the on-heap overheads are still huge. Since that's one 
of the main goals here, it seems problematic to lose out on it - we don't need 
to test to find out what the result would be. This ticket was supposed to only 
be a stepping stone. 

Possibly we could scale CASSANDRA-6694 back somewhat, removing any support for 
RefAction.refer(), and always performing a copy onto heap from the NativeCell 
implementations, and spending some time ripping out any of the GC or any of the 
code at Memtable discard for coping with RefAction.refer(). But honestly this 
seems like a waste of effort to me, as the majority of the code would remain, 
we'll just not have as good a final solution. But it could be done if that is 
the community preference. We could probably split it up into further commits, 
but each commit adds the potential for more errors in my book, when we have a 
good solution that is ready to go.

A maximally separated timeline for separate commits of 6694 would be:
# introduce concurrency primitives
# introduce .memory and .data refactor, but only for ByteBuffer allocators
# introduce all .data.Native* implementations and a cut down native allocator, 
using OpOrder to guard copying like we currently do for referencing (we need to 
use something, and it is simpler than ref counting or anything else)
# introduce RefAction
# introduce GC

I would rather not split it up since, as I say, each new patch is an 
opportunity to mess up, but it could be done. We can do performance testing to 
our hearts' content at each stage, although personally I think such testing 
would not be sufficient to demonstrate that the current approach has no 
benefit: even if a stage showed little benefit, that would not rule out future 
performance gains being capped by the slower solution. So I would push for the 
final patch anyway. That said, I would be surprised if we did not see any 
improvement by comparison.


was (Author: benedict):
bq. Is there a branch/patch to see all of the changes involved?

Yes, 
[offheap2c+6781|https://github.com/belliottsmith/cassandra/tree/offheap2c+6781] 
and [offheap2c|https://github.com/belliottsmith/cassandra/tree/offheap2c]

The former includes performance enhancements for DirectByteBuffer use, from a 
separate ticket.

bq.  I'm just saying that all of the read/write requests are short lived enough 
to stay inside young generation region
Well, yes and no. We have to wait until any client finishes processing the 
data, so there's no absolute guarantee they'll not survive. But either way, 
ParNew pauses are almost as bad as full GC, only they happen much more often. 
300ms pauses are not a good thing, and if we can then reduce the size of YG so 
that when these pauses do happen they're shorter (or, say, maybe even use G1GC) 
then that's even better.


[jira] [Commented] (CASSANDRA-5863) Create a Decompressed Chunk [block] Cache

2014-03-06 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13923065#comment-13923065
 ] 

T Jake Luciani commented on CASSANDRA-5863:
---

[~cburroughs] For my own edification, do you have any stats on C* performance 
with ZFS L2ARC vs a regular filesystem + LZ4 compression?

 Create a Decompressed Chunk [block] Cache
 -

 Key: CASSANDRA-5863
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5863
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: T Jake Luciani
Assignee: Pavel Yaskevich
  Labels: performance
 Fix For: 2.1 beta2


 Currently, for every read, the CRAR reads each compressed chunk into a 
 byte[], sends it to ICompressor, gets back another byte[] and verifies a 
 checksum.  
 This process is where the majority of time is spent in a read request.  
 Before compression, we would have zero-copy of data and could respond 
 directly from the page-cache.
 It would be useful to have some kind of chunk cache that could speed up this 
 process for hot data. Initially this could be an off-heap cache, but it would 
 be great to put these decompressed chunks onto an SSD so the hot data lives on 
 a fast disk, similar to https://github.com/facebook/flashcache.
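
A minimal sketch of what such a cache could look like, assuming an on-heap LRU 
keyed by file and chunk offset (all names hypothetical, not Cassandra code; an 
off-heap or SSD-backed variant would swap the map's values for direct buffers 
or a file on fast storage):
{noformat}
import java.nio.ByteBuffer;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Toy decompressed-chunk cache: a hit lets the read path skip the
// read-decompress-checksum cycle entirely.
public final class ChunkCache
{
    private final long capacityBytes;
    private long sizeBytes = 0;

    // access-order LinkedHashMap iterates least-recently-used entries first
    private final LinkedHashMap<String, ByteBuffer> chunks =
            new LinkedHashMap<>(16, 0.75f, true);

    public ChunkCache(long capacityBytes)
    {
        this.capacityBytes = capacityBytes;
    }

    private static String key(String file, long chunkOffset)
    {
        return file + "@" + chunkOffset;
    }

    public synchronized ByteBuffer get(String file, long chunkOffset)
    {
        ByteBuffer b = chunks.get(key(file, chunkOffset));
        return b == null ? null : b.duplicate(); // duplicate so callers can't move our position
    }

    public synchronized void put(String file, long chunkOffset, ByteBuffer decompressed)
    {
        ByteBuffer prev = chunks.put(key(file, chunkOffset), decompressed);
        sizeBytes += decompressed.capacity() - (prev == null ? 0 : prev.capacity());
        // evict least-recently-used chunks until we are back under capacity
        Iterator<Map.Entry<String, ByteBuffer>> it = chunks.entrySet().iterator();
        while (sizeBytes > capacityBytes && it.hasNext())
        {
            Map.Entry<String, ByteBuffer> eldest = it.next();
            sizeBytes -= eldest.getValue().capacity();
            it.remove();
        }
    }
}
{noformat}
The CRAR-style read path would presumably consult get() first and only fall 
back to read/decompress/verify on a miss, storing the result with put().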



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-5863) Create a Decompressed Chunk [block] Cache

2014-03-06 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13923074#comment-13923074
 ] 

Benedict commented on CASSANDRA-5863:
-

I'm not sure that would be a useful comparison, as RAIDZ is rubbish. L2ARC 
exists to combat ZFS' inherent weaknesses, so it wouldn't give us an idea of 
what a similar feature might achieve for us.

 Create a Decompressed Chunk [block] Cache
 -

 Key: CASSANDRA-5863
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5863
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: T Jake Luciani
Assignee: Pavel Yaskevich
  Labels: performance
 Fix For: 2.1 beta2


 Currently, for every read, the CRAR reads each compressed chunk into a 
 byte[], sends it to ICompressor, gets back another byte[] and verifies a 
 checksum.  
 This process is where the majority of time is spent in a read request.  
 Before compression, we would have zero-copy of data and could respond 
 directly from the page-cache.
 It would be useful to have some kind of chunk cache that could speed up this 
 process for hot data. Initially this could be an off-heap cache, but it would 
 be great to put these decompressed chunks onto an SSD so the hot data lives on 
 a fast disk, similar to https://github.com/facebook/flashcache.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[1/3] git commit: Fix potentially repairing with wrong nodes

2014-03-06 Thread yukim
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 f67b7a477 -> 84626372c
  refs/heads/trunk 937189237 -> 8c1e4e089


Fix potentially repairing with wrong nodes

patch by yukim; reviewed by krummas for CASSANDRA-6808


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/84626372
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/84626372
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/84626372

Branch: refs/heads/cassandra-2.1
Commit: 84626372c0ae007bb55e0072d981d856f5a8e72c
Parents: f67b7a4
Author: Yuki Morishita yu...@apache.org
Authored: Thu Mar 6 15:24:52 2014 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Thu Mar 6 15:24:52 2014 -0600

--
 CHANGES.txt  |  1 +
 .../org/apache/cassandra/service/StorageService.java | 15 ++++++++++-----
 2 files changed, 11 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/84626372/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index af7f2fd..df19467 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -8,6 +8,7 @@
  * Scrub should not always clear out repaired status (CASSANDRA-5351)
  * Improve handling of range tombstone for wide partitions (CASSANDRA-6446)
  * Fix ClassCastException for compact table with composites (CASSANDRA-6738)
+ * Fix potentially repairing with wrong nodes (CASSANDRA-6808)
 Merged from 2.0:
  * Avoid race-prone second scrub of system keyspace (CASSANDRA-6797)
  * Pool CqlRecordWriter clients by inetaddress rather than Range 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/84626372/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index e358f7d..132e674 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -2577,9 +2577,14 @@ public class StorageService extends NotificationBroadcasterSupport implements IE
             return;
         }
 
-        Set<InetAddress> neighbours = new HashSet<>();
+        Set<InetAddress> allNeighbors = new HashSet<>();
+        Map<Range, Set<InetAddress>> rangeToNeighbors = new HashMap<>();
         for (Range<Token> range : ranges)
-            neighbours.addAll(ActiveRepairService.getNeighbors(keyspace, range, dataCenters, hosts));
+        {
+            Set<InetAddress> neighbors = ActiveRepairService.getNeighbors(keyspace, range, dataCenters, hosts);
+            rangeToNeighbors.put(range, neighbors);
+            allNeighbors.addAll(neighbors);
+        }
 
         List<ColumnFamilyStore> columnFamilyStores = new ArrayList<>();
         for (ColumnFamilyStore cfs : getValidColumnFamilies(false, false, keyspace, columnFamilies))
@@ -2587,7 +2592,7 @@ public class StorageService extends NotificationBroadcasterSupport implements IE
 
         UUID parentSession = null;
         if (!fullRepair)
-            parentSession = ActiveRepairService.instance.prepareForRepair(neighbours, ranges, columnFamilyStores);
+            parentSession = ActiveRepairService.instance.prepareForRepair(allNeighbors, ranges, columnFamilyStores);
 
         List<RepairFuture> futures = new ArrayList<>(ranges.size());
         for (Range<Token> range : ranges)
@@ -2595,7 +2600,7 @@ public class StorageService extends NotificationBroadcasterSupport implements IE
             RepairFuture future;
             try
             {
-                future = forceKeyspaceRepair(parentSession, range, keyspace, isSequential, neighbours, columnFamilies);
+                future = forceKeyspaceRepair(parentSession, range, keyspace, isSequential, rangeToNeighbors.get(range), columnFamilies);
             }
             catch (IllegalArgumentException e)
             {
@@ -2642,7 +2647,7 @@ public class StorageService extends NotificationBroadcasterSupport implements IE
                 }
             }
             if (!fullRepair)
-                ActiveRepairService.instance.finishParentSession(parentSession, neighbours);
+                ActiveRepairService.instance.finishParentSession(parentSession, allNeighbors);
             sendNotification("repair", String.format("Repair command #%d finished", cmd), new int[]{cmd, ActiveRepairService.Status.FINISHED.ordinal()});
         }
     }, null);
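
To spell out the bug the diff fixes: the old code merged the neighbours of 
every requested range into one set and passed that same set to each per-range 
repair, so a session for one range could include nodes that do not replicate 
that range at all. The fix keeps a per-range map and hands each RepairFuture 
only its own range's neighbours, while the merged set is still used for the 
parent session. A toy illustration of the grouping (hypothetical String-based 
types, not the real API):
{noformat}
import java.util.*;

// Hypothetical illustration of the CASSANDRA-6808 fix: collect neighbours
// per range, and keep the union only for session-wide bookkeeping.
class PerRangeNeighbors
{
    static Map<String, Set<String>> group(List<String> ranges, Map<String, Set<String>> replicasByRange)
    {
        Set<String> allNeighbors = new HashSet<>();                  // union: parent session only
        Map<String, Set<String>> rangeToNeighbors = new HashMap<>();
        for (String range : ranges)
        {
            // stand-in for ActiveRepairService.getNeighbors(keyspace, range, ...)
            Set<String> neighbors = replicasByRange.get(range);
            rangeToNeighbors.put(range, neighbors);
            allNeighbors.addAll(neighbors);
        }
        // before the fix every range was repaired with allNeighbors;
        // after it, each range uses rangeToNeighbors.get(range)
        return rangeToNeighbors;
    }
}
{noformat}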



[3/3] git commit: Merge branch 'cassandra-2.1' into trunk

2014-03-06 Thread yukim
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8c1e4e08
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8c1e4e08
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8c1e4e08

Branch: refs/heads/trunk
Commit: 8c1e4e0897ebcb04ec4277fcdc0f8b0cd74bc1fd
Parents: 9371892 8462637
Author: Yuki Morishita yu...@apache.org
Authored: Thu Mar 6 15:25:43 2014 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Thu Mar 6 15:25:43 2014 -0600

--
 CHANGES.txt  |  1 +
 .../org/apache/cassandra/service/StorageService.java | 15 ++++++++++-----
 2 files changed, 11 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8c1e4e08/CHANGES.txt
--



[2/3] git commit: Fix potentially repairing with wrong nodes

2014-03-06 Thread yukim
Fix potentially repairing with wrong nodes

patch by yukim; reviewed by krummas for CASSANDRA-6808


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/84626372
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/84626372
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/84626372

Branch: refs/heads/trunk
Commit: 84626372c0ae007bb55e0072d981d856f5a8e72c
Parents: f67b7a4
Author: Yuki Morishita yu...@apache.org
Authored: Thu Mar 6 15:24:52 2014 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Thu Mar 6 15:24:52 2014 -0600

--
 CHANGES.txt  |  1 +
 .../org/apache/cassandra/service/StorageService.java | 15 ++++++++++-----
 2 files changed, 11 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/84626372/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index af7f2fd..df19467 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -8,6 +8,7 @@
  * Scrub should not always clear out repaired status (CASSANDRA-5351)
  * Improve handling of range tombstone for wide partitions (CASSANDRA-6446)
  * Fix ClassCastException for compact table with composites (CASSANDRA-6738)
+ * Fix potentially repairing with wrong nodes (CASSANDRA-6808)
 Merged from 2.0:
  * Avoid race-prone second scrub of system keyspace (CASSANDRA-6797)
  * Pool CqlRecordWriter clients by inetaddress rather than Range 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/84626372/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index e358f7d..132e674 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -2577,9 +2577,14 @@ public class StorageService extends NotificationBroadcasterSupport implements IE
             return;
         }
 
-        Set<InetAddress> neighbours = new HashSet<>();
+        Set<InetAddress> allNeighbors = new HashSet<>();
+        Map<Range, Set<InetAddress>> rangeToNeighbors = new HashMap<>();
         for (Range<Token> range : ranges)
-            neighbours.addAll(ActiveRepairService.getNeighbors(keyspace, range, dataCenters, hosts));
+        {
+            Set<InetAddress> neighbors = ActiveRepairService.getNeighbors(keyspace, range, dataCenters, hosts);
+            rangeToNeighbors.put(range, neighbors);
+            allNeighbors.addAll(neighbors);
+        }
 
         List<ColumnFamilyStore> columnFamilyStores = new ArrayList<>();
         for (ColumnFamilyStore cfs : getValidColumnFamilies(false, false, keyspace, columnFamilies))
@@ -2587,7 +2592,7 @@ public class StorageService extends NotificationBroadcasterSupport implements IE
 
         UUID parentSession = null;
         if (!fullRepair)
-            parentSession = ActiveRepairService.instance.prepareForRepair(neighbours, ranges, columnFamilyStores);
+            parentSession = ActiveRepairService.instance.prepareForRepair(allNeighbors, ranges, columnFamilyStores);
 
         List<RepairFuture> futures = new ArrayList<>(ranges.size());
         for (Range<Token> range : ranges)
@@ -2595,7 +2600,7 @@ public class StorageService extends NotificationBroadcasterSupport implements IE
             RepairFuture future;
             try
             {
-                future = forceKeyspaceRepair(parentSession, range, keyspace, isSequential, neighbours, columnFamilies);
+                future = forceKeyspaceRepair(parentSession, range, keyspace, isSequential, rangeToNeighbors.get(range), columnFamilies);
             }
             catch (IllegalArgumentException e)
            {
@@ -2642,7 +2647,7 @@ public class StorageService extends NotificationBroadcasterSupport implements IE
                 }
             }
             if (!fullRepair)
-                ActiveRepairService.instance.finishParentSession(parentSession, neighbours);
+                ActiveRepairService.instance.finishParentSession(parentSession, allNeighbors);
             sendNotification("repair", String.format("Repair command #%d finished", cmd), new int[]{cmd, ActiveRepairService.Status.FINISHED.ordinal()});
         }
     }, null);


